Research Article

The Health Innovation Impact Checklist: a tool to improve the development and reporting of impact models for global health innovations

Article: 2056312 | Received 16 Dec 2021, Accepted 18 Mar 2022, Published online: 22 Apr 2022

ABSTRACT

Donor financing is increasingly relying on performance-based measures that demonstrate impact. As new technologies and interventions enter the innovation space to address global health challenges, innovators often need to model their potential impact prior to obtaining solid effectiveness data. Diverse stakeholders rely on impact modeling data to make key funding and scaling decisions. With a lack of standardized methodology to model impact and various stakeholders using different modeling strategies, we propose that a universal innovation impact checklist be used to aid in transparent and aligned modeling efforts. This article describes a new Health Innovation Impact Checklist (HIIC) – a tool developed while evaluating the impact of health innovations funded under the Saving Lives at Birth (SL@B) program. SL@B, a global health Grand Challenge initiative, funded 116 unique maternal and newborn health innovations, four of which were selected for cost-effectiveness analyses (CEAs) within our evaluation. A key data source needed to complete a CEA was the lives saved estimate. HIIC was developed to help validate draft impact models from the SL@B donors and our own team’s additional modeling efforts, to ensure the inclusion of standardized elements and to pressure test assumptions for modeling impact. This article describes the core components of HIIC, including its strengths and limitations. It also serves as an open call for further review and tailoring of this checklist for applicability across global efforts to model the impact of health innovations.

Responsible Editor

Jennifer Stewart Williams

Background

Estimating the health and economic impacts of a policy, program or project is critical for informing and scaling innovative healthcare solutions [Citation1]. Donor funding and investments from multilateral, bilateral, and global health initiatives are increasingly relying on performance-based measures [Citation2–5]. In addition, donor demands are shifting from technical and academic outputs to impacts that measurably benefit society [Citation6]. To demonstrate efficiency and garner donor and private sector interest, global health implementing agencies need to be able to measure and report the impact of their interventions on health outcomes.

In the field of global health innovation, impact is not realized unless an innovation is successfully developed, taken to scale and demonstrates effectiveness [Citation7]. However, with rapidly changing health markets, rigorous evaluations for new interventions are often too costly and time-consuming to conduct relative to decision-making timelines [Citation8–10]. It is estimated that less than 5% of drug and/or technology innovations reach scale, and that achieving scale takes 14 years on average [Citation11]. With such high levels of uncertainty and such long wait times, innovators need predictive modeling to estimate the efficiency and future impact of their innovation, enabling donors and investors to make key funding and scaling decisions prior to the availability of widespread effectiveness data.

Within the field of maternal, newborn, and child health (MNCH), a few impact modelling tools already exist. The Lives Saved Tool (LiST), developed at the Johns Hopkins Bloomberg School of Public Health, has been used to estimate the impact of scaling up interventions in low- and middle-income countries [Citation12]. The tool uses ‘coverage’, or the population in need that receives an intervention, as a key input to calculate cause-specific mortality. The default coverage data for the tool come from large-scale, nationally representative surveys such as Demographic and Health Surveys and Multiple Indicator Cluster Surveys [Citation12,Citation13]. PATH, a nonprofit global health organization, modeled 11 interventions using LiST under its Innovation Countdown 2030 initiative, estimating that 6.6 million maternal and child lives could be saved between 2016 and 2030 if these innovations were scaled up [Citation14]. PATH’s modeling strategy also centered on ‘coverage’ to determine how an innovation could help expand access to basic health services [Citation14].

Another nonprofit global health organization, Population Services International (PSI), developed its own modelling strategy for estimating the health impact of its product distribution and service delivery efforts [Citation15]. Unlike PATH, PSI sought to understand the impact of a single product or service delivered by the organization, and wanted to make cross-country and cross-program comparisons. As a result, it adopted a disability adjusted life year (DALY) measure to calculate the number of healthy years of life not lost to disability or death due to a PSI service [Citation15].

RTI International, with support from The Bill and Melinda Gates Foundation, has developed the Maternal and Neonatal Directed Assessment of Technology (MANDATE) model. MANDATE is a web-based tool to assess the impact of medical technologies on maternal, fetal and neonatal mortality in low-resource settings [Citation16]. The model allows users to adjust variables related to a technology’s availability, appropriate use and efficacy to estimate the potential number of maternal and newborn lives saved [Citation17,Citation18].

Grand Challenges Canada (GCC) and Results for Development (R4D) have developed another impact modeling approach. GCC built simple spreadsheet models to estimate the number of lives saved and lives improved due to health innovations funded under the Saving Lives at Birth (SL@B) program [Citation7]. These interventions are novel and require consideration of contextual factors and the feasibility of scale-up, as they have not yet demonstrated effectiveness at scale [Citation19]. GCC could not use LiST or MANDATE to model its innovations’ impact because these tools do not account for context-specific modalities, while PSI’s DALY approach was inconsistent with GCC’s lives saved estimate [Citation7]. GCC models are based on the innovation’s theory of change, which helps reveal the chain of events that connects the direct effect of the innovation to health outcomes. This chain of events, examined under various scenario analyses including different assumptions about the effectiveness of an intervention, provides the key parameters that are included in the model [Citation7].

With an increase in the use of innovation impact modeling and the diverse range of modeling methodologies being adopted by various organizations (including PATH, PSI, GCC, and the MANDATE initiative), there is a growing need for standardization and quality assurance. Compared with clinical studies that report the effectiveness of an intervention, health intervention impact modeling takes into consideration broader, system-level factors such as the baseline health status of the population benefitting from an intervention, local service delivery capacity, and implementation-related issues affecting intervention coverage rates. This results in a diverse range of modeling approaches with varying impact metrics, which can be challenging to review and compare against one another. For example, PATH and GCC both modeled the projected impact of the same innovation (a new inhaled formulation of oxytocin, a gold-standard therapy for post-partum hemorrhage that currently requires refrigeration and administration by injection) across a similar timeframe but reached diverging estimates – PATH estimated 146,000 [Citation20] maternal lives saved between 2022 and 2030, while GCC estimated 27,000 [Citation7] lives saved globally between 2020 and 2030. The lack of standardization in modeling makes it difficult to tease out the specific assumptions used by each organization that generated the differences in their model outcomes.

Reporting guidelines and checklists can help promote transparency and comparability in impact modeling. Evidence suggests that endorsement of guidelines by journals can facilitate improved reporting [Citation21]. Organizations using intervention impact modeling usually seek to project the impact of their interventions without necessarily having conducted large, complex studies, and can face challenges in data quality [Citation22]. Guidelines and checklists can assist organizations in ensuring a minimum standard of reporting.

Duke University was engaged in 2018 by the United States Agency for International Development (USAID) and GCC – two SL@B funding partners – to design and conduct an evaluation of the SL@B program to determine if it was achieving its intended impact. The SL@B program has issued 147 awards representing 116 unique innovations and 92 organizations addressing critical issues in maternal and newborn health (MNH) in low-resource settings [Citation23]. One component of the evaluation required estimating the potential impact of SL@B-funded innovations on maternal and neonatal mortality, which included reviewing impact models developed by GCC and R4D for four interventions. During this review process, Duke University’s Evidence Lab team searched the academic literature for pre-existing and widely recognized tools and guidelines that could assist in these validation efforts. Unable to find a relevant, standardized tool, the team developed its own checklist for health impact models, henceforth referred to as the Health Innovation Impact Checklist (HIIC). Although initially created to complement our review of GCC’s models, HIIC was expanded to provide a generalized reference or guide for various types of innovation impact models in our broader work with global innovators, and it attempts to consolidate and standardize multiple modeling approaches. The following section introduces and explains the HIIC using examples from innovations funded by SL@B.

The Health Innovation Impact Checklist (HIIC)

The HIIC is a tool to help review the standardized elements and pressure test the assumptions of impact models. The checklist (see Table 1) is a qualitative tool designed to review quantitative models and can be used by both reviewers and developers of health intervention impact models to help strengthen their analyses.

Table 1. Health Innovation Impact Checklist (HIIC)

HIIC comprises three sections: 1) Model Description, 2) Assumptions and 3) Scenarios. Each section consists of multiple categories, henceforth referred to as parameters. Each parameter, a measurable element or factor that forms part of the checklist, highlights a particular aspect of an impact model, which the HIIC user gauges as applicable to their model or not. A parameter, in the context of this checklist, can consist of a single indicator (e.g. the ‘time horizon’ parameter can be a single year, such as 2030) or a range of estimates (e.g. the ‘efficacy’ parameter can comprise multiple studies that demonstrate the effectiveness of the innovation under different settings). HIIC does not dictate how to create an impact model; instead, it enables the user to review their own model against each parameter. HIIC also requires reviewing sensitivities in model estimates. Sensitivity analysis helps determine the robustness of a model by examining to what degree the model results are affected by changes in inputs or assumptions. By requiring the user to identify and explain the various model parameters and the confidence intervals/sensitivities of model outcomes, HIIC promotes transparency in results and comparability across different impact models.
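
To make this structure concrete, the sketch below shows one possible way to record HIIC parameters in code and run a simple one-way sensitivity sweep over a toy lives-saved model. The model, parameter names and all numbers are hypothetical illustrations; they are not drawn from HIIC itself or from the SL@B evaluation.

```python
# Sketch of recording HIIC parameters alongside a one-way sensitivity sweep.
# The lives-saved model and every number below are hypothetical illustrations,
# not values taken from HIIC or the SL@B evaluation.

from dataclasses import dataclass

@dataclass
class HIICParameter:
    section: str      # "Model Description", "Assumptions" or "Scenarios"
    name: str         # e.g. "time horizon", "efficacy"
    value: object     # a single indicator or a (low, high) range of estimates
    source: str       # evidence base or justification for the value

checklist = [
    HIICParameter("Model Description", "time horizon", "2022-2030", "SDG end year"),
    HIICParameter("Assumptions", "coverage", (0.20, 0.60), "scale-up scenarios"),
    HIICParameter("Assumptions", "efficacy", (0.15, 0.35), "range across pilot studies"),
]

def lives_saved(births_in_need, coverage, efficacy, baseline_mortality):
    """Illustrative model: deaths averted among covered births."""
    return births_in_need * coverage * efficacy * baseline_mortality

# Midpoint values for the toy model (all made up).
base = {"births_in_need": 100_000, "coverage": 0.40,
        "efficacy": 0.25, "baseline_mortality": 0.03}

# One-way sensitivity: vary each ranged parameter across its stated range
# while holding the others at their midpoint values.
for param in checklist:
    if isinstance(param.value, tuple):
        low, high = param.value
        lo = lives_saved(**{**base, param.name: low})
        hi = lives_saved(**{**base, param.name: high})
        print(f"{param.name} ({param.source}): {lo:.0f} to {hi:.0f} lives saved")
```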

The HIIC Model Description section highlights the basic components of intervention impact models, including the theory of change, or the chain of events that connects the direct effects of an innovation to health outcomes. Mapping out this chain of events helps reveal the key measures that determine an innovation’s potential impact. Identifying these measures helps the reviewer or modeler gauge the inherent assumptions their model makes. For example, the direct effect of a newborn temperature measurement device, known as the BEMPU TempWatch [Citation24], would be an increase in the number of identified hypothermia cases that would not have been identified and treated in the absence of the TempWatch. The outcome, for example the number of newborn lives saved, will depend on many assumptions, including but not limited to the number of newborns receiving access to the device, newborns using the device, and newborns receiving treatment after identification of hypothermia, all of which form the key measures to gauge the projected impact of BEMPU’s TempWatch [Citation24].
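
As a purely illustrative aid, the following sketch expresses such a chain of events as a multiplicative cascade for a hypothermia-alert device like the TempWatch. Every rate in it is a made-up placeholder that a real model would need to source and justify against the relevant HIIC parameters.

```python
# Sketch of a theory-of-change cascade for a hypothermia-alert device such as
# the BEMPU TempWatch. Each step is one link in the chain of events from
# direct effect to health outcome; every rate below is a made-up placeholder.

low_birth_weight_newborns = 50_000   # hypothetical target population per year
device_access_rate = 0.30            # newborns who receive the device
correct_use_rate = 0.80              # caregivers who use it as intended
hypothermia_incidence = 0.25         # cases among the target population
added_detection_rate = 0.50          # cases detected only because of the device
treatment_after_detection = 0.70     # detected cases that receive care
fatality_reduction_if_treated = 0.10 # deaths averted per treated case

deaths_averted = (low_birth_weight_newborns
                  * device_access_rate
                  * correct_use_rate
                  * hypothermia_incidence
                  * added_detection_rate
                  * treatment_after_detection
                  * fatality_reduction_if_treated)

print(f"Illustrative newborn deaths averted per year: {deaths_averted:.0f}")
```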

The HIIC description section also highlights the following: the time horizon of the model, or the number of years across which impact is being measured (SL@B used 2030 as the end year to project impact, corresponding with the Sustainable Development Goals’ (SDGs) timeline); the target population that will use an intervention or for whom the intervention will have an effect (e.g. the population of interest in the case of the BEMPU TempWatch is low birth weight newborns, who are more likely to develop hypothermia than normal weight babies, particularly in lower resource settings) [Citation24,Citation25]; and the study perspective, which determines from whose standpoint the modeling exercise is being conducted. The perspective of a model may be one or more of the following: societal, healthcare sector, health practitioners, patients, innovators, funding agencies supporting the development of an innovation, and others. The impact of an intervention is not realized in the same manner across different perspectives due to divergent interests, making it crucial to identify from the outset which perspective to model. For example, a narrow perspective, such as that of health providers, will not account for the use of resources outside the health sector or the greater welfare to society, which would be captured under the societal perspective. Considering that the economic resources and output of any society are limited, improving healthcare via novel innovations will require devoting more resources to health, which may necessitate forgoing benefits or opportunities in other sectors [Citation26].

The Assumptions section is subdivided into demographic factors, efficacy and fidelity of the innovation, and health system factors. Demographic factors refer to the context in which an innovation is being implemented and include the disease burden parameter. The goal of this parameter is to question whether the model considers baseline demographic factors, such as fertility trends, neonatal mortality or maternal mortality in the beneficiary population, and whether it models how these factors might change in the absence of the innovation. Including trends in demographics and disease burden helps ensure that the model does not overestimate the potential future impact of the innovation.
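
The brief sketch below illustrates, with invented numbers, why a changing baseline matters: if mortality is already falling without the innovation, holding the baseline flat would overstate the deaths the innovation could avert.

```python
# Sketch of a declining-baseline counterfactual. The starting rate, annual
# decline and birth cohort size are hypothetical placeholders.

start_year, end_year = 2022, 2030
neonatal_mortality_rate = 0.025   # deaths per live birth in start_year (made up)
annual_decline = 0.02             # 2% relative decline per year (made up)
live_births = 1_000_000           # hypothetical annual births in the target area

for year in range(start_year, end_year + 1):
    years_elapsed = year - start_year
    rate = neonatal_mortality_rate * (1 - annual_decline) ** years_elapsed
    print(f"{year}: baseline neonatal deaths without the innovation "
          f"= {rate * live_births:,.0f}")
```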

The efficacy and fidelity to treatment sub-section covers the evidence base of the model and the protocols of real-world implementation of an intervention. The efficacy parameter gauges the effectiveness of an intervention during typical and perfect use and compares these against a counterfactual (a hypothetical alternative to actual conditions in which the intervention is absent) [Citation27]. The effectiveness of an innovation is often the single most important factor determining how impactful the intervention will be, making the quality of this parameter’s evidence base imperative. Randomized controlled trials and propensity score matching studies in peer-reviewed journals are typically the ideal form of evidence, but other sources such as intervention pilots or long-term studies can also be cited, with additional discounting to account for their limitations. Likewise, reliance on non-peer reviewed reports for parameter data is often necessary. By requiring the user of the checklist to report their sources of evidence, HIIC promotes transparency and allows internal and external reviewers of impact models to review and compare model data, assumptions and sources across different models.
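
A minimal, hypothetical sketch of how a modeler might separate perfect-use efficacy from typical-use effectiveness and discount weaker sources of evidence is shown below; the effect sizes and discount factors are placeholders, not values prescribed by HIIC.

```python
# Sketch of the efficacy parameter: distinguish perfect-use efficacy from
# typical-use effectiveness and discount estimates drawn from weaker evidence.
# All effect sizes and discounts are hypothetical placeholders.

perfect_use_efficacy = 0.40      # relative mortality reduction in a trial (made up)
typical_use_adjustment = 0.70    # fraction of efficacy retained in routine use
evidence_discounts = {
    "randomized controlled trial": 1.00,
    "pilot study": 0.80,
    "non-peer-reviewed report": 0.60,
}

source = "pilot study"           # where the estimate actually came from
effectiveness = (perfect_use_efficacy
                 * typical_use_adjustment
                 * evidence_discounts[source])

print(f"Effectiveness used in the model ({source}): {effectiveness:.2f}")
```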

The fidelity to treatment parameter reviews the process of intervention development and administration from a service delivery perspective. This parameter evaluates evidence on the ease of development, implementation and use of an intervention by intermediaries, including manufacturers, health professionals, caregivers, and beneficiaries. For example, modeling the impact of the Pratt Pouch, an innovation that delivers Nevirapine (NVP) (an antiretroviral prophylaxis) in a small sachet to HIV-exposed infants, requires taking into account the ‘correct’ use of the pouch [Citation28]. Mothers can mistakenly fail to empty the complete contents of the pouch into an infant’s mouth or not complete the entire six-week NVP regimen, resulting in incorrect use, which can reduce the effectiveness of the innovation and thus negatively affect its impact [Citation28].

The Health System Factors section identifies key components of the overarching health system in which an intervention is being implemented, which can affect an innovation’s utility and potential impact. The model should account for all potential implementation challenges within that health system, given current human and resource constraints. If an intervention is in the form of a product, or its functioning depends on the availability of certain equipment or tools at a health facility, then the supply chain parameter gauges whether the model takes into account the availability of relevant equipment, tools, or products to enable intervention use. If the intervention is a service and requires trained healthcare staff to administer it, then the attrition of health intermediaries parameter checks whether the model incorporates the regular turnover of health workers, which can decrease knowledge and use of an intervention. If the intervention does not cater to severe cases that need referral, or only caters to highly severe referred cases, then the referral parameter requires that the model discounts the patients the intervention does not serve. Access to the health intervention may also vary across different segments of the population, such as urban versus rural residents or those in different wealth quintiles (economic status). The equitable access parameter reports whether these differences in access have been incorporated in the model.
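
The sketch below illustrates, with invented factors, how such health system constraints can be layered onto a modeled target population; the specific multipliers and population segments are assumptions for demonstration only.

```python
# Sketch of health-system discounting: supply availability, worker attrition,
# referral exclusions and unequal access all shrink the share of the target
# population an innovation can actually reach. Every factor is hypothetical.

target_population = 200_000      # people the scenario assumes are covered
supply_availability = 0.85       # facilities stocked with the product
workforce_retention = 0.90       # effect of health-worker turnover on delivery
non_referral_share = 0.80        # cases the innovation serves (severe cases referred out)
access_by_segment = {            # segment: (population share, access rate)
    "urban": (0.40, 0.75),
    "rural": (0.60, 0.45),
}

effective_access = sum(share * access
                       for share, access in access_by_segment.values())
effectively_reached = (target_population
                       * supply_availability
                       * workforce_retention
                       * non_referral_share
                       * effective_access)

print(f"Population effectively reached: {effectively_reached:,.0f} "
      f"of {target_population:,}")
```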

The Scenarios section identifies the overarching expansion strategy for the innovation and enables the reviewer to report on intervention scale-up. If the innovation is modeled to achieve universal coverage in a certain country (versus, for example, scale-up in government or private sector health facilities only), then all other parameters must follow suit. For example, the time horizon should reflect the time needed to scale up across the country, while the disease burden parameter must identify fertility or mortality trends across the entire country’s population.
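
As an illustration of how a scale-up scenario interacts with the time horizon, the hypothetical sketch below ramps coverage linearly to a target level by 2030 and accumulates annual impact over that horizon; the coverage levels, dates and per-person impact are invented.

```python
# Sketch of a scale-up scenario: coverage ramps linearly from a launch level
# to a target by the model's time horizon, and annual impact is accumulated.
# Coverage levels, dates and the per-person impact are hypothetical.

start_year, end_year = 2023, 2030        # time horizon consistent with the scenario
launch_coverage, target_coverage = 0.05, 0.60
annual_population_in_need = 500_000      # nationwide, matching a universal-coverage scenario
deaths_averted_per_covered_person = 0.001  # made-up per-person impact

total_years = end_year - start_year
cumulative_deaths_averted = 0.0
for year in range(start_year, end_year + 1):
    progress = (year - start_year) / total_years
    coverage = launch_coverage + progress * (target_coverage - launch_coverage)
    cumulative_deaths_averted += (annual_population_in_need
                                  * coverage
                                  * deaths_averted_per_covered_person)

print(f"Cumulative deaths averted by {end_year}: {cumulative_deaths_averted:,.0f}")
```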

Discussion

One of the key strengths of the HIIC is that it brings consistency and transparency across different kinds of intervention impact models. In the field of global health innovation where a diverse range of modeling methodologies are being adopted by different organizations, HIIC enables comparison between key elements of the models. Ensuring comparability and standardized approaches for impact modeling should help donors and governments as they seek to invest in and report out on effective and efficient innovations to achieve targeted health outcomes. For innovators and delivery organizations, comparability in modeling can facilitate a deeper understanding of their innovation’s performance by helping them identify parameters against which their innovations may be suboptimal.

In the health innovation field, where effectiveness evidence for early-stage innovations can be missing or weak, a tool such as HIIC can help strengthen the analysis by enabling modelers to be more systematic and transparent while developing their estimates. HIIC is by no means a replacement for strategies already being used to model impact where effectiveness data already exist; rather, it helps to ensure more accurate and comparable estimates prior to the availability of those data. Using HIIC enables the modeler to state the assumptions that their model makes and justify the sensitivities used, making it easier for the modelers and reviewers studying the models to understand the rationale behind the estimates. HIIC should also help communication between different organizations as they model impact, so that any divergent estimates can be more easily explained and clarified.

Modeling the impact of health innovations is by nature a complex task. Each impact model presents its own unique measurement challenges and requires quantification of diverse input, output and outcome measures. This complexity in design, however, is not replicated in HIIC, which cannot fully encompass all aspects of a model or dictate a particular framework that a model must use. Instead, HIIC poses questions and highlights key parameters, and the user of the checklist determines whether each applies to their model.

Some parameters included in HIIC, such as ‘efficacy’ and ‘equitable access’, can prove difficult to measure and require in-depth knowledge of an innovation and its implementation. Including these parameters, however, is critical for assessing impact. Even when strong estimates are not readily available, reporting on these parameters in HIIC ensures that the strength of the evidence is captured, which can ultimately inform a reviewer about the robustness (or limitations) of a particular impact model.

Conclusion

To make HIIC more comprehensive and potentially increase its user base, the next steps entail engaging practicing impact modelers to review and critique HIIC. Field-testing HIIC on a range of impact models, keeping track of the iterations resulting from this exercise, and disseminating key learnings and revised versions of the checklist can contribute significantly to the growing field of innovation impact modeling. Differential weighting of parameters and their evidence base, scoring, and a more comprehensive list of essential and elective model components are examples of what future iterations of HIIC might include. This article serves as an open call for further review and tailoring of this tool for applicability across global efforts to model the impact of health innovations.

Authors contributions

JNB and AF conceived the original checklist idea; MS expanded on the checklist and wrote the first draft; all authors reviewed and approved the final draft.

Paper context

There are varying global health innovation impact modeling approaches being used to demonstrate impact. A lack of standardization across these methodologies can create issues of transparency and comparability. The newly developed Health Innovation Impact Checklist (HIIC) is a qualitative tool for reviewing quantitative models, designed for modelers and reviewers to pressure test assumptions and review the standardized elements of health innovation impact models. We invite stakeholders who estimate innovation impact to further refine and test this checklist.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References