Pharmacoeconomic Trends

The importance of model inputs and assumptions in conducting health technology assessments of novel drugs

Pages 1107-1109 | Received 19 May 2017, Accepted 11 Jul 2017, Published online: 04 Aug 2017

Background

Health technology assessments form the basis of drug coverage and reimbursement decisions in most European countries, and are increasingly being used by commercial payers in the US. Within the US, medical societies, healthcare providers, and non-profit institutes are also developing value frameworks with the goal of better measuring the value created by drugs and aligning prices with value. The Institute for Clinical and Economic Review, among other organizations, has begun regularly conducting formal health technology assessments of therapies that are soon to be, or have recently been, approved by the FDA. These assessments typically involve cost-effectiveness and budget impact analyses, both of which are used to calculate appropriate “value-based” prices for novel drugs.

While guidelines exist on how health technology assessments should be conducted [Citation1], for any given drug there remains great variability in cost-effectiveness estimates across studies. For example, one analysis [Citation2] of the asthma treatment omalizumab found substantial differences across two studies that used the same cost-effectiveness model but different modeling assumptions and inputs: the incremental cost per quality-adjusted life year (QALY) gained was ∼$48,000/QALY in one study and $71,000/QALY in the other. Another review [Citation3] of 25 cost-effectiveness studies of asthma treatments found similar variability in cost-effectiveness estimates across studies.

In general, health technology assessments require multiple inputs and assumptions to arrive at the incremental cost-effectiveness ratio (ICER) for the evaluated therapy, including the choice of the population being considered, baseline event risks, the relative risk reductions associated with the treatments being compared, the time horizon being evaluated, and assumptions about how treatment and comparator prices evolve over time, among others. It is well known that, along with differences in analytic perspective and institutional settings, subjectivity in these choices can generate important differences in the estimated cost-effectiveness of any given therapy, which in turn can make therapies appear more or less “valuable” to payers and society. In this piece, we describe several ways in which different modeling inputs and assumptions in health technology assessments can substantively impact the resulting cost-effectiveness of the products being analyzed.
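As a concrete illustration, the incremental cost-effectiveness ratio reduces to simple arithmetic once these inputs are chosen; the subjectivity lies in choosing them. The sketch below uses entirely hypothetical costs and QALY totals:

```python
# Minimal ICER sketch. All numbers are illustrative, not taken from
# any actual assessment.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical therapy: $60,000 vs $20,000 lifetime cost; 5.0 vs 4.2 QALYs.
ratio = icer(60_000, 20_000, 5.0, 4.2)
print(f"${ratio:,.0f} per QALY gained")  # $50,000 per QALY gained
```

Small changes to any one of the four inputs move the ratio directly, which is why differing assumptions across studies can produce such different estimates for the same drug.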

Should treatment efficacy in health technology assessments be based on real world or trial data?

A key input in health technology assessments is the efficacy of the treatments being compared, which is frequently obtained from randomized controlled trial settings. Treatment efficacy in the real world may differ from that in trial settings, and may be more representative of real world patient populations, for several reasons: differences in baseline event risk (which lead to different absolute risk reductions for any given relative risk reduction associated with treatment), differences in drug adherence and monitoring in real world vs trial settings, and differences in the populations being considered (trial populations are frequently younger and healthier than populations that receive treatment in the real world). Although several clinical and research organizations, including the International Society for Pharmacoeconomics and Outcomes Research, the Royal Society of Medicine, and the 2nd Panel on Cost Effectiveness, have advocated for the use of real world data in health technology assessments, and despite evidence that the choice of data source is important, guidelines on how trial vs real world data should be weighed have not been developed. For example, in an analysis [Citation3] of asthma cost-effectiveness studies that compared treatment efficacy data from real world settings vs randomized controlled trials, studies that used real world efficacy were 100% more likely to conclude that a treatment was cost-effective.
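The interaction between baseline risk and relative risk reduction can be made concrete. In the sketch below, the same 25% relative risk reduction yields very different cost-effectiveness depending on the assumed baseline event risk; all risks and costs are hypothetical, with the real world population assumed to be at higher baseline risk than the trial population, as discussed above:

```python
# Sketch: how baseline event risk changes cost-effectiveness for a fixed
# relative risk reduction. All inputs are hypothetical.

def cost_per_event_avoided(treatment_cost, baseline_risk, relative_risk_reduction):
    """Cost per event avoided = treatment cost / absolute risk reduction."""
    arr = baseline_risk * relative_risk_reduction  # absolute risk reduction
    return treatment_cost / arr

rrr = 0.25  # same 25% relative reduction in both settings
# Assumed: trial population is younger/healthier, hence lower baseline risk.
trial = cost_per_event_avoided(10_000, baseline_risk=0.10, relative_risk_reduction=rrr)
real_world = cost_per_event_avoided(10_000, baseline_risk=0.20, relative_risk_reduction=rrr)
print(f"trial: ${trial:,.0f}; real world: ${real_world:,.0f}")
# trial: $400,000; real world: $200,000
```

Under these assumptions the therapy looks twice as cost-effective in the real world setting, which is consistent with the finding that studies using real world efficacy were more likely to conclude a treatment was cost-effective.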

How should the cost of therapy be determined?

The second key input in health technology assessments is the cost of the treatments being compared. In most assessments, the cost of a drug is defined by its “list price”, which typically overstates the net price that health insurers pay after rebates or other negotiated discounts. In health technology assessments that compare branded drugs to generic comparators, the modeled cost differential between treatments will therefore be significantly larger than the actual cost differential faced by payers. Accordingly, branded drugs will appear less cost-effective from the payer perspective than they actually are. For example, recent cost-effectiveness analyses of PCSK9 inhibitors have used list prices of ∼$14,000 per year, considerably higher than what the UK’s National Institute for Health and Care Excellence [Citation4] and some US-based health plans [Citation5] are expected to pay.
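A rough sketch of the list-vs-net-price effect, using the ∼$14,000 list price mentioned above and an assumed, purely illustrative, 30% rebate; the years on therapy and incremental QALYs are also hypothetical, and comparator costs are ignored for simplicity:

```python
# Sketch: the same branded drug looks less cost-effective when modeled at
# list price than at the net price payers actually face.

def icer(incremental_cost, incremental_qalys):
    """Incremental cost per incremental QALY (comparator cost omitted here)."""
    return incremental_cost / incremental_qalys

list_price_annual = 14_000          # annual list price (as in the PCSK9 example)
rebate = 0.30                       # assumed confidential rebate (illustrative)
net_price_annual = list_price_annual * (1 - rebate)

years_on_therapy = 10               # hypothetical
qalys_gained = 0.5                  # hypothetical incremental QALYs

print(f"list-price ICER: ${icer(list_price_annual * years_on_therapy, qalys_gained):,.0f}/QALY")
print(f"net-price ICER:  ${icer(net_price_annual * years_on_therapy, qalys_gained):,.0f}/QALY")
# list-price ICER: $280,000/QALY
# net-price ICER:  $196,000/QALY
```

The gap between the two ratios scales directly with the size of the undisclosed rebate, which is precisely the information most assessments lack.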

Should drug costs be static or dynamic?

Most health technology assessments assume that the cost of the therapies being evaluated remains static over time. However, most therapies exhibit significant reductions in price as branded competitor drugs or, later, generic drugs become available. This implies that the cost-effectiveness of a given branded drug will improve over time, even without changes in the relative efficacy of treatment. At the extreme, when the patent expires and generics enter the market, a branded drug that was not cost-effective at a given threshold (e.g. $100,000 per QALY) can become cost-effective overnight. Drug prices are clearly not static; indeed, some drug prices rise over time, which would worsen cost-effectiveness, all else being equal. Yet, although methods [Citation6] for accounting for drug price dynamics exist, no consensus has been reached on whether health technology assessments should take a “life-cycle” or static view of drug prices, or, more importantly, on whether the sensitivity analyses routinely performed in health technology assessments should incorporate how cost-effective a product would be under these two different scenarios.
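The static vs life-cycle distinction can be sketched as follows; the prices, patent horizon, discount rate, and QALY gain are all illustrative assumptions, not estimates for any real drug:

```python
# Sketch: static vs "life-cycle" drug pricing in a cost-effectiveness model.
# Assumes the branded price drops sharply when generics enter at year 8;
# all figures, including the 3% discount rate, are illustrative.

def discounted_lifetime_cost(annual_prices, discount_rate=0.03):
    """Present value of a stream of annual drug costs (paid at start of each year)."""
    return sum(p / (1 + discount_rate) ** t for t, p in enumerate(annual_prices))

horizon = 15          # years modeled
branded_price = 14_000
generic_price = 2_000
patent_years = 8      # assumed years until generic entry

static_prices = [branded_price] * horizon
dynamic_prices = [branded_price] * patent_years + [generic_price] * (horizon - patent_years)

qalys_gained = 1.5    # hypothetical incremental QALYs over the horizon
static_icer = discounted_lifetime_cost(static_prices) / qalys_gained
dynamic_icer = discounted_lifetime_cost(dynamic_prices) / qalys_gained
print(f"static: ${static_icer:,.0f}/QALY; life-cycle: ${dynamic_icer:,.0f}/QALY")
```

Under these assumptions the life-cycle ICER is substantially lower than the static one, so a drug judged not cost-effective at a $100,000/QALY threshold under static pricing could clear it under a life-cycle view.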

What if evidence on drug efficacy changes over time?

Evidence on drug efficacy may evolve over time, as new post-marketing studies are conducted in larger populations and in specific patient sub-groups, including those defined by demographics, specific clinical conditions, or biomarkers that predict treatment efficacy. Evidence may emerge that a treatment is more or less effective than initially modeled in health technology assessments. While some organizations conduct re-assessments of harmful or obsolete technologies, and others conduct post-implementation technology re-assessments using real world data, re-assessments in general are relatively uncommon. Indeed, most health technology assessments are slow to integrate new evidence on drug efficacy and to update recommendations, which is problematic if patient access and insurance coverage decisions are based on treatment efficacy data and/or costs that are out-of-date. Health technology assessments should be flexible enough to quickly incorporate new evidence on the efficacy of treatment, both overall and in specific patient sub-groups.

Is the size of the population at risk for treatment relevant for value assessments?

Health technology assessments frequently include two major components: an assessment of drug value (i.e. the cost-effectiveness of the drug) and the projected impact of the drug on health system budgets or affordability (the drug’s budget impact, a measure calculated from estimates of the size of the treated population, the duration of treatment, and price). While it is important for payers to understand the short-term budget impact of covering a particular treatment, these estimates are entirely distinct from a treatment’s value, which is simply the monetized incremental clinical effectiveness of the drug over standard of care minus the incremental treatment costs. Drugs that are highly effective and treat large patient populations are socially desirable and valuable, but may have large budget impacts. For these therapies, e.g. novel drugs for hepatitis C, the key economic and policy challenge is how to finance their potentially large up-front costs over a long horizon, e.g. by reducing utilization of relatively lower value therapies and services, not simply to identify ways in which “value” can be enhanced by mandating lower prices.
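The distinction between value and budget impact can be sketched as follows, with all figures hypothetical: a drug can deliver a large net monetary benefit per patient while still posing an enormous short-term budget impact, simply because many patients are eligible for treatment.

```python
# Sketch separating a drug's value (cost-effectiveness) from its budget
# impact (affordability). All figures are hypothetical.

def net_monetary_benefit(incremental_qalys, incremental_cost, wtp_per_qaly):
    """Value per patient: monetized health gain minus incremental cost."""
    return incremental_qalys * wtp_per_qaly - incremental_cost

def budget_impact(treated_population, annual_cost, years):
    """Short-term affordability: total spend on the treated population."""
    return treated_population * annual_cost * years

# A highly effective drug treating a large population (hepatitis C-like scenario):
nmb = net_monetary_benefit(incremental_qalys=2.0, incremental_cost=80_000,
                           wtp_per_qaly=100_000)
bi = budget_impact(treated_population=500_000, annual_cost=40_000, years=2)
print(f"value per patient: ${nmb:,.0f}")      # value per patient: $120,000
print(f"2-year budget impact: ${bi:,.0f}")    # 2-year budget impact: $40,000,000,000
```

Note that the two quantities answer different questions: the first says the drug is worth covering at the assumed willingness-to-pay threshold; the second says financing that coverage up front is the real challenge.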

Summary

Health technology assessments are becoming increasingly critical for helping public and private payers better reconcile the prices of drugs with the value they create. However, cost-effectiveness studies can vary substantially in their modeled estimates, depending on the assumptions used. We have highlighted several ways in which modeling inputs and assumptions can substantively impact the estimated cost-effectiveness of a treatment, but other assumptions matter as well. Despite existing guidelines on how to conduct such analyses, important grey areas remain where reasonable modeling assumptions may yield different results. For example, should health technology assessments use baseline event risk from clinical trials or from real world data? Some model assumptions matter more than others, and limited guidance exists on which assumptions are most appropriate.

In addition to developing guidance and reaching consensus on the issues raised above, and to greater transparency in the modeling approaches undertaken, a more universal process of peer review of health technology assessments is needed. The models used in health technology assessments should be shared publicly so that the underlying assumptions driving a given model are transparent and modifiable. Making models available on an open source platform would allow modelers to gauge the reliability of these models and their sensitivity to various modeling assumptions and parameters, and would encourage further discussion and debate on health technology assessment methodology.

Transparency

Declaration of funding

This study was funded by Amgen, Inc.

Declaration of financial/other relationships

TP and JPM are employees of Precision Health Economics, which was funded by Amgen, Inc. for this study. JME peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

Acknowledgments

The authors would like to thank the three anonymous reviewers for their thoughtful feedback and input on the manuscript.
