Review

Evaluating high-cost technologies – no need to throw the baby out with the bathwater

Pages 1177-1183 | Received 26 May 2023, Accepted 22 Sep 2023, Published online: 28 Sep 2023

ABSTRACT

Introduction

Evidence generation for the health technology assessment (HTA) of a new technology is a long and expensive process with no guarantee that the health technology will be adopted and implemented into a health-care system. This suggests that there is a greater risk of failure for a company developing a high-cost technology, and therefore incentives (such as increasing the funding available for research or additional market exclusivity) may be needed to encourage the development of such technologies, as has been seen with many high-cost orphan drugs.

Areas covered

This paper discusses some of the key issues relating to the evaluation of high-cost technologies through the use of existing HTA processes and what the challenges will be going forward.

Expert opinion

We propose that, while the current HTA process is robust, its evolution to accommodate the incorporation of real-world data and evidence, alongside a life-cycle HTA approach, should better enable developers to produce the evidence required on effectiveness and cost-effectiveness. This should reduce decision uncertainty and enable HTA agencies to make adoption decisions in a more timely and efficient manner. Furthermore, budget impact analysis remains important for understanding the actual financial impact on health-care systems and budgets, outside of the cost-effectiveness framework used to aid decision-making.

1. Introduction

Every year millions are spent on trying either to generate the evidence required to evaluate the clinical and cost-effectiveness of medical technologies through traditional health technology assessment (HTA) routes or to accelerate their adoption through real-world evidence generation and evaluation in health-care settings across the world. Evidence generation is a lengthy and uncertain process, such that even at the end of it there is no guarantee that the uncertainty surrounding the estimates of clinical and cost-effectiveness will be reduced to the extent that decision-makers can confidently make reimbursement decisions. Furthermore, HTAs and decision-making can differ markedly between different countries [Citation1]. For example, Kanters et al. [Citation2] have previously highlighted the variability in reimbursement of an enzyme replacement therapy in Pompe disease between European countries and noted that this was not an isolated event but had been observed for other orphan drugs. In the case of Pompe disease, the different countries based their conclusions on the same efficacy studies, but the costs were clearly different and country specific, and perhaps a different weight was applied to the ‘rule of rescue’ [Citation3]. In the Netherlands, the Dutch government allowed alglucosidase alfa to be temporarily reimbursed (for 4 years) in order to collect additional evidence [Citation2]. Another orphan drug, Zolgensma, was labeled ‘the world’s most expensive drug’, with a reported list price of £1.79 million per dose when approved by the National Institute for Health and Care Excellence (NICE) in 2021 [Citation4]. In November 2022, Hemgenix (etranacogene dezaparvovec) was given approval by the United States Food and Drug Administration (US FDA) as a one-time gene therapy for hemophilia B with a price tag of $3.5 million [Citation5].

These drugs are exemplars of high-cost orphan drugs, with very large per-patient costs, for use against rare or ultra-rare conditions; they would not necessarily be developed and brought to market purely on grounds of profitability for the company and may therefore require incentivization. Incentives for the development of orphan drugs can include: increasing funding for basic research to allow a better understanding of the underlying disease pathology; de-risking some of the clinical development and regulatory approvals required in bringing a new drug to market; and increasing the likelihood of a financial reward through drug pricing at the market access stage [Citation6]. However, if technologies have a high associated cost (whether this cost is upfront or ongoing), this can compound the potential error of making the incorrect decision to implement when the health-care system is not ready for implementation or cost-effectiveness has not yet been demonstrated, because scarce resources are diverted away from existing technologies and services with more proven benefits. Furthermore, not all innovation is necessarily beneficial, and once adopted, a technology can be difficult to unadopt. As Buxton’s Law puts it, ‘it is always too early to assess a new technology, until suddenly it’s too late’: both existing and new technologies continually evolve in their use and the evidence they generate, and thus evaluation can be challenging [Citation7]. This all leads to the question:

Are special considerations required when high-cost technologies are being evaluated within the health-care sector?

1.1. Health technology assessment

Before new health technologies can be introduced and used in health-care systems, regardless of whether they are high-cost or not, evidence is required that they are safe, that they provide medical or clinical benefit, and that they will not displace activities which deliver greater health benefits given the cost involved (i.e. that they are cost-effective). While safety is not specifically covered here, most technologies need to ensure they have the relevant regulatory approval to be placed on the market. This may take the form of Conformité Européenne (CE) marking in the European Economic Area or UK Conformity Assessed (UKCA) marking in the United Kingdom (UK), for example. These marks provide a user with an indication that the product conforms to the relevant legislation and does what it says it does.

The HTA process is multidisciplinary, systematically evaluating the properties and impacts of a health technology in order to inform policymaking for that technology within a health-care system. HTAs are intended to facilitate the promotion of equitable, efficient, and high-quality health-care systems [Citation8]. Assessments cover both the clinical effectiveness and the cost-effectiveness of the technology being appraised. HTAs often take place following one of two processes: either funding is sought to undertake evidence generation (in the UK, for example, from a funding agency such as the National Institute for Health Research (NIHR) HTA program, or from ZonMw in the Netherlands), or HTA agencies (such as NICE in England or the independent Institute for Quality and Efficiency in Health Care (IQWiG) in Germany) appraise a company submission. In a similar process to NICE and IQWiG, in Canada the Canadian Agency for Drugs and Technologies in Health (CADTH) acts as an independent not-for-profit organization providing objective evidence for adoption decisions for Canada’s health-care system. There are differences between the process of evidence generation and evidence appraisal, not least that the former produces evidence to inform decision-makers who are external to that process while the latter organizes the production of evidence for its own use in making decisions. However, for both, the general process involves systematically collecting, generating, or appraising the evidence of clinical effectiveness and then using this to develop economic evaluations to inform decision-making based on cost-effectiveness.

HTAs have a long history of being applied to pharmaceuticals, surgeries, and tests but have more recently also been applied to digital devices and artificial intelligence-based technologies. Drummond et al. [Citation9] have previously described the challenges posed to economic evaluation specifically by orphan drugs, whose development would not be commercially viable unless legislated for and incentivized. More recently, several methodological issues specific to the economic evaluation of digital health technologies have been raised elsewhere [Citation10–12], but there is nothing relating specifically to the cost of a technology, and in particular to high-cost technologies.

1.2. Generation and use of real-world evidence in decision-making

There is an increasing push from policymakers to use real-world evidence and data where possible and relevant, in order to expedite the adoption of health technologies into clinical practice. This is set against a background in which evidence-based medicine has always sought to undertake evidence generation through the most rigorous and robust study designs, which by default is often considered to be the randomized controlled trial (RCT) design. Well-designed RCTs are the ideal source of evidence on relative effects. However, their limitations have been well described elsewhere. One key concern is that the impacts of interventions observed in RCTs are often not replicated in real-world practice. This has led to an increased focus on the use of real-world evidence [Citation13]. For example, the use of real-world evidence was outlined in the NICE strategy 2021 to 2026 [Citation14] and led to the development of the NICE real-world evidence framework [Citation15], but temporary reimbursement to collect real-world evidence has also been seen elsewhere [Citation2].

Real-world data can be defined as data that relates to patient health or experience or the delivery of care and which is captured or collected outside of the highly controlled environments provided by randomized controlled and other clinical trial designs. It covers a range of different data sources and allows the capture of data relevant to the population groups actually seen by the health-care system. It also allows consideration of alternatives that reflect clinical practice and of behavioral patterns from both patients and health-care professionals. Furthermore, the hope is that the use of real-world data could potentially reduce the cost of evidence generation for both high-cost and non-high-cost technologies. However, despite this great potential, the impact of real-world evidence on HTA decision-making is still relatively limited. In the UK, for example, NICE’s Cancer Drugs Fund was set up to reduce uncertainties through the collection of real-world evidence. However, real-world data, collected as part of these managed access agreements, has not been widely used in submissions to NICE [Citation16]. Managed access is becoming more common, and NICE now has two dedicated funding sources to pay for such treatments (the Cancer Drugs Fund and the Innovative Medicines Fund), each with a current annual (2022) budget of £340 million [Citation17]. The US FDA developed a real-world evidence program framework in 2018 to outline how to evaluate the potential of real-world evidence and to support new drug approvals [Citation18]. In 2020, the Oncology Industry Taskforce RWE Working Group, a Medicines Australia initiative, prepared a report on the evolving role of real-world evidence in Australia which highlighted a relative doubling in the number of HTA submissions leveraging real-world evidence between 2012 and 2019 [Citation19].

There are challenges to relying only on real-world evidence: comparators of interest may not be used routinely or be accurately coded, meaning there may be insufficient data to reliably estimate effects. There are also challenges in that, in many cases, treatment pathways are complex. This in itself is not a problem unless the complexity is not well understood. More of an issue is that complex treatment pathways often cross health-care system or treatment boundaries; in a UK context, this could be primary and secondary health-care providers. This means that there may be no single data source, and therefore substantial practical and governance issues with accessing and linking data between the different sources. Finally, in the case of high-cost orphan drugs, there may be an over-reliance on surrogate endpoints, due to the length of follow-up required for primary outcomes related to mortality, and on single-arm trials, due to the rarity of the disease being treated. While this is not specific to high-cost technologies, it remains a challenge for short-term evidence generation using real-world data, with reliance on such outcomes introducing decision uncertainty [Citation20,Citation21].

1.3. Understanding the role of uncertainty

There is a fine balance in where the evidential bar is set for reimbursement decisions. There is a trade-off between incentivizing evidence generation that minimizes uncertainty around clinical and cost-effectiveness but takes a long time to adequately capture the outcomes of interest, and placing so high a burden (in time and resources) that it negates the value of real-world evidence in potentially allowing quicker adoption decisions using validated surrogate outcomes. Evidence is required that both a new technology and technologies already in use are a good use of limited health-care resources. However, uncertainty is important in decision-making because of the role it plays in representing the degree of risk associated with different decisions. It is almost impossible to completely remove uncertainty, so decisions are always made in its presence. In the health-care sector, however, decision-makers tend to be relatively risk-averse, as the consequences in terms of patient harm are to be avoided or minimized where possible. Patient harm can occur in the absence of long-term safety evidence or by denying patients the optimal treatment. Therefore, efforts are made to try to minimize any decision uncertainty so that decision-makers can be confident that adopting technologies will lead to overall improvements in patient health. Decision-makers must balance these efforts against the cost and time that it will take to reduce uncertainty, because there will be opportunity costs in terms of patient health in spending time and resources on further research to reduce uncertainty.

Uncertainty can arise for a variety of reasons. Some of these are set out in Box 1.

Box 1. Examples of sources of uncertainty in an economic evaluation

The uncertainty due to the lack (or limited availability) of data is particularly large when there is little overall evidence available. This might be the case because the technology is novel (so there has not been much time for data to accrue) or because the technology focuses on a very rare disease (given the limited number of patients with the condition of interest, there will only ever be small sample sizes and limited lengths of follow-up available for any given parameter estimate). It may also occur when the evidence is difficult to capture. Public health interventions (for example, screening programs) often have high up-front costs, but benefits and cost savings may only manifest in the long term. This gives rise to difficulties in measurement, especially if there are challenges in causally linking these impacts back to the technology or intervention. In addition, assessments of long-term benefits (sometimes over several decades) are often based on just a few years of follow-up data.

One of the ways in which the role of uncertainty is captured and quantified in economic evaluation is through the use of sensitivity analyses. In its simplest form, the significance of uncertainty in an individual parameter can be explored by investigating the effect of varying that parameter’s value on the cost-effectiveness results (deterministic sensitivity analysis). Probabilistic sensitivity analysis allows the joint uncertainty across parameter estimates to be accounted for and provides a more accurate estimation of the mean. Probabilistic sensitivity analyses assign distributions to each individual parameter and draw values from each distribution; this allows the generation of a large number of mean costs and mean effectiveness values, which can be used to estimate the uncertainty. Threshold analyses can also be used to determine, for a particular input or group of inputs, the parameter value(s) above or below which the new technology would be favored. In the case of non-linear economic models, it has been suggested that probabilistic analysis should be favored [Citation22].
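To illustrate these mechanics, the sketch below runs a minimal probabilistic sensitivity analysis for a hypothetical comparison of a new high-cost technology against standard care. All parameter distributions, costs, and the willingness-to-pay threshold are illustrative assumptions and are not taken from any technology discussed in this paper.

```python
# Minimal probabilistic sensitivity analysis (PSA) sketch: hypothetical comparison
# of a new high-cost technology against standard care. All values are assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
n_sim = 10_000  # number of Monte Carlo draws

# Assign a distribution to each uncertain parameter and draw values from it
p_success_new = rng.beta(80, 20, n_sim)                  # probability of treatment success, new technology
p_success_std = rng.beta(60, 40, n_sim)                  # probability of treatment success, standard care
cost_new = rng.gamma(shape=100, scale=150, size=n_sim)   # mean cost per patient, new technology
cost_std = rng.gamma(shape=100, scale=50, size=n_sim)    # mean cost per patient, standard care
qaly_per_success = rng.normal(2.0, 0.4, n_sim)           # QALYs gained per treatment success

# Incremental costs and effects for each simulation
inc_cost = cost_new - cost_std
inc_qaly = (p_success_new - p_success_std) * qaly_per_success

# Incremental cost-effectiveness ratio based on the simulation means
icer = inc_cost.mean() / inc_qaly.mean()
print(f"ICER: {icer:,.0f} per QALY gained")

# Probability the new technology is cost-effective at a given willingness-to-pay
# threshold (one point on a cost-effectiveness acceptability curve)
threshold = 30_000
prob_cost_effective = np.mean(threshold * inc_qaly - inc_cost > 0)
print(f"Probability cost-effective at {threshold:,} per QALY: {prob_cost_effective:.2f}")
```

The same model could support a deterministic or threshold analysis by fixing all parameters at their mean values and varying one input at a time.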

The traditional means of reducing uncertainty is for further research to be undertaken, but new research by no means guarantees reduced uncertainty. A value of information (VoI) analysis can be applied to help decision-makers decide whether further research should be conducted and what that research should look like, given their understanding of the uncertainty. VoI places a value on the expected gains and harms (in terms of delaying the use of an intervention) from reducing uncertainty through additional data collection and compares this against the expected cost of the research project required for that additional data collection [Citation23]. There are practical challenges around the use of VoI, including the incorporation of all relevant evidence within a decision model and whether we really understand the nature of the uncertainty in the parameters of interest [Citation24].

VoI can be calculated at both an individual level and a population level, and the size of the population of interest (for example, a rare disease versus a more common condition) will determine population estimates of VoI. For a high-cost technology (e.g. an orphan drug for a rare disease), the VoI may be high at the individual level if there is significant decision uncertainty, but may not be as large at the population level due to the rarity of the disease. Furthermore, the expected value of perfect parameter information (EVPPI) can be used to estimate the value of eliminating uncertainty in one or more input parameters within the model, and the expected value of sample information (EVSI) can be used to inform sample size requirements for alternative study designs. These are important because decisions about whether further research is considered worthwhile are based on these population estimates. Good practice guidelines have been developed for the use of VoI in planning, undertaking, reviewing, and using the results of such analyses [Citation25,Citation26].
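As a hedged illustration of how these quantities are calculated in practice, the sketch below estimates the per-decision expected value of perfect information (EVPI) from simulated PSA output and scales it to a small (rare disease) population. The incidence, time horizon, discount rate, and distributions are illustrative assumptions only.

```python
# Expected value of perfect information (EVPI) sketch based on hypothetical PSA
# output for a new technology versus standard care. All values are assumptions.
import numpy as np

rng = np.random.default_rng(seed=2)
n_sim = 10_000
threshold = 30_000  # willingness to pay per QALY (illustrative)

# Hypothetical incremental QALYs and costs, as would be produced by a PSA
inc_qaly = rng.normal(0.4, 0.15, n_sim)
inc_cost = rng.normal(10_000, 1_600, n_sim)

# Incremental net monetary benefit of adopting the new technology (comparator = 0)
nmb_new = threshold * inc_qaly - inc_cost
nmb_std = np.zeros(n_sim)

# EVPI per decision: expected value of choosing the best option in every simulation
# minus the expected value of the option that is best on average
ev_with_perfect_info = np.mean(np.maximum(nmb_new, nmb_std))
ev_with_current_info = max(nmb_new.mean(), nmb_std.mean())
evpi_per_decision = ev_with_perfect_info - ev_with_current_info
print(f"EVPI per decision: {evpi_per_decision:,.0f}")

# Population EVPI: scale by the discounted number of patients affected while the
# decision remains relevant. For a rare disease this population is small.
incidence_per_year = 50   # new patients per year (illustrative rare disease)
decision_lifetime = 10    # years over which the decision applies
discount_rate = 0.035
discounted_population = sum(incidence_per_year / (1 + discount_rate) ** t
                            for t in range(decision_lifetime))
population_evpi = evpi_per_decision * discounted_population
print(f"Population EVPI: {population_evpi:,.0f}")
# Further research is only potentially worthwhile if its cost is below this value;
# EVPPI and EVSI refine the calculation for specific parameters and study designs.
```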

Finally, there may be some uncertainty regarding what the optimum model of service delivery is at the time of assessment and how service delivery may change over time. For example, it may be unclear what the optimal scale of implementation should be. This is especially important for technologies that have a very high (potentially irrecoverable) capital cost or that rely heavily on specialist skills. For example, genetic diagnosis of developmental delay and learning disability traditionally involved looking down a microscope at G-banded karyotypes (a highly skilled task), but this was replaced and largely made redundant by a newer high-resolution microarray-based technology which required staff with a different skill-set [Citation27]. When first developed, there was a role for karyotyping alongside microarray analysis, and this was included at the time of the initial evaluation, but the pace of technology development quickly made lower-resolution technologies obsolete, with subsequent evaluations updating service delivery accordingly. Take, as another example, reperfusion technologies that might be used to optimize donor organs for transplant. The high cost and skill required to use these technologies, alongside patient throughput, staff resourcing, and workload management, may lead to questions as to where they are best placed: in local secondary or tertiary care hospitals, or centralized in regional or supra-regional services. A more local placement may make the service more accessible, but centralizing a service may offer economies of scale and scope, especially if patient throughput is low and requires specialization. However, over time, given the pace of technology development, initial centralized models of service delivery may change. An example of this is genomic sequencing, which was initially much more specialized, with only a handful of accredited laboratories across the world contributing toward the Human Genome Project, but now multiple sequencing machines are routinely found in hospital and university laboratories everywhere.

1.4. Can understanding budget impact help?

Budget impact analysis (BIA) can be used to estimate the financial impact of implementing a new technology for a budget holder and/or decision-maker. BIA has become a more important part of the economic assessment of new technologies, and it has been argued that it should be viewed as complementary to cost-effectiveness analysis, not a variant or replacement for it [Citation28]. Whereas cost-effectiveness analyses focus on whether a new technology represents value for money, a BIA can be used to address affordability, because it focuses on the financial consequences of implementing a new technology within a health-care system purely from the perspective of the budget holder. It includes the following six items:

  • The size and other important characteristics of the affected population to be ‘treated’ by the new technology;

  • The current intervention mix with the new technology (current state of play);

  • The costs of the current intervention mix (budget impact of current care);

  • The new intervention mix incorporating the new technology (new state of play);

  • The cost involved with use of the new technology;

  • The use and cost of other health condition-related and treatment-related health-care services for the affected population (to build up a full budget impact estimate following implementation of the new technology). (taken from [Citation28])

Mauskopf et al. [Citation28] detail the use of these six elements in estimating budget impact. BIA is most useful over very short timeframes, focusing on the first year or two following adoption of a technology, although it can also be used over longer periods, as in Australia (5 years ± 1 year). A BIA may also account for the diffusion of a technology over that timeframe (i.e. the speed and scale at which the technology is adopted). Since 2017, NICE has applied a budget impact test to technologies assessed through its technology appraisal and highly specialized technology programs, using a £20 million threshold in any of the first 3 years of a technology’s use within the NHS. A BIA may be particularly pertinent for a high-cost technology and can dispel the assumption that it would be unaffordable to implement based purely on its high per-patient cost. For example, an expensive new treatment for a rare disease, such as the gene therapy atidarsagene autotemcel (also called Libmeldy), may be costly purely on a per-patient-treated basis [Citation29]. However, given the rarity of the disease, very few individuals would require treatment, and so the overall total cost could be lower than that of a drug which is half as expensive per patient but is used to treat 100 times as many patients. This information is useful not just to address questions of financial affordability but also because the aggregate opportunity cost of investing in a given technology is directly linked to the total amount of resources that it consumes and which are therefore not available for use elsewhere.
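As a stylized illustration of how the six elements listed above combine into a budget impact estimate, the sketch below compares the cost of the current intervention mix with a new mix that includes a hypothetical high-cost technology over a three-year horizon with assumed diffusion. The population size, uptake rates, and costs are purely illustrative.

```python
# Stylized budget impact analysis (BIA) sketch over a three-year time horizon.
# Population size, uptake (diffusion) rates, and costs are hypothetical.

eligible_population = 200            # patients eligible for treatment each year
cost_current_per_patient = 20_000    # annual cost per patient, current intervention mix
cost_new_per_patient = 150_000       # annual cost per patient on the new technology
other_care_offset = 5_000            # condition-related care costs avoided per treated patient
uptake_by_year = [0.20, 0.50, 0.80]  # assumed diffusion of the new technology

for year, uptake in enumerate(uptake_by_year, start=1):
    treated_new = eligible_population * uptake
    treated_current = eligible_population - treated_new

    # Budget if the current intervention mix continues unchanged
    budget_current_mix = eligible_population * cost_current_per_patient

    # Budget for the new intervention mix (new technology partly displaces current care)
    budget_new_mix = (treated_new * (cost_new_per_patient - other_care_offset)
                      + treated_current * cost_current_per_patient)

    budget_impact = budget_new_mix - budget_current_mix
    print(f"Year {year}: budget impact = {budget_impact:,.0f}")
```

Each year’s figure could then be compared against an affordability threshold, such as the £20 million test described above.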

1.5. Is life-cycle health technology assessment the way forward?

Life-cycle HTAs (LC-HTAs) are intended to build on the standard HTA methodology by applying that process across the life-cycle of the health technology development journey. Kirwin et al. [Citation30] recently introduced a conceptual framework for LC-HTA covering a series of specific stages: Preassessment; Safety and Efficacy Assessment; HTA; Adoption/De-adoption; and Reassessment. The development of an adapted HTA process was deemed necessary to better support decision-makers because the standard HTA process can be cumbersome, is not conducive to the generation of and adaptation to new evidence, and can be problematic when there is significant uncertainty involved. For example, the preassessment stage can allow technology developers to better align evidence generation with what is required for decision-making through the provision of early feedback and guidance from HTA bodies, regulators, and decision-makers. Developers would better understand what is required to demonstrate that their technology is cost-effective, potentially allowing a more focused and efficient development pathway [Citation30]. This would be particularly helpful for small- to medium-sized companies where the research and development budget is constrained and achieving recognizable milestones on the development pathway is key to securing further investment even before the technology has reached the marketplace. The LC-HTA process has also been described by Kirwin et al. [Citation30] as being better geared than the standard HTA process toward enabling the adoption of high-value but potentially high-risk and more costly technologies through a more responsive and agile process of reassessment. The take-up and use of the LC-HTA framework by HTA agencies and researchers, and the technology adoption decisions made, will be of interest to technology developers over the coming years. This may be especially the case with regard to risk-based pricing and research-oriented managed access, the latter being similar in nature to the early value assessment process and evidence generation plans recently developed and piloted by NICE [Citation31].

Finally, once this evidence has been generated, it should be possible (and perhaps even mandated) to reevaluate whether the evidence generation was successful in reducing decision uncertainty and, furthermore, whether the high-cost technology is cost-effective. If it is not, then it should not be recommended for full-scale adoption and steps should be taken to ensure de-implementation. Efforts must also be made to ensure that the decommissioning of existing services is given much greater consideration and is undertaken routinely, with the focus not just on new technologies [Citation32]. Evidence also suggests that, despite earlier access to promising treatments through managed access routes such as the Cancer Drugs Fund, the NHS has failed to realize any meaningful value, in part due to the lack of outcome data collected to allow a robust reevaluation of overall cost-effectiveness [Citation33].

2. Conclusion

A high-cost technology should not be treated differently when being evaluated: the health technology assessment process is a robust evaluation framework and remains appropriate in this case. However, it may require adaptation over the coming years (with greater use of LC-HTA, for example), and any real-world evidence generated should be used to reevaluate whether the technology is cost-effective in actual real-world practice. Decision-makers should be prepared to de-adopt or de-prioritize technologies if needed, but we should by no means throw the baby out with the bathwater.

3. Expert opinion

We are seeing the increased use of managed access and real-world evidence to streamline the evidence generation pathway and speed up time to adoption for technologies, and this is particularly appealing for high-cost orphan drugs. The advantages for individual patients seem clear and obvious, with quicker access to potentially effective drugs which might otherwise not have been made available due to limited evidence and uncertainty over whether they represent value for money. These benefits provide quality-of-life improvements for both patients and families or carers, allow patients to recover from and survive conditions that they previously might not have, and can reduce the stress for patients and families unable to afford such high-cost technologies. For the company involved in developing the high-cost technology, the advantages that stem from expedited adoption are clear and include a much quicker and more streamlined route to market and reimbursement. The downside is the risk that an incorrect decision is made and hence resource use is sub-optimal from the perspective of both society and the health-care system.

The process may also result in perverse incentives for the developers of high-cost technologies (and indeed any technology) – an incentive incompatibility problem. If developers can make greater use of the expansion of managed access routes and the willingness to use real-world evidence generation, then there is little incentive to invest in the costly and time-consuming evidence generation needed to reduce decision uncertainty. Lowering this evidential bar may lead to reduced up-front evidence generation by companies. Thus, there may be a need for further policies to ensure that the required evidence generation actually occurs. In the case of orphan drugs, we know that if this risk is placed solely on the developer, then there will be a much lower chance of development because it will be deemed too risky and less likely to be profitable. For this reason, there needs to be a balanced approach which encourages development without ‘too much’ risk being accepted by the public sector. A further point to note here is that the developer does not generally sell to just a single payer. These technologies are developed for several markets (e.g. the US, European countries, Asia, etc.), each with its own individually negotiated prices. Therefore, the risk should be shared between these payers, but overcoming the free-rider problem requires a willingness and ability for these payers to co-operate in the risk sharing.

The greater use of LC-HTA, with health-care decision-makers working alongside technology developers, can lead to improvements in evidence generation and streamline decision-making so that companies can develop the evidence required. An exemplar of the potential of this approach can be seen in earlier engagement by diagnostic test developers with health economists to help develop early economic models which highlight whether a new test has the potential to be cost-effective and is therefore worth developing further [Citation34]. Working together would also highlight how the risk of developing new technologies and bringing them to patient benefit shifts between public and private investment. For example, the basic laboratory science may be funded through public sector or charity funding; this may lead to a proof-of-concept technology which, with private sector funding, may then be developed to meet the regulatory hurdles required to bring it to market.

Evidence generation can be done in a stepped fashion to allow early decision-making in the face of uncertainty while mandating the collection, analysis, and reporting of additional evidence to allow more robust evaluations to reevaluate whether a new high-cost technology is cost-effective or not. Sometimes, this may require a ‘simple’ reevaluation with updated data on the key parameters of interest, but in other circumstances it could involve a completely new evaluation. The initial reevaluation could, for example, fall in line with the timeframe used in any accompanying budget impact analysis, although this also depends on when sufficient data have accumulated (e.g. if the initial budget impact analysis time horizon is, say, 3 years, then the reevaluation should take place at 3 years). What is required will also depend upon the time and resources available for the evaluation, the timeframe between the initial evaluation and the reevaluation, and broader changes in the evidence base and disease management, such that both the model structure and the relevant comparators may need to change. Determining how best to operationalize this requires further empirical work, including pilot studies to develop evidence-based recommendations for the LC-HTA approach.

Overall, the HTA methodology is a robust and transparent evaluative process, but what is often lacking is reevaluation using data generated post-adoption. This is where real-world evaluation can play an important part, regardless of whether a high-cost technology is ‘adopted’ via managed access or not. This is because traditional evidence generation often does not account for real-world settings, where pathways are not quite as clean as in the simplified versions used to model cost-effectiveness. Patient and clinician behavior can never be perfectly modeled in an HTA, but these behaviors can potentially be captured in real-world settings. Decision-makers should mandate evidence generation plans with data routinely collected to allow re-analysis; that way, we can make earlier decisions under uncertainty but not wait too long to correct them if subsequent evidence indicates that we got it wrong the first time. This applies equally to new and existing technologies, and we must not fall into the trap of progress bias, expecting that progress can only be made by adding new technologies and services; we must also ensure that we reevaluate to allow the de-adoption and disposal of existing technologies using equally robust methods [Citation35].

Article highlights

  • Reimbursement of high-cost technologies can differ between countries for many reasons, as can the involvement of managed access routes to allow quicker decision-making.

  • Health technology assessment is a rigorous and evidence-based process but is often used late in the product development cycle. However, recent years have seen a push from both the public and private sectors toward using real-world evidence to speed up adoption decisions, allowing patients faster access to technologies.

  • The earlier use of real-world evidence requires a greater understanding of the impact it could have on the generation of the evidence needed for robust and rigorous decision-making, and of where the risk and cost of development fall.

  • Life-cycle health technology assessment should be routinely used for all technologies, should include reevaluation, and should not focus only on the evaluation of new high-cost technologies.

Declarations of interest

The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

Reviewer disclosures

Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

Additional information

Funding

This paper was not funded.

References