Editorial

Comparative effectiveness: beyond the buzz

Pages 1036-1038 | Published online: 19 Nov 2012

Despite the enormous contribution of medicines to enhancing health, the quality of pharmaceutical innovation varies widelyCitation1. In some cases a true improvement for patients can be achieved, whereas in other situations only marginal benefits are observed. Therefore, once medicines claimed to be innovative approach the market, healthcare policy-makers want to better understand their added therapeutic value. It is also clear that the evidence required to demonstrate such added value goes beyond the traditional evidence on efficacy and safety required by market authorisation bodies: it is not enough to show that a drug works better than placebo and that its incidence of side-effects is acceptable; evidence is needed about how much better the new drug is than the current standard of care.

Even today, several medicines are being approved that may not have much added clinical benefit. This is not necessarily an issue if the new drug is introduced onto the market at the same price/reimbursement level as existing medicines. Indeed, the new products might have specific characteristics (such as a different interaction profile, another galenic form, …) that justify their place in the market next to the existing ones. However, once a price premium is claimed for a new drug, the least a decision-maker needs to know is whether an added therapeutic benefit is presentCitation2.

The terms ‘comparative effectiveness’ (often used in the US) and ‘relative effectiveness’ (often used in Europe) have been introduced to reflect the need for such data on added benefit. They are part of any Health Technology Assessment (HTA). HTA has been defined as a multidisciplinary field of policy analysis studying the medical, economic, social, and ethical implications of the development, diffusion, and use of health technology, such as a new medicineCitation3. In HTA not only comparative/relative effectiveness but also the cost-effectiveness of new technologies is assessed. Moreover, there is a strong focus on the social/ethical implications, i.e. assessing the medical/therapeutic need for a drug, as well as on guidance related to best practice.

In the remainder of this text I describe several issues that have arisen with the introduction of relative and comparative effectiveness.

Definitions

Relative effectiveness is defined as the extent to which an intervention does more good than harm compared to one or more intervention alternatives for achieving the desired results when provided under the usual circumstances of healthcare practiceCitation4.

Relative effectiveness is different from relative efficacy, in that the latter refers more to ideal circumstances and studies using intermediate end-points. The Agency for Healthcare Research and QualityCitation5 in the US states that a number of factors may limit the generalizability of results from efficacy studies: patients are often carefully selected, excluding patients who are sicker or older and those who have trouble adhering to treatment. Efficacy studies also often apply protocols that minimize bias and confounders, but may be impractical in usual practice. In contrast, effectiveness studies, which are conducted in practice-based settings, use less stringent eligibility criteria and assess longer-term health outcomes. They are intended to provide results that are more applicable to ‘average’ patients.

Yet today there seems to be no clear consensus as to whether clinical trials yield efficacy or effectiveness information. All data on drugs yield information that lies somewhere on an efficacy/effectiveness spectrum. Traditional placebo-controlled and blinded trials tend to sit at the efficacy side of the spectrum. The term ‘effectiveness’ moreover causes some confusion: while some use it to describe what actually happens in real life, others use it to describe clinical trials that are oriented as far as possible towards the effectiveness side of the spectrum. Unfortunately, there is today no consensus on these divergent views.

There is also some misunderstanding of the terms ‘relative’ and ‘absolute’. This is due to the well-known epidemiological logic that expressing benefits in absolute terms is more meaningful than presenting results in relative terms. It should be clearly stated that the term ‘relative’ in ‘relative effectiveness’ does not refer to ‘results expressed in relative terms’ but to ‘in relation to a comparator’.
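To illustrate the epidemiological point with purely hypothetical numbers: suppose an adverse outcome occurs in 10% of patients on the comparator and in 8% of patients on the new drug. Then

\[
\mathrm{RRR} = \frac{0.10 - 0.08}{0.10} = 20\%, \qquad
\mathrm{ARR} = 0.10 - 0.08 = 0.02, \qquad
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = 50,
\]

i.e. a relative risk reduction of 20% but an absolute risk reduction of only 2 percentage points. Neither way of expressing the result has anything to do with the ‘relative’ in ‘relative effectiveness’.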

A further question is whether ‘relative effectiveness’ and ‘comparative effectiveness’ mean the same thing. According to the US SenateCitation6, the term comparative effectiveness research (CER) means research evaluating and comparing health outcomes and the clinical effectiveness, risks, and benefits of two or more medical treatments or services (note that these include medicines as well). Title VIII of the American Recovery and Reinvestment Act of 2009 authorized the expenditure of $1.1 billion to conduct research comparing ‘clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat diseases, disorders, and other health conditions’.

CER is thus said to be used to better understand the effectiveness, risks, and benefits of medical interventions and strategies for managing diseases.

Just as in evidence-based medicine, a fully formulated CER topic consists of a set of questions that specify the patient populations, interventions, comparators, outcome measures of interest, timing, and settings (PICOTS) to be addressed.
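Purely by way of illustration (the condition, interventions, and outcomes below are hypothetical and not taken from any actual assessment), such a PICOTS-formulated question could be captured as follows:

```python
# Hypothetical PICOTS specification of a CER question; every entry is
# illustrative only.
picots_question = {
    "Population":   "adults with newly diagnosed type 2 diabetes",
    "Intervention": "hypothetical new oral antidiabetic drug X",
    "Comparator":   "metformin (current standard of care)",
    "Outcomes":     ["change in HbA1c", "major cardiovascular events", "severe hypoglycaemia"],
    "Timing":       "24 months of follow-up",
    "Setting":      "routine primary care",
}
```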

Hence, CER does not seem to add anything really new to the debate, as it differs little from the principles of evidence-based medicine or from relative effectiveness.

Note, moreover, that several authors criticize the fact that the definition of CER does not include an economic dimension. Weinstein and SkinnerCitation7 correctly state that CER should explicitly account for medical need and include cost-effectiveness and budget impact considerations. However, the term should then no longer be ‘comparative effectiveness’, since those words do not sufficiently cover these additional criteria; in the end, what we are then talking about is HTACitation8.

Comparator choice

When ‘more or better relative effectiveness data’ are demanded, one should ideally refer to trials that have the best possible alternative treatment as a comparator. This means that, ideally, a comparison with placebo would only be acceptable when it can be explained why a comparison with an active comparator was not possible (for instance, when the new drug is an add-on drug, it is acceptable that the comparator group receives current treatment plus placebo).

If this is not the case, and one still needs to know how the innovative medicine compares to the current best alternative, indirect comparisons or mixed treatment comparisons can be made either through value judgment or by modeling. Although a lot of progress has been made regarding the quality of these indirect comparisons, many methodological issues remainCitation9. That might explain why such comparisons are currently only adopted by a small number of countries (Australia, Canada, UK, Sweden).
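For readers less familiar with such methods, the sketch below illustrates the simplest form of adjusted indirect comparison discussed in the ISPOR guidanceCitation9, in which two placebo-controlled trials are linked through their common comparator; the drugs, effect sizes, and standard errors are hypothetical.

```python
import math

# Hypothetical summary results from two placebo-controlled trials:
# drug A vs placebo and drug B vs placebo, each expressed as a log odds
# ratio with its standard error (values are illustrative only).
log_or_A_vs_P, se_A = math.log(0.70), 0.12
log_or_B_vs_P, se_B = math.log(0.80), 0.15

# Adjusted indirect comparison of A vs B through the common comparator:
# the log odds ratios subtract, and the variances add.
log_or_A_vs_B = log_or_A_vs_P - log_or_B_vs_P
se_A_vs_B = math.sqrt(se_A**2 + se_B**2)

# 95% confidence interval, back-transformed to the odds-ratio scale.
lower = math.exp(log_or_A_vs_B - 1.96 * se_A_vs_B)
upper = math.exp(log_or_A_vs_B + 1.96 * se_A_vs_B)
print(f"Indirect OR, A vs B: {math.exp(log_or_A_vs_B):.2f} "
      f"(95% CI {lower:.2f} to {upper:.2f})")
```

The crucial assumption, and a main source of the methodological concerns mentioned above, is that the two placebo-controlled trials are sufficiently similar for their relative effects to be combined in this way.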

Data availability and uncertainty

At the time of a decision on added value there are often no effectiveness data available, beyond what can be assumed from phase III clinical trials.

Efficacy-oriented clinical trials leave uncertainties about performance in real life, as this performance can differ greatly from that established in a controlled experimental setting. There also remain uncertainties about who will be treated, adherence to therapy, impact on long-term individual and population outcomes, dosages, etc. Findings of efficacy-oriented trials are therefore incomplete, and systematic biases exist due to the selection of patients, the duration of the trial, and the choice of intermediate end-points. Hence, modeling techniques are in most cases needed to bridge from efficacy to effectiveness and from short-term to long-term outcomes. However, if decision-makers do not understand models or have difficulties in trusting or adopting them, there is clearly an issue. There is still a huge need for education in this regard in order to bring all involved stakeholders to the same level.
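By way of illustration only, the sketch below shows the kind of simple Markov cohort model often used for such bridging; the health states, transition probabilities, utility weights, and discount rate are all hypothetical.

```python
import numpy as np

# Purely illustrative three-state Markov cohort model (Stable, Progressed,
# Dead) that extrapolates a short-term trial effect to long-term outcomes.
utilities = np.array([0.85, 0.60, 0.0])          # assumed QALY weights per year

def discounted_qalys(p_progress, p_die_stable=0.02, p_die_progressed=0.10,
                     years=20, discount=0.03):
    """Return discounted QALYs per patient over the chosen time horizon."""
    transition = np.array([
        [1 - p_progress - p_die_stable, p_progress, p_die_stable],
        [0.0, 1 - p_die_progressed, p_die_progressed],
        [0.0, 0.0, 1.0],
    ])
    cohort = np.array([1.0, 0.0, 0.0])           # everyone starts in 'Stable'
    total = 0.0
    for year in range(years):
        cohort = cohort @ transition             # advance the cohort one year
        total += (cohort @ utilities) / (1 + discount) ** (year + 1)
    return total

# Hypothetical annual progression probabilities derived from a short trial:
incremental = discounted_qalys(p_progress=0.15) - discounted_qalys(p_progress=0.20)
print(f"Incremental QALYs, new drug vs comparator: {incremental:.2f}")
```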

It should be recognized, however, that, due to better implementation of methodological guidelines, the quality of health economic models has improved over timeCitation10. Moreover, as health economic expertise at the level of HTA bodies and competent authorities improves, a better distinction can be made between high- and low-quality models, which forces industry to improve the validity and reliability of the submitted material. The adoption of models could moreover go hand-in-hand with two-step procedures whereby an initial decision relying on modeling techniques is taken, followed by a second decision later on (for instance after 1 or more years, depending on the nature of the disease), when more effectiveness information based on post-marketing research is available (see also Carlson et al.Citation11). However, these post-marketing evaluations are also affected by several issues, such as selection bias, confounding factors, etc.

In any event, it is clear that better clinical trials (large pragmatic trials/effectiveness trials) will yield data that are more oriented towards the effectiveness side of the spectrum. This is highly desirable for the benefit of all stakeholders. Increased attention to these aspects will impose a paradigm shift whereby a medicinal product is developed not for the sake of market authorisation only, but also for reimbursement and market access.

Who is to assess the benefit?

Currently, the task of assessing the additional value is largely the responsibility of national and regional pricing and reimbursement authorities, sometimes supported by health technology assessment (HTA) bodies.

This leads to a situation whereby regulators and HTA bodies, although both aiming at the availability of medicines that contribute to public health, currently apply different approaches. Calls have been made for closer interaction and collaboration between both parties. The assessment of relative effectiveness, and the way it is organized, should be better co-ordinated and aligned in order to avoid duplication of effort and to deal with the identified challenges.

There is a need to engage with HTA bodies from very early in medicine development and throughout the medicine’s lifecycle. Maintaining the dialogue with HTA bodies, especially in the post-authorisation phase, is very important in view of the vast amount of data obtained through post-authorisation collection.

Concluding remarks

Comparative effectiveness or relative effectiveness is a logical approach when decisions are to be made on allocating healthcare money to (claimed-to-be) innovative drugs. Yet various issues and a lack of consensus remain regarding their correct definition and interpretation, the methods needed to assess them, and the role of other criteria in decision-making, such as cost-effectiveness, budget impact, and medical/therapeutic need. Health economists should take the opportunity of their research and fora to guide and educate each other and decision-makers in this regard.

Transparency

Declaration of funding

This manuscript was not funded.

Declaration of financial/other relationships

None.

References

1. NIHCM. Changing patterns of pharmaceutical innovation. A research report by the National Institute for Health Care Management Research and Educational Foundation. NIHCM, 2002.
2. Garattini L, et al. Pricing and reimbursement of in-patent drugs in seven European countries: a comparative analysis. Health Policy 2007.
3. INAHTA. http://www.inahta.org/HTA/. Accessed November 2012.
4. High Level Pharmaceutical Forum. http://ec.europa.eu/pharmaforum/. Accessed November 2012.
5. AHRQ. Methods reference guide for effectiveness and comparative effectiveness reviews, Version 1.0 [Draft posted October 2007]. Rockville, MD: Agency for Healthcare Research and Quality. Available at: http://effectivehealthcare.ahrq.gov/repFiles/2007_10DraftMethodsGuide.pdf. Accessed November 2012.
6. Senate of the USA. Amendment 2786. Comparative Effectiveness Research, 2009.
7. Weinstein MC, Skinner J. Comparative effectiveness and health care spending — implications for reform. N Engl J Med 2010;362:5.
8. Drummond MF, Schwartz JS, Jönsson B, et al. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care 2008;24:244–58.
9. Jansen JP, Fleurence R, Devine B, et al. Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health 2011;14:417–28.
10. Wolowacz et al. 2008.
11. Carlson JJ, Sullivan SD, Garrison LP, et al. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers. Health Policy 2010;96:179–90.
12. Annemans L, Cleemput I, Hulstaert F, et al. Valorising and creating access to innovative medicines in the European Union. Front Pharmacol 2011;2:5.
