Editorial

Can the pharmaceutical industry embrace comparative effectiveness research? A view from inside

Pages 565-568 | Published online: 09 Jan 2014

In my view, success for pharmaceutical companies is already linked to how providers, patients and payers evaluate the value of their medicines, including through comparative effectiveness research (CER) Citation[1]. Well-designed and well-executed CER facilitates improved clinical decision making by patients and their physicians, and enables the commercial success of high-value medicines. However, I have been asked to address a different question in this editorial: ‘Can the pharmaceutical industry embrace CER?’ My answer is colored by almost 30 years of experience in biomedical research, most of it working for major pharmaceutical companies.

What is CER?

CER has been defined in a variety of ways. Here, I adopt the formulation of Greenfield, who argued that CER answers the principal question: what is the best treatment for a specific patient during an encounter with a specific doctor, in terms of both benefits and harms? To answer that question, CER includes the following: head-to-head comparisons of the proposed interventions versus the best available alternatives; emphasis on both benefits and harms, the harms having the most immediate impact on patients; the examination of effectiveness in key subgroups within the disease, so that a given patient and doctor can easily match the patient to the appropriate group; the study of multiple relevant outcomes; and the impact of the provider and the differential quality of care rendered Citation[2].

Comparative effectiveness evidence comprises not only the results of traditional head-to-head randomized clinical trials (RCTs), but also the results of head-to-head randomized pragmatic clinical trials, comparative observational studies (retrospective, prospective and hybrid), indirect and mixed treatment comparisons using network meta-analysis, and comparative models. Unfortunately, evidence from RCTs is limited by a variety of considerations, including cost, feasibility and misaligned incentives, and is therefore frequently inadequate by itself to inform health policy Citation[3,4]. In addition, those trials that are conducted are frequently limited by their focus on carefully selected, adherent populations, which underestimates the harms to patients observed in real-world practice. Moreover, the majority of studies are designed with intermediate measures as the trial end points, and do not provide information on the broad range of outcomes that are really of interest to decision makers – including patients Citation[4].

The biopharmaceutical industry & CER

Over the last two decades, formal Health Technology Assessment of marketed pharmaceuticals – which incorporates relative effectiveness evidence – has been widely adopted by healthcare systems around the world, motivated by the desire to contain healthcare costs while improving health outcomes. The pharmaceutical industry has provided the information requested by these organizations, including evidence from RCTs – indeed, between 2000 and 2010, 70% of US FDA approval packages contained comparative efficacy data Citation[5]. Although not all of this evidence is given equal credence by all payers and policy makers, companies routinely provide it where it is requested in submission guidelines; it is also critical to their value argument for obtaining access and reimbursement Citation[6]. This information influences pricing negotiations between the company and the payer. However, both payers and pharmaceutical companies factor a variety of considerations into their decisions, including comparative and cost-effectiveness Citation[7]. Indeed, there remains a healthy dialogue among stakeholders as to what constitutes the value of innovation and how to reward innovators.

In order for pharmaceutical companies to fully embrace CER, there needs to be clarity on what is considered credible evidence. Three major questions need to be answered. What CER evidence will be considered both relevant and reliable by payers? How can there be a level playing field for companies and other stakeholders in the dissemination of CER? How should observational CER studies be considered within the context of available knowledge?

What is relevant & reliable CER?

The goal posts (i.e., the certainty required regarding safety, effectiveness and comparative effectiveness) employed by payers for providing access and reimbursement for new treatments have moved significantly over the last 20 years and continue to do so. This reflects the general adoption of an evidence-based medicine framework, with increasing emphasis placed on comparative evaluation against the best available alternatives Citation[1]. Although this is an appropriate evolution of the field, it has been complicated by pervasive skepticism of comparative evidence produced by pharmaceutical companies. In response, many in the industry (including the author of this article) have championed the promulgation of good research practices for CER Citation[8–11] to assist decision makers in separating the ‘wheat from the chaff’; regardless, many key stakeholders discount the results of company-sponsored CER, except where it is the only relevant evidence available. Ignoring or discounting well-developed evidence at best limits improvements in clinical practice and at worst represents a real harm to patients.

Analysis of the ever-growing body of data generated outside of clinical trials – in electronic health records, claims and registries – will significantly increase the body of CER evidence. Establishing the credibility of this evidence, and using it to its maximum potential to understand the relative value of treatments, would be facilitated by: providing data access to appropriate users; developing clear methodology standards; preserving the privacy of patient information while facilitating data sharing; and promoting communication and dissemination practices that consider patient and practice diversity and recognize that findings vary with the analytic approach. Whether CER is conducted by a pharmaceutical company, a government, a health plan or academics, clear standards and practices increase its credibility and will help avoid risks to patients from erroneous conclusions drawn from individual studies.

Dissemination of CER

Pharmaceutical companies also suffer from ‘asymmetry’ between the strict regulations they must follow in disseminating CER results and the absence of these restrictions for other organizations, including public and private payers as well as academic institutions, government agencies such as the Agency for Healthcare Research and Quality and quasi-public agencies such as the Patient-Centered Outcomes Research Institute, which are conducting CER and communicating its results Citation[12]. The industry seeks a level playing field in the dissemination of CER relevant to decision makers.

Is CER – particularly from observational studies – good enough?

A methodological issue actively debated among key stakeholders concerns the robustness of CER results not derived from head-to-head RCTs. The magnitude of effect observed in observational CER is generally smaller than what has been viewed as reliable in traditional epidemiologic studies; moreover, most head-to-head RCTs are designed as noninferiority studies Citation[13]. Some also question whether consistency in results across CER studies increases confidence in CER evidence. For them, the preferred solution is to promote the conduct of large simple RCTs and/or naturalistic pragmatic clinical trials, in the hope that more such studies will be publicly funded by institutions such as the NIH.

Where do we go from here?

Regardless of how these questions are answered, the world is changing rapidly – we are on the cusp of an era of ‘Big Data’ in healthcare research. Indeed, in the USA, the NIH has committed to fund up to $24 million per year for 4 years to establish six to eight investigator-initiated Big Data to Knowledge Centers of Excellence Citation[101]. The increasing adoption of electronic medical records, and their integration with healthcare claims data sets and other sources of data on patient attitudes and behaviors, will usher in a new era in which the volume of CER from observational studies will dwarf that of RCTs and pragmatic trials.

Given the dynamic nature of the landscape, all stakeholders will find themselves utilizing the broad range of CER evidence in assessing marketed treatments. We will naturally move toward a continuous learning model for healthcare that will supplant the current model, which is driven by the results of episodic ‘definitive’ RCTs. Ongoing collaborative dialogue among all stakeholders, including patients, will be required to shape a consensus about what constitutes good enough evidence for particular healthcare decisions; one size does not fit all, and a uniform standard would be inefficient Citation[4].

As evidence generation and analysis become democratized, some stakeholders may feel that the differential restrictions on evidence dissemination are not worth addressing. This would be short-sighted, however, since it would only reinforce the status quo, in which comparative effectiveness studies have limited impact in changing patient care and clinical practice Citation[14].

Recommendations

  • The pharmaceutical industry has already embraced an evidence-based medicine framework for the evaluation of its medicines, and is doing so increasingly in partnership with payers and providers. However, companies will neither generate all the comparative evidence desired nor embrace any particular approach for providing such evidence until there is greater transparency and consistency on the part of payers and regulators in their evaluation and use of CER, and companies can fully leverage their investment.

  • To address publication bias and to guard against data dredging, all researchers should be required to register their studies if they want the results to be viewed as compelling enough to inform health policy decisions. This would be analogous to the registration of clinical trials on ClinicalTrials.gov.

  • To address unfamiliarity in assessing the quality of CER, adoption of good research practices needs to be demanded by journal editors and evaluated as part of the peer-review process. To assist in assessing the quality of CER studies, a joint effort by the International Society for Pharmacoeconomics and Outcomes Research, the National Pharmaceutical Council and the Academy of Managed Care Pharmacy will produce interactive questionnaires to help reviewers assess the relevance and credibility of CER studies Citation[102].

  • To address the shifting of the goal posts for CER evidence, payers must become more transparent in how they render decisions. Payers must clearly disclose to both innovator companies and the public how their considerations of value and budget impact separately contributed to each of their reimbursement and access decisions. If payers want timely evidence about the comparative effectiveness of a new medicine, they should explore conditional access approval schemes to provide companies with the opportunity to collect the desired evidence through real-world use in more targeted populations. In addition, continued effort must be put into exploring innovative risk-sharing and outcomes-based contracting models. Although a major obstacle to widespread adoption has been the difficulty and cost associated with collecting the data required to adjudicate such arrangements, this will improve as the healthcare information infrastructure becomes more robust.

  • The question of when CER is good enough will be answered either by new analytic approaches that provide more robust effect estimates or by a shift to a continuous learning paradigm. With respect to the former, there are emerging developments in machine learning and data mining that may revolutionize our analysis of observational data. With respect to the latter, as CER becomes embedded in healthcare delivery, the lessons drawn from it will be continuously revised through the ongoing analysis of outcomes.

Financial & competing interests disclosure

ML Berger is an employee of Pfizer Ltd. The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

No writing assistance was utilized in the production of this manuscript.

References

  • Berger ML, Grainger D. Comparative effectiveness research: the view from a pharmaceutical company. Pharmacoeconomics 28(10), 915–922 (2010).
  • Greenfield S. Comparative effectiveness and the future of clinical research in diabetes. Diabetes Care 36, 2146–2147 (2013).
  • Mansley EC, Elbasha EH, Teutsch SM, Berger ML. The decision to conduct a head-to-head comparative trial: a game theoretic analysis. Med. Decis. Making 27(4), 364–379 (2007).
  • Teutsch SM, Berger ML, Weinstein MC. Comparative effectiveness: asking the right questions, choosing the right method. Health Aff. (Millwood) 24(1), 128–132 (2005).
  • Goldberg NH, Schneeweiss S, Kowal MK, Gagne JJ. Availability of comparative efficacy data at the time of drug approval in the United States. JAMA 305, 1786–1789 (2011).
  • Wang A, Halbert RJ, Baerwaldt T, Nordyke RJ. US payer perspectives on evidence for formulary decision making. J. Oncol. Pract. 8, 22s–27s (2012).
  • Teutsch SM, Berger ML. Evidence synthesis and evidence-based decision making: related, but distinct processes (editorial). Med. Decis. Making 25(5), 487–489 (2005).
  • Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources. The ISPOR good research practices for retrospective database analysis task force report – part I. Value Health 12(8), 1044–1052 (2009).
  • Dreyer N, Schneeweiss S, McNeil B et al. GRACE principles: recognizing high-quality observational studies of comparative effectiveness. Am. J. Manag. Care 16(6), 467–471 (2010).
  • Berger ML, Dreyer N, Anderson F, Towse A, Sedrakyan A, Normand SL. Prospective observational studies to assess comparative effectiveness: the ISPOR good practices task force report. Value Health 15, 217–230 (2012).
  • Caro JJ, Briggs AH, Siebert U et al. Modeling good research practices – overview: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-1. Value Health 15, 796–803 (2012).
  • Neumann P. Communicating and promoting comparative-effectiveness research findings. NEJM 369, 209–211 (2013).
  • Berger ML. Accelerating the development of comparative effectiveness information: does Phase IIIb represent an opportunity? ISPOR Connect. 18(5), 5–6 (2012).
  • Timbie JW, Fox DS, Van Busum K, Schneider EC. Five reasons that many comparative effectiveness studies fail to change patient care and clinical practice. Health Aff. 31, 2168–2175 (2012).
