Meeting Report

Comparative effectiveness research and healthcare reform: topics of the day

Mitch DeKoven, Erica Goldberg & Julia Powers
Pages 395-397 | Published online: 09 Jan 2014

Abstract

The International Society for Pharmacoeconomics and Outcomes Research recently sponsored the 16th Annual International Meeting. Participants included professionals from industry, clinical practice, government, academia and health research. The purpose of the conference was to share information on increasing the efficiency, effectiveness and fairness with which available healthcare resources are used to improve health.

In the wake of recent healthcare reform, the topic of comparative effectiveness research (CER) dominated the agenda at the 16th Annual International Society for Pharmacoeconomics and Outcomes Research (ISPOR) meeting. Under the American Recovery and Reinvestment Act, US$1.1 billion was allocated for CER; as such, CER has taken on increased urgency and importance. In addition, the Patient Protection and Affordable Care Act included a provision for the Patient-Centered Outcomes Research Institute. The institute was created to conduct research on the best available evidence to help patients and their healthcare providers make more informed decisions [101], since a therapy must not only be efficacious but also represent good value. Based upon this mandate, the ISPOR meeting emphasized the shifting focus of research away from sole reliance on randomized controlled trials (RCTs) and towards the inclusion of real-world data. This shift is particularly relevant to payer decision-making regarding coverage and reimbursement.

This new healthcare environment presents opportunities and challenges for pharmacoeconomics and outcomes research professionals, as reflected in the American Heart Association’s principles for CER, highlighted in one of the conference’s leading presentations. The Association stated that “CER should ideally build on data provided by RCTs by evaluating medical interventions in more diverse populations and broader clinical contexts” [1], highlighting the importance of using CER in conjunction with RCTs to identify the best value for a wide range of patients in a variety of clinical settings.

The future of CER

The second plenary session, moderated by J Sanford Schwartz from the University of Pennsylvania (PA, USA), highlighted the past and future of CER. Carolyn Clancy of the Agency for Healthcare Research and Quality (AHRQ; MD, USA) explained how recent legislation has provided the resources and momentum for change and how AHRQ serves as a ‘convener’ for system transformation. For example, patient-centered outcomes research at AHRQ has emphasized both the synthesis of existing evidence and the creation of new evidence. AHRQ now offers the Effective Health Care Program, which includes medication guides geared towards policymakers, clinicians and consumers. Finally, from 2008 to 2010, AHRQ made investments in evidence generation, including requests for registries in priority conditions, the Developing Evidence to Inform Decisions about Effectiveness (DEcIDE) Network, the Clinical and Health Outcomes Initiative in Comparative Effectiveness (CHOICE) to establish pragmatic clinical CER studies and the Innovative Adaptation and Dissemination of AHRQ CER Products (iADAPT) program.

One of the conference’s forum sessions, sponsored by the ISPOR Prospective Observational Clinical Studies Task Force and led by co-chairs Marc Berger from Ingenix Life Sciences (New York, NY, USA) and Sharon-Lise Normand from Harvard Medical School (Boston, MA, USA), discussed prospective observational studies conducted for CER. The task force suggested that observational studies or pragmatic controlled trials can provide credible and useful information for CER, in particular if there is little or no treatment preference in the prestudy clinical context. Clearly defined study questions, a well-established study protocol and a detailed statistical analysis plan are examples of good execution practices for CER when using prospective observational studies.

Patient registries in CER were highlighted via an issue panel moderated by Nancy Dreyer of Outcome (Cambridge, MA, USA). Even though registries have difficulty in achieving comparable groups without randomization, they are still useful in answering many questions. From the manufacturer perspective, registries can provide value in several ways. For example, registries offer safety information in special or under-represented populations, evaluate the consistency of efficacy and safety across subgroups and in broader populations, address new questions from clinical practice and assist in understanding treatment patterns and their effect on outcomes. As such, observational study data (e.g., registries) can be used to complement and support RCTs and are less costly than clinical trials. The Good Research for Comparative Effectiveness Initiative, supported by the National Pharmaceutical Council, is developing a validated checklist for CER observational studies. This checklist may add further value to patient registries and other types of observational studies.

Data access in an era of CER

As CER takes hold globally, several analytic tools are being developed to help researchers move from raw health data to person-level statistical analysis. One such innovation is Project Libra, a new analytic tool for CER analyses of multipayer claims databases, as described by Thomson Reuters (Washington, DC, USA). Project Libra is a multisource, multiyear patient health history repository that runs on a standard web browser over a secure internet connection. It provides next-generation analytic applications that query the data in a more efficient data structure, allowing meaningful analysis through query tools rather than custom programming. As such, it can improve researchers’ access to data and shorten study timelines.

Along the same lines, the Centers for Medicare and Medicaid Services (Baltimore, MD, USA) has launched a public use file pilot project to increase access to its detailed claims data for all fee-for-service Medicare beneficiaries. Each type of care (e.g., inpatient, outpatient, durable medical equipment and drugs) will have one stand-alone public use file that has been cleansed, de-identified and disseminated to selected pilot researchers to simulate the actual experience of conducting CER.

Methodologies in an era of CER

Albert Wu of Johns Hopkins University (Baltimore, MD, USA) believes the main objective of much of healthcare is improving how the patient feels and functions (effectiveness equals patient outcomes). As such, Johns Hopkins Medicine has an electronic patient record in which a clinician can schedule a patient survey, thereby adding patient-reported outcomes (PROs) to administrative data. According to Amy Abernethy at Duke University Medical Center (Durham, NC, USA), incorporating PROs in CER is important because how patients feel and function helps determine whether they will comply with treatments.

Owing to the growing importance of PROs, a range of new methods has emerged to value outcomes: preference-based methods (e.g., conjoint analysis, discrete choice experiments and best–worst scaling) and multicriteria decision analysis (e.g., the analytic hierarchy process and benefit–risk assessment tools). Recently, health technology assessors have turned their attention towards PROs and, as a result, new paths for the evaluation of costs and benefits focusing on a single indication and its comparators have emerged, particularly at the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany, as described during a panel moderated by Michael Drummond of the University of York (York, UK). According to the panel, IQWiG will not use quality-adjusted life year-based or threshold-based methods but will work within a reference pricing system focusing on efficiency frontiers, while exploring the use of conjoint analysis and the analytic hierarchy process as methods for identifying, prioritizing and valuing PROs.
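As a concrete illustration of the analytic hierarchy process mentioned above, the sketch below derives priority weights for three hypothetical outcome criteria from a pairwise comparison matrix via its principal eigenvector; the criteria and judgments are assumptions for illustration only and do not reflect IQWiG’s actual methodology.

```python
# Analytic hierarchy process sketch (hypothetical judgments): derive
# priority weights for three outcome criteria from a reciprocal
# pairwise comparison matrix via its principal eigenvector.
import numpy as np

# Saaty-style pairwise judgments: symptom relief vs physical function vs side effects.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
principal = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, principal].real)
weights /= weights.sum()                      # normalized priority weights

# Consistency check: consistency index against the random index (0.58 for a 3x3 matrix).
consistency_index = (eigenvalues.real[principal] - 3) / (3 - 1)
print(weights.round(3), round(consistency_index / 0.58, 3))
```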

United BioSource Corporation (Bethesda, MD, USA) discussed trial simulation and its ability to inform the design of pragmatic trials, building on what is known from exploratory studies to test hypotheses in pragmatic trial settings. Such a simulation proceeds through patient creation (each patient is assigned a set of baseline characteristics), randomization to a treatment group, prediction of outcomes (detailed statistical modeling is required to properly link predictors to outcomes using baseline and intermediate variables) and trial exit.
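The sequence above can be made concrete with a minimal simulation sketch; it is a hypothetical model for illustration, not the United BioSource tool, and assumes a binary outcome, a simple logistic link between baseline characteristics, treatment and outcome, and random dropout as the trial exit.

```python
# Minimal pragmatic-trial simulation sketch (hypothetical model): a binary
# outcome driven by baseline characteristics, a treatment effect and
# random dropout ("trial exit").
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_trial(n_patients=1000, treatment_log_odds=-0.4, dropout_rate=0.1):
    # 1. Patient creation: assign baseline characteristics.
    age = rng.normal(65, 10, n_patients)
    severity = rng.normal(0, 1, n_patients)

    # 2. Randomization to treatment (1) or control (0).
    treated = rng.integers(0, 2, n_patients)

    # 3. Outcome prediction: logistic model linking baseline variables and
    #    treatment to the probability of the event.
    log_odds = -1.0 + 0.02 * (age - 65) + 0.5 * severity + treatment_log_odds * treated
    event = rng.random(n_patients) < 1 / (1 + np.exp(-log_odds))

    # 4. Trial exit: a fraction of patients drop out and contribute no outcome.
    completed = rng.random(n_patients) >= dropout_rate

    treated_rate = event[(treated == 1) & completed].mean()
    control_rate = event[(treated == 0) & completed].mean()
    return treated_rate, control_rate

print(simulate_trial())
```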

Healthcare decision-makers face choices among a growing number of alternative treatments, so comparisons among these choices are paramount. However, head-to-head RCTs are not always available; therefore, indirect comparisons have a growing role in an era of CER. According to the Analysis Group (Boston, MA, USA), these indirect comparisons include pooled analyses of clinical trial data, matching-adjusted indirect comparison, adjusted indirect comparison and unadjusted indirect comparison. Key questions from the payer perspective around indirect comparisons include whether the data are appropriate, whether the correct methodology was used and how reliable the results are.
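To illustrate one of the approaches listed above, the Bucher-style adjusted (anchored) indirect comparison combines two trials that share a common comparator; the sketch below assumes each trial reports a log odds ratio and its standard error, and the numeric inputs are hypothetical.

```python
# Adjusted (anchored) indirect comparison sketch: given trials of A vs C and
# B vs C reporting log odds ratios and standard errors, estimate A vs B
# through the common comparator C.
import math

def indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    log_or_ab = log_or_ac - log_or_bc                 # effect of A vs B via C
    se_ab = math.sqrt(se_ac**2 + se_bc**2)            # variances add
    ci_low = math.exp(log_or_ab - 1.96 * se_ab)
    ci_high = math.exp(log_or_ab + 1.96 * se_ab)
    return math.exp(log_or_ab), (ci_low, ci_high)

# Hypothetical inputs: A vs C odds ratio 0.70 (SE 0.12), B vs C odds ratio 0.85 (SE 0.15).
or_ab, ci = indirect_comparison(math.log(0.70), 0.12, math.log(0.85), 0.15)
print(round(or_ab, 2), tuple(round(x, 2) for x in ci))
```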

Healthcare reform

In 2010, the US Congress passed major healthcare reform legislation under which employers will be penalized if they do not offer healthcare coverage, small businesses can purchase qualified coverage via state-based health insurance exchanges and the Medicare prescription drug ‘doughnut hole’ will be phased out. According to Steve Phurrough at the Center for Medical Technology Policy (Baltimore, MD, USA), this new environment will probably have an impact on outcomes research in the areas of evidence-based practices relating to prevention, the translation of interventions from academic settings to real-world settings and healthcare delivery system improvement.

A panel moderated by Peter Neumann from Tufts Medical Center (Boston, MA, USA) highlighted the potential benefits of parallel review by the US FDA and the Centers for Medicare and Medicaid Services, which could reduce time to reimbursement, enhance patient access to new medical technology and lower industry development costs. This sentiment is also shared globally: the Green Park Collaborative held its first meeting in London, UK, on 17 March 2011 to identify the steps needed to produce technology-specific guidance documents with recommendations for the design of clinical studies that address the information needs of payers and health technology assessment bodies from a number of different countries [102]. The challenge around parallel review is reaching a consensus on what level and type of evidence is acceptable to both agencies.

Richard E Ward of Reward Health Sciences (Ontario, Canada) encouraged the analytics community to migrate from a current focus on variability (e.g., large databases to increase sample size) to one that also addresses bias (e.g., comparability of data and patients), and to trust those with expertise in epidemiology and biostatistics to use the data accordingly. Similarly, the Medicare electronic health record incentive program, which provides incentive payments to eligible professionals, hospitals and critical access hospitals that demonstrate ‘meaningful use’ of certified electronic medical record technology, may enhance current databases for rapid learning, analyses of product use and CER, particularly owing to the clinical quality measures mandated by the Federal Government by 2015.

Financial & competing interests disclosure

Mitch DeKoven, Erica Goldberg and Julia Powers are employed by the IMS Consulting Group. No products from this company are reviewed in this article. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

No writing assistance was utilized in the production of this manuscript.

References

  • Gibbons RJ, Gardner TJ, Anderson JL et al. The American Heart Association’s principles for comparative effectiveness research. Circulation 119, 2955–2962 (2009).

Websites
