Editorial

The need for increased harmonisation of clinical trials and economic evaluations


Abstract

Despite the increasing number of protocol and reporting guidelines available to trialists, there is still little guidance for protocol writers on incorporating patient-reported outcomes and economic assessments alongside clinical trials. It is unsurprising, therefore, that trial protocols present disproportionately less information for the economic evaluation component than for clinical outcomes. Costing methodologies, generalisability considerations, methods to address sensitive patient-reported outcome information and missing data are often insufficiently described in the trial protocol. This paper illustrates these shortcomings with specific examples and makes a case for shifting researchers’ attention from the reporting stage to the design stage, to promote the validity, generalisability and accountability of trial-based economic evaluations.

Economic evaluations conducted alongside clinical trials can provide valuable evidence to inform health policy. Although within-trial analyses have well-known limitations and must often be complemented with decision models [1], they can still provide reliable estimates of cost–effectiveness, or may supply primary data on resource use and patient-reported outcomes (PROs), such as health utility data, to inform decision models. The usefulness of such data depends on the quality with which they are collected, the appropriateness of the analysis methods and the completeness with which they are described in the resulting trial report. Trialists therefore have a legitimate interest in minimizing avoidable missing data through rigorous study design and clear data collection and analysis instructions in trial protocols and standard operating procedures, and also in reporting protocol compliance in the final publication.

Guidelines encouraging full reporting of all pertinent general and PRO-specific trial information are available in the form of the CONSORT statement [2] and its PRO extension [3]. For health economists, the 1996 BMJ guidelines [4] long served as the reference reporting framework before being superseded by the CHEERS statement in 2013 [5]. Guidance on protocol development is less well established. The recent SPIRIT 2013 statement provides general evidence-based guidance aimed at improving the quality of clinical trial protocols, recommending that 33 items be routinely included where appropriate [6]. However, there is no consolidated guidance for protocol writers wishing to include PROs or health economics data collection in the design of a study.

It is unsurprising, therefore, that there is evidence that where economic analyses are included in trial protocols, the information provided is sparse and may lack detail [7]. At present, economic evaluations are more often than not loosely specified in a few paragraphs of the methods section of trial protocols. This is not to say that a full data collection and analysis plan is never elaborated; however, that plan may not be readily available to all study personnel or, following publication of the trial report, to the general readership. This may lead to a lack of standardization across research sites and may adversely affect data quality [8]. Illustratively, a survey of UK clinical trials units reported that the units did not have standard operating procedures for conducting economic evaluations [9]. One possible reason for the lack of detail regarding PROs and economic evaluations is that trials are primarily driven by clinicians aiming to answer clinically relevant questions; as such, PRO experts and health economists may be approached only in the late stages of trial design. As Whitehurst and Bryan have argued, the clinical paradigm, interested primarily in inference, is fundamentally different from the economic evaluation paradigm, interested in estimating value and supporting decision making, and this difference sustains a division between trialists and economists [10]. A further reason may be that not all trials share the same objectives. For example, regulatory trials are often primarily concerned with safety and efficacy, whereas reimbursement trials focus on relevant patient outcomes and effectiveness. Consequently, the PRO assessment and economic evaluation components may be treated as a ‘tick box’ exercise in some trials, expedited in a subsection of the protocol using standard phrases and with disproportionately less detail than the clinically focused components. While ongoing work is investigating ways to harmonize approval processes [11], there is currently little incentive for thorough economic assessments alongside regulatory trials.

It has been argued [12] that there is a limit to how fully researchers can pre-specify analyses, as one can never know the result of an evaluation in advance. Nevertheless, this is not a valid excuse for failing to incorporate the range of relevant methodological considerations into the design, conduct and analysis of the trial. A comprehensive protocol for health utility assessment and economic evaluation is desirable to ensure accountability, raise the standard of economic evaluation practice, provide reliable evidence to inform clinical practice and health policy and minimize research waste [13].

Several methodological issues with relevance for trial-based economic evaluation require careful consideration at the design stage. One such issue is the assessment of costs as an integral component of trial-based economic evaluations. Unit costs are expected to vary across locations, so trial teams should consider the appropriateness of applying average reference costs across all centres in the trial; unless unit costs are missing completely at random, the average unit cost will most likely misrepresent the centre-specific cost [14]. A systematic review of economic evaluations conducted alongside trials funded by the UK Health Technology Assessment Programme found that only 52 of 95 reviewed studies used locally sourced unit costs [15]. Although it may be impractical or even impossible to collect unit costs from all centres involved in a study, alternatives such as sampling [16,17] and multiple imputation [18] are available and should be considered at the design stage.
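To make the point concrete, the sketch below uses entirely hypothetical centre names, resource use and unit costs to show how a single average unit cost can misstate centre-level costs whenever unit costs and resource use vary together across centres; it is an illustration of the general idea, not a reproduction of any method from the cited studies.

```python
# Hypothetical example: centre-level inpatient days and locally sourced
# cost per day at three invented centres.
centres = {
    "A": {"days": 120, "unit_cost": 310.0},  # e.g., higher-cost teaching hospital
    "B": {"days": 200, "unit_cost": 240.0},
    "C": {"days": 80,  "unit_cost": 275.0},
}

# A single national "average reference cost" applied to every centre
average_unit_cost = sum(c["unit_cost"] for c in centres.values()) / len(centres)

for name, c in centres.items():
    local = c["days"] * c["unit_cost"]          # cost using local unit cost
    averaged = c["days"] * average_unit_cost    # cost using the average
    print(f"Centre {name}: local = {local:.0f}, "
          f"averaged = {averaged:.0f}, error = {averaged - local:+.0f}")
# With these numbers the averaged figure understates centre A by 4200
# and overstates centre B by 7000, while total cost across centres is
# distorted centre by centre even though the grand total can look plausible.
```

The error term is exactly the kind of centre-specific distortion that sampling or multiple imputation of unit costs is intended to mitigate.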

A related issue is the generalizability of economic evaluation findings. The importance of ensuring a representative sample of recruiting centres has long been recognized [19,20], but gaps in the evidence remain: to what extent does this really matter, and how can it be achieved in practice? With respect to the first question, there would be little reason for concern if the centres enrolled in the trial were representative of the jurisdiction of interest. This could be achieved in two ways: deliberately choosing centres based on a number of covariates indicating that they are representative at jurisdiction level, or randomly selecting centres from the pool of available centres within the jurisdiction. Recent evidence suggests that the centre selection process is driven, as one might expect, by pragmatic considerations and is very rarely random [7]; therefore, neither condition is satisfied. Furthermore, there are indications that cost–effectiveness findings may differ across categories of healthcare providers [18]. With respect to the second question, Drummond et al. [20] suggested possible centre-level covariates and introduced the concept of a minimum number of patients recruited from each centre, but no relevant method has yet been developed. Moreover, the uptake of the recommended methods of centre-level adjustment, namely multilevel modeling and bivariate hierarchical modeling, still appears to be low [7].
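One of the two routes described above, random selection from the pool of eligible centres, is simple to operationalize. The sketch below is a minimal, hypothetical illustration (invented centre identifiers and provider categories) of drawing centres at random within provider categories, in proportion to each category's share of the pool, so the selected set reflects the jurisdiction's provider mix.

```python
import random

# Hypothetical pool of eligible centres, grouped by provider category
pool = {
    "teaching":  ["T1", "T2", "T3", "T4"],
    "district":  ["D1", "D2", "D3", "D4", "D5", "D6"],
    "community": ["C1", "C2"],
}

rng = random.Random(42)  # fixed seed so the draw is reproducible/auditable
total = sum(len(sites) for sites in pool.values())
n_centres = 6  # number of centres the trial can afford to recruit

selected = []
for category, sites in pool.items():
    # Allocate selections proportionally to the category's share of the pool.
    # (With awkward ratios, rounding may need a largest-remainder correction.)
    k = round(n_centres * len(sites) / total)
    selected += rng.sample(sites, k)

print(selected)  # 2 teaching, 3 district and 1 community centre
```

Documenting such a draw (seed included) in the protocol would also make the centre selection process transparent rather than purely pragmatic.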

There are also a number of PRO-specific methodological challenges that should be addressed during the design of trial-based economic evaluations. First, missing PRO data can be particularly problematic, as retrospective data capture is difficult and the data are often not missing at random; rather, PRO data may be missing from individuals suffering the poorest outcomes [21]. The literature suggests two strategies to reduce levels of avoidable missing data within a trial: appropriate selection of the PRO measure and the standardized implementation of optimal data collection methods across all study sites. Thus, the PRO measure should be carefully selected by the trial team, in consultation with stakeholders including patients, to ensure appropriateness and acceptability [22]. In addition, the protocol should describe methods to prevent missing data, for example, training of trial staff, participant reminders and centralized monitoring of PRO compliance [23]. Second, PRO assessment can generate ‘PRO alerts’, that is, instances where concerning levels of psychological distress (e.g., severe depression or suicidal ideation) or physical symptoms (e.g., pain) are recorded on a PRO and may require an immediate clinical response [24]. Evidence suggests that where PRO alerts occur in practice, research personnel may deliver non-protocol-driven interventions to aid participants, risking co-intervention bias [8]. Clearly, the trial team requires an a priori plan for managing such alerts, and any co-intervention should be recorded so that its resource use can be considered in the health economic evaluation. More guidance on handling such a delicate matter is needed: our recent systematic review concluded that relevant recommendations are currently lacking [25].
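The first point, that PRO data missing from the sickest patients are not missing at random, can be illustrated with a small simulation. The sketch below (hypothetical utility values and a made-up missingness mechanism) shows that when the probability of non-response rises as health utility falls, a complete-case mean overstates the true average utility.

```python
import random

rng = random.Random(1)

# Hypothetical "true" health utilities for 10,000 trial participants
true_utilities = [rng.uniform(0.0, 1.0) for _ in range(10_000)]

# Assumed mechanism: the lower the utility, the more likely the PRO
# questionnaire goes unreturned (probability of missingness = 0.6 * (1 - u)).
observed = [u for u in true_utilities if rng.random() > (1.0 - u) * 0.6]

true_mean = sum(true_utilities) / len(true_utilities)
observed_mean = sum(observed) / len(observed)
print(f"true mean = {true_mean:.3f}, complete-case mean = {observed_mean:.3f}")
# The complete-case mean is biased upward, because the poorest outcomes
# are systematically under-represented among the returned questionnaires.
```

This is why design-stage prevention (training, reminders, compliance monitoring) matters: once data are missing under such a mechanism, no complete-case analysis can recover the true mean.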

Despite the available guidance supporting high-quality trial-based economic evaluations, its uptake in practice remains uncertain. Costing, centre selection, PRO alerts and missing data are only examples of sensitive areas in which not just analysis decisions but, primarily, study design decisions affect the results of an evaluation. Consequently, such decisions require careful thought to ensure valid and transparent reporting in the interest of accountability. In line with recent developments concerning the clinical component of trials, there is a strong need to shift researchers’ attention from reporting and analysis to the design stage of trial-based economic evaluations. In that respect, the economic evaluation protocol must become a much more prominent tool than it is today. This involves the development of high-quality protocols for PRO assessment, resource use and cost data collection, and health economic analyses more generally. Research commissioners and journal editors can play a pivotal role in this transition. Given the ever-expanding volume of research on economic evaluation methodology, preserving the status quo is becoming less and less defensible.

Financial & competing interests disclosure

D Kyte is supported by a National Institute for Health Research School for Primary Care Research funded PhD studentship. M Calvert is a member of the MRC Midland Hub for Trials Methodology Research, University of Birmingham, United Kingdom (MRC Grant ID G0800808). The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

No writing assistance was utilized in the production of this manuscript.

References

1. Sculpher MJ, Claxton K, Drummond M, McCabe C. Whither trial-based economic evaluation for health care decision making? Health Econ 2006;15(7):677-87
2. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332
3. Calvert M, Blazeby J, Altman DG, et al.; CONSORT PRO Group. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA 2013;309(8):814-22
4. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ 1996;313(7052):275-83
5. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMJ 2013;346:f1049
6. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med 2013;158(3):200-7
7. Gheorghe A, Roberts TE, Ives JC, et al. Centre selection for clinical trials and the generalisability of results: a mixed methods study. PLoS One 2013;8(2):e56560
8. Kyte D, Ives J, Draper H, et al. Inconsistencies in quality of life data collection in clinical trials: a potential source of bias? Interviews with research nurses and trialists. PLoS One 2013;8(10):e76625
9. Edwards RT, Hounsome B, Linck P, Russell IT. Economic evaluation alongside pragmatic randomised trials: developing a standard operating procedure for clinical trials units. Trials 2008;9(1):64
10. Whitehurst DG, Bryan S. Trial-based clinical and economic analyses: the unhelpful quest for conformity. Trials 2013;14(1):421
11. Tsoi B, Masucci L, Campbell K, et al. Harmonization of reimbursement and regulatory approval processes: a systematic review of international experiences. Expert Rev Pharmacoecon Outcomes Res 2013;13(4):497-511
12. Donaldson C, Hundley V, McIntosh E. Using economics alongside clinical trials: why we cannot choose the evaluation technique in advance. Health Econ 1996;5(3):267-9
13. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009;374(9683):86-9
14. Raikou M, Briggs A, Gray A, McGuire A. Centre-specific or average unit costs in multi-centre studies? Some theory and simulation. Health Econ 2000;9(3):191-8
15. Ridyard CH, Hughes DA. Methods for the collection of resource use data within clinical trials: a systematic review of studies funded by the UK Health Technology Assessment Program. Value Health 2010;13:867-72
16. Goeree R, Gafni A, Hannah M, et al. Hospital selection for unit cost estimates in multicentre economic evaluations: does the choice of hospitals make a difference? Pharmacoeconomics 1999;15(6):561-72
17. Glick HA, Orzol SM, Tooley JF, et al. Design and analysis of unit cost estimation studies: how many hospital diagnoses? How many countries? Health Econ 2003;12(7):517-27
18. Grieve R, Cairns J, Thompson SG. Improving costing methods in multicentre economic evaluation: the use of multiple imputation for unit costs. Health Econ 2010;19(8):939-54
19. Drummond M, Barbieri M, Cook J, et al. Transferability of economic evaluations across jurisdictions: ISPOR good research practices task force report. Value Health 2009;12(4):409-18
20. Drummond M, Manca A, Sculpher M. Increasing the generalizability of economic evaluations: recommendations for the design, analysis, and reporting of studies. Int J Technol Assess Health Care 2005;21(2):165-71
21. Fairclough DL, Peterson HF, Chang V. Why are missing quality of life data a problem in clinical trials of cancer therapy? Stat Med 1998;17(5-7):667-77
22. Coyne KS, Tubaro A, Brubaker L, Bavendam T. Development and validation of patient-reported outcomes measures for overactive bladder: a review of concepts. Urology 2006;68(Suppl 2):9-16
23. Basch EM, Abernethy AP, Mullins CD, Spencer MR. EV1 development of a guidance for including patient-reported outcomes (PROs) in post-approval clinical trials of oncology drugs for comparative effectiveness research (CER). Value Health 2011;14(3):A10
24. Kyte D, Draper H, Calvert M. Patient-reported outcome alerts: ethical and logistical considerations in clinical trials. JAMA 2013;310(12):1229-30
25. Kyte DG, Draper H, Ives J, et al. Patient reported outcomes (PROs) in clinical trials: is ‘in-trial’ guidance lacking? A systematic review. PLoS One 2013;8(4):e60684
