Special Report

Protocol-driven costs in trial-based pharmacoeconomic analyses

Pages 673-675 | Published online: 09 Jan 2014

Abstract

Many authors and guidelines have proposed excluding protocol-driven costs from cost–effectiveness analyses conducted alongside clinical trials because such costs do not occur in clinical practice. This article argues, however, that only the costs of improving patient adherence can be excluded, as the underlying protocol-driven activities have, in most cases, a clearly distinguishable impact on costs and utility. All other protocol-driven costs need to be included because the cost and utility impact of the underlying protocol-driven activities cannot be easily separated.

Protocol-driven costs are costs that are required to carry out the trial protocol but are typically not incurred outside a clinical trial. Many authors and guidelines have proposed excluding protocol-driven costs from cost–effectiveness analyses alongside clinical trials because they do not occur in clinical practice. However, the authors of the authoritative guidelines for authors and peer reviewers of economic submissions to the British Medical Journal take a mixed view Citation[1]. They state:

One view is that everything done to a patient during a clinical trial could potentially influence outcome, so the costs of all procedures should be included. On the other hand, procedures such as clinic visits solely for data collection would not take place in regular clinical care and may seem unlikely to affect outcome.

Nyman established a principle that generalizes the view of the British Medical Journal guidelines Citation[2]. He argues that costs should be included in cost–utility analysis “if they represent resources that directly produce the utility that is being measured in the denominator of the cost–utility ratio.”

The purpose of this article is to use Nyman’s principle to infer which components of protocol-driven costs can be excluded in cost–effectiveness analyses when extrapolating results from an economic evaluation alongside a controlled medication trial to clinical practice. The article will extend the above arguments of the British Medical Journal guidelines by discussing the various components of protocol-driven activities, their impact on patient utility and costs, and their excludability from the calculation of incremental costs in clinical practice.

Protocol-driven activities & impact on utility

In the following sections, protocol-driven activities are divided into four components. For each component, the impact on patient utility and the excludability of the associated costs from the calculation of incremental costs in clinical practice are discussed.

Activities to enhance patient adherence to medication

Medication adherence refers to the level of participation achieved in a medication regimen once an individual has agreed to the regimen Citation[3,4]. Adherence includes both the notion of compliance with the dosing regimen (i.e., taking the right dose at the right time) and persistence with therapy (i.e., continuing drug intake). Adherence improves medical treatment outcomes Citation[5] and may be higher in clinical trials than in clinical practice Citation[6]. Reasons for the latter include education of patients, monitoring of adherence (e.g., pill counts) during patient visits to health professionals, reassurance of patients owing to extra tests, and prerandomization screening, which identifies and excludes patients at risk of nonadherence Citation[7].

Costs to improve patient adherence are, in most cases, relatively easy to identify. They include the time spent discussing clinical trial enrollment and providing information about the disease, prognosis, therapeutic options and potential adverse effects. They also include the time spent on patient education, on improving dosing schedules and on monitoring adherence (e.g., pill counts) during patient visits to participating health professionals. Costs of materials (e.g., pillboxes) need to be considered as well. Note that only activities that go beyond clinical practice are protocol driven; hence, not all activities for patient education are protocol driven, as some of them are also provided in clinical practice. As a word of caution, there may be situations where it becomes difficult to determine how much of the time spent providing clinical information during trial enrollment is strictly marginal. This is the case when trial enrollment takes place directly through a patient’s physician and the physician provides less clinical information outside trial enrollment than he or she would in clinical practice. In any case, when trial enrollment does not take place through the patient’s physician, the time a recruiter spends providing clinical information is clearly marginal to what would occur in clinical practice.

Activities to improve patient adherence also have a clearly identifiable impact on utility and medical costs. The reason is that adherence is linearly related to utility and medical costs when adherence is defined in terms of continuing drug intake. This is based on the calculation of incremental medical costs as the weighted average of the incremental costs of adherers and nonadherers Citation[8]. Similarly, incremental utility/effectiveness is calculated as the weighted average of the incremental utility/effectiveness of adherers and nonadherers Citation[9]. Nonadherers have zero incremental utility/effectiveness and medical costs (except when they fill the initial prescription or switch to alternative treatments). Thus, when moving from a trial setting to clinical practice, utility and medical costs are reduced in proportion to adherence.
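The weighted-average calculation described above can be sketched numerically. The following minimal Python illustration uses invented figures and the simplifying assumption, stated in the text, that nonadherers (patients who discontinue drug intake) contribute zero incremental cost and utility:

```python
# Minimal numerical sketch of the weighted-average adjustment described
# above. All figures are hypothetical and for illustration only.

def incremental_outcomes(adherence, cost_adherers, utility_adherers):
    """Population-level incremental cost and utility as a weighted average
    of adherers and nonadherers, where nonadherers (patients discontinuing
    drug intake) are assumed to contribute zero incremental cost and utility
    (ignoring initial prescriptions and treatment switches)."""
    inc_cost = adherence * cost_adherers + (1 - adherence) * 0.0
    inc_utility = adherence * utility_adherers + (1 - adherence) * 0.0
    return inc_cost, inc_utility

# Hypothetical per-adherer estimates: 10,000 in incremental costs, 0.5 QALYs.
COST, QALY = 10_000.0, 0.5

trial = incremental_outcomes(0.90, COST, QALY)     # adherence observed in the trial
practice = incremental_outcomes(0.60, COST, QALY)  # adherence expected in practice

# Incremental costs and utility both shrink in proportion to adherence;
# under these simplifying assumptions, the cost-utility ratio is unchanged.
print(trial, trial[0] / trial[1])
print(practice, practice[0] / practice[1])
```

Under these assumptions, moving from 90% trial adherence to 60% practice adherence scales both the numerator and the denominator of the cost–utility ratio by the same factor, which is why the adjustment is straightforward when adherence is defined as continuing drug intake.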

Given their clear impact on costs and utility, activities to improve patient adherence may be easily excluded when extrapolating trial-based results to clinical practice Citation[8]. Note that this does not presuppose an understanding about the reasons for better adherence in a trial setting. The adherence level as measured in a trial is taken as is and then adjusted for nonadherence in clinical practice. Still, in cases where adherence is defined in terms of the dose patients take, we cannot clearly define the cost and utility impact of activities to improve patient adherence. Hence, in this case, we cannot exclude protocol-driven activities when moving from a trial setting to clinical practice.

Activities for the diagnosis & treatment of conditions that would have remained undetected in clinical practice

The utility or disutility of diagnosing and treating such conditions is measured in the denominator of the cost–utility ratio. That is, the denominator captures not only utility from treating the target condition but also utility from treating unrelated diseases. A disutility may result if effective treatment is unavailable for diagnosed conditions.

It seems difficult, if not impossible, however, to factor out the costs and utility of diagnosing and treating these conditions when trying to extrapolate trial-based results to clinical practice.

Activities for extra testing & data collection

Extra testing and data collection could have an impact on patient utility. Extra visits, for example, can be reassuring to the patient, who feels more taken care of, and thus provide process utility. In theory, process utility can be captured by preference-based questionnaires such as the time trade-off or the standard gamble questionnaire. For example, when asked in the time trade-off questionnaire how many years of life in current health he or she would give up in order to become healthy, the patient could incorporate satisfaction with current care by rating current health highly.

Extra testing and data collection may also have an indirect impact on patient utility owing to the Hawthorne effect: physicians participating in a trial are aware of these activities, feel observed and hence pay particular attention to their patients. The Hawthorne effect could thus occur even when physicians do not know the test results. That is, patients may be affected by extra testing and data collection even when these are performed only to help analyze the relationship between the intervention and its effect on the patient.

Again, it seems difficult to rule out the utility from extra testing and data collection when trying to transfer trial-based results to clinical practice Citation[10].

Activities for quality assurance

Quality assurance is defined as “all those actions that are established to ensure that the trial is performed and the data are generated, documented (recorded) in compliance with these guidelines for good clinical practice … and the applicable regulatory requirements” Citation[101]. This is achieved through monitoring, audits and inspections. Therefore, activities for quality assurance may also affect patient outcomes both directly and indirectly: directly, by ensuring compliance with the treatment protocol; and indirectly, by the Hawthorne effect. Again, it seems difficult to rule out the resulting utility from the denominator of the cost–utility ratio.

Conclusion

Many authors and guidelines have proposed excluding protocol-driven costs because they do not occur in clinical practice. This article argues, however, that only the costs of improving patient adherence can be excluded, as the underlying protocol-driven activities have, in most cases, a clearly distinguishable impact on costs and utility. Specifically, this holds when adherence is defined in terms of continuing drug intake. All other protocol-driven costs need to be included because the cost and utility impact of the underlying activities cannot be easily separated.

The approach proposed in this paper holds regardless of whether a clinical trial is explanatory (i.e., evaluates efficacy) or pragmatic (i.e., evaluates effectiveness). While most drug trials are explanatory and have highly selected participants, pragmatic drug trials have little or no selection and ask whether the intervention works when used in normal practice Citation[11]. Results of pragmatic trials are therefore considered to be more widely applicable. On the other hand, very few trials are purely pragmatic, and the act of conducting an otherwise pragmatic trial may impose some control, so that the setting is not quite usual Citation[12]. For this reason, pragmatic trials also include all the protocol-driven activities described above. For example, when a patient refuses trial enrollment, the trial population will not fully represent clinical practice and will probably show better adherence.

Some authors prefer to include all protocol-driven costs in the base-case analysis (‘as is’ analysis) and make adjustments to clinical practice in a sensitivity analysis. Then, the above suggestion holds for the sensitivity analysis.

Finally, there are some additional factors that need to be considered when transferring results to clinical practice. For example, there might be differences between the trial and the real-world setting in terms of scale economies, capacity utilization and prevailing hospital type (e.g., teaching vs nonteaching). Furthermore, real-world patients may have additional comorbidities, resulting in drug–drug interactions and adverse events. The costs and disutility of these adverse events cannot be easily determined.

Key issues

  • Protocol-driven activities alongside clinical trials can be divided into four components.

  • Many authors and guidelines have proposed excluding protocol-driven costs from cost–effectiveness analyses alongside clinical trials because they do not occur in clinical practice.

  • However, only the costs of improving patient adherence can be excluded, as the underlying protocol-driven activities have, in most cases, a clearly distinguishable cost and utility impact.

  • An exception occurs when adherence is defined in terms of the dose patients take rather than in terms of continuing drug intake.

  • All other protocol-driven costs need to be included because the cost and utility impact of the underlying protocol-driven activities cannot be easily separated.

Financial & competing interests disclosure

The author has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

No writing assistance was utilized in the production of this manuscript.

References

  • Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ 313, 275–283 (1996).
  • Nyman JA. Should the consumption of survivors be included as a cost in cost-utility analysis? Health Econ. 13(5), 417–427 (2004).
  • Mihalko SL, Brenes GA, Farmer DF, Katula JA, Balkrishnan R, Bowen DJ. Challenges and innovations in enhancing adherence. Control Clin. Trials 25(5), 447–457 (2004).
  • Balkrishnan R. The importance of medication adherence in improving chronic-disease related outcomes: what we know and what we need to further know. Med. Care 43(6), 517–520 (2005).
  • DiMatteo MR, Giordani PJ, Lepper HS, Croghan TW. Patient adherence and medical treatment outcomes: a meta-analysis. Med. Care 40(9), 794–811 (2002).
  • Robiner WN. Enhancing adherence in clinical research. Contemp. Clin. Trials 26(1), 59–77 (2005).
  • Pablos-Mendez A, Barr RG, Shea S. Run-in periods in randomized trials: implications for the application of results in clinical practice. JAMA 279, 222–225 (1998).
  • Gandjour A. Extrapolating results of trial-based pharmacoeconomic analyses to clinical practice: a decision model. Pharmacoeconomics 29(2), 97–105 (2011).
  • Dunn G, Maracy M, Dowrick C et al. Estimating psychological treatment effects from a randomised controlled trial with both non-compliance and loss to follow-up. Br. J. Psychiatry 183, 323–331 (2003).
  • O’Sullivan AK, Thompson D, Drummond MF. Collection of health-economic data alongside clinical trials: is there a future for piggyback evaluations? Value Health 8(1), 67–79 (2005).
  • Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials 10, 37 (2009).
  • Thorpe KE, Zwarenstein M, Oxman AD et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. CMAJ 180(10), E47–E57 (2009).

Website
