Editorial

Adjusting for risk and confounding in outcomes research

Pages 109-110 | Published online: 09 Jan 2014

Outcomes research commonly refers to the study of the end results of particular healthcare interventions and practices on general mental and physical health, function and economic wellbeing. For example, health end points may include morbidity or mortality associated with obesity-related surgeries and drugs, while economic end points may reflect the direct and indirect costs associated with hospital length of stay and lost days of work. By linking the care individuals receive to the outcomes they attain or perceive attaining, outcomes research has become the principal route to developing better strategies to monitor and improve the quality of care Citation[101]. Over the years, outcomes research has expanded its focus from the traditional end points of morbidity and mortality to others such as patient satisfaction, quality of life, functional status and economic end points Citation[1].

Outcomes research commonly relies on data from large populations to assess these health and economic end points Citation[2]. In the absence of electronic medical records, large claims databases have often served as the source for analysis in outcomes research Citation[2]. These databases are primarily built for billing or administrative audit purposes and, at times, may not be adequate for research purposes. Nevertheless, with proper adjustment for bias and confounding, analyses based on claims data have fed valuable information into major clinical and insurance coverage decisions Citation[2,3].

The extent of potential bias and confounding is largely determined by the research question. For instance, researchers may conduct comparative studies to assess the cost–effectiveness of different therapy options within a specific patient population, such as Hispanics or African–Americans, or ask a risk–benefit question about a newly adopted drug Citation[4]. It is also very important that, when conducting such outcomes research studies, a perspective is specified to guide the analysis appropriately. Finally, the results of the study may not be representative of another patient group, or even generalizable to the entire population.

Large data sets used in outcomes research have many advantages. The retrospective analysis of these data can supplement information from clinical trials. Additionally, such data afford a large sample size for analysis, which can be completed within relatively shorter time frames and with fewer resources than clinical trials or prospective studies. However, it is essential for researchers, as well as users of these data in outcomes research, to understand the difference between bias and confounding and how to adjust for them in order to maximize validity and, possibly, reliability Citation[3,5].

Bias results from systematic error in the design, conduct or analysis of a study. Researchers often overlook bias, or fail to adjust for it, producing results that lack validity and are ill positioned to inform clinical or coverage decision making. Three main types of bias have to be accounted for in the analysis of data: selection, information and confounding bias Citation[2,3]. Selection bias is related to the way in which patients are recruited or retained for a study. In a retrospective claims data analysis, such bias may occur when patients opt out of a drug regimen, for example because their condition has improved or, conversely, because they do not easily tolerate the associated side effects Citation[5]. Information bias is related to the accuracy of the information collected on patients' health status and medical and drug resource utilization. Because pharmacy claims lack information on specific indications, information bias threatens validity in analyses based on drug use. Confounding occurs when an extraneous variable is associated with both the outcome and the exposure of interest, so that the observed association is not a true one, or the true association is masked. Data relating to possible confounders in large data sets may be incomplete or missing, often because the research question does not reflect the purpose for which the original data were collected.
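The mechanics of confounding can be made concrete with a small simulation. The sketch below is purely illustrative and not drawn from any study discussed here: a hypothetical confounder (say, disease severity) drives both the probability of receiving a treatment and the probability of a poor outcome, while the treatment itself has no effect. The crude comparison nonetheless suggests a large effect, and stratifying on the confounder removes it.

```python
# Hypothetical illustration of confounding (all parameters invented).
# Severity (c) raises both the chance of treatment (x) and of a poor
# outcome (y); the treatment itself has no effect on the outcome.
import random

random.seed(0)

n = 100_000
rows = []
for _ in range(n):
    c = random.random() < 0.5                  # confounder: severe disease
    x = random.random() < (0.8 if c else 0.2)  # severe patients treated more often
    y = random.random() < (0.6 if c else 0.1)  # outcome depends only on severity
    rows.append((c, x, y))

def risk(subset, exposed):
    """Proportion with the outcome among (un)treated patients."""
    sel = [y for c, x, y in subset if x == exposed]
    return sum(sel) / len(sel)

# Crude risk difference: large, despite a truly null treatment effect.
crude_diff = risk(rows, True) - risk(rows, False)

# Stratifying on the confounder recovers the null effect within strata.
stratified = {}
for level in (False, True):
    stratum = [(c, x, y) for c, x, y in rows if c == level]
    stratified[level] = risk(stratum, True) - risk(stratum, False)
```

Under these invented parameters the crude risk difference is roughly 0.3, while each stratum-specific difference is close to zero, which is the signature of confounding rather than a genuine treatment effect.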

There are a number of methods for adjusting for these different types of bias. To address selection bias, one could use random sampling, consider only incident cases and minimize loss to follow-up. To remedy information bias, it is important to determine objective criteria for exposure; for example, it would be important to indicate whether exposure is continuous, as may be relevant in studies of risk, or whether its pattern does not matter Citation[3]. Another commonly used method is blinding. Confounding can be minimized through randomization, matching, stratification and mathematical modeling. In specific cases of confounding, such as confounding by indication, propensity scores have been used. Overall, the sources of bias highlighted above should be critically addressed when evaluating data on effectiveness in outcomes research.
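As one hypothetical sketch of the propensity-score approach mentioned above, the following code applies inverse-probability-of-treatment weighting to simulated data of the same confounded structure. The propensity score here is estimated crudely as the observed treatment rate within each confounder stratum; real analyses typically model it with regression on many covariates, so this is a minimal illustration, not a recommended implementation.

```python
# Hypothetical sketch: inverse-probability-of-treatment weighting (IPTW),
# one propensity-score method. All data and parameters are invented.
import random

random.seed(1)

n = 100_000
data = []
for _ in range(n):
    c = random.random() < 0.5                  # confounder
    x = random.random() < (0.8 if c else 0.2)  # treatment, driven by c
    y = random.random() < (0.6 if c else 0.1)  # outcome, driven only by c
    data.append((c, x, y))

# Propensity score: P(treated | c), estimated per confounder stratum.
ps = {}
for level in (False, True):
    treated = [x for c, x, y in data if c == level]
    ps[level] = sum(treated) / len(treated)

def weighted_risk(exposed):
    """Outcome risk, weighting each patient by the inverse probability
    of the treatment they actually received."""
    num = den = 0.0
    for c, x, y in data:
        if x != exposed:
            continue
        w = 1.0 / (ps[c] if exposed else 1.0 - ps[c])
        num += w * y
        den += w
    return num / den

# After weighting, the treated and untreated groups are balanced on the
# confounder, so the adjusted risk difference is close to the true null.
adjusted_diff = weighted_risk(True) - weighted_risk(False)
```

The weighting creates a pseudo-population in which treatment is independent of the measured confounder, so the adjusted risk difference approaches zero here; the same logic underlies propensity-score matching and stratification.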

The need for proper adjustment is driven by the magnitude of the decisions that can potentially be based on the results of outcomes research. Indeed, drug coverage decisions have the potential to affect the health of millions at a time. Likewise, new information on postmarketing drug safety, as derived from claims data, can have major implications for the viability of the drug in question. Both researchers and decision makers should be well aware of the caveats of outcomes research, and be well equipped to identify sources of bias or confounding and consider them in their decision-making process.

References

  • Mullins CD, Baldwin R, Perfetto EM. What are outcomes? J. Am. Pharm. Assoc. NS36(1), 39–49 (1996).
  • Gordis L. Epidemiology. WB Saunders Company, PA, USA (2004).
  • Strom B. Pharmacoepidemiology. John Wiley & Sons Ltd, UK (2005).
  • Detsky AS. Evidence of effectiveness: evaluating its quality. In: Valuing Health Care: Costs, Benefits, and Effectiveness of Pharmaceuticals and Other Medical Technologies. Sloan FA (Ed.). Cambridge University Press, Cambridge, UK, 15–29 (1996).
  • Mullahy J, Manning W. Statistical issues in cost-effectiveness analyses. In: Valuing Health Care: Costs, Benefits, and Effectiveness of Pharmaceuticals and Other Medical Technologies. Sloan FA (Ed.). Cambridge University Press, Cambridge, UK, 149–184 (1996).

Website

  • Outcomes Research. Fact Sheet. AHRQ Publication No. 00-P011, March 2000. Agency for Healthcare Research and Quality, Rockville, MD, USA. www.ahrq.gov/clinic/outfact.htm
