Original Research

Meta-analysis provides evidence-based interpretation guidelines for the clinical significance of mean differences for the FACT-G, a cancer-specific quality of life questionnaire

Pages 119-126 | Published online: 23 Sep 2010

Abstract

Our aim was to develop evidence-based interpretation guidelines for the Functional Assessment of Cancer Therapy-General (FACT-G), a cancer-specific health-related quality of life (HRQOL) instrument, from a range of clinically relevant anchors, incorporating expert judgment about clinical significance. Three clinicians with many years’ experience managing cancer patients and using HRQOL outcomes in clinical research reviewed 71 papers. Blinded to the FACT-G results, they considered the clinical anchors associated with each FACT-G mean difference, predicted which dimensions of HRQOL would be affected, and whether the effects would be trivial, small, moderate, or large. These size classes were defined in terms of clinical relevance. The experts’ judgments were then linked with FACT-G mean differences, and inverse-variance weighted mean differences were calculated for each size class. Small, medium, and large differences (95% confidence interval) from 1,118 cross-sectional comparisons were as follows: physical well-being 1.9 (0.6–3.2), 4.1 (2.7–5.5), 8.7 (5.2–12); functional well-being 2.0 (0.5–3.5), 3.8 (2.0–5.5), 8.8 (4.3–13); emotional well-being 1.0 (0.1–2.6), 1.9 (0.3–3.5), no large differences; social well-being 0.7 (−0.7 to 2.1), 0.8 (−2.9 to 4.5), no large differences. Results from 436 longitudinal comparisons tended to be smaller than the corresponding cross-sectional results. These results augment other interpretation guidelines for the FACT-G, informing sample size and power calculations and the interpretation of cancer clinical trials that use the FACT-G.

Introduction

Health-related quality of life (HRQOL) questionnaires have the potential to play a key role in bringing the patient’s voice to evidence-based health care. But to realize this potential, we need to interpret the relevance of HRQOL outcomes to decisions about treatment. Such decisions are made at both the individual level, when a patient (or patient’s clinician, acting as patient’s agent) chooses among treatment options, and the group level, when clinical research is conducted to test the effectiveness of new treatments relative to current routine treatment.Citation1,Citation2 In this article, we focus on the latter. For example, if a clinical trial shows that a new treatment improves the mean HRQOL by 10 points on a 100-point scale at no extra cost and with no adverse effects relative to the current best treatment, which improves mean HRQOL by only 5 points, we need to know whether this difference is big enough to change policy and practice. Investigators planning new studies need this type of knowledge to calculate sample sizes, and end-users need to understand the implications of study results for future clinical practice and policy.
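To make the sample size point concrete, a simple two-group calculation shows how an interpretation guideline feeds into study planning. The sketch below is not from the paper; the 5-point target difference and 15-point SD are hypothetical values chosen only for illustration, using the standard normal-approximation formula.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate per-group sample size for comparing two group means
    with a two-sided test: n = 2 * ((z_alpha + z_beta) * sd / delta)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided critical value
    z_beta = z(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical example: a 5-point target difference on a 100-point HRQOL
# scale, with an assumed between-patient SD of 15 points
print(n_per_group(delta=5, sd=15))  # 142 per group
```

Halving the clinically important difference quadruples the required sample size, which is why defensible estimates of small, medium, and large differences matter so much to trial design.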

Despite increasing acceptance and use of HRQOL questionnaires as valid and informative measures in clinical research, interpreting the clinical relevance of outcomes measured on their scales remains challenging.Citation3 HRQOL is multifaceted and subjective, and there is a large number and wide range of HRQOL instruments, each scored on a different and somewhat arbitrary scale. Few people have the requisite understanding of psychometrics or the hands-on experience to confidently interpret specific HRQOL scales. Familiarity with a measurement scale, whether it measures physical phenomena such as temperature or perception-based phenomena such as HRQOL, is the key to intuitive understanding of results recorded on that scale. Such familiarity develops with experience of how the numbers from such scales correspond to events in our lives and the significance of those events to our actions and decisions. The multitude of HRQOL instruments compounds the problem of developing familiarity with specific HRQOL scales. Some HRQOL questionnaires are now very widely used. The 2 most commonly used cancer-specific instruments are the European Organisation for Research and Treatment of Cancer’s Quality of Life Questionnaire (QLQ)-C30Citation4 and the Functional Assessment of Cancer Therapy-General (FACT-G).Citation5 Collective experience with these 2 instruments has amassed a rich evidence base for developing interpretation guidelines.

Clinicians who have specialized in HRQOL research and used particular HRQOL measures over many years in both research and clinical contexts are ideally positioned to develop intuitive interpretations of HRQOL results.Citation2 Over many years of managing patients, administering HRQOL questionnaires, analyzing and scrutinizing the results, and linking HRQOL scores with the condition of their patients, they understand the variation among patients in their levels of HRQOL, how they react to their disease and treatment, and how their HRQOL scores change over time. They also understand how to interpret the mean scores from groups of such individuals.

Some of the most useful evidence for developing interpretations that are meaningful to clinicians comes from studies that report the HRQOL of patients grouped by established clinical criteria, sometimes called ‘clinical anchors’Citation6 or ‘known groups’.Citation4 This type of data is often collected during the validation of an instrument, with patterns of HRQOL scores across clinical groups providing evidence of clinical validity. Quantitatively, these patterns can be used to develop interpretation guidelines about the relative size and significance of mean differences on HRQOL scales. Similarly, the patterns in longitudinal data collected during conventional treatments with well-known and clearly understood clinical effects help us understand the relative size and significance of mean changes in HRQOL. This method has been applied to the QLQ-C30, a cancer-specific HRQOL questionnaire.Citation7 In this article, we further develop this method and apply it to another cancer-specific HRQOL questionnaire, the FACT-G.Citation5

The FACT-G forms the central core of a suite of questionnaires, referred to as the Functional Assessment of Chronic Illness Therapy (FACIT). Additional questions focus on specific diagnoses such as lung cancer (FACT-L) or breast cancer (FACT-B), and on symptoms such as fatigue (FACT-F) and anemia (FACT-An). These questionnaires are used increasingly in clinical research, adding to the body of evidence about the FACIT suite. In this article, we focus on the FACT-G, which is summarized as a total score and 4 subscales: physical well-being (PWB), social or family well-being (SWB), emotional well-being (EWB), and functional well-being (FWB).

We have previously described a new approach to synthesizing evidence about HRQOL measures with information on the interpretation of effect sizes derived from the FACT-G.Citation8 In that paper, we focused on effect sizes, comparing our results with Cohen’s guidelines for small, medium, and large effect sizes,Citation9 and the proposition that a 0.5 SD is the minimum significant difference.Citation10 The current article presents analogous results for the raw scores from the 5 scales of the FACT-G.

Methods

A detailed description of the methods in relation to effect sizes is given elsewhere.Citation8 Condensed extracts included here allow readers to assess the meta-analysis of the 5 FACT-G scale scores reported in this article.

Data sources

Papers that reported on the FACT-G were identified by searching relevant online databases. Unpublished information was identified through the FACIT Projects Register. Papers were included if they reported a mean difference between at least 2 independent groups (a cross-sectional contrast) or a mean change within at least 1 group over time (a longitudinal contrast). Results that represented duplicate publication were excluded, as were results from a total sample of less than 10 patients (potentially unreliable) or repeated measures from a sample with greater than 20% attrition (potentially biased).

Expert judgment

The clinical relevance of included mean differences was judged by 3 of us (DC, DO, MS). All identifying information on papers was obscured, as were any results and conclusions about the FACT-G scores. Each expert predicted the degree of difference in HRQOL for each mean difference between groups or within a group over time. Judgments were constrained to 8 options: much better (3), moderately better (2), a little better (1), much the same (0), a little worse (−1), moderately worse (−2), much worse (−3), and “don’t know”. After the initial round of judgments, contrasts for which any 2 judges’ scores differed by 2 or more categories were reconsidered. The 3 experts worked independently in both the initial and the consensus phases. Weighted kappa was calculated as a measure of concordance of the final judgment scores, and we interpreted kappa values after Landis and Koch.Citation11 The average of all 3 experts’ judgments, rounded to the nearest integer, was used to determine the final size class for each mean difference.
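As a rough illustration of the concordance statistic used here, a weighted kappa for two raters over the ordinal judgment scale (−3 to +3) can be computed as below. This is a generic sketch, not the authors' analysis code, and the example ratings are invented; the weighting scheme (quadratic vs linear) is a choice the reader should match to the original analysis.

```python
import numpy as np

def weighted_kappa(rater1, rater2, categories, scheme="quadratic"):
    """Weighted kappa for two raters over ordered categories."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    obs = np.zeros((k, k))
    for a, b in zip(rater1, rater2):
        obs[index[a], index[b]] += 1
    obs /= obs.sum()
    # expected cell proportions if the raters judged independently
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    i, j = np.indices((k, k))
    # disagreement weights: squared distance (quadratic) or absolute (linear)
    w = (i - j) ** 2 if scheme == "quadratic" else np.abs(i - j)
    return 1 - (w * obs).sum() / (w * exp).sum()

# Invented example judgments on the -3..+3 scale
cats = [-3, -2, -1, 0, 1, 2, 3]
r1 = [0, 1, 2, -1, 0, 3, -2, 1]
r2 = [0, 1, 1, -1, 0, 3, -2, 2]
print(weighted_kappa(r1, r2, cats))
```

Because the weights grow with the distance between categories, a 1-category disagreement is penalized far less than the 2-or-more-category disagreements that triggered reconsideration in the consensus round.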

Definitions of size categories

Prior to the panel judging any papers, we defined 4 size groups explicitly in terms of their clinical relevance, where a clinically relevant difference was one that implied a difference in prognosis or clinical management. When circumstances had obvious and unequivocal clinical relevance (eg, patients with asymptomatic, early stage disease vs those with end-stage disease, or the change induced by a treatment well known to markedly improve the health state of most patients treated), group-level HRQOL was expected to be much better or worse (large effect). When circumstances were likely to have clinical relevance but to a lesser extent (eg, for patients with metastatic disease, the contrast of those who were responding to treatment compared with those who were not responding, or a treatment that was known to be effective for half the patients treated), group-level HRQOL was expected to be moderately better or worse (moderate effect). When effects were expected to be subtle but nevertheless clinically relevant (eg, the contrast of patients with regionally advanced cancer vs those with newly diagnosed metastatic disease, or a treatment that was known to improve the health state of only a small proportion of patients treated), group-level HRQOL was expected to be a little better or worse (small effect). When circumstances were unlikely to have any clinical relevance, group-level HRQOL was not expected to be any better or worse (trivial effect).

Data extraction

The following information was extracted from each paper: sample sizes and attrition; standard deviations (required for the inverse-variance weighting factor)Citation12 or other information from which standard deviations could be derived;Citation13 the clinical classifications and circumstances of patients (including anchors for mean differences in HRQOL); and mean scores of the PWB, FWB, EWB, and SWB scales. The total score was calculated as the sum of the PWB, FWB, EWB, and SWB scales. Each mean difference was then linked with the corresponding average expert judgment score to allocate a size class for the meta-analysis.
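Where papers reported standard errors or confidence intervals rather than SDs, the SD can be back-calculated. The helper functions below are a generic sketch of such conversions (the 1.96 multiplier assumes a normal-based 95% CI), not the specific imputation procedures of the cited references.

```python
import math

def sd_from_se(se, n):
    """SD recovered from a reported standard error of the mean."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """SD recovered from a reported 95% CI for a mean (normal approximation)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

print(sd_from_se(2.0, 25))                   # 10.0
print(round(sd_from_ci(8.0, 12.0, 25), 2))   # 5.1
```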

Meta-analysis

Inverse-variance weighted mean differences (IWMDs)Citation12,Citation14 were calculated for each of 4 size classes by grouping corresponding negative and positive size classes. Thus, all contrasts with a −1 average judgment score were grouped with those with a +1 score. To maintain the correct relationship between the sign of the reported HRQOL differences and the sign of the experts’ judgment score, the signs of the HRQOL differences with −1 average judgment scores were reversed prior to grouping them with the +1 contrasts. Results for the medium (−2/+2) and large (−3/+3) contrasts were treated analogously. Cross-sectional contrasts were analyzed separately from longitudinal contrasts.
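The pooling step can be sketched as a standard fixed-effect inverse-variance calculation. This is a generic illustration with invented numbers, not the authors' analysis code; the sign alignment mirrors the rule described above for grouping −1 and +1 contrasts.

```python
import math

def iwmd(diffs, ses):
    """Fixed-effect inverse-variance weighted mean difference with 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]
    total = sum(weights)
    pooled = sum(w * d for w, d in zip(weights, diffs)) / total
    se_pooled = math.sqrt(1.0 / total)
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Invented contrasts in the small (-1/+1) size class: sign-reverse the
# -1-judged contrasts before pooling them with the +1-judged contrasts
diffs = [2.1, -1.8, 2.5]
judgments = [1, -1, 1]
ses = [0.8, 0.6, 1.0]
aligned = [d if j > 0 else -d for d, j in zip(diffs, judgments)]
pooled, ci = iwmd(aligned, ses)
print(pooled, ci)
```

Precisely estimated contrasts (small standard errors) dominate the pooled estimate, which is what protects the size-class averages from noise in small individual studies.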

Sensitivity analysis

We assessed the robustness of the results based on all mean differences to discordance between experts by repeating the analysis using only those mean differences for which at least 2 experts were perfectly concordant and the remaining expert differed by no more than 1 point.

Results

Of 210 papers that satisfied the search criteria, 71 were suitable for inclusion. The full citation details and a summary of the characteristics of these papers can be found in the companion paper.Citation8 The most common study purpose was to describe the effect of disease and treatment on HRQOL (44%), followed by developing and validating a FACIT instrument (21%) and phase 2 clinical trials (7%). A wide range of clinical anchors were reported (Table 1). The most common anchor for differences in HRQOL between groups was routine treatment; 25% of the 71 studies reported mean FACT-G scores by this anchor. The second most commonly used cross-sectional anchor was extent of disease (24%), and the third was performance status (24%) (usually Eastern Cooperative Oncology Group, assessed by the clinician). The most common anchor for change in HRQOL over time was time since starting treatment; 30% of the 71 studies reported mean FACT-G scores by this anchor. The second most commonly used longitudinal anchor was change in performance status (11%). Of the other 22 anchors, most were used in only 1 or 2 papers.

Table 1 Clinical anchors reported in the 71 included studies, and the number of papers that reported mean FACT-G scores by these anchors

The 71 papers and 22 anchors yielded 1,562 mean differences. For 8 of these, none of the 3 experts were able to make a prediction. In the remainder, the experts differed by 2 or more points for only 17% (261) of their initial judgments. The consensus process reduced this to 6% (95 contrasts). A detailed consideration of expert concordance is given in the companion paper.Citation8

Table 2 shows the results of meta-analysis of the raw scale scores from 1,118 cross-sectional contrasts and 436 longitudinal contrasts, together with the number of mean differences in each size class. Table 3 shows analogous results from 617 (55% of 1,118) cross-sectional contrasts and 216 (50% of 436) longitudinal contrasts, in which at least 2 experts were perfectly concordant and at most 1 was discordant by 1 point. The results in Tables 2 and 3 are very similar, indicating that our results are robust to the degree of agreement between experts. Therefore, we focus on Table 2 hereafter. While we cite point estimates here for simplicity, we urge readers to note the range of possible values for each size class and domain suggested by the confidence intervals (CIs) reported in the tables. Across all domains, for both cross-sectional and longitudinal contrasts, IWMDs considered “trivial” by the experts were very small, and their CIs contained zero.

Table 2 Results of meta-analysis of the FACT-G raw scale scores

Table 3 Sensitivity analysis results of meta-analysis of the FACT-G scale scores

For the other size classes, we consider the cross-sectional results first. The PWB and FWB scales were similar, with IWMDs of 2, 4, and 9 for small, medium, and large effects, respectively. Results for the EWB scale were about half that size, with IWMDs of 1 and 2 for small and medium effects, respectively. Results for the SWB scale were smaller again, there was no gradient from small to medium effects, and all CIs contained zero. For the total FACT-G score, small and medium IWMDs were 6 and 11 points, respectively. Only 2 contrasts were predicted to yield a large effect for the total score, and none was predicted to yield large effects for the EWB or SWB domain.

We now consider the longitudinal results in Table 2. The IWMDs for the PWB and FWB domains were a little less than half the size of the corresponding cross-sectional values, with IWMDs for small effects being somewhat less than 1 and IWMDs for medium effects being about 1.5. Small effects for the EWB scale had an IWMD of about 1. Very little evidence was available to estimate medium effects for the EWB and SWB domains, and there was virtually no evidence available for large effects, with only 1 predicted in the PWB domain. The expected gradient across size classes was most pronounced in the PWB and FWB domains and not apparent at all in the SWB domain. For the SWB domain, all but one of the contrasts were predicted to yield trivial or small effects, and for the small effects, IWMDs were very small and their CIs contained zero.

Discussion

This study (which focuses on FACT-G raw scores) and its companion (which focuses on effect sizes)Citation8 provide the first formal meta-analysis of anchor-based evidence for a HRQOL instrument, the FACT-G. This evidence covers a wide range of clinically meaningful anchors and was judged by 3 clinicians with many years of experience managing individual cancer patients and using HRQOL outcomes in cancer clinical trials. In summarizing our results for the FACT-G raw scores, we heed the advice of Guyatt et alCitation2 to avoid misleading oversimplifications and overly complex presentations. We believe that interpretation guidelines for HRQOL scales require some flexibility to accommodate different patient groups and clinical circumstances, so we summarize our results for each size class and domain as likely ranges. Thus, for the PWB and FWB scales, cross-sectional anchors suggest that a small effect is likely to be in the vicinity of 1–3 points, a medium effect in the vicinity of 3–5 points, and a large effect in the vicinity of 6–11 points. For these scales, longitudinal anchors yielded smaller estimates, with a moderate effect from longitudinal data being about the size of a small effect from cross-sectional data (1–3 points). For the EWB scale, both cross-sectional and longitudinal anchors suggest that a small effect is likely to be in the vicinity of 1–2 points. Large effects are unlikely to be observed for either the EWB or the SWB scale, and even small effects may be unlikely for the SWB scale (CIs contained zero for the 95 cross-sectional and 42 longitudinal mean differences that the experts expected to yield small effects). For the total FACT-G score, cross-sectional anchors suggest that a small effect is likely to be in the vicinity of 4–9 points and a medium effect in the vicinity of 9–14 points, with large effects unlikely. The small effect estimate is similar to the general guideline estimate for the FACT-G minimally important difference (MID), which is 4–7 points.Citation15 For the total score, results from longitudinal anchors were smaller than the corresponding cross-sectional results and were rather unconvincing as interpretation guidelines because their CIs contained zero despite reasonable sample sizes.

It is worth noting that for a given size class, the longitudinal mean differences were smaller than cross-sectional ones for the PWB and FWB scales (and consequently the total score). This pattern has been observed previously for the FACT-An and FACT-F scales.Citation16 The responsiveness of the FACT-G scales has been demonstrated in many papers,Citation17Citation26 so we discount lack of responsiveness as an explanation. Several other factors provide more plausible explanations. Response shift, response sets, and other factors related to adjustment may diminish the true size of self-reported change.Citation27 Another possibility is that our experts overestimated the magnitude of a health domain change likely to be observed with treatment, change of disease course, or other longitudinal anchor. Yet another is that our clinical experts may not have understood the longitudinal anchors as well as they did the cross-sectional anchors; lower rates of concordance among the experts for the longitudinal contrasts than for the cross-sectional contrasts lend some support to this hypothesis. It is also possible that the longitudinal HRQOL assessments may not have occurred at the best time to capture the effects anticipated by the experts. This is a common problem in longitudinal research since HRQOL is commonly assessed at clinic visits, which is convenient and maximizes response rates but does not necessarily capture the peaks and troughs in HRQOL trajectories.Citation28 Finally, there may have been a minimizing bias introduced by sample attrition; patients who were most likely to deteriorate by the largest amounts were also most likely to be lost to follow-up. In this study, we only included within-group contrasts with less than 20% attrition to minimize this problem. All of these factors made the experts’ task of predicting change more difficult than that of separating groups. 
Some or all of these factors may be working in concert, underlining the challenges in conducting and interpreting longitudinal HRQOL research.

The other finding of interest was that the IWMDs for the social and emotional well-being scales were smaller than those for the physical and functional domains, a pattern also evident for the QLQ-C30.Citation7 Three general factors may explain these observations. First, the clinical classifications and circumstances prevalent in cancer research may relate more to physical and functional aspects of HRQOL than to psychosocial domains. Second, and related to the first, our clinical experts may not have been as accurate in their predictions for psychosocial domains as they were for the physical and functional domains. Lower rates of concordance among the experts in the EWB and SWB domains relative to the PWB and FWB domains lend some support to this hypothesis. Third, scales such as the FACT-G’s EWB and SWB and the QLQ-C30’s emotional, social, and cognitive functioning scales may not be as sensitive to real differences in psychosocial aspects of HRQOL as scales such as the FACT-G’s PWB and FWB and the QLQ-C30’s physical and role functioning scales are to real differences in physical and functional aspects of HRQOL. We believe the first 2 reasons are more likely than the third since several studies have shown change in emotional well-being using the EWB scale.Citation5,Citation29Citation32

Whatever the reasons for the observations above, if our results generalize to other HRQOL instruments, the following implications may hold for the choice of HRQOL outcomes in cancer trials. HRQOL domains with a physical and functional focus may generally yield larger mean differences and hence provide more powerful measures of outcome than do psychosocially focused domains, where at best small effects may be expected. Scales based on social or family well-being may be suitable primary outcomes only for studies of psychosocial interventions targeted specifically at social and family issues in which pilot studies or phase 2 trials have demonstrated an effect for these outcomes.

There was considerable variation in empirical estimates of mean differences within each size category and for each FACT-G scale. There are 2 obvious contributing factors: sampling variation and a degree of mismatch between our experts’ expectations and the actual patterns in the HRQOL data. Our 3 experts collectively had a wealth of clinical experience with cancer patients and with HRQOL assessment, so their judgments should be as good as any available. The influence of sampling variation on the outcomes of individual studies cannot be discounted since it is well documented that individuals vary markedly in HRQOL levels at a particular time and in their trajectories of HRQOL over time. The degree of variation of component estimates within size classes in this meta-analysis highlights the limitations of individual studies for deriving general interpretation guidelines.

Other authors have produced evidence across clinical anchors and studies to develop interpretation guidelines.Citation7,Citation10,Citation15,Citation16,Citation25 Our method advances this type of research in 2 important ways. First, we used formal methods of meta-analysis to produce weighted average mean differences for each size class. Second, clinical meaningfulness was judged by 3 clinicians with many years’ experience managing individual cancer patients and using HRQOL outcomes in cancer clinical trials. Our experts were blinded to the FACT-G scores because we wanted them to place a value on the significance of differences (as determined by the clinical characteristics and circumstances of the patients) rather than to describe the magnitude of differences. Further, our definitions of size classes explicitly address the relevance of HRQOL results to clinical decision making, thereby providing a direct link to Jaeschke et al’s widely cited definition of the minimum clinically important difference,Citation33 more recently modified by Norman et alCitation10 and Schünemann et al.Citation34 Rather than focusing on the MID, we accommodate the possibility that in some circumstances, the MID may be of a moderate absolute size while in others it may be relatively small.

The results presented in this article augment other interpretation guidelines for the FACT-G,Citation15,Citation16,Citation25 adding a substantial evidence base not previously considered for this purpose. We have thereby provided a comprehensive synthesis of anchor-based evidence for the 5 FACT-G scales, incorporating the collective understanding of 3 clinicians with many years’ experience managing individual cancer patients and using HRQOL outcomes in cancer clinical trials.

Acknowledgements

This research was funded by an educational grant from Astra-Zeneca. We are indebted to Liz Chinchen, the librarian at the Centre for Health Economics Research and Evaluation, University of Technology, Sydney, Australia, for developing and testing our electronic search strategy, for identifying and searching all relevant online bibliographic databases, for helping identify potential source papers, and for collecting them. We would also like to thank Julia Brown, Peter Fayers, Kim Hawkins, and Galina Velikova for helpful comments about the results.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Cella D, Bullinger M, Scott C, Barofsky I; Clinical Significance Consensus Meeting Group. Group versus individual approaches to understanding the clinical significance of difference or changes in quality of life. Mayo Clin Proc. 2002;77:384–392.
2. Guyatt G, Osoba D, Wu AW, Wyrwich KW, Norman GR; Clinical Significance Consensus Meeting Group. Methods to explain the clinical significance of health status measures. Mayo Clin Proc. 2002;77:371–383.
3. Osoba D, King MT. Interpreting quality of life in individuals and groups: meaningful differences. In: Fayers PM, Hays RD, editors. Assessing Quality of Life in Clinical Trials: Methods and Practice. 2nd ed. Oxford, UK: Oxford University Press; 2005:243–257.
4. Aaronson NK, Ahmedzai S, Bergman B, et al. The European Organization for Research and Treatment of Cancer QLQ-C30: a quality-of-life instrument for use in international clinical trials in oncology. J Natl Cancer Inst. 1993;85:365–376.
5. Cella DF, Tulsky DS, Gray G, et al. The Functional Assessment of Cancer Therapy scale: development and validation of the general measure. J Clin Oncol. 1993;11(3):570–579.
6. Lydick E, Epstein RS. Interpretation of quality of life changes. Qual Life Res. 1993;2(3):221–226.
7. King MT. The interpretation of scores from the EORTC quality of life questionnaire QLQ-C30. Qual Life Res. 1996;5(6):555–567.
8. King MT, Stockler MR, Cella DF, et al. Meta-analysis provides evidence-based effect sizes for a cancer-specific quality-of-life questionnaire, the FACT-G. J Clin Epidemiol. 2010;63(3):270–281.
9. Cohen J. Statistical Power Analysis for the Behavioural Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum; 1988.
10. Norman GR, Sloan JA, Wyrwich KW. Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care. 2003;41:582–592.
11. Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
12. Deeks J, Higgins J, Altman D. Analysing and presenting results. In: Higgins J, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions 4.2.6 [updated Sep 2006]. Chichester, UK: John Wiley & Sons; 2006.
13. Follman D, Elliott P, Suh I, Cutler J. Variance imputation for overviews of clinical trials with continuous response. J Clin Epidemiol. 1992;45(7):769–773.
14. Hedges LV, Olkin I. Statistical Methods for Meta-analysis. Orlando, FL: Academic Press; 1985.
15. Yost KJ, Eton DT. Combining distribution- and anchor-based approaches to determine minimally important differences: the FACIT experience. Eval Health Prof. 2005;28(2):172–191.
16. Cella D, Eton DT, Lai JS, Peterman AH, Merkel DE. Combining anchor and distribution-based methods to derive minimal clinically important differences on the Functional Assessment of Cancer Therapy (FACT) anemia and fatigue scales. J Pain Symptom Manage. 2002;24(6):547–561.
17. Auchter RM, Scholtens D, Adak S, Wagner H, Cella DF, Mehta MP. Quality of life assessment in advanced non-small-cell lung cancer patients undergoing an accelerated radiotherapy regimen: report of ECOG study 4593. Eastern Cooperative Oncology Group. Int J Radiat Oncol Biol Phys. 2001;50(5):1199–1206.
18. Esper P, Mo F, Chodak G, Sinner M, Cella D, Pienta KJ. Measuring quality of life in men with prostate cancer using the Functional Assessment of Cancer Therapy-Prostate instrument. Urology. 1997;50(6):920–928.
19. Watkins-Bruner D, Scott C, Lawton C. RTOG's first quality of life study, RTOG 90-20: a phase II trial of external beam radiation with etanidazole for locally advanced prostate cancer. Int J Radiat Oncol Biol Phys. 1995;33(4):901–906.
20. Schink JC, Weller E, Harris LS, et al. Outpatient taxol and carboplatin chemotherapy for suboptimally debulked epithelial carcinoma of the ovary results in improved quality of life: an Eastern Cooperative Oncology Group phase II study (E2E93). Cancer J. 2001;7(2):155–164.
21. Demetri GD, Kris M, Wade J, Degos L, Cella D. Quality of life benefit in chemotherapy patients treated with epoetin alfa is independent of disease response or tumor type: results from a prospective community oncology study. Procrit Study Group. J Clin Oncol. 1998;16(10):3412–3425.
22. Langer CJ, Manola J, Bernado P, et al. Cisplatin-based therapy for elderly patients with advanced non-small-cell lung cancer: implications for Eastern Cooperative Oncology Group 5592, a randomized trial. J Natl Cancer Inst. 2002;94(3):173–181.
23. Fallowfield L, Gagnon D, Zagari M, et al. Multivariate regression analyses of data from a randomised, double-blind, placebo-controlled study confirm quality of life benefit of epoetin alfa in patients receiving non-platinum chemotherapy. Br J Cancer. 2002;87(12):1341–1353.
24. Hahn EA, Glendenning GA, Sorensen MV, et al. Quality of life in patients with newly diagnosed chronic phase chronic myeloid leukemia on imatinib versus interferon alfa plus low-dose cytarabine: results from the IRIS study. J Clin Oncol. 2003;21(11):2138–2146.
25. Eton DT, Cella D, Yost KJ, et al. A combination of distribution- and anchor-based approaches determined minimally important differences (MIDs) for four endpoints in a breast cancer scale. J Clin Epidemiol. 2004;57(9):898–910.
26. McQuellon RP, Thaler HT, Cella D, Moore DH. Quality of life (QOL) outcomes from a randomized trial of cisplatin versus cisplatin plus paclitaxel in advanced cervical cancer: a Gynecologic Oncology Group study. Gynecol Oncol. 2006;101(2):296–304.
27. Schwartz CE, Bode R, Repucci N, Becker J, Sprangers MAG, Fayers PM. The clinical significance of adaptation to changing health: a meta-analysis of response shift. Qual Life Res. 2006;15(9):1533–1550.
28. Klee M, King M, Machin D, Hansen H. A clinical model for quality of life assessment in cancer patients receiving chemotherapy. Ann Oncol. 2000;11(1):23–30.
29. McCain NL, Zeller JM, Cella DF, Urbanski PA, Novak RM. The influence of stress management training in HIV disease. Nurs Res. 1996;45(4):246–253.
30. Arora NK, Gustafson DH, Hawkins RP, et al. Impact of surgery and chemotherapy on the quality of life of younger women with breast carcinoma: a prospective study. Cancer. 2001;92(5):1288–1298.
31. Gustafson DH, Hawkins R, Pingree S, et al. Effect of computer support on younger women with breast cancer. J Gen Intern Med. 2001;16(7):435–445.
32. Velikova G, Booth L, Smith AB, et al. Measuring quality of life in routine oncology practice improves communication and patient well-being: a randomized controlled trial. J Clin Oncol. 2004;22(4):714–724.
33. Jaeschke R, Singer J, Guyatt G. Measurement of health status: ascertaining the minimal clinically important difference. Control Clin Trials. 1989;10:407–415.
34. Schünemann HJ, Puhan M, Goldstein R, Jaeschke R, Guyatt GH. Measurement properties and interpretability of the Chronic Respiratory Disease Questionnaire (CRQ). COPD J Chron Obstruct Pulmon Dis. 2004;2(1):81–89.