Invited Symposium

Assessing the Minimally Clinically Significant Difference: Scientific Considerations, Challenges and Solutions

Pages 57-62 | Published online: 24 Aug 2009

Abstract

The scientific considerations surrounding the estimation of a minimally clinically important difference (MCID) are myriad and challenging. There are a considerable number of hurdles to overcome. The good news is that there are solutions to virtually every one of these scientific hurdles. This paper sets out the issues, identifies the challenges, and offers solutions so that the state of the science may move forward. The ultimate outcome of the paper may not be a definitive answer for estimating the MCID in every situation, but it should provide a starting point and a process or set of guidelines that may be followed toward achieving this goal. The paper begins with a brief synthesis of the literature and the state of the science at the time of publication. The relationship between the process for determining MCIDs for other endpoints, such as tumor response or complete blood count (CBC) variables, versus toxicity and QOL-related variables is described. The ultimate lessons to be learned from this exercise are:

  1. There are many methods available for ascertaining an MCID. None are perfect, but all are useful.

  2. All methods converge to similar answers. Supplementary information may refine answers from one or more of the methods.

  3. Clinical opinion and patient subjective response should trump statistical theory.

  4. A process of MCID estimation that involves all approaches, producing a potential range subject to sensitivity analyses, is the optimal way to ground an MCID in the most complete knowledge possible.

Introduction

The science of quality of life (QOL) assessment has made great strides in recent years. Literally thousands of publications have appeared in the medical literature with a focus on QOL methods and applications. Since the mid-1980s, QOL research has gone from an interesting idea to entertain when a cure is not forthcoming [1], to a concept second only to survival [2], to a frustrating endeavor that would seem to have more methodological questions than answers. This has spawned supportive articles [3-5] as well as some questioning whether we should start all over again so we can make sense of it all [6].

The fact that there is healthy controversy and intellectual contrariness is an indication that the subject matter is of at least some practical importance to people outside the world of academic methodological research. In essence, the field of QOL assessment is somewhat a victim of its own success. Methodological researchers have made a convincing argument that QOL assessment is an important aspect of state-of-the-science, modern, multidisciplinary medical care [7,8]. Another aspect of modern medical care, namely efficiency and cost effectiveness, now holds the QOL assessment world to task to prove that the outcomes of its efforts have clinically meaningful implications. It is this vibrant and complex discourse that forms the subject of this manuscript.

Theoretically, the concept of “clinical significance” is a simple one. Basically, it states “this matters in the real world of clinical medicine.” Pragmatically, however, defining precisely what this means and how it can be implemented is not quite as simple. Does it mean significance to the patient, to the clinicians, or to both? The short answer is “yes.” Unfortunately, that answer immediately raises another question: how?

The “how” question has been a particular focus of recent literature [9-11]. As with most new scientific endeavors, the field has expanded, scattered somewhat, and lost a bit of its focus. The inherent danger in this meandering is that the consumers of the research can become frustrated with the vacillation and complicated answers that academic researchers ultimately provide. Those in the “real world” of modern medicine need focused, omnibus advice so they can make clinical decisions. They do not have the luxury of academic pontificating and inaction. They also need methods upon which they can rely and that will allow them to follow sound scientific criteria. They need not be “right” all the time, but they need to have sound scientific justification for their actions. The remainder of this manuscript is focused on attempting to provide such pragmatic yet scientific guidelines for this target audience. We will start with a brief review of the methods proposed to date.

MCID Methodology

Much scientific work has been done in attempting to develop methods and definitions for clinically important differences. Others have written extensively on the various methods available [12]. The genesis of the work arose out of a necessity to validate asthma questionnaires [13,14]. The methods used by Juniper et al. were sound methodological advances. Jaeschke [13] described a clinically significant effect as “the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient's management” [14]. While such definitions are helpful for academic purposes, they leave much to be desired for facilitating clinical applications. Other researchers followed this path [15].

There are two simple general approaches to finding a minimally clinically significant difference in clinical settings:

  1. Ask the people involved (anchor-based methods)

  2. Use mathematical criteria (distribution-based methods)

The work on the asthma questionnaires cited above began via the first approach: simply asking patients whether they thought their asthma had changed. For those who perceived a change, the average change in their asthma scores was then used to define a reasonable benchmark for a clinically meaningful difference, on the grounds that the patient perceived it. The second approach, using mathematical criteria, was proposed based on the collective knowledge gathered over hundreds of years regarding how numbers, such as asthma scores, behave in general. In brief, statisticians apply mathematical theory to define reasonable estimates of what constitutes a small, medium, and large difference.
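To make the contrast concrete, here is a minimal sketch in Python of both approaches applied to the same simulated data; the scores, the anchor question, and all numerical values are hypothetical illustrations, not the published analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0-100 QOL scores for 200 patients at two visits.
baseline = rng.normal(60, 20, 200).clip(0, 100)
follow_up = (baseline + rng.normal(5, 10, 200)).clip(0, 100)
change = follow_up - baseline

# Anchor-based: each patient also answers a global question such as
# "Do you feel your condition has changed?"  Here that answer is
# simulated as the true change plus noise (an assumption of this sketch).
perceived = change + rng.normal(0, 5, 200)
reported_change = perceived > 5

# MCID estimate 1: mean score change among those who perceived a change.
anchor_mcid = change[reported_change].mean()

# Distribution-based: benchmarks derived from score variability alone.
sd = baseline.std(ddof=1)
half_sd_mcid = 0.5 * sd                              # the 1/2 SD rule
small, medium, large = 0.2 * sd, 0.5 * sd, 0.8 * sd  # Cohen-style effect sizes

print(f"anchor-based MCID ~ {anchor_mcid:.1f} points")
print(f"1/2 SD MCID       ~ {half_sd_mcid:.1f} points")
print(f"small/medium/large ~ {small:.1f}/{medium:.1f}/{large:.1f} points")
```

In practice the two estimates often land in the same neighborhood, which is the convergence discussed later in this paper.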

As one might expect, there are strengths and weaknesses to both of the general approaches described above. The focus of the literature has been on problems such as ceiling effects, floor effects, distributional issues, order effects, tool reliability and validity, population versus sample concerns, group versus individual differences, etc. However, all of these issues apply to virtually every other measurement outcome in medical research. While academically appealing, they will ultimately need to be set aside through sheer pragmatism so that something may be accomplished. The following anecdote exemplifies the state of the science.

A Pragmatic Example for an MCID

The movie “Armageddon” provides an example of the motivation for an MCID and how the complex science of a situation can be translated into the definition of a clinically meaningful effect. In this movie, an asteroid is hurtling towards earth. The President of the United States calls together his scientific advisors. A scientist, somewhat over-caffeinated, puts up blurry images of an indistinct dark blob and states, “this is the anomaly at 16:55, this is the anomaly at 16:58, this is the anomaly at 17:00. Our best volumetric estimates are that it is approximately 96.7 billion cubic meters.” At this point the President interrupts and says, “Enough of this anomaly garbage. What is this thing? How big is it? How much damage are we talking? Others have hit earth before and they destroyed Paris and New York.” The hero of the movie, played by Billy Bob Thornton, interjects to translate the science into meaningful, interpretable results by saying, “It's an asteroid. It's the size of Texas. The others were the size of cars and basketballs. Damage? Total. Nothing would survive, not even bacteria.” So, while the message is not encouraging, it is clear. While the stakeholders in this situation may quibble about the precise size and nature of the problem, there is no doubt that it is “clinically” important. This is the sort of scientific pragmatism we seek.

Is Meaningful Minimal?

One may argue with the previous anecdote in that “total destruction” is somewhat greater than a “minimal” effect. This is a purposeful weakness of the example, used to highlight a key issue. The literature is inconsistent in what it calls an “important” difference. Many acronyms have been created to represent a minimally important difference (MID), minimally detectable difference (MDD), minimally perceptible difference (MPD), and so on. A minimally important difference is presumably meaningful, but a meaningful difference need not be minimal (try saying that five times quickly). For the purposes of this paper, it is proposed that two types of differences be differentiated:

  1. A clinically meaningful difference is one that holds some import for the clinical consumer of the material. It need not be the smallest difference that does so.

  2. A minimally important difference is the smallest possible difference that holds some import for the consumer of the material.

It is tempting to come up with an omnibus acronym that will encompass all types of MCIDs. One might posit MPD for “minimally practical difference” to represent a difference that has some pragmatic meaning. This would then encompass situations where one is concerned with the minimal difference, minimally perceptible difference, minimally important difference, clinically important difference, and even the originally-defined difference of Jaeschke that would cause a clinician to take action. While an MPD might be appealing, I hesitate to offer it as a solution because no doubt there are readers out there who will identify a weakness in the attempted word-smithing. Instead, I will continue to use the MCID acronym in this manuscript to represent a difference that holds some practical meaning to the appropriate stakeholders while not necessarily being, strictly speaking, a minimally important difference. My reasons for taking this approach follow directly.

A unified, pragmatic approach?

There is good news: all the approaches converge into a similar neighborhood, giving similar answers irrespective of the approach used. A difference in the neighborhood of ½ the standard deviation of the endpoint being assessed is generally a useful estimate of a clinically meaningful difference, in the absence of further situationally-specific knowledge.

I hasten to add that the previous statement is a general observation, not a law, theorem, edict, nor any other synonym that would suggest it is true in all circumstances. It is not intended to replace sober scientific judgement based on experience and some research into the particulars of the situation under consideration. The last clause, “in the absence of further situationally-specific knowledge,” almost goes without saying, as it applies to virtually any scientific observation.

The previous declaration arises from a viewpoint of trying to find a difference that is likely to be recognized as meaningful from a practical standpoint, all else being equal. The philosophical approach detailed by Sloan [16,17] indicates that the result derived from a desire to find an effect size that fit the following criteria:

  1. it was not so infinitesimally small as to be clinically meaningless (e.g., a worm)

  2. it was not so overwhelmingly large as to be of obvious clinical import (e.g., an elephant under the bed)

In essence, what was being sought was an effect somewhere between the two extremes that most stakeholders would recognize as clinically meaningful. In keeping with the analogy, a difference was being sought that would look like a duck, sound like a duck, and walk like a duck, and would therefore by consensus be recognized as a duck. Furthermore, the odds of the duck being a worm or an elephant in a clever disguise would be minute. The proposed duck turned out to be the ½ standard deviation (SD) estimate mentioned above.

This result was encouraging because, despite the myriad of approaches and situations, an apparent underlying message (or duck) appeared out of the fog of MCID estimation. Irrespective of the method used or the population to which it is applied, the ballpark of ½ SD seems to be reasonable. This is not to say that it will be exactly ½ SD in all situations, but it is a reasonable starting point. Special circumstances and further information may indicate the need for an adjustment from the ½ SD standard.

Feathering the Duck's Nest

The ½ SD method, or duck, as proposed by Sloan [17], was not intended to be a minimum. Indeed, it was intended to be conservative, and therefore one might expect it to be a lower bound for the minimum. However, subsequent work indicates that the ½ SD is often equivalent to the entity defined as minimally clinically important via the anchor-driven MCID methods. Hence, the ½ SD could well be used as a starting point for a minimally clinically significant difference. It can then be scrutinized by subsequent anchor-based and clinically based approaches to finding the minimally important difference, to see if the ½ SD might be modified for a given situation.

Some people have expressed concern about applying a simple approach to identify a clinically meaningful “duck” for application in clinical research [17]. As both Sloan [18] and Norman [19] point out, the ½ SD is supportable from theoretical, philosophical, psychological, statistical, and empirical perspectives. The essence of this endeavor was to detect the signal of a general guideline for clinical significance amid the noise generated by all these ancillary and complicating concerns. In other words, we need to embrace the challenge of clinical significance with a larger perspective so as to make it practically achievable. The so-called “duck” provides a simple foundation for the assessment of clinical significance. Reviews of the literature and of the various methods have confirmed what basic statistical theory suggested as a simple ballpark estimate for a clinically significant difference.

Situation-specific information may indeed modify the foundational duck from its half standard deviation definition. A sound analytical process, however, needs to be followed so that the modifications have as much statistical veracity as the original theory. In other words, quantifying such alterations must be done with more than just guesswork. Nonetheless, it is expected that even after all such sources of measurement error are incorporated, the result will not differ much from the original estimate in terms of order of magnitude. In essence, what appears will still be a buoyant waterfowl that quacks, although the type of duck may have altered slightly in plumage or coloring.

A challenge for even a global approach such as the one proposed here is to communicate effectively how the duck can be incorporated into clinical research and practice. It is insufficient to simply state that the effect is equivalent to a half standard deviation. Translating the clinical significance into simple statements expressed in values of the original QOL tool is vital to making the method accessible to our clinical colleagues. For example, stating that “a difference of 5 points on the symptom distress scale is a clinically meaningful difference” is more understandable to clinicians than referring to it as ½ SD.

Other authors have expressed appropriate concern when simple solutions to complex problems have been proposed [19]. No single statistical decision rule or procedure can take the place of a well-reasoned consideration of all aspects of the data by a group of concerned, competent, and experienced persons with a wide range of scientific backgrounds and points of view. Nonetheless, the simplicity of our approach and its sound mathematical backdrop provide a sufficient foundation for the method to be practical when no further information is available for a given tool, population, or clinical context.

In terms of describing the “duck,” we need to relate it to the experimental setting, if possible, so that we will recognize it if we see it. The simplest way to accomplish this is to define, a priori, a clinically significant effect based on clinical or expert opinion. For example, the number of items within a QOL tool that would need to demonstrate a change provides a readily usable context. To be more concrete, a 10-item QOL assessment tool with each item scored from 0–10 would produce a summated score from 0 to 100. Sloan has demonstrated that a difference of roughly 10 points on such a scale would be clinically meaningful [18]. This amounts to answering each question on the assessment tool one category differently, which would seem intuitively to be a reasonable estimate of a clinically meaningful change. As always, however, if the stakeholders in this example were to disagree with this ballpark estimate, it would be due to some supplementary knowledge of the situation that no general suggestion could or should supersede. A process of MCID estimation involving all approaches, producing a potential range subject to sensitivity analyses, is the optimal way to ground an MCID in the most complete knowledge possible. In many situations, however, using all approaches is likely to be prohibitively expensive in terms of time and resources.
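To make the arithmetic concrete, the following sketch works through the example; the ten-point figure comes from the text, while the scale SD of 20 is an assumption chosen to show when the two heuristics coincide.

```python
# A hypothetical 10-item tool, each item scored 0-10, summated to 0-100.
n_items, one_category = 10, 1
meaningful_change = n_items * one_category  # every item shifted one category = 10 points

# The 1/2 SD rule yields the same 10 points whenever the scale SD is 20,
# a plausible value for 0-100 QOL scales (an assumption of this sketch).
scale_sd = 20
half_sd = 0.5 * scale_sd
assert meaningful_change == half_sd  # the two heuristics agree here
```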

As has been discussed elsewhere, the ½ SD estimate was developed specifically for situations in which no other supplementary information is available. If further information is available, especially clinical judgment, then it should be used to modify the purely statistical estimate. Admittedly, the theory behind the ½ SD rule is statistically conservative. Nonetheless, Norman et al. [19], in reviewing a large number of studies, found that it was the average, not an upper bound, of the effects observed. It is fair to say, however, that to most people's minds a difference of ½ SD, whether between groups at one time or within a patient over time, is non-ignorable. In other words, if you see a ½ SD difference, it is, for most intents and purposes, likely to be clinically important. Of course, input from both the clinician and the patient should be brought to bear to assess the merit of any such estimate.

Comparison with Tumor Response

In defining our “duck” as a half standard deviation, we do nothing different from what has been done for other clinical endpoints. Tumor response, for example, has been classified as follows: if the bidimensional measurement of the tumor shrinks by 50%, and that amount of shrinkage persists for two consecutive clinical evaluations, then a partial response (PR) is said to have occurred. The 50% cutoff has no more inherent validity than 40% or 60%, but medical science has been able to come to a consensus regarding a definition of a clinically significant effect that is used in virtually every clinical endeavor in oncology. The basic reason for this accomplishment is the recognized need for a consistent and practical basis for the assessment of tumor response. This is not to say, however, that this measure of clinical significance has been set in stone. The definition has recently been revised to involve only a single tumor measurement and to allow for the complex situation where the patient may have multiple tumors at multiple sites of disease [20]. There is no reason that a similar consensus of an equally uncomplicated nature cannot be achieved for the definition of a clinically significant effect for QOL endpoints.
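As a sketch of the decision rule just described (illustrative only; the function name is hypothetical, and real criteria such as WHO or RECIST involve considerably more detail):

```python
def is_partial_response(baseline_size: float, sizes: list[float]) -> bool:
    """Partial response per the rule in the text: >= 50% shrinkage from
    baseline, sustained over two consecutive clinical evaluations."""
    meets_cutoff = [s <= 0.5 * baseline_size for s in sizes]
    # True if any two consecutive evaluations both meet the 50% cutoff.
    return any(a and b for a, b in zip(meets_cutoff, meets_cutoff[1:]))

print(is_partial_response(10.0, [6.0, 4.8, 4.5]))  # True: the last two qualify
```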

The key to this analogy is that virtually every clinical outcome has gone through the process that is presently underway for assessing the clinical significance of QOL measures. At some point, either through consensus or mandate, a clinically meaningful benchmark was established for such endpoints as tumor response, blood tests, lung function, and so on. This was done so that clinical pathways could ultimately be determined for each of these important outcomes. Similarly, QOL research has taken the first step towards such benchmarking by investigating the estimation of what a clinically meaningful change might be.

The ½ SD for COPD

What is evident from the discussions is that clinicians dealing with COPD seem to feel that ½ SD is more of an upper bound on clinical significance than an average. If clinicians are comfortable with this, then that is fine. There did not seem to be much consensus, however, as to how much smaller than ½ SD a better estimate of the MCID should be.

Basically, the determination of an MCID via modification of the ½ SD estimate boils down to balancing the implications of competing errors. If the bar is set too high, the natural concern is that a Type II error will occur; namely, we will declare a treatment non-efficacious when in reality it imparts a clinically meaningful effect. Alternatively, if the bar is set too low, a Type I error will occur; namely, we will declare a treatment efficacious when the clinical import of the treatment is not significant. Proponents of a particular assessment tool or treatment might be more concerned about the Type II error, because it suggests that their tool is insensitive or their treatment is not efficacious. Those who have the responsibility to scrutinize an assessment tool or treatment might be more concerned about a Type I error, because the implication is that a poor tool is credited with greater sensitivity than it really has, or that a treatment which imparts clinically meaningless effects is put into the marketplace. Furthermore, the smaller the MCID is set, the larger the sample size required in studies to detect a difference of that size.
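The sample-size consequence can be quantified with the standard normal-approximation formula for a two-arm comparison of means; the α, power, and SD values below are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(mcid: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm n for a two-sided, two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the Type I error rate
    z_beta = norm.ppf(power)            # critical value for the Type II error rate
    d = mcid / sd                       # standardized effect size
    return int(np.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2))

# Halving the MCID roughly quadruples the required sample size.
for mcid in (10, 7.5, 5):                # points on a 0-100 scale with SD = 20
    print(mcid, n_per_arm(mcid, sd=20))  # -> 63, 112, 252 per arm
```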

If Clinicians Could Agree, We Would Be Done

Clinicians appropriately express concern over accepting an MCID estimate that goes against their clinical experience. This reaction to the various approaches proposed is a bit of a double standard. On one hand, clinicians want the methodologists to come up with an MCID estimate. On the other hand, they worry that a mindless application of an arbitrary MCID estimate will do as much harm as good.

Ultimately, when faced with similar decisions in the past, clinicians came to a consensus on MCID estimates for other endpoints. The classic example here is the determination that a 50% reduction in the size of a tumor be classified as clinically meaningful and, therefore, a tumor response. The genesis of this benchmark was not steeped in psychometrics, nor was it achieved through multiple anchors or any of the more complicated methodology that has been applied to determining an MCID for QOL. Part of the reason for this might be that tumor response lay more centrally within the area of clinical expertise, so clinicians felt comfortable commenting on how much of a reduction would be clinically significant.

Interestingly enough, no patient input was sought on whether a 50% reduction in tumor size was clinically important. Neither was it demonstrated that a tumor reduction of 50% was related to survival. Indeed, the correlation between tumor response and survival has long been demonstrated to be tenuous; tumor response is generally recognized as a useful indicator but not a consistent one.

The determination of the 50% reduction in tumor size was made largely through the need for a definition [21]. There is an inherent measurement error and misclassification rate in tumor response, just as there is in QOL. The difference is that this error is readily accepted in tumor response, whereas in QOL it generates more uncertainty. The point is that the tumor response benchmark was accepted over time and is in general use today. If clinicians could do the same for MCID estimates of appropriate outcomes in COPD, we would be done. Whether the political climate is conducive to this consensus approach, and whether clinicians would summon the political will to make such a determination, is open to debate.

A Final Solution?

While a simple convergence to a single method is an appealing goal, it might ultimately prove too simplistic for some situations. There is an obvious solution, however, to resolve the differences among the various alternative methodological approaches to constructing an MCID estimate. A three-step process, sketched in code after the list below, might be applied to arrive at a consensus MCID:

  1. Start with the statistical methods to derive an initial estimate such as the ½ SD

  2. Explore the application of this rule via the use of multiple anchors to validate or modify this initial estimate

  3. Obtain feedback from clinicians and patients on the revised estimate from the previous step to refine the MCID even further.
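One way to read the three steps is as an iterative loop, sketched below. Every function name here is hypothetical; the sketch encodes only the workflow described above, not any validated procedure.

```python
import numpy as np

def estimate_mcid(scores, anchor_check, stakeholder_review, tol=1.0, max_rounds=5):
    """Schematic of the three-step consensus process described above."""
    mcid = 0.5 * np.std(scores, ddof=1)         # step 1: distribution-based start
    for _ in range(max_rounds):
        anchored = anchor_check(mcid)           # step 2: validate/modify via anchors
        revised = stakeholder_review(anchored)  # step 3: clinician and patient feedback
        if abs(revised - mcid) < tol:           # convergence on a consensus value
            return revised
        mcid = revised                          # otherwise repeat the feedback loop
    return mcid
```

Reporting the intermediate values as a range, rather than the final point alone, matches the interval-estimate suggestion in the next paragraph.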

One might apply a range or interval estimate rather than a singular point estimate for the MCID. Ultimately, however, we will find greater consensus among the estimates than stark disagreement. Indeed, if discrepancies are observed, they are more likely due to differences in population or definitional parameters than to the estimates themselves. This feedback loop could be repeated if convergence is not observed. The likelihood is that the differences seen here will be similar to what is seen when alternative statistical procedures are used on the same dataset. While differences might be observed in the test statistics for, say, a t-test versus a Wilcoxon procedure, for the vast majority of situations the answer will be the same. Indeed, if differences are observed, that tells you more about a lack of stability in the experimental environment than about a weakness in the technique.
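The t-test/Wilcoxon point is easy to demonstrate. In the simulated comparison below (hypothetical data), the two procedures produce different test statistics but the same substantive answer.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
arm_a = rng.normal(60, 20, 80)  # hypothetical QOL scores, control arm
arm_b = rng.normal(70, 20, 80)  # treatment arm, shifted by 1/2 SD

t_res = ttest_ind(arm_a, arm_b)
w_res = mannwhitneyu(arm_a, arm_b)
# Different statistics, same conclusion: the arms differ meaningfully.
print(f"t-test p = {t_res.pvalue:.2g}, Wilcoxon p = {w_res.pvalue:.2g}")
```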

Summary

This is an exciting time to work in QOL assessment, especially in applied areas such as COPD. Clinicians are interested in QOL and are asking hard, scientific questions that need to be asked. Recent work, and events that spawned these manuscripts, are encouraging signs that the science is moving forward. It is doubtful that a perfect definition for an MCID will ever be found. All that is really needed, however, is to find a practical definition that is supported by sufficient consensus. Given the amount of attention that MCID estimation has seen of late, I am optimistic that such a goal is within sight. No doubt the conversation will continue.

REFERENCES

  1. Tannock IF. Treating the patient, not just the cancer. N Engl J Med 1987; 317:1534–1535.
  2. American Society of Clinical Oncology. Outcomes of cancer treatment for technology assessment and cancer treatment guidelines. J Clin Oncol 1996; 14:671–679.
  3. Chassany O, Sagnier P, Marquis P, Fullerton S, Aaronson N. Patient-reported outcomes: the example of health-related quality of life: a European guidance document for the improved integration of health-related quality of life assessment in the drug regulatory process. Drug Inf J 2002; 36:209–238.
  4. Sloan J, Symonds T. Health-related quality of life measurement in clinical trials: when does a statistically significant change become clinically relevant? Drug Inf J 2003; 37:23–31.
  5. Patrick DL, Chiang YP. Measurement of health outcomes in treatment effectiveness evaluations: conceptual and methodological challenges. Med Care 2000; 38(9 Suppl):II14–II25.
  6. Leplege A, Hunt S. The problem of quality of life in medicine. JAMA 1997; 278(1):47–50.
  7. Cella DF. Quality of life outcomes: measurement and validation. Oncology 1996; 10(11 Suppl):233–246.
  8. Frost MH, Sloan JA. Quality of life measurements: a soft outcome, or is it? Am J Manag Care 2002; 8(18):S574–S579.
  9. Samsa G, Edelman D, Rothman ML, Williams GR, Lipscomb J, Matchar D. Determining clinically important differences in health status measures. Pharmacoeconomics 1999; 15(2):141–155.
  10. Wyrwich KW. Minimal important difference thresholds and the standard error of measurement: is there a connection? J Biopharm Stat 2004; 14(1):97–110.
  11. Redelmeier DA, Guyatt GH, Goldstein RS. Assessing the minimal important difference in symptoms: a comparison of two techniques. J Clin Epidemiol 1996; 49:1215–1219.
  12. Guyatt GH, Osoba D, Wu AW, Wyrwich KW, Norman GR; Clinical Significance Consensus Meeting Group. Methods to explain the clinical significance of health status measures. Mayo Clin Proc 2002; 77:371–383.
  13. Jaeschke R, Singer J, Guyatt GH. Measurement of health status: ascertaining the minimal clinically important difference. Control Clin Trials 1989; 10:407–415.
  14. Juniper EF, Guyatt GH, Willan A, Griffith LE. Determining a minimal important change in a disease-specific quality of life questionnaire. J Clin Epidemiol 1994; 47(1):81–87.
  15. Guyatt G, Juniper EF, Walter SD, Griffith LE, Goldstein RS. Interpreting treatment effects in randomised trials. Br Med J 1998; 316:690–693.
  16. Sloan JA, Cella D, Frost M, Guyatt GH, Sprangers M, Symonds T; Clinical Significance Consensus Meeting Group. Assessing clinical significance in measuring oncology patient quality of life: introduction to the symposium, content overview, and definition of terms. Mayo Clin Proc 2002; 77:367–370.
  17. Farivar SS, Liu H, Hays RD. Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores. Expert Rev Pharmacoecon Outcomes Res 2004; 4(5):521–529.
  18. Sloan JA, Vargas-Chanes D, Kamath CC, Sargent DJ, Novotny PJ, Atherton P, Allmer C, Fridley BL, Frost MH, Loprinzi CL. Detecting worms, ducks and elephants: a simple approach for defining clinically relevant effects in quality-of-life measures. J Cancer Integr Med 2003; 1(1):41–47.
  19. Norman GR, Sloan JA, Wyrwich KW. Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care 2003; 41(5):582–592.
  20. Therasse P, Arbuck SG, Eisenhauer EA, Wanders J, Kaplan RS, Rubinstein L, Verweij J, Van Glabbeke M, van Oosterom AT, Christian MC, Gwyther SG. New guidelines to evaluate the response to treatment in solid tumors. J Natl Cancer Inst 2000; 92(3):205–216.
  21. Moertel CG, Hanley JA. The effect of measuring error on the results of therapeutic trials in advanced cancer. Cancer 1976; 38(1):388–394.
