Using ‘big data’ to validate claims made in the pharmaceutical approval process

Pages 1013-1019 | Accepted 02 Oct 2015, Published online: 07 Nov 2015

Abstract

Big Data in the healthcare setting refers to the storage, assimilation, and analysis of large quantities of information regarding patient care. These data can be collected and stored in a wide variety of ways, including electronic medical records collected at the patient bedside, or through medical records that are coded and passed to insurance companies for reimbursement. When these data are processed, it is possible to validate claims made as part of the regulatory review process regarding the anticipated performance of medications and devices. To analyze claims by manufacturers and others properly, claims need to be expressed in terms that are testable in a timeframe that is useful and meaningful to formulary committees. Claims for the comparative benefits and costs, including budget impact, of products and devices need to be expressed in measurable terms, ideally in the context of submission or validation protocols. Claims should be either consistent with accessible Big Data or able to support observational studies where Big Data identify target populations. Protocols should identify, in disaggregated terms, key variables that would lead to direct or proxy validation. Once these variables are identified, Big Data can be used to query massive quantities of data in the validation process. Research can be passive or active in nature: passive, where the data are collected retrospectively; active, where the researcher prospectively looks for indicators of co-morbid conditions, side-effects, or adverse events, testing these indicators to determine whether claims fall within the ranges set forth by the manufacturer. Additionally, Big Data can be used to assess the effectiveness of therapy through health insurance records, which could indicate, for example, that a disease or co-morbid condition has ceased to be treated.
Understanding the basic strengths and weaknesses of Big Data in the claim validation process provides a glimpse of the value that this research can provide to industry. Big Data can support a research agenda that focuses on the process of claims validation to support formulary submissions as well as inputs to ongoing disease area and therapeutic class reviews.

Introduction

In this supplement to JME the case has been argued that, to be accepted for patient care, a formulary committee or other healthcare decision-maker should require claims for new and existing products to be presented in a form that is capable of being verified in a timeframe that is meaningful to decision-makersCitation1. To reinforce this perspective, a companion paper puts forward the suggestion that formulary committees should require manufacturers to submit a protocol detailing how the proposed claims are to be verifiedCitation2. The purpose of this paper is to explore how Big Data may be utilized to support claims verification and how manufacturers can recognize both the strengths and the current limitations of Big Data in framing their evaluable claimsCitation3–5. At the same time, Big Data can play a key role in providing feedback where comparative claims are routinely scrutinized to meet quality standards. If claims made, including claims against comparator products, are not substantiated, formulary committees may drop these products or devices from coverage.

Big Data in the healthcare context refers to the routine collection, storage, assimilation, and analysis of patient-level data as they are collected in a real-time, live setting. Data collection can occur in a number of ways. Electronic Medical Records (EMRs), which exist in the hospital inpatient and outpatient settings, are a potential source of data, together with supplementary laboratory data capturing attributes such as physical testing results (X-ray, MRI, CT) as well as blood chemistry. These are regularly collected and compiled, then fed back to providers for use in patient care within hours, if not minutes, after they are collectedCitation6. Additionally, health insurance claims data are generated at the point of service by care providers in a wide variety of settings and are filtered up through the health insurance companies, where they are used for reimbursement of services provided to the patient. Extending the incorporation of Big Data to financial and purchasing data could also provide for the linking of health information with behavioral characteristics. For patients with disease, linked health data could support real-time assessments and advice regarding food purchases, travel patterns, and grocery store habits. Additionally, medication purchased over-the-counter could be monitored, and any conflicts with prescribed medication could be immediately relayed to the patient, physician, or pharmacist, regardless of where the purchase occurs. Of course, ethical issues with the use of these data abound.

While access to these various data sources can be readily achieved, care has to be taken to protect patient confidentiality and to ensure, if records are linked, that all potential identifiers are removed. Meeting strict confidentiality standards is essential to the successful interrogation and integration of Big Data, justifying to end users, advocacy groups and others that privacy is respected and, where required, the appropriate permissions are obtained.

Big Data also allow target patient populations to be tracked and monitored, and a wide variety of conditions and outcomes to be identifiedCitation7,Citation8. The universality of data collection with regard to diagnosis related groups (DRGs), reimbursement codes, DSM-V, ICD-9 (ICD-10), laboratory tests, and many others makes Big Data a valuable tool not only for healthcare administrators, but also for researchers from industry, academia, and the health insurance industryCitation9–11.

Of particular interest is the evaluation of claims made through the pharmaceutical submission and approval process. Big Data are a valuable and essential input to the validation of claims and to providing feedback to formulary committees and other decision-makers. Products can be monitored as part of disease area and therapeutic class reviews, not only for the cost-outcome and budget impact claims made by the manufacturer, but also to perform risk and safety studies at the population level. In addition, the National Institutes of Health, the CDC, and national cancer registries contain clinical data that, when linked to other sources of Big Data, could easily be used to assess recurrent or new disease in patients who were enrolled in pharmaceutical studies during the drug approval process (Phases I, II, and III). Given the need for pharmaceutical claim validation within a period of 2–3 years, these data could provide an invaluable source of information. This real-world testing outside of the approval process is invaluable not only to industry, but also to patients, physicians, and healthcare decision-makers concerned about delivering effective, quality care.

Advantages of Big Data

There are a number of advantages to using Big Data outside of the regulatory approval process and as a foundation for claims validation. First, Big Data provide a broader and more flexible observation window to evaluate outcomes. Big Data can provide information for longer periods of time for an individual patient than clinical trials, complementing the protocol-driven inputs to the product approval process. Tracking a newly approved product can quickly (depending on disease and use) generate feedback from thousands of patients (or individual data points when disease is tracked over time). Insurance claims data can be made ready for analysis within a few months, if not weeks, following adjudication for accuracy. This short-term availability allows ample time for the design and implementation of research to be completed in the first 2–3 years following drug approval. Additionally, pragmatic clinical trials can be conducted within this timeframe as a mix of claims data and prospective randomized research design, as a supplement to, or standalone research at, the Phase III level. Second, statistical testing provides increased power (1 − β) over the data monitoring carried out in post-approval trials at the Phase IV level. Third, Big Data are the closest possible approximation to the real-world setting. The accuracy of the claims is truly tested on the population of patients with the disease, and claims made regarding efficacy and complications, given the volume of data collected, can be quickly assessed. Fourth, Big Data provide samples across a tiered demographic of patients. Detailed analyses can be undertaken for a variety of sub-populations and other patient strata that are not possible with the limited data-sets that typically underpin the regulatory approval process.
At the same time, with access to a more diverse data source, Big Data can capture patients across disease states and associated comorbidities, by age and gender, by geographic region and location of care, and, for example, by adherence behavior. Finally, the ability of Big Data to capture competing products in disease or therapeutic areas provides a unique basis for evaluating the claims for competing products. The ability to monitor and report on competing claims for product impact is a key element, not only in initial formulary submissions, but also in ongoing disease area and therapeutic class reviews. Where a formulary committee asks, as part of a submission protocol, for a manufacturer to make claims against a competing product or products, rather than the claim resting on a model which may not be capable of generating evaluable claims for product superiority, access to Big Data allows manufacturers to couch their claims for comparative effectiveness in evaluable terms and report on these to a formulary committee.
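
The gain in statistical power from sheer sample volume (the second advantage above) can be sketched with a normal-approximation power calculation for comparing two event rates. This is an illustrative sketch only: the function name, effect size, and sample sizes are assumptions, not figures from the paper.

```python
from math import sqrt, erf

def power_two_proportions(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-sided z-test comparing two proportions,
    with n patients per arm, at the 5% significance level (z_alpha = 1.96)."""
    pbar = (p1 + p2) / 2
    se0 = sqrt(2 * pbar * (1 - pbar) / n)          # SE under the null
    se1 = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under the alternative
    z = (abs(p1 - p2) - z_alpha * se0) / se1
    return 0.5 * (1 + erf(z / sqrt(2)))            # Phi(z), the normal CDF

# A 1-point difference in event rates is hard to detect in a 500-patient
# trial arm but is detected almost surely with 50,000 patients per arm.
low = power_two_proportions(0.10, 0.11, n=500)      # low power
high = power_two_proportions(0.10, 0.11, n=50_000)  # power near 1
```

The same comparison that is underpowered at clinical-trial scale becomes decisive at claims-database scale, which is the point being made above.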

Linking claims to Big Data

To document properly the claims made in the medication and device formulary submission process, the standard definition of ‘validity’ must be applied. In measurement, validity refers to the ability of a tool to measure what it purports to measure. In the context of claims validation, the claim must be paired to Big Data in such a way that the data are shown (or known) to be an accurate indicator of the claim being made. As such, the variable that is identified is taken as an indicator of the claim being true or false. This is an essential part of any protocol that may be proposed by a manufacturer for claims validation. The protocol has to put claims in the context of the data that are available, demonstrating how a target population is to be defined, the data elements that are required to be identified, the time frame for the assessment, and the relevant outcome measures. The protocol should also detail the preferred techniques for evaluating claims and whether the end-points are surrogate markers for longer-term or chronic conditions. The potential for surrogate end-points reflects, of course, the fact that exposure to medications differs based on the disease type. The type of patient condition has a direct effect on the ability of Big Data to monitor such outcomes (see Table 1). Medications such as statins, anti-diabetics, antihypertensives, and many others intended for long-term or lifetime use will necessarily rely on surrogate markers. While Big Data will have high utility in the assessment of adverse events and other short-term comorbidities, the lifetime or longitudinal effect of these medications may not be known. On the other hand, acute medications such as antibiotics, steroids, and analgesics have a shorter-term use where side-effects may be tolerated and effectiveness can be easily measured through Big Data.

Table 1. Efficacy differences in the use of Big Data to measure efficacy claims.

For other pharmaceutical products, those intended for use in cancer, for example, there is a high likelihood of death when a drug is not used; therefore, co-morbid conditions observed in Big Data during treatment may not carry the same cautionary tone as the same co-morbid conditions observed with medications intended for lifetime exposure. As a result, the significance of a specific condition, co-morbidity, adverse event, or measure of effectiveness of therapy depends upon the tolerability afforded by the severity of disease. Definitions regarding the existence of a co-morbid condition must be taken in the context of the disease being treated, the duration of treatment, and the ability of Big Data to capture an outcome longitudinally.

Basic methods of claim validation using direct and proxy measures

The most common method for the validation of claims made through the pharmaceutical process is the application of health insurance data. In health insurance records, the occurrence of treatment codes that are part of the patient’s health profile makes up Big Data. By using health insurance records it is possible to build research databases that, when carefully constructed, offer a view into the claims made by the pharmaceutical industry about medications or devices. This research often involves either direct measures or proxy algorithms that are applied to the data.

Direct measures exist when the specificity of a claim is linked to a directly observable variable in Big Data. For example, if a claim is made that a product reduces a specific co-morbid condition, and treatment of that co-morbid condition would require additional care or treatment by a physician, then the existence of a medical claim submitted to insurance would be a positive direct measure of the condition occurring. It would then be possible to query the data source, pulling all patients with the treatment, and perform an analysis on all patients looking for the existence of the code for the co-morbid condition.
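
A minimal sketch of such a direct-measure query is given below. The record layout, field names, drug code, and condition code are all hypothetical, not a real claims schema.

```python
# Hypothetical claims records: each dict is one insurance claim line.
claims = [
    {"patient_id": 1, "code": "DRUG_X"},  # exposure to the product
    {"patient_id": 1, "code": "E11.9"},   # co-morbid condition code
    {"patient_id": 2, "code": "DRUG_X"},  # exposure, no condition claim
    {"patient_id": 3, "code": "E11.9"},   # condition without exposure
]

def comorbidity_rate(claims, drug_code, condition_code):
    """Share of exposed patients with a direct claim for the condition."""
    treated = {c["patient_id"] for c in claims if c["code"] == drug_code}
    with_condition = {c["patient_id"] for c in claims if c["code"] == condition_code}
    if not treated:
        return 0.0
    return len(treated & with_condition) / len(treated)

rate = comorbidity_rate(claims, "DRUG_X", "E11.9")  # 1 of 2 exposed patients
```

The resulting rate can then be compared against the range asserted in the manufacturer’s claim.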

Proxy measures of a claim are generally constructed algorithms and apply when a sequence of events observed in Big Data would rule in (or rule out) the occurrence of a specific claim. As a simplistic example, a toxicity or tolerability claim for chemotherapy could be validated in Big Data by looking at the scheduled sequence of chemotherapy within a particular timeframe. When Big Data confirm that the timeframe for treatment is maintained (perhaps every 4–5 days), then the toxicity or tolerance claim could be validated by proxy. Similarly, if the schedule is interrupted within the specified period, then assessment of the claim could again be based on the proxy measure.
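
The schedule-maintenance proxy described above might be sketched as follows. The infusion dates and the 4–5 day window are illustrative only; an actual algorithm would be specified in the submission protocol.

```python
from datetime import date

# Hypothetical infusion dates for one patient. Under the proxy rule, the
# tolerability claim is supported only if every gap between consecutive
# infusions stays within the scheduled window.
infusions = [date(2015, 3, 1), date(2015, 3, 5), date(2015, 3, 10)]

def schedule_maintained(dates, max_gap_days):
    """Proxy rule: schedule held if no inter-treatment gap exceeds max_gap_days."""
    ordered = sorted(dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    return all(g <= max_gap_days for g in gaps)

ok = schedule_maintained(infusions, max_gap_days=5)  # gaps of 4 and 5 days
```

An interrupted schedule (a gap longer than the window) would flip the proxy result and flag the claim for review.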

Bias in Big Data

Frequently in research, the emphasis is on developing the research question, then setting out to collect data specific to that question and building an analyzable data source. This situation is typically reversed when Big Data are used. Researchers may view the opportunity to use Big Data as a panacea, a means to conduct research using data that are tailor-made to solve a particular question or problem quickly. However, this is not always the case. Researchers need to be aware of the many ways in which Big Data may or may not be applicable to their particular question regarding validation of pharmaceutical claimsCitation12. Big Data are collected for specific purposes and may have biases inherent in the source of collectionCitation13. Data may not cover all individuals, may be viewed as a secondary data source, and may not represent the target population the researcher wishes to use to capture the analyzable data elements for a particular study. In addition, researchers need to be aware of the interpretation of ‘free-text’ data fields that may exist within the patient’s medical record. Physicians and other healthcare providers use various notations for similar illnesses or events. A strict strategy for determining an event should be flexible enough to include, alongside codes, chart entries that refer to identical or very similar conditions. A solution may be to adopt online electronic medical records with restrictions that require data entry through mechanisms such as drop-down boxes for standardized notation, preventing these errors from entering Big Data sources.

There are several ways in which bias within Big Data can influence pharmaceutical claim analysis. First, administrative claims and commercial databases, as well as the population of insured patients, are known to differ from US census data within geographic regionsCitation14. As a result, they would not capture the exact demographics of the diseased population, nor would they capture all patients in a particular geographic region or population stratum with the disease. Rather, they would reflect the patients who subscribe to the commercial plan and are treated within a region, or by a provider, that bills the plan in an accurate and reliable manner. For example, research has demonstrated that commercial databases, because they cover working individuals or families employed by companies that provide health insurance coverage, contain higher percentages of members (consumers) who are of employment age (22–65 years), have a higher education, and have a higher socio-economic status than others in a geographic region who are not insured, are younger (or older), are unemployed, or are otherwise not members of the ‘Big Data’ source. Within this bias category must also come the realization that not all patients within a Big Data source are an exclusive ‘capture’ of the population. Patients within a geographic region or demographic are represented by a variety of commercial and governmental insurance plans. Therefore, any assumption that a Big Data source captures all patients, or even a majority of patients, with a disease or using a particular drug is most likely not true and should be questioned.

Second, Big Data within the administrative claims context are data collected for one purpose and used for another, the very definition of ‘secondary data’. Administrative claims data are largely collected for reimbursement purposes: to pay healthcare professionals for services rendered. Researchers using these data need to understand that practitioners are savvy and select codes that, on the one hand, accurately describe the patient’s condition but, on the other hand, may reimburse them at the highest rate. As a result, the codes or descriptions that are or are not used in databases may be more reflective of costs or reimbursement than of disease or other indicators.

Both of these bias characteristics must be considered when the claims from products or devices are intended to be validated through the use of Big Data. There are statistical corrections and adjustments that can be applied to the data to correct for these biases. Researchers need to be aware of these situations and understand when they occur and how they might affect research regarding pharmaceutical claims. For example, researchers should look carefully at the claims that are being made for a product and determine if the claims can be validated by administrative claims Big Data. Pertinent questions are those regarding the use of the product, whether an insurance company carries the drug on the formulary, whether use of the drug is covered by the plan, or whether the patient with disease is covered. Similarly, Big Data would not currently capture other ‘out of pocket’ expenses made by the patient, such as over-the-counter medication, or use of other alternative therapies that are not a part of the medical claim.
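
One common form of the statistical adjustment mentioned above is post-stratification weighting against an external benchmark such as census data. The sketch below is illustrative only: the strata and population shares are hypothetical, and real adjustments would use many more strata and a documented benchmark.

```python
# Hypothetical age-band shares: the commercial plan over-represents
# working-age members relative to the census benchmark for the region.
plan_share   = {"22-65": 0.80, "other": 0.20}  # observed in the database
census_share = {"22-65": 0.60, "other": 0.40}  # external benchmark

def poststrat_weights(observed, target):
    """Weight each stratum so the weighted sample matches the benchmark."""
    return {stratum: target[stratum] / observed[stratum] for stratum in observed}

weights = poststrat_weights(plan_share, census_share)
# Working-age members are down-weighted; under-represented members are up-weighted.
```

Applying such weights to outcome estimates moves the database population toward the demographic profile the claim is meant to cover, though it cannot correct for patients absent from the source altogether.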

Resource utilization and direct medical costs

Big Data, in particular administrative claims, are potentially important for validating claims for resource utilization and, if the appropriate cost algorithm is applied, claims for direct medical costs. If claims are being proposed for cost savings associated with reduced resource utilization—physician visits, tests, hospitalizations, emergency room visits, specialist visits—the protocol for claims assessment should detail, at a disaggregated level, which resource units are the target for cost savings. Blanket claims for cost-effectiveness give little guidance to profiling and tracking resource utilization in assessing comparative claims. Indeed, where claims are made in incremental cost-effectiveness terms, formulary committees should be explicit in asking how such claims are to be translated to evaluable clinical impact claims and costs with existing data.

A protocol should also stipulate how resources used are to be valued. In fact, a protocol should stipulate both the potential savings in resources used, at a disaggregated level, and the notional costs that may be applied. Care should also be taken with potential biases where patterns of resource use reflect their relative cost.

Claims regarding treatments for conditions that are prevented are also made by the pharmaceutical industry. It is possible to use Big Data to quantify the financial differences between therapies shown to be equally effective. This analysis can be performed by examining captured resource utilization and comparing products for equivalence and cost. During the approval process, claims can be made regarding a variety of aspects of treatment. Fewer hospitalizations, less blood chemistry testing, and fewer co-morbidities are just a few of the claims commonly made. Each of these claims can be linked to the financial aspects of the product.

Resource utilization and cost analysis can be very specific in the Big Data environment. Costs can be tracked and allowable expense reimbursements can be determined. Costs of episodes of care can be computed by aggregating within the period of treatment. By comparing these costs within the same patients prior to their illness, or comparing costs to a similar cohort of patients without the illness, accurate estimates of the cost impact of disease can be made. Big Data also allow for comparisons of cost within stratifications of disease patients.
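
Aggregating claim lines into episode-of-care costs can be sketched as follows. The claim lines, field names, and episode assignments are hypothetical; in practice, episode construction rules would be specified in the submission protocol.

```python
# Hypothetical allowed amounts per claim line, keyed by patient and episode.
lines = [
    {"patient": "A", "episode": 1, "allowed": 120.0},
    {"patient": "A", "episode": 1, "allowed": 80.0},
    {"patient": "A", "episode": 2, "allowed": 200.0},
    {"patient": "B", "episode": 1, "allowed": 50.0},
]

def episode_costs(lines):
    """Aggregate allowed amounts within each (patient, episode) of care."""
    totals = {}
    for row in lines:
        key = (row["patient"], row["episode"])
        totals[key] = totals.get(key, 0.0) + row["allowed"]
    return totals

costs = episode_costs(lines)  # e.g. patient A's first episode totals 200.0
```

Totals computed this way can then be compared across products, across pre- and post-illness periods, or across disease strata, as described above.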

As well as claims for clinical effectiveness, resource utilization, and costs, a protocol may include claims for budget impact between comparator products. Claims for budget impact, which reflect, in large part, claims for resource utilization (including drug switching) are typically put forward on fixed time frames which may not correspond to the individual course of treatment. A submission protocol, therefore, should stipulate the time frame for generating resource utilization and cost impacts from Big Data.

Discussion

This paper has focused primarily on issues outside of financial analysis. Of course, the primary application of the data must remain ethically focused on aspects that directly improve the quality of care delivered to the patient. Identifying highly effective treatments that improve patient outcomes is a major goal of Big Data; finding the most cost-effective treatment is a further benefit. The key point, however, is that access to Big Data, in which the US is in an enviably unique position, provides a substantive framework for validating claims for pharmaceutical products and devices that are central to formulary submissions. Rather than relying on models or simulated claims that are justified in terms of the ‘apparent realism’ of the ideal world they describe, a more pragmatic and methodologically acceptable approach is possible, where claims can be validated or falsified with existing Big Data in a timeframe that is meaningful to formulary committees and other healthcare decision-makers. However, if Big Data are to be utilized efficiently, guidance must be given as to how the claims are to be evaluated. The need for guidance points to the role for a protocol which sets out the data requirements, the techniques, and the deliverables. This is, of course, no different from requests that are made to Big Data vendors for claims and hypothesis testing; it is simply formalized as part of the drug submission process.

The use of Big Data extends beyond the claims made by pharmaceutical manufacturers. Healthcare providers and insurance companies have a vested interest in using these data to assess the performance of single-therapy and combination treatments for effectiveness in their own patients and subscribers. Insurance companies, through comprehensive, detailed analysis of medications intended to treat identical cohorts and diseases, can determine which therapies work best. Performance guidelines can be established, giving providers and patients the best evidence available. The presence of evaluable claims, along with methods for their interpretation, meets the needs of a wide range of health professionals. A further benefit of Big Data includes uses that may not currently be understood or developed. The potential of Big Data in pharmacovigilance, epidemiology, economics, research methodology and biostatistics, as well as in academia, has yet to be fully realized.

As emphasized in this paper, as part of the formulary evaluation process, pharmaceutical products that do not show efficacy and do not meet claims standards set by the manufacturer should be considered as candidates for removal from the formulary (or at least re-contracting or re-positioning). An ongoing commitment to evaluating comparative claims as part of ongoing disease area and therapeutic class reviews should, therefore, be seen as an integral part of ongoing quality assessment, where such reporting is routine.

There are many opportunities for improvement within the Big Data environment. Currently, health insurance databases provide little information about patient hospitalizations. While these data will show that hospitalizations occurred, the data reported to the insurance company provide only general information such as dates of admission and discharge and some detail regarding diagnosis or procedure, but very little else. Also, laboratory data regarding blood testing and other outcomes from routinely administered tests are only available for the sub-set of patients who use major laboratories that can share their data. EMR data provide considerable detail regarding the specifics of inpatient treatments. However, EMR data generally do not provide specifics regarding outpatient care and other routinely scheduled non-hospital care. Once there is ready access to linked insurance, EMR, and testing data, together with hospital records, the opportunities for validation would be considerably enhancedCitation15. Data collected at this level could even be used to conduct prospective randomized clinical trials, replacing much of the work that is being done in the pharmaceutical industry todayCitation1.

Big Data can only look at claim-based records, not at testing that might occur in a physician’s office as part of routine or continued care. For example, standard and accepted testing techniques administered in a physician’s office, such as the Beck Depression Inventory, gait tests for multiple sclerosis, concussion testing using vestibular ocular motor screening (VOMS), or thousands of other tests for particular conditions, are unquestionably part of inpatient and outpatient care. However, unlike laboratory testing, the results of these tests, which track patient progress through the disease course, are never known without an actual chart review.

A further area for improvement, therefore, is the capture of patient reported outcomes. There are hundreds of patient reported outcome instruments that have been developed. Few of these are actually embedded in electronic medical records; none (as far as we are aware) are captured in administrative or hospital records. This creates a problem for claims that are presented in terms of patient reported outcomes. At the present time, the only basis for evaluating such claims would be for a manufacturer to commit to a prospective protocol design that replicated the choice of instruments utilized in RCTs. Unless the manufacturer could convince a formulary committee that there are surrogate measures captured in claims or laboratory data that mapped into patient reported outcome scores, the protocol would have to detail the design, for example, of a prospective observational study with feedback to a formulary committee in a meaningful timeframe. In designing the trial, there is the possibility of identifying target patients from administrative claims and utilizing these patients as study subjects. If an appropriate protocol cannot be agreed, then claims based on PROs may have to be put to one side. Unfortunately, the issue is made more complex by the fact that, as PRO outcomes are typically secondary end-points in trial design, they are usually under-powered. Claims based upon PRO secondary end-points may not be accepted by a formulary committee.

It is also important to note that, while it is possible to compute some statistics that support the use of medications in patients, such as the medication possession ratio (MPR), it is not possible to determine the actual use of, or compliance with, a medication in the patient population. Other medications, such as over-the-counter products or ‘nutraceuticals’, do not generate claims and, like visits to non-traditional medical providers, would not be trackable through Big Data sources.
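
The MPR itself is straightforward to compute from pharmacy fill records, which is precisely why it serves as a possession (not consumption) measure. A minimal sketch, with a hypothetical fill pattern:

```python
def medication_possession_ratio(days_supplied, period_days):
    """MPR: total days' supply dispensed divided by the observation period.
    Measures possession of medication, not actual consumption."""
    return sum(days_supplied) / period_days

# Hypothetical: four 30-day fills observed over a 180-day window.
mpr = medication_possession_ratio([30, 30, 30, 30], period_days=180)  # 120/180
```

A patient may refill on schedule yet never take the medication, which is the gap between possession and compliance noted above.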

Other limitations of Big Data sources include patients migrating from one health plan to another, often as a result of employment changes. Age-based patient migration from one health plan to Medicare, or to retirement health insurance plans maintained by different providers, impairs the ability of large data sources to be used as a longitudinal measure.

There are also losses of data from the traditional care setting to the hospital-based setting. Patients who may be ill and are easy to track in insurance-based Big Data often have their data flow interrupted as they move to a hospitalization where their inpatient data may be tracked by an EMR. In these situations there is a growing need to have EMR data fit and link in with more traditional large insurance data so that the continuum of care in the records can be maintained and researched.

Conclusions

If formulary committees, in their commitment to improving the management and quality of care, mandate protocols to support the assessment of testable comparative claims in target patient populations, Big Data have a central role to play in providing the framework for claims evaluation. Big Data can not only support protocols that accompany initial submissions for the formulary listing of pharmaceutical products and devices, but also support ongoing disease area and therapeutic class reviews over the product’s life cycle. Clearly, in making claims for products, manufacturers should be aware both of the strengths and limitations of Big Data, and of issues of confidentiality in identifying and reporting on drug utilization in target populations. The overwhelming advantage of Big Data is that it is a resource that allows formulary committees to set aside modeled or simulated claims that are either non-testable or impossible to validate in a timeframe that makes sense to formulary committees.

Transparency

Declaration of funding

This manuscript has not received any funding.

Declaration of financial/other relationships

None declared. JME peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

Acknowledgments

The authors thank Paul Langley for his valuable guidance and insights during the preparation of this manuscript. The authors also thank the reviewers for valuable comments in the construction of the final version of this paper.

References

  • Langley P. Validation of modeled pharmacoeconomic claims in formulary submissions. J Med Econ 2015; 18(12): 993-999
  • Schommer J, Carlson A, Rhee G. Validating pharmaceutical product claims: questions a formulary committee should ask. J Med Econ 2015; 18(12): 1000-1006
  • Joshi P. Analyzing Big Data tools and deployment platforms. Int J Multi Approach Studies 2015;2:45-56
  • Shah N, Tenenbaum J. The coming age of data-driven medicine: translational bioinformatics’ next frontier. JAMA 2012;19:2-4
  • Rothstein M. Ethical issues in Big Data health research. J Law Med Ethics 2015;43:425-9
  • Alyass A, Turcotte M, Meyre D. From big data analysis to personalized medicine for all: challenges and opportunities. BMC Med Genomics 2015;8:33
  • Sessler D. Big data and its contribution to peri-operative medicine. Anaesthesia 2014;69:100-4
  • Abbott R. Big Data and pharmacovigilance: using health information exchanges to revolutionize drug safety. Iowa Law Rev 2013;99:225-73
  • Hoffman S, Podgurski A. Big Bad Data: law, public health, and biomedical databases. J Law Med Ethics 2012;40:425-9
  • Beck A. Open access to large scale datasets is needed to translate knowledge of cancer heterogeneity into better patient outcomes. PLoS One 2015;12:21001794
  • Lister C, Davies M. Big Data of the people, for the people. Anaesthesia 2014;69:513-14
  • Massie A, Kuricka L, Segev D. Big Data in organ transplantation: registries and administrative claims. Am J Transplant 2014;14:1723-30
  • Kaplan R, Chambers D, Glasgow R. Big Data and large sample size: a cautionary note on the potential for bias. Clin Trans Sci J 2014;7:342-6
  • Wasser T, Wu J, Turnceli O. Applying weighting methodologies to a commercial database to project United States census demographic data. Am J Manag Care 2015;3(3):33-7
  • Kuiler E. From Big Data to knowledge: an ontological approach to Big Data analytics. Rev Policy Res 2014;31:311-18
