
An “Obama effect” on the GRE General Test?

Pages 11-18 | Received 12 May 2013, Accepted 19 Dec 2013, Published online: 24 Jan 2014

Abstract

Previous research assessing Obama's effectiveness as a role model in alleviating the effects of stereotype threat on Black Americans' test performance yielded provocative though conflicting results. A field study with research participants observed that Black–White mean differences were not detectable at points in his 2008 presidential campaign when he clearly succeeded (his nomination and election), although they persisted at other points. But a laboratory experiment found that prompts to think positively about Obama had no effect. The present study extended this research to actual test-takers and an operational test (GRE General Test). Black–White mean differences just after the election in November 2008 were substantial and comparable to the differences observed earlier, in November 2006; the level of Black test-takers' performance was also unchanged.

Obama's 2008 presidential campaign provided a natural experiment for assessing the effects of a successful Black American role model on the attitudes and behavior of both Black and White Americans, occasioning a number of studies. Coincidentally, two of these investigations concerned stereotype threat on standardized tests (Aronson, Jannone, McGlone, & Johnson-Campbell, 2009; Marx, Ko, & Friedman, 2009). Stereotype threat is a concern about fulfilling a negative stereotype regarding the ability of one's group when placed in a situation where this ability is being evaluated. The result of this concern is that performance on the ability assessment is adversely affected (see Steele, Spencer, & Aronson, 2002). These effects have been documented in numerous investigations using a variety of manipulations of stereotype threat, a wide range of populations of research participants, and a number of different tests and other measures of ability and achievement (see Inzlicht & Schmader, 2012).

The Marx et al. and Aronson et al. studies were motivated by research which established that a positive role model can alleviate the effects of stereotype threat on test performance (Marx & Goff, 2005; Marx & Roman, 2002; McIntyre, Paulson, & Lord, 2003; McIntyre et al., 2005). Marx et al., in a field study, administered a test composed of verbal ability items from the GRE General Test to nationwide samples of Black and White adults at several points during the presidential campaign. Stereotype threat was heightened by telling participants that the test was diagnostic of their ability and asking them about their ethnicity (Steele & Aronson, 1995). The mean difference on the test between the ethnic groups (controlling for age, English proficiency, and educational level) was large and statistically significant right before the Democratic convention in August, just after it for those who said that they did not see Obama's acceptance speech, and again in October, with White participants outperforming Blacks at each point. But the mean differences were much smaller and not significant right after the convention for those who said that they watched the speech and immediately after the election in November. Marx et al. suggest that the absence of a significant mean difference at both points, an "Obama effect," is due to the Black participants' awareness of his success.

The Aronson et al. study was a laboratory experiment conducted in June and July, after the Democratic primaries, when Obama had been designated the party's nominee, but before the convention. Undergraduates were either prompted to think positively about Obama, McCain, or “an American politician,” or, in a control condition, not prompted at all. They then took a test made up of verbal ability items from the Medical College Admission Test, described as diagnostic of their ability, in order to elevate stereotype threat. The prompting did not significantly affect the test performance of either the Black or White participants. The Black–White mean difference on the test did not diverge across conditions, and the Black participants' means were also similar across conditions. Hence, an Obama effect was not evident.

This research is of special interest because of the Marx et al. study's dramatic effects outside of the laboratory with members of the general public, the implication that Obama's presidential campaign buffered stereotype threat and shrank the pervasive Black–White gap in test scores (see Jencks & Phillips, 1998) to the point that it was undetectable, and the seeming conflict with the negative findings in the Aronson et al. experiment. The availability of archival data for the GRE General Test for the period of the presidential campaign and other periods (the test is administered daily) provides the means to extend this line of research from studies of research participants taking ad hoc tests to an investigation of large samples of actual test-takers and an operational test: applicants to graduate school taking an admission test. Many students take the test: 504,391 in 2006–2007 (Educational Testing Service, 2008). Substantial differences exist in the mean performance of Black and White test-takers: for American citizens taking the test that year, the mean Verbal, Quantitative, and Analytical Writing scores were, respectively, 395, 419, and 3.6 for Black test-takers and 493, 562, and 4.3 for White test-takers (d = −.93, −1.06, and −.88, respectively).[1]

Accordingly, the purpose of this study was to evaluate the performance of Black and White test-takers on the test's Verbal section (the parent of the ad hoc test used in the Marx et al. study) and Quantitative section at two time points: right after Obama's 2008 election, when his success was salient (a time period used in the Marx et al. study), and at an earlier point, when he was relatively unknown.[2]

Methods

Sample

The initial sample consisted of all Black and White American citizens taking the GRE General Test in testing centers in the USA in two 4-day time periods: 5–7 and 10 November 2008[3] (4 November was Election Day) and 6–9 November 2006, the equivalent days that year.[4]

The November 2006 period was chosen for two reasons. First, Obama was not well known at this time. A November 2006 poll of adults about opinions of people in the news, by CNN (2006), found that 41% of college graduates had either never heard of Obama or had no opinion about him. The corresponding figure was 3% for a subsequent poll in November 2008 by USA Today (2008). (2006 was the first year such polling was done; breakdowns by ethnicity are unavailable.)

Second, it controlled for consistent and substantial variation in test-takers' level of performance on the GRE General Test during the year, resulting from self-selection as they chose when to take the test and apply to graduate school. In 2007, for instance, the mean Verbal score for American citizens for eight 5-day periods scattered through the year ranged from 367 in April to 411 in December (d = .46) for Black test-takers, and from 469 in February to 508 in December (d = .36) for White test-takers.

After eliminating 221 test-takers in 2006 and 291 in 2008 who reported that they were disabled,[5] there were 431 Black and 4828 White test-takers in the 2006 period, and 520 Black and 5248 White test-takers in the 2008 period.

Variables

ETS files were the source of the GRE General Test scores (Verbal, Quantitative, and Analytical Writing) and background variables. The background variables, primarily used to select the sample and as covariates, were derived from a registration form filled out when applying for the test and a Background Information questionnaire completed at the test administration. The covariates are associated with performance on the test (Hartle, Baratz, & Clark, 1983; Stricker & Rock, 1995; Wilson, 1982), are unaffected by stereotype threat, and precede Obama's presidential campaign. The 11 covariates are age, English proficiency,[6] and educational level (the Marx et al. study's covariates), plus gender, father's education, mother's education, geographical region of residence, attended a selective college ("most competitive" classification; Barron's Profiles of American Colleges, 2007 (2006)), attended a research university ("research universities—very high research activity" classification; The Carnegie Foundation for the Advancement of Teaching, n.d.), college major, and recency of college graduation. (Several covariates were sets of dummy variables: educational level, geographical region, and college major.)

Analysis

Missing data for the test scores and covariates—1% or less of the total sample for all variables except college major (2%), attended a selective college (22%), and attended a research university (22%)—were imputed (Graham, 2009; Schafer & Graham, 2002). An iterative, Markov Chain Monte Carlo, multiple-imputation method used the pooled 2006 and 2008 data for the covariates and test scores, plus 2006–2008 time period, ethnicity, college GPA, and a composite of attended a selective college, attended a research university, and attended an Historically Black College and University (HBCU; White House Initiative on Historically Black Colleges and Universities, 2011).[7] A single imputation for each missing value was drawn from the imputed data sets.
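To make the imputation step concrete, the following is a minimal sketch of an iterative imputation of the same general kind, using scikit-learn's IterativeImputer with posterior sampling as a stand-in for the Markov Chain Monte Carlo procedure actually used. The file name and column names are hypothetical illustrations, not the ETS variables.

```python
# Hypothetical sketch of iterative imputation (not the exact ETS/MCMC procedure).
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the estimator)
from sklearn.impute import IterativeImputer

# Pooled 2006 and 2008 records; the file and column names below are illustrative only.
df = pd.read_csv("gre_pooled.csv")
impute_cols = ["verbal", "quant", "writing", "age", "educ_level", "college_major_code",
               "selective_college", "research_univ", "period_2008", "black", "college_gpa"]

# sample_posterior=True makes each run a random draw, so repeating with different
# random_state values yields multiple imputations; here a single draw is kept,
# mirroring the single imputation per missing value described above.
imputer = IterativeImputer(sample_posterior=True, max_iter=20, random_state=0)
df[impute_cols] = imputer.fit_transform(df[impute_cols])
```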

The main analysis was carried out in two stages. First, Black and White test-takers were matched within each period using propensity scores (Pruzek, 2011; Schafer & Kang, 2008) to deal with the potential lack of overlap in covariates between the two ethnic groups and the substantial difference in the size of the groups (Schafer & Kang, 2008). Propensity score methods enable Black and White test-takers to be matched simultaneously on the 11 covariates. For each period, the propensity scores, here indexing the probability of being a Black test-taker, were calculated from a logistic regression analysis of the covariates against the Black–White dichotomy. (In sets of dummy variables, one variable was dropped to avoid collinearity.) One-to-one matching on the propensity score was then done, selecting the White test-taker with the closest score to each Black test-taker. The result was matched samples of (a) all 431 Black test-takers in the 2006 period and 431 matching White test-takers in that period, and (b) all 520 Black test-takers in the 2008 period and 520 matching White test-takers in that period. In both pairs of matched samples, none of the mean differences in the covariates between the Black and White test-takers was significant (p > .05), and the corresponding R² was .00 (p = 1.00) for the logistic regression of the covariates against the Black–White dichotomy.
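As an illustration of this two-step matching, the sketch below estimates propensity scores with a logistic regression and then performs greedy one-to-one nearest-neighbor matching without replacement within a single testing period. It is a generic approximation under assumed column names, not the authors' procedure.

```python
# Hypothetical sketch of propensity-score estimation and 1:1 nearest-neighbor matching.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_period(period_df, covariate_cols):
    """Match each Black test-taker to the White test-taker with the closest propensity score."""
    X = period_df[covariate_cols].to_numpy()
    y = period_df["black"].to_numpy()            # 1 = Black, 0 = White (assumed coding)

    # Propensity score: estimated probability of being a Black test-taker given the covariates.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    period_df = period_df.assign(pscore=model.predict_proba(X)[:, 1])

    black = period_df[period_df["black"] == 1]
    white = period_df[period_df["black"] == 0].copy()

    matched_white_idx = []
    for _, row in black.iterrows():
        distances = (white["pscore"] - row["pscore"]).abs()
        best = distances.idxmin()                # closest remaining White test-taker
        matched_white_idx.append(best)
        white = white.drop(index=best)           # matching without replacement
    return pd.concat([black, period_df.loc[matched_white_idx]])

# Usage (illustrative): build matched samples separately for each testing period.
# matched_2006 = match_one_period(df[df["period"] == 2006], covariate_cols)
# matched_2008 = match_one_period(df[df["period"] == 2008], covariate_cols)
```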

Second, analysis of covariance was used to adjust for differences between the test-takers in the matched samples for the 2006 and 2008 periods, using the same covariates. A 2 (ethnicity) × 2 (period) factorial, multivariate analysis of covariance of the set of GRE General Test scores was carried out, supplemented by separate analyses of covariance of each score.
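For concreteness, here is a minimal sketch of the univariate piece of this design, a 2 (ethnicity) × 2 (period) ANCOVA fitted as an ordinary least squares model in statsmodels. The data frame and column names are illustrative assumptions, and only a few of the 11 covariates are written out.

```python
# Hypothetical sketch of a 2 (ethnicity) x 2 (period) ANCOVA on one GRE score.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Stacked matched samples from both periods; the file and columns are illustrative.
matched = pd.read_csv("matched_samples.csv")

# Verbal score modeled from ethnicity, period, their interaction, and covariates
# (sum-to-zero contrasts so that the Type III tests are interpretable).
model = smf.ols(
    "verbal ~ C(ethnicity, Sum) * C(period, Sum) + age + english_proficiency + C(educ_level, Sum)",
    data=matched,
).fit()

# Type III sums of squares give the factorial main effects and the interaction,
# each adjusted for the covariates.
print(sm.stats.anova_lm(model, typ=3))
```

The multivariate analysis can be approximated in the same library with the MANOVA class (statsmodels.multivariate.manova.MANOVA), placing the two scores on the left-hand side of the formula.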

Results

The GRE General Test's Verbal score means (covariance adjusted) in the 2006 period were 407.46 for Black test-takers and 494.19 for White test-takers; the corresponding means in the 2008 period were 414.04 and 491.00 (the pooled standard deviation was 92.27). The test's Quantitative score means in the 2006 period were 441.23 for Black test-takers and 547.08 for White test-takers; the means in the 2008 period were 448.24 and 546.77 (the standard deviation was 116.22). The two scores correlated .59 (df = 1900, p < .01).
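For scale, these adjusted means and pooled standard deviations imply standardized Black–White differences in the 2008 period of about d = −.83 (Verbal) and d = −.85 (Quantitative), the with-controls values reported in Note 8. A quick check of that arithmetic:

```python
# Standardized mean difference (Cohen's d) from the adjusted 2008 means and pooled SDs above.
def cohens_d(mean_black, mean_white, pooled_sd):
    return (mean_black - mean_white) / pooled_sd

print(round(cohens_d(414.04, 491.00, 92.27), 2))   # Verbal: -0.83
print(round(cohens_d(448.24, 546.77, 116.22), 2))  # Quantitative: -0.85
```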

In the MANCOVA of the set of scores, the main effect for ethnicity was statistically significant (F(3, 1876) = 175.41, p < .01, ηp² = .22), as was the main effect for period (F(3, 1876) = 28.90, p < .01, ηp² = .04), but the interaction between ethnicity and period was not (F(3, 1876) = 1.23, p = .30, ηp² = .00). In an analysis of simple effects, the means of the set of scores for the 2006 and 2008 periods were not significantly different for either Black or White test-takers (F(2, 1877) = .67, p = .51, ηp² = .00 for Black test-takers; F = .18, p = .84, ηp² = .00 for White test-takers).

The ANCOVA results were similar for the two scores. In common with the MANCOVA findings, the main effect for ethnicity was significant (F(1, 1878) = 369.56, p < .01, ηp² = .16 for Verbal; F = 363.16, p < .01, ηp² = .16 for Quantitative), and the interaction between ethnicity and period was not significant (F(1, 1878) = 1.31, p = .25, ηp² = .00 for Verbal; F = .46, p = .50, ηp² = .00 for Quantitative). But, unlike the MANCOVA, the main effect for period was not significant (F(1, 1878) = .15, p = .69, ηp² = .00 for Verbal; F = .38, p = .54, ηp² = .00 for Quantitative).

Consistent with the MANCOVA simple-effects analysis, the mean scores for the 2006 and 2008 periods were not significantly different for either Black or White test-takers (Verbal: F(1, 1878) = 1.18, p = .28, ηp² = .00 for Black test-takers, and F = .28, p = .60, ηp² = .00 for White test-takers; Quantitative: F = .85, p = .36, ηp² = .00 for Black test-takers, and F = .00, p = .97, ηp² = .00 for White test-takers).

Discussion

The substantial Black–White gap in the GRE General Test's Verbal and Quantitative scores before Obama's 2008 election persisted after it, and the level of Black test-takers' performance was also unchanged.[8] Neither result accords with an Obama effect.

The precise reason for the negative results in this study cannot be pinpointed, given the host of factors that set it apart from the previous studies (i.e., population of participants, test, test administration setting, experimental design, controls, and analytical methods). Perhaps the most likely explanation is the combination of participants and test: graduate-school applicants taking an admission test. The GRE General Test is apt to elicit more stereotype threat than an ad hoc test taken by research participants, even if the ad hoc test's instructions are designed to produce stereotype threat, as was done in the Aronson et al. (2009) and Marx et al. (2009) studies. But this burden may be offset for GRE test-takers by differences in their motivation. These test-takers are ego involved in doing well on the test and hence being admitted to graduate school, whether or not they identify with the verbal, quantitative, and writing domains represented by the test (Aronson et al., 1999; Ryan & Sackett, 2013). Identification with the domain being evaluated is an important component of stereotype-threat theory and a robust moderator of the effect of stereotype threat in low-stakes laboratory experiments (see Steele et al., 2002). Domain identification can also play a part in high-stakes testing situations, but so can ego involvement in the outcome. As Sackett and Ryan (2012) point out, this ego involvement may override any adverse effects of stereotype threat experienced in taking high-stakes tests, an explanation offered for the negative results obtained in the few extant studies of stereotype threat with operational tests (see Aronson & Dee, 2012; Sackett & Ryan, 2012).

Another possible explanation for the null results in this study is that stereotype threat was so strong for GRE test-takers that Obama's salience was unable to buffer the threat's impact on test performance. Nothing is known about the ambient level of threat on this test, but a 1999 survey of students who took the Graduate Management Admission Test suggests that the level of conscious stereotype threat on that high-stakes test was not excessive. Asked, immediately after taking the test, how other people evaluate their verbal and quantitative ability, 13% of Black students thought that the estimates of their verbal ability were a little too low or much too low; the corresponding percentage was 19% for quantitative ability (B. Bridgeman, personal communication, 7 July 1999).

Other possible but unlikely explanations deserve mention. One is that the testing centers where the GRE General Test is administered may cue stereotype threat (Cheryan, Plaut, Davies, & Steele, 2009). A survey of the environments of these centers (Walters, Lee, & Trapani, 2004), covering the gender and ethnicity of the test proctors, and the centers' size, income level of their location, activity level, and social atmosphere (e.g., warm/friendly), found few significant relations with the performance of Black test-takers on the GRE General Test, none of them congruent with stereotype-threat theory. The focus was on the match between the proctors and the test-takers in ethnicity and gender (Marx & Goff, 2005; Marx & Roman, 2002). The match was unrelated to Black test-takers' performance. For these test-takers, only one center characteristic had a consistent relationship: size was positively associated with mean scores on the tests.

Two other explanations are related. The first, discussed by Aronson et al. (2009) in interpreting their negative findings, is that Black test-takers may have seen Obama as a superstar, his success too extraordinary and unattainable for him to be a role model (Lockwood & Kunda, 1997). The second explanation is that they may have thought Obama was not deserving of success, thus impairing his effectiveness as a model (McIntyre, Paulson, Taylor, Morin, & Lord, 2011; Taylor, Lord, McIntyre, & Paulson, 2011). Both speculations are contradicted by poll data. An August 2008 ABC News and Washington Post (2008) poll found that 87% of Black adults thought that Obama, if elected, "would serve as a leading role model to young Black men." And a November 2008 Pew Research Center for the People and the Press (2008) poll found that 99% of Black voters thought that Obama was "inspiring."

This field study, in a real-life, high-stakes setting, joins the Aronson et al. laboratory experiment in failing to find an Obama effect on Black–White differences in test performance. Although limited by its quasi-experimental character, the present investigation adds weight to the Aronson et al. conclusion that this phenomenon, if it exists, is more circumscribed than the striking findings in the Marx et al. investigation would suggest.

The present work adds to the small but growing body of research on stereotype threat in non-laboratory settings (see Aronson & Dee, 2012; Sackett & Ryan, 2012). Such studies make it possible to gauge how well laboratory findings extrapolate to the real world, which is particularly critical for a socially important phenomenon like stereotype threat.

Acknowledgements

Thanks are due to Brent Bridgeman, Ida Lawrence, and Cathy Wendler for encouraging this research; Jacqueline Briel, Steven Szyskiewicz, and Christino Wijaya for providing the data; and Yigal Attali, Brent Bridgeman, Daniel Eignor, Nathan Kogan, and Donald Powers for reviewing a draft of this article. The GRE General Test is published by Educational Testing Service (ETS). Any opinions expressed in this article are those of the authors and not necessarily of ETS.

Notes

1. The GRE General Test was revised in 2011; this version is not exactly comparable to the previous version because of changes in content and scoring (Educational Testing Service, n.d.-a, n.d.-b). In 2011–2012, 534,761 students took the test (Educational Testing Service, 2013). The Black–White mean differences are similar to those for the previous version. For American citizens that year, the mean Verbal, Quantitative, and Analytical Writing scores were, respectively, 146.7, 143.3, and 3.3 for Black test-takers, and 154.1, 150.7, and 3.9 for White test-takers (d = −1.03, −1.04, and −.84, respectively).

2. The test's Analytical Writing section was excluded because scores may not be comparable between 2006 and 2008. Expert raters' scoring of the essays became more rigorous over those years, more closely aligned with the scoring rubrics (Trapani & Bridgeman, 2011).

3. The day of 8 November, a Saturday, was excluded because observant Jews may avoid taking the test that day of the week; 9 November was a Sunday, a day that the test is not given.

4. The day of 5 November was a Sunday.

5. It is unknown whether these test-takers took the test with special accommodations (typically including extra time), which would make their scores not comparable; self-reported disability was therefore used as a proxy.

6. English proficiency was indexed in this study by the question "Do you communicate better (or as well) in English than in any other language?"

7. College GPA and attended an HBCU were not used as covariates; the former may be affected by stereotype threat, and the latter is collinear with two of the other covariates, attended a selective college and attended a research university.

8. The pattern of results was similar without controls and imputations. In November 2008, for example, the Black–White mean difference on the test's Verbal score was 411.58 (N = 520) versus 505.89 (N = 5245), d = −.89 without them and 414.04 versus 491.00, d = −.83 with them. The mean difference on the Quantitative score was 447.04 (N = 520) versus 578.28 (N = 5244), d = −1.02 without them and 448.24 versus 546.77, d = −.85 with them. (N = 520 for all means with controls and imputations.)

References

  • ABC News, & Washington Post. (2008). August monthly—2008 presidential election (USABCWASH2008-1068 version 2) [Data file]. Retrieved from http://www.ropercenter.uconn.edu/data_access/ipoll/ipoll.html.
  • Aronson, J., & Dee, T. (2012). Stereotype threat in the real world. In M. Inzlicht & T. Schmader (Eds.), Stereotype threat: Theory, process, and application (pp. 264–279). New York, NY: Oxford University Press.
  • Aronson, J., Jannone, S., McGlone, M., & Johnson-Campbell, T. (2009). The Obama effect: An experimental test. Journal of Experimental Social Psychology, 45, 957–960.
  • Aronson, J., Lustina, M. J., Good, C., Keough, K., Steele, C. M., & Brown, J. (1999). When White men can't do math: Necessary and sufficient factors in stereotype threat. Journal of Experimental Social Psychology, 35, 29–46.
  • Barron's Profiles of American Colleges, 2007. (2006). Hauppauge, NY: Barron's Educational Series.
  • Cheryan, S., Plaut, V. C., Davies, P. G., & Steele, C. M. (2009). Ambient belonging: How stereotypical cues impact gender participation in computer science. Journal of Personality and Social Psychology, 97, 1045–1060.
  • CNN. (2006). November 2006 elections/war in Iraq/2008 presidential election (USORCCNN2006-028 version 3) [Data file]. Retrieved from http://www.ropercenter.uconn.edu/data_access/ipoll/ipoll.html.
  • Educational Testing Service. (n.d.-a). Frequently asked questions about the GRE revised General Test—for test takers. Retrieved from http://www.ets.org/gre/revised_general./faq.
  • Educational Testing Service. (n.d.-b). Frequently asked questions about the GRE tests—for institutions. Retrieved from http://www.ets.org/gre/institutions/faq.
  • Educational Testing Service. (2008). Factors that can influence performance on the GRE General Test, 2006–2007. Princeton, NJ: Author.
  • Educational Testing Service. (2013). A snapshot of the individuals who took the GRE revised General Test. Princeton, NJ: Author.
  • Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576.
  • Hartle, T., Baratz, J., & Clark, M. J. (1983). Older students and the GRE aptitude test (Research Report No. 83-20). Princeton, NJ: Educational Testing Service.
  • Inzlicht, M., & Schmader, T. (Eds.). (2012). Stereotype threat: Theory, process, and application. New York, NY: Oxford University Press.
  • Jencks, C., & Phillips, M. (Eds.). (1998). The Black–White test score gap. Washington, DC: Brookings Institution.
  • Lockwood, P., & Kunda, Z. (1997). Superstars and me: Predicting the impact of role models on the self. Journal of Personality and Social Psychology, 73, 91–103.
  • Marx, D. M., & Goff, P. A. (2005). Clearing the air: The effect of experimenter race on target's test performance and subjective experience. British Journal of Social Psychology, 44, 645–657.
  • Marx, D. M., Ko, S. J., & Friedman, R. A. (2009). The “Obama Effect”: How a salient role model reduces race-based performance differences. Journal of Experimental Social Psychology, 45, 953–956.
  • Marx, D. M., & Roman, J. S. (2002). Female role models: Protecting women's math test performance. Personality and Social Psychology Bulletin, 28, 1183–1193.
  • McIntyre, R. B., Lord, C. G., Gresky, D. M., Ten Eyck, L. L., Jay Frye, G. D., & Bond, C. F., Jr. (2005). A social impact trend in the effects of role models on alleviating women's mathematics stereotype threat. Current Research in Social Psychology, 10, 116–136.
  • McIntyre, R. B., Paulson, R. M., & Lord, C. G. (2003). Alleviating women's mathematics stereotype threat through salience of group achievements. Journal of Experimental Social Psychology, 39, 83–90.
  • McIntyre, R. B., Paulson, R. M., Taylor, C. A., Morin, A. L., & Lord, C. G. (2011). Effects of role model deservingness on overcoming performance deficits induced by stereotype threat. European Journal of Social Psychology, 41, 301–311.
  • Pew Research Center for the People & the Press. (2008). November 2008 reinterview (USPEW2008-11POST version 2) [Data file]. Retrieved from http://www.ropercenter.uconn.edu/data_access/ipoll/ipoll.html.
  • Pruzek, R. M. (Ed.). (2011). Propensity score analysis [Special issue]. Multivariate Behavioral Research, 46, 389–566.
  • Ryan, A. M., & Sackett, P. R. (2013). Stereotype threat in workplace assessments. In K. F. Geisinger (Ed.), APA handbook of testing and assessment in psychology: Test theory and testing and assessment in industrial and organizational psychology (Vol. 1, pp. 661–673). Washington, DC: American Psychological Association.
  • Sackett, P. R., & Ryan, A. M. (2012). Concerns about generalizing stereotype threat research findings to operational high-stakes testing. In M. Inzlicht & T. Schmader (Eds.), Stereotype threat: Theory, process, and application (pp. 249–263). New York, NY: Oxford University Press.
  • Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147–177.
  • Schafer, J. L., & Kang, J. (2008). Average causal effects from nonrandomized studies: A practical guide and simulated example. Psychological Methods, 13, 279–313.
  • Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69, 797–811.
  • Steele, C. M., Spencer, S. J., & Aronson, J. (2002). Contending with group image: The psychology of stereotype and identity threat. Advances in Experimental Social Psychology, 34, 379–440.
  • Stricker, L. J., & Rock, D. A. (1995). Examinee background characteristics and GRE General Test performance. Intelligence, 21, 49–67.
  • Taylor, C. A., Lord, C. G., McIntyre, R. B., & Paulson, R. M. (2011). The Hillary Clinton effect: When the same role model inspires or fails to inspire improved performance under stereotype threat. Group Processes & Intergroup Relations, 13, 447–459.
  • The Carnegie Foundation for the Advancement of Teaching. (n.d.). The Carnegie classification of institutions of higher education, 2005. Retrieved from http://classifications.carnegiefoundation.org.
  • Trapani, C., & Bridgeman, B. (2011, April). Using automated scoring as a trend score: The implications of score separation over time. Paper presented at the meeting of the National Council on Measurement in Education, New Orleans, LA.
  • USA Today. (2008). November wave 1: Barack Obama (USAIPOUSA2008-44 version 2) [Data file]. Retrieved from http://www.ropercenter.uconn.edu/data_access/ipoll/ipoll.html.
  • Walters, A. M., Lee, S., & Trapani, C. (2004). Stereotype threat, the test-center environment, and performance on the GRE General Test (Research Report No. 04-37). Princeton, NJ: Educational Testing Service.
  • White House Initiative on Historically Black Colleges and Universities. (2011). Retrieved from http://www.ed.gov/edblogs/whhbcu.
  • Wilson, K. M. (1982). GMAT and GRE Aptitude Test performance in relation to primary language and scores on TOEFL (Research Report No. 82-28). Princeton, NJ: Educational Testing Service.
