Rorschach Research Dialogues

On Conducting Construct Validity Meta-Analyses for the Rorschach: A Reply to Tibon Czopp and Zeligman (2016)

Pages 343-350 | Received 07 Feb 2016, Published online: 06 May 2016
 

ABSTRACT

We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when it was applied to the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as believing that we used introspectively assessed criteria when classifying levels of support and that we reported only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and that meta-analytic validity for these 13 CS variables was therefore artificially low. For example, the authors created new construct labels for these variables, which they called "the customary CS interpretation," but they neither described their methodology nor provided evidence that their labels would result in better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and how including them would actually have reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.

Acknowledgment

We thank Manali Roy for her assistance with data reporting.

Disclosure

Joni L. Mihura and Gregory J. Meyer receive royalties from a Rorschach test manual (Meyer, Viglione, Mihura, Erard, & Erdberg, 2011) and associated products.

Notes

1 Not counting comments and replies, which are shorter than original articles, the average page length for articles in Psychological Bulletin over the last five years (2011–2015) was 31 pages; at 58 pages, ours was the second longest out of 204 original articles.

2 Examples include, but are not limited to, in Mihura et al. (2013): (a) the Abstract, “Using Hemphill's (2003) data-driven guidelines for interpreting the magnitude of assessment effect sizes with only externally assessed criteria [italics added], we found 13 variables had excellent support [etc.]” (p. 548); (b) the Method section, “We focused our primary analyses on the Rorschach validity coefficients that used externally assessed criteria [italics added]” (p. 561); (c) the Note to Table 3 in which the validity classifications are reported, “The strength of the validity evidence is derived from the meta-analytic results for the external assessment method [italics added] in Table 2” (p. 570); and (d) the Discussion, “We emphasized the Rorschach as a performance-based test and focused our primary analyses on studies that used externally assessed criteria (e.g., observer ratings, diagnoses) rather than introspectively assessed criteria to establish validity [italics added]” (p. 573).

3 In the review process for the meta-analyses we were asked to cast the broadest net possible in order to include as many studies as possible. To contend with situations in which the number of responses was a confounding variable—that is, when it was significantly related to the criterion variable—we controlled for R using semipartial correlations.
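To make this R control concrete, the following is a minimal sketch (in Python) of a semipartial correlation in which the number of responses (R) is partialed out of the Rorschach score only, and the residual is then correlated with the criterion. The variable names and simulated data are illustrative assumptions, not the authors' actual analysis code.

    import numpy as np

    def semipartial_r(rorschach_score, criterion, r_count):
        """Correlate the criterion with the part of the Rorschach score
        that is independent of the number of responses (R)."""
        # Residualize the Rorschach score on R via ordinary least squares.
        X = np.column_stack([np.ones_like(r_count), r_count])
        beta, *_ = np.linalg.lstsq(X, rorschach_score, rcond=None)
        residual = rorschach_score - X @ beta
        # The semipartial correlation is the zero-order correlation
        # between the criterion and this residual.
        return np.corrcoef(criterion, residual)[0, 1]

    # Illustrative usage with simulated data.
    rng = np.random.default_rng(0)
    r_count = rng.integers(14, 50, size=200).astype(float)
    score = 0.3 * r_count + rng.normal(size=200)
    criterion = 0.2 * score + rng.normal(size=200)
    print(semipartial_r(score, criterion, r_count))

Partialing R out of the predictor only, rather than out of both variables as a full partial correlation would, matches the stated goal: removing R's confounding influence on the Rorschach score while leaving the criterion untouched.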

4 Tibon Czopp and Zeligman (2016) consistently refer to our systematic reviews and meta-analyses of 65 Rorschach variables in relation to introspectively and externally assessed criterion variables as a “meta-analysis.” We conducted almost 100 (i.e., 95) meta-analyses, which required several thousand hours.

5 Tibon Czopp and Zeligman's (2016) construct labels would not have met the journal's brevity requirement. Their construct labels for these 13 variables used a total of 143 words and ours used 73 (an average of 11.0 vs. 5.6 words per construct label), and we already had to argue to use as many words as we did.

6 Tibon Czopp and Zeligman also suggested that studies on dissociation by Brand and colleagues (Brand, Armstrong, & Loewenstein, 2006; Brand, Armstrong, Loewenstein, & McNary, 2009) could have been used to validate FD. However, these authors also did not hypothesize a relationship with FD and we are not aware of any conclusive data that dissociative patients have superior introspective capacity. In addition, Brand et al.’s (2009) dissociative sample is a subset of their 2006 dissociative sample; therefore, these are not two different samples.

7 We counted findings that were included in articles published in 2002 or before.

8 The correct author name is Andronikof-Sanglade.

9 This was a tremendously arduous and time-consuming task: there were 2,467 articles to re-review, each Rorschach article typically reports dozens of findings, and our original review had identified 2,468 author-hypothesized predictor–criterion associations. We estimate that about 500 articles contained potentially usable CS data (most of the articles either studied non-CS variables or were not Rorschach validity studies). Multiplying those 500 articles by an average of at least 25 Rorschach findings per article, and then by the 2,468 predictor–criterion associations to check for in each article, yields 30,850,000 decisions. The first author rechecked the articles twice for errors, which added another 61,700,000 decisions. Therefore, just the second stage of the review process, which was required to test for Rorschach-researcher hindsight bias, introduced almost a hundred million opportunities for error (30,850,000 + 61,700,000 = 92,550,000). This second pass through the literature identified an additional 606 non-author-hypothesized predictor–criterion associations, for a grand total of 3,074 potentially relevant validity coefficients. We then had to review each of these 3,074 findings against all of the other exclusion criteria in our study. The reader can start to see why we could not report every single decision we made. At the same time, the most vocal Rorschach critics did not find any errors when they reviewed our article (Wood et al., 2015).
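For readers who want to verify the counts in the preceding note, the arithmetic is easy to reproduce; the inputs below are the note's own estimates, not new data.

    # All inputs are the footnote's own estimates.
    articles_with_cs_data = 500    # articles with potentially usable CS data
    findings_per_article = 25      # average Rorschach findings per article
    associations_to_check = 2468   # author-hypothesized predictor-criterion associations

    first_pass = articles_with_cs_data * findings_per_article * associations_to_check
    rechecks = 2 * first_pass      # the articles were rechecked twice for errors
    total = first_pass + rechecks

    print(first_pass)  # 30850000
    print(rechecks)    # 61700000
    print(total)       # 92550000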

10 Importantly, this additional review and analysis did not support the reviewer's hypothesis of hindsight bias. In fact, the aggregated effect size for the author-hypothesized associations was actually a little lower than that for the non-author-hypothesized findings (i.e., r = .26 vs. .29), which is in the opposite direction of the reviewer's hypothesis.

11 The correct spelling for the author's name is Colucci.
