Abstract
The Rorschach Performance Assessment System (R-PAS; Meyer, Viglione, Mihura, Erard, & Erdberg, 2011) introduced R-optimized administration to reduce variability in the number of Responses (R). We provide new data from six studies of participants randomly assigned to receive a version of this method or Comprehensive System (CS; Exner, 2003) administration. We examine how administration methods affect 3 types of codes most likely to contain potential projective material and the frequency of these codes for the 1st, 2nd, 3rd, 4th, or last response to a card (R in Card). In a meta-analytic summary, we found 37% of responses have 1 type of code, 19% have 2 types, and 3% have all 3 types, with stable proportions across responses within cards. Importantly, administration method had no impact on potential projective variable means. Differential skew across samples made variability harder to interpret. Initial results suggesting differences in 3 of the 18 specific Type by R in Card pairs did not follow a coherent pattern and disappeared when using raw counts from all participants. Overall, the data do not support concerns that R-optimized administration might alter potential projective processes, or make potentially "signature" last responses to the card any different in R-PAS than in the CS.
Disclosure of interest
The authors Gregory J. Meyer, Donald J. Viglione, and Joni L. Mihura have a financial interest in the company that publishes the R-PAS manual and associated products.
Notes
1 The protocol-level data examined in Hosseininasab et al. (2017) were from Resende's (2011) study-specific SPSS file, using sequentially numbered cases and scores contributed by several examiners. This file included information about the examiner who administered and coded the protocol, the site of the assessment, and the type of administration. The response-level data were obtained from Resende's Access database generated by the fourth edition of the Rorschach Interpretive Assistance Program (RIAP–4). Although that database contained responses from 565 protocols, the protocols had their own sequentially assigned identification numbers, which did not correspond to those in the study-specific file, and we could locate exact matches for just 53 of the protocols used in the original analyses. The others were scored either by hand or by a RIAP–4 program residing on a different computer.
2 Study 2 is largely responsible for this apparent finding. Table 4 indicates it produced a standard deviation proportion of 2.45. In both the CS and alternative administrations, one person had a fourth response containing all three types of potential projective indicators. In the CS protocol, that respondent gave three fourth responses to a card, yielding a proportion of .33; in the alternative protocol, that respondent gave just one fourth response to a card, yielding a proportion of 1.0. After square root transformation, these two protocols with a single coded response contributed similar standard deviations in each sample when considering the raw counts of all 3 types in the fourth response (CS = 0.13, alternative = 0.15). However, they contributed notably different standard deviations when considering the proportion of 3 types in the fourth response (CS = .1443, alternative = .3536), which in turn generated the standard deviation proportion of 2.45 reported in Table 4 (i.e., .3536/.1443 = 2.45045). Although the math is accurate, these findings emerge from just a single coded response in each sample. What differs is the divisor: the CS records have more fourth responses than the alternative records, so the single coded response is associated with a smaller proportion in the CS records and a larger proportion in the alternative records. The Reese and Resende samples showed the same pattern. The alternative protocols had fewer fourth responses than the CS protocols, so the presence of a single all-3-types code led to a higher value as a proportion of all fourth responses in the alternative protocol, which in turn increased the standard deviation ratio for the proportion scores (but not the standard deviation of the raw counts).
Further, the Resende sample was like Study 2 in that, for each method of administration, the results emerged from a single protocol with one response coded for all three types of potential projective indicators. Because the alternative administration had fewer fourth responses than the CS administration, it produced a larger proportion score.
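The arithmetic behind the standard deviation proportion of 2.45 can be reproduced directly from the values reported in this note. The sketch below uses only those reported values; the variable names are illustrative and do not come from the study's analysis code.

```python
# Reproducing the Note 2 arithmetic (all values taken from the note).

# Proportions of fourth responses carrying the all-three-types code:
cs_proportion = 1 / 3    # one coded response among three fourth responses -> .33
alt_proportion = 1 / 1   # one coded response among one fourth response -> 1.0

# Sample standard deviations of the square-root transformed proportion
# scores, as reported in the note:
sd_cs_prop = 0.1443
sd_alt_prop = 0.3536

# The "standard deviation proportion" in Table 4 is the ratio of the two:
sd_ratio = sd_alt_prop / sd_cs_prop
print(round(sd_ratio, 2))  # 2.45
```

The key point the note makes is visible here: the numerators are identical (a single coded response in each sample), so the ratio is driven entirely by the divisors, i.e., how many fourth responses each administration method produced.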