ABSTRACT
Valid self-report assessment of psychopathology relies on accurate and credible responses to test questions. Some individuals, in certain assessment contexts, cannot or choose not to answer in a manner typically representative of their traits or symptoms; this is referred to, most broadly, as test response bias. In this investigation, we explore the effect of response bias on the Personality Inventory for DSM–5 (PID–5; Krueger, Derringer, Markon, Watson, & Skodol, 2013), a self-report instrument designed to assess the pathological personality traits used to inform diagnosis of the personality disorders in Section III of DSM–5. A set of Minnesota Multiphasic Personality Inventory–2 Restructured Form (MMPI–2–RF; Ben-Porath & Tellegen, 2008/2011) validity scales, which are used to assess and identify response bias, was employed to identify individuals who engaged in noncredible overreporting (OR) or underreporting (UR), or who were deemed to be responding to the items in a credible manner (credible responding; CR). A total of 2,022 research participants (1,587 students, 435 psychiatric patients) completed the MMPI–2–RF and PID–5; following protocol screening, these participants were classified into OR, UR, or CR response groups based on MMPI–2–RF validity scale scores. Students and patients in the OR group scored significantly higher on the PID–5 than their counterparts in the CR group, whereas those in the UR group scored significantly lower than those in the CR group. Although future research is needed to explore the effects of response bias on the PID–5, results from this investigation provide initial evidence that response bias influences scale elevations on this instrument.
Notes
1 Hopwood and Sellbom (2013) also argued for the development of a scale to index inconsistent responding.
2 We used scores on only the F-r and Fp-r validity scales to assign participants to the OR group, as these scales are associated with overreporting of a broad range of psychopathology, whereas Fs, FBS-r, and RBS are associated with overreporting of a specific set of symptoms (somatic, somatic/cognitive, and cognitive, respectively).
3 These cut scores are based on normative T scores, and their development is outlined in Ben-Porath and Tellegen (2008/2011). Sellbom and Bagby (2010) and Goodwin, Sellbom, and Arbisi (2013) reported that the recommended cut scores for F-r and Fp-r provided outstanding positive predictive power (PPP) and negative predictive power (NPP). Although the PPP and NPP of the L-r and K-r scales have not been independently evaluated, Sellbom and Bagby (2008) did report on their effectiveness in three archival samples.
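For reference, PPP and NPP follow their standard definitions from classification accuracy analysis (these formulas are general-purpose and are not drawn from the studies cited above). At a given cut score,

```latex
\mathrm{PPP} = \frac{TP}{TP + FP}, \qquad
\mathrm{NPP} = \frac{TN}{TN + FN}
```

where \(TP\) and \(FP\) are the numbers of true and false positives among protocols at or above the cut, and \(TN\) and \(FN\) are the true and false negatives among protocols below it. Both indices, unlike sensitivity and specificity, vary with the base rate of noncredible responding in the sample.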
4 The American Psychiatric Association does not specifically recommend this particular PD scoring algorithm, but we employ it as a metric because it is similar in calculation to average domain and facet scores and provides information on the effect of response bias on Section III PD trait expression.
5 Initial MANOVAs were conducted because the PID–5 scales are intercorrelated, as some scales share items, specific facets, or sets of facets.
6 We thank an anonymous reviewer for this suggestion.
7 These estimates were calculated using the items that compose each scale, which provides the best statistical estimate of internal consistency; estimates of internal consistency within each response bias group are available on request. Group difference analyses were performed using average scores, which provide greater clinical utility (see Krueger et al., 2013).
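Assuming the item-level internal consistency estimates are coefficient alpha (the note does not name the coefficient, so this is offered only as the most common case), the calculation takes the standard form

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)
```

where \(k\) is the number of items on a scale, \(\sigma_i^{2}\) is the variance of item \(i\), and \(\sigma_X^{2}\) is the variance of the scale total; computing it from the items themselves, rather than from facet or domain composites, is what makes it the better statistical estimate referenced above.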