
The Psychometric Costs of Applicants' Faking: Examining Measurement Invariance and Retest Correlations Across Response Conditions

Pages 510-523 | Received 02 May 2016, Published online: 16 Mar 2017
 

ABSTRACT

This study examines the stability of the response process and of respondents' rank order across 3 personality scales completed under 4 different response conditions. Applicants to the University College of Teacher Education Styria (N = 243) completed the personality scales as part of their college admission process. Half a year later, they retook the same scales under 1 of 3 randomly assigned experimental response conditions: honest, faking-good, or reproduce. Longitudinal means and covariance structure analyses showed that applicants' response processes could be partially reproduced after half a year and that respondents seemed to rely on honest responding as a frame of reference. Additionally, both applicants' faking behavior and instructed faking (faking-good) produced differences in the latent retest correlations and consistently affected measurement properties. The varying latent retest correlations indicated that faking can distort respondents' rank order, and thus the fairness of subsequent selection decisions, depending on the kind of faking behavior. Instructed faking (faking-good) even violated weak measurement invariance, whereas applicants' faking behavior did not. Consequently, correlations with personality scales, which can be utilized to establish predictive validity, may be readily interpreted for applicants. Faking behavior also introduced a uniform bias, implying that the classically observed mean raw-score differences may not be readily interpreted.
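To make the invariance terminology concrete, a generic sketch of the usual multigroup means and covariance structure (MACS) measurement model may help; this follows standard conventions rather than the article's exact specification. For item i in group (or condition) g,

    x_{ig} = \tau_{ig} + \lambda_{ig} \xi_{g} + \delta_{ig},

where \tau_{ig} is the item intercept, \lambda_{ig} the factor loading, \xi_{g} the latent trait, and \delta_{ig} the residual. Weak (metric) invariance requires equal loadings across groups, \lambda_{i1} = \lambda_{i2} for all items; strong (scalar) invariance additionally requires equal intercepts, \tau_{i1} = \tau_{i2}. A uniform bias is a shift in the intercepts \tau while the loadings remain equal, so observed raw-score mean differences confound latent mean differences with intercept shifts, which is why they may not be readily interpreted.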

Notes

1 The widely used standardized root mean square residual (SRMR) was omitted from our study. Studies suggest that the SRMR is influenced by sample size and advise against using ΔSRMR for measurement invariance analysis (e.g., Meade et al., 2008). In accordance with these studies, Monte Carlo simulation studies on our final models suggested that the SRMR was heavily biased in evaluating our model fits, with the sample size causing Type I error rates of 99% to 100%. A detailed summary of the Monte Carlo simulation studies is available from the first author on request.
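The article provides no code, but for reference, the SRMR discussed in this note is commonly computed as the square root of the mean squared standardized residual between the sample and model-implied covariance matrices, over the p(p+1)/2 unique elements. A minimal Python sketch under that common definition follows; the function name and the toy matrices S and Sigma are illustrative assumptions, not taken from the study.

    import numpy as np

    def srmr(sample_cov: np.ndarray, implied_cov: np.ndarray) -> float:
        """Standardized root mean square residual between a sample
        covariance matrix and a model-implied covariance matrix,
        using one common definition: the root mean square of the
        standardized residuals over the p(p+1)/2 unique elements."""
        p = sample_cov.shape[0]
        # Standard deviations taken from the sample covariance diagonal.
        sd = np.sqrt(np.diag(sample_cov))
        scale = np.outer(sd, sd)
        # Standardized residuals (residual correlations).
        resid = (sample_cov - implied_cov) / scale
        # Lower triangle including the diagonal: p(p+1)/2 elements.
        idx = np.tril_indices(p)
        return float(np.sqrt(np.mean(resid[idx] ** 2)))

    # Toy illustration: a sample covariance vs. a hypothetical implied one.
    S = np.array([[1.00, 0.45, 0.40],
                  [0.45, 1.00, 0.35],
                  [0.40, 0.35, 1.00]])
    Sigma = np.array([[1.00, 0.42, 0.42],
                      [0.42, 1.00, 0.42],
                      [0.42, 0.42, 1.00]])
    print(f"SRMR = {srmr(S, Sigma):.4f}")

Because the sample covariance S fluctuates with N, the residuals (and hence the SRMR) are sample-size dependent, which is the behavior the note's Monte Carlo studies flagged when evaluating model fit and ΔSRMR.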