Modeling Multidimensional Forced Choice Measures with the Zinnes and Griggs Pairwise Preference Item Response Theory Model