Animated videos in assessment: comparing validity evidence from and test-takers’ reactions to an animated and a text-based situational judgment test

Pages 57-79 | Received 28 Mar 2020, Accepted 19 Feb 2021, Published online: 17 May 2021

References

  • Abedi, J. (2004). The No Child Left Behind Act and English language learners: Assessment and accountability issues. Educational Researcher, 33(1), 4–14. https://doi.org/10.3102/0013189X033001004
  • Abedi, J. (2006). Language issues in item development. In S. M. Downing & T. Haladyna (Eds.), Handbook of test development (pp. 377–398). Lawrence Erlbaum Associates.
  • Abedi, J. (2010). Linguistic factors in the assessment of English language learners. In G. Walford, E. Tucker, & M. Viswanathan (Eds.), The SAGE handbook of measurement (pp. 129–150). SAGE Publications Ltd.
  • American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing. American Educational Research Association.
  • Archer, D., & Akert, R. (1980). The encoding of meaning: A test of three theories of social interaction. Sociological Inquiry, 50(3/4), 393–419. https://doi.org/10.1111/j.1475-682X.1980.tb00028.x
  • Bardach, L., Rushby, J. V., Kim, L. E., & Klassen, R. M. (2020). Using video- and text-based situational judgement tests for teacher selection: A quasi-experiment exploring the relations between test format, subgroup differences, and applicant reactions. European Journal of Work and Organizational Psychology, 30(2), 251–264. https://doi.org/10.1080/1359432X.2020.1736619
  • Bauer, T., Truxillo, D., Sanchez, R., Craig, J., Ferrara, P., & Campion, M. (2001). Applicant reactions to selection: Development of the Selection Procedural Justice Scale (SPJS). Personnel Psychology, 54(2), 387–419. https://doi.org/10.1111/j.1744-6570.2001.tb00097.x
  • Bergman, M. E., Drasgow, F., Donovan, M. A., Henning, J. B., & Juraska, S. E. (2006). Scoring situational judgment tests: Once you get the data, your troubles begin. International Journal of Selection and Assessment, 14(3), 223–235. https://doi.org/10.1111/j.1468-2389.2006.00345.x
  • Boyce, A. S., Corbet, C. E., & Adler, S. (2013). Simulations in the selection context: Considerations, challenges, and opportunities. In M. Fetzer & K. Tuzinski (Eds.), Simulations for personnel selection (pp. 17–41). Springer.
  • Bruk-Lee, V., Lanz, J., Drew, E. N., Coughlin, C., Levine, P., Tuzinski, K., & Wrenn, K. (2016). Examining applicant reactions to different media types in character-based simulations for employee selection. International Journal of Selection and Assessment, 24(1), 77–91. https://doi.org/10.1111/ijsa.12132
  • Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82(1), 143–159. https://doi.org/10.1037/0021-9010.82.1.143
  • Chan, D., & Schmitt, N. (2004). An agenda for future research on applicant reactions to selection procedures: A construct-oriented approach. International Journal of Selection and Assessment, 12(1/2), 9–23. https://doi.org/10.1111/j.0965-075X.2004.00260.x
  • Chan, D., Schmitt, N., DeShon, R. P., Clause, C. S., & Delbridge, K. (1997). Reactions to cognitive ability tests: The relationships between race, test performance, face validity perceptions, and test-taking motivation. Journal of Applied Psychology, 82(2), 300–310. https://doi.org/10.1037/0021-9010.82.2.300
  • Christian, M. S., Edwards, B. D., & Bradley, J. C. (2010). Situational judgment tests: Constructs assessed and a meta-analysis of their criterion-related validities. Personnel Psychology, 63(1), 83–117. https://doi.org/10.1111/j.1744-6570.2009.01163.x
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
  • Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). Routledge.
  • Dancy, M. H., & Beichner, R. (2006). Impact of animation on assessment of conceptual understanding in physics. Physical Review Special Topics - Physics Education Research, 2(1), 1–7. https://doi.org/10.1103/PhysRevSTPER.2.010104
  • Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1), 1–44. https://doi.org/10.14507/epaa.v8n1.2000
  • Eklöf, H. (2010). Student motivation and effort in the Swedish TIMSS advanced field study. 4th IEA International Research Conference.
  • Elliott, J. G., Stemler, S. E., Sternberg, R. J., Grigorenko, E. L., & Hoffman, N. (2011). The socially skilled teacher and the development of tacit knowledge. British Educational Research Journal, 37(1), 83–103. https://doi.org/10.1080/01411920903420016
  • Fetzer, M., & Tuzinski, K. (Eds.). (2013). Simulations for personnel selection. Springer. https://doi.org/10.1007/978-1-4614-7681-8
  • Gravetter, F., & Forzano, L. (2012). Research methods for the behavioral sciences (4th ed.). Wadsworth.
  • Harlen, W. (2012). The role of assessment in developing motivation for learning. In J. Gardner (Ed.), Assessment and learning (2nd ed., pp. 171–183). SAGE Publications Ltd. https://doi.org/10.4135/9781446250808.n11
  • Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57(3), 639–683. https://doi.org/10.1111/j.1744-6570.2004.00003.x
  • Hopfenbeck, T. N., & Kjærnsli, M. (2016). Students’ test motivation in PISA: The case of Norway. The Curriculum Journal, 27(3), 406–422. https://doi.org/10.1080/09585176.2016.1156004
  • Kan, A., Bulut, O., & Cormier, D. C. (2018). The impact of item stem format on the dimensional structure of mathematics assessments. Educational Assessment, 24(1), 13–32. https://doi.org/10.1080/10627197.2018.1545569
  • Kanning, U. P., Grewe, K., Hollenberg, S., & Hadouch, M. (2006). From the subjects’ point of view: Reactions to different types of situational judgment items. European Journal of Psychological Assessment, 22(3), 168–176. https://doi.org/10.1027/1015-5759.22.3.168
  • Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. The Journal of Applied Psychology, 91(5), 1181–1188. https://doi.org/10.1037/0021-9010.91.5.1181
  • Macan, T. H., Avedon, M., Paese, M., & Smith, D. (1994). The effects of applicants’ reactions to cognitive ability tests and an assessment center. Personnel Psychology, 47, 715–739. https://doi.org/10.1111/j.1744-6570.1994.tb01573.x
  • MacCann, C., Lievens, F., Libbrecht, N., & Roberts, R. D. (2016). Differences between multimedia and text-based assessments of emotion management: An exploration with the multimedia emotion management assessment (MEMA). Cognition and Emotion, 30(7), 1317–1331. https://doi.org/10.1080/02699931.2015.1061482
  • Motowidlo, S. J., Dunnette, M. D., & Carter, G. W. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology, 75(6), 640–647. https://doi.org/10.1037/0021-9010.75.6.640
  • Norcini, J. J., Lipner, R. S., & Grosso, L. J. (2013). Assessment in the context of licensure and certification. Teaching and Learning in Medicine, 25(sup1), S62–S67. https://doi.org/10.1080/10401334.2013.842909
  • O’Leary, M., Scully, D., Karakolidis, A., & Pitsia, V. (2018). The state-of-the-art in digital technology-based assessment. European Journal of Education, 53(2), 160–175. https://doi.org/10.1111/ejed.12271
  • Popp, E. C., Tuzinski, K., & Fetzer, M. (2016). Actor or avatar? Considerations in selecting appropriate formats for assessment content. In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 79–103). Routledge. https://doi.org/10.4324/9781315871493
  • Richman-Hirsch, W. L., Olson-Buchanan, J. B., & Drasgow, F. (2000). Examining the impact of administration medium on examinee perceptions and attitudes. Journal of Applied Psychology, 85(6), 880–887. https://doi.org/10.1037/0021-9010.85.6.880
  • Scott, J. C., & Mead, A. D. (2011). Foundations for measurement. In N. Tippins & S. Adler (Eds.), Technology-enhanced assessment of talent (pp. 21–65). Jossey-Bass.
  • Scully, D. (2017). Constructing multiple-choice items to measure higher-order thinking. Practical Assessment, Research & Evaluation, 22, 1–13. https://doi.org/10.7275/ca7y-mm27
  • Smither, J., Reilly, R., Millsap, R., Pearlman, K., & Stoffey, R. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46(1), 49–76. https://doi.org/10.1111/j.1744-6570.1993.tb00867.x
  • Stemler, S. E., Elliott, J. G., Grigorenko, E. L., & Sternberg, R. J. (2006). There’s more to teaching than instruction: Seven strategies for dealing with the practical side of teaching. Educational Studies, 32(1), 101–118. https://doi.org/10.1080/03055690500416074
  • Stemler, S. E., & Sternberg, R. J. (2006). Using situational judgment tests to measure practical intelligence. In J. A. Weekley & R. Ployhart (Eds.), Situational judgment tests (pp. 107–131). Lawrence Erlbaum Associates. https://doi.org/10.4324/9780203774878
  • Sternberg, R. J. (1999). The theory of successful intelligence. Review of General Psychology, 3(4), 292–316. https://doi.org/10.1037/1089-2680.3.4.292
  • Sternberg, R. J., & Grigorenko, E. L. (2001). Practical intelligence and the principal. Institute of Education Sciences.
  • Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer.
  • Thompson, W. (2018). Construct irrelevance. In B. Frey (Ed.), The SAGE encyclopedia of educational research, measurement, and evaluation (pp. 375–376). SAGE Publications, Inc. https://doi.org/10.4135/9781506326139.n143