Controlling construct-irrelevant factors through computer-based testing: disengagement, anxiety, & cheating
