Using Process Data to Improve Classification Accuracy of Cognitive Diagnosis Model

References

  • Albert, J. H. (1992). Bayesian estimation of normal ogive item response curves using Gibbs sampling. Journal of Educational Statistics, 17(3), 251–269.
  • Bergner, Y., Shu, Z., & von Davier, A. A. (2014). Visualization and confirmatory clustering of sequence data from a simulation-based assessment task. Proceedings of the 7th International Conference on Educational Data Mining (pp. 177–184).
  • Bergstrom, B., Gershon, R., & Lunz, M. E. (1994). Computer-adaptive testing: Exploring examinee response time using hierarchical linear modeling. Paper presented at the annual meeting of the National Council on Measurement in Education, New Orleans, LA.
  • Bezirhan, U., von Davier, M., & Grabovsky, I. (2021). Modeling item revisit behavior: The hierarchical speed–accuracy–revisits model. Educational and Psychological Measurement, 81(2), 363–387. https://doi.org/10.1177/0013164420950556
  • Bolsinova, M., de Boeck, P., & Tijmstra, J. (2017). Modelling conditional dependence between response time and accuracy. Psychometrika, 82(4), 1126–1148. https://doi.org/10.1007/s11336-016-9537-6
  • Bolsinova, M., & Tijmstra, J. (2018). Improving precision of ability estimation: Getting more from response times. The British Journal of Mathematical and Statistical Psychology, 71(1), 13–38. https://doi.org/10.1111/bmsp.12104
  • Brooks, S. P., & Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434–455. https://doi.org/10.2307/1390675
  • Chen, J., & de la Torre, J. (2013). A general cognitive diagnosis model for expert-defined polytomous attributes. Applied Psychological Measurement, 37(6), 419–437. https://doi.org/10.1177/0146621613479818
  • de la Torre, J., & Chiu, C.-Y. (2016). A general method of empirical Q-matrix validation. Psychometrika, 81(2), 253–273. https://doi.org/10.1007/s11336-015-9467-8
  • de la Torre, J., & Douglas, J. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69(3), 333–353. https://doi.org/10.1007/BF02295640
  • DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, latent class sizes, and the Q-matrix. Applied Psychological Measurement, 35(1), 8–26. https://doi.org/10.1177/0146621610377081
  • DiCerbo, K. E., & Behrens, J. T. (2012). Implications of the digital ocean on current and future assessment. In R. Lissitz & H. Jiao (Eds.), Computers and their impact on state assessments: Recent history and predictions for the future (pp. 273–306). Information Age Publishing.
  • Ferrando, P. J., & Lorenzo-Seva, U. (2007). An item response theory model for incorporating response time data in binary personality items. Applied Psychological Measurement, 31(6), 525–543. https://doi.org/10.1177/0146621606295197
  • Fox, J.-P. (2010). Bayesian item response modeling: Theory and applications. Springer.
  • Gelfand, A. E., & Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410), 398–409. https://doi.org/10.1080/01621459.1990.10476213
  • Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2014). Bayesian data analysis (3rd ed.). CRC Press.
  • Goldhammer, F., Naumann, J., Stelter, A., Tóth, K., Rölke, H., & Klieme, E. (2014). The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology, 106(3), 608–626. https://doi.org/10.1037/a0034716
  • Greiff, S., Wüstenberg, S., & Avvisati, F. (2015). Computer-generated log-file analyses as a window into students’ minds? A showcase study based on the PISA 2012 assessment of problem solving. Computers & Education, 91, 92–105. https://doi.org/10.1016/j.compedu.2015.10.018
  • Halpin, P. F., & De Boeck, P. (2013). Modelling dyadic interaction with Hawkes processes. Psychometrika, 78(4), 793–814. https://doi.org/10.1007/s11336-013-9329-1
  • He, Q., & von Davier, M. (2016). Analyzing process data from problem-solving items with n-grams: Insights from a computer-based large-scale assessment. In Y. Rosen, S. Ferrara, & M. Mosharraf (Eds.), Handbook of research on technology tools for real-world skill development (pp. 750–777). IGI Global. https://doi.org/10.4018/978-1-4666-9441-5.ch029
  • Henson, R., Templin, J., & Willse, J. (2009). Defining a family of cognitive diagnosis models using loglinear models with latent variables. Psychometrika, 74(2), 191–210. https://doi.org/10.1007/s11336-008-9089-5
  • Jeon, M., De Boeck, P., Luo, J., Li, X., & Lu, Z.-L. (2021). Modeling within-item dependencies in parallel data on test responses and brain activation. Psychometrika, 86(1), 239–271. https://doi.org/10.1007/s11336-020-09741-2
  • Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25(3), 258–272. https://doi.org/10.1177/01466210122032064
  • Kohli, N., Peralta, Y., & Bose, M. (2019). Piecewise random effects modeling software programs. Structural Equation Modeling: A Multidisciplinary Journal, 26(1), 156–164. https://doi.org/10.1080/10705511.2018.1516507
  • Lee, Y.-H., & Chen, H. (2011). A review of recent response-time analyses in educational testing. Psychological Test and Assessment Modeling, 53, 359–379.
  • Logan, S., Medford, E., & Hughes, N. (2011). The importance of intrinsic motivation for high and low ability readers’ reading comprehension performance. Learning and Individual Differences, 21(1), 124–128. https://doi.org/10.1016/j.lindif.2010.09.011
  • Man, K., & Harring, J. R. (2021). Assessing preknowledge cheating via innovative measures: A multiple-group analysis of jointly modeling item responses, response times, and visual fixation counts. Educational and Psychological Measurement, 81(3), 441–465. https://doi.org/10.1177/0013164420968630
  • Man, K., Harring, J. R., Jiao, H., & Zhan, P. (2019). Joint modeling of compensatory multidimensional item responses and response times. Applied Psychological Measurement, 43(8), 639–654. https://doi.org/10.1177/0146621618824853
  • Meng, X.-B., Tao, J., & Chang, H.-H. (2015). A conditional joint modeling approach for locally dependent item responses and response times. Journal of Educational Measurement, 52(1), 1–27. https://doi.org/10.1111/jedm.12060
  • Meyer, J. P. (2010). A mixture Rasch model with item response time components. Applied Psychological Measurement, 34(7), 521–538. https://doi.org/10.1177/0146621609355451
  • Minchen, N. D., de la Torre, J., & Liu, Y. (2017). A cognitive diagnosis model for continuous response. Journal of Educational and Behavioral Statistics, 42(6), 651–677. https://doi.org/10.3102/1076998617703060
  • Organisation for Economic Co-operation and Development (OECD). (2019). PISA 2018 assessment and analytical framework. OECD Publishing.
  • Peng, S., Cai, Y., Wang, D., Luo, F., & Tu, D. (2021). A generalized diagnostic classification modeling framework integrating differential speediness: Advantages and illustrations in psychological and educational testing. Multivariate Behavioral Research, 57(6), 940–959. https://doi.org/10.1080/00273171.2021.1928474
  • Plummer, M. (2015). JAGS: Just another Gibbs sampler (Version 4.0.0) [Computer software]. http://mcmc-jags.sourceforge.net/
  • Qian, H., Staniewska, D., Reckase, M., & Woo, A. (2016). Using response time to detect item preknowledge in computer-based licensure examinations. Educational Measurement: Issues and Practice, 35(1), 38–47. https://doi.org/10.1111/emip.12102
  • R Core Team. (2016). R: A language and environment for statistical computing [Computer software manual]. https://www.R-project.org
  • Ranger, J. (2013). A note on the hierarchical model for responses and response times in tests of van der Linden (2007). Psychometrika, 78(3), 538–544. https://doi.org/10.1007/s11336-013-9324-6
  • Ren, H., Xu, N., Lin, Y., Zhang, S., & Yang, T. (2021). Remedial teaching and learning from a cognitive diagnostic model perspective: Taking the data distribution characteristics as an example. Frontiers in Psychology, 12, 628607. https://doi.org/10.3389/fpsyg.2021.628607
  • Rupp, A. A., & Templin, J. L. (2008). Unique characteristics of diagnostic classification models: A comprehensive review of the current state-of-the-art. Measurement: Interdisciplinary Research and Perspectives, 6(4), 219–262. https://doi.org/10.1080/15366360802490866
  • Schnipke, D. L., & Scrams, D. J. (2002). Exploring issues of examinee behavior: Insights gained from response-time analyses. In C. N. Mills, M. T. Potenza, J. J. Fremer, & W. C. Ward (Eds.), Computer-based testing: Building the foundation for future assessments (pp. 237–266). Lawrence Erlbaum.
  • Sireci, S., & Zenisky, A. (2006). Innovative item formats in computer-based testing: In pursuit of improved construct representation. In S. Downing & T. Haladyna (Eds.), Handbook of test development. Lawrence Erlbaum. https://doi.org/10.4324/9780203874776.ch14
  • Su, Y.-S., & Yajima, M. (2015). R2jags: Using R to run JAGS (R package version 0.5) [Computer software]. https://CRAN.R-project.org/package=R2jags
  • Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11(3), 287–305. https://doi.org/10.1037/1082-989X.11.3.287
  • Templin, J., & Bradshaw, L. (2013). Measuring the reliability of diagnostic classification model examinee estimates. Journal of Classification, 30(2), 251–275. https://doi.org/10.1007/s00357-013-9129-4
  • Ulitzsch, E., von Davier, M., & Pohl, S. (2020). Using response times for joint modeling of response and omission behavior. Multivariate Behavioral Research, 55(3), 425–453. https://doi.org/10.1080/00273171.2019.1643699
  • van der Linden, W. J. (2005). Linear models for optimal test design. Springer. https://doi.org/10.1007/0-387-29054-0
  • van der Linden, W. J. (2007). A hierarchical framework for modeling speed and accuracy on test items. Psychometrika, 72(3), 287–308. https://doi.org/10.1007/s11336-006-1478-z
  • van der Linden, W. J. (2009). Predictive control of speededness in adaptive testing. Applied Psychological Measurement, 33(1), 25–41. https://doi.org/10.1177/0146621607314042
  • van der Linden, W. J., & Fox, J.-P. (2015). Joint hierarchical modeling of responses and response times. In W. J. van der Linden (Ed.), Handbook of item response theory: Vol. 1. Models (pp. 481–500). Chapman & Hall/CRC.
  • van der Linden, W. J., & Xiong, X. (2013). Speededness and adaptive testing. Journal of Educational and Behavioral Statistics, 38(4), 418–438.
  • Vandekerckhove, J., Tuerlinckx, F., & Lee, M. D. (2011). Hierarchical diffusion models for two-choice response times. Psychological Methods, 16(1), 44–62. https://doi.org/10.1037/a0021765
  • von Davier, M. (2014a). The DINA model as a constrained general diagnostic model: Two variants of a model equivalency. The British Journal of Mathematical and Statistical Psychology, 67(1), 49–71. https://doi.org/10.1111/bmsp.12003
  • von Davier, M., Khorramdel, L., He, Q., Shin, H. J., & Chen, H. (2019). Developments in psychometric population models for technology-based large-scale assessments: An overview of challenges and opportunities. Journal of Educational and Behavioral Statistics, 44(6), 671–705. https://doi.org/10.3102/1076998619881789
  • Wang, S., Zhang, S., & Shen, Y. (2020). A joint modeling framework of responses and response times to assess learning outcomes. Multivariate Behavioral Research, 55(1), 49–68. https://doi.org/10.1080/00273171.2019.1607238
  • Wise, S. L., & DeMars, C. E. (2006). An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement, 43(1), 19–38. https://doi.org/10.1111/j.1745-3984.2006.00002.x
  • Wise, S. L., & Kong, X. (2005). Response time effort: A new measure of examinee motivation in computer-based tests. Applied Measurement in Education, 18(2), 163–183. https://doi.org/10.1207/s15324818ame1802_2
  • Xu, H., Fang, G., & Ying, Z. (2020). A latent topic model with Markov transition for process data. The British Journal of Mathematical and Statistical Psychology, 73(3), 474–505. https://doi.org/10.1111/bmsp.12197
  • Yan, D., Mislevy, R. J., & Almond, R. G. (2003). Design and analysis in a cognitive assessment. ETS Research Report Series, 2003(2), i–47. https://doi.org/10.1002/j.2333-8504.2003.tb01924.x
  • Zenisky, A. L., & Baldwin, P. (2006). Using response time data in test development and validation: Research with beginning computer users. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.
  • Zhan, P., Jiao, H., & Liao, D. (2018). Cognitive diagnosis modelling incorporating item response times. The British Journal of Mathematical and Statistical Psychology, 71(2), 262–286. https://doi.org/10.1111/bmsp.12114
