
Item Parameter Drift in a Time-Varying Predictor

References

  • Babcock, B., & Albano, A. D. (2012). Rasch scale stability in the presence of item parameter and trait drift. Applied Psychological Measurement, 36, 565–580. doi:10.1177/0146621612455090
  • Bandalos, D. L., & Leite, W. L. (2013). Use of Monte Carlo studies in structural equation modeling research. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed., pp. 625–665). Greenwich, CT: Information Age Publishing.
  • Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: An application of an EM algorithm. Psychometrika, 46, 443–459. doi:10.1007/BF02293801
  • Bock, R. D., & Mislevy, R. J. (1982). Adaptive EAP estimation of ability in a microcomputer environment. Applied Psychological Measurement, 6, 431–444. doi:10.1177/014662168200600405
  • Bock, R. D., Muraki, E., & Pfeiffenberger, W. (1988). Item pool maintenance in the presence of item parameter drift. Journal of Educational Measurement, 25, 275–285. doi:10.1007/BF02291262
  • Curran, P. J., & Bauer, D. J. (2011). The disaggregation of within-person and between-person effects in longitudinal models of change. Annual Review of Psychology, 62, 583–619. doi:10.1146/annurev.psych.093008.100356
  • Curran, P. J., Lee, T., Howard, A. L., Lane, S., & MacCallum, R. (2012). Disaggregating within-person and between-person effects in multilevel and structural equation growth models. In R. J. Harring & G. R. Hancock (Eds.), Advances in longitudinal methods in the social and behavioral sciences (pp. 217–254). Charlotte, NC: Information Age Publishing.
  • Cushing, L. S., Carter, E. W., Clark, N., Wallis, T., & Kennedy, C. H. (2009). Evaluating inclusive educational practices for students with severe disabilities using the program quality measurement tool. Journal of Special Education, 42, 195–208.
  • Goldstein, H. (1983). Measuring changes in educational attainment over time: Problems and possibilities. Journal of Educational Measurement, 20, 369–377.
  • Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: SAGE Publications.
  • Han, K. T., Wells, C. S., & Sireci, S. G. (2012). The impact of multidirectional item parameter drift on IRT scaling coefficients and proficiency estimates. Applied Measurement in Education, 25, 97–117. doi:10.1080/08957347.2012.660000
  • Hoffman, L. (2015). Longitudinal analysis: Modeling within-person fluctuation and change. New York, NY: Routledge.
  • Hofmann, D. A., & Gavin, M. B. (1998). Centering decisions in hierarchical linear models: Implications for research in organizations. Journal of Management, 24, 623–641.
  • Hoogland, J. J., & Boomsma, A. (1998). Robustness studies in covariance structure modeling: An overview and a meta-analysis. Sociological Methods Research, 26, 329–367. doi:10.1177/0049124198026003003
  • Kim, J., Anderson, C. J., & Keller, B. (2014). Multilevel analysis of assessment data. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 389–424). Boca Raton, FL: CRC Press.
  • Kreft, G. G., De Leeuw, J., & Aiken, L. S. (1995). The effect of different forms of centering in hierarchical linear models. Multivariate Behavioral Research, 30, 1–21.
  • Kyllonen, P. C., & Bertling, J. P. (2014). Innovative questionnaire assessment methods to increase cross-country comparability. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 277–285). Boca Raton, FL: CRC Press.
  • Levine, M. V., Drasgow, F., & Stark, S. (2001). Program MODFIT. Urbana: University of Illinois, Measurement and Evaluation Laboratory, Department of Educational Psychology.
  • Li, Y. (2012). Examining the impact of drifted polytomous anchor items on test characteristic curve (TCC) linking and IRT true score equating (Report No. 12-09). ETS Research Report Series. Princeton, NJ: Educational Testing Service. doi:10.1002/j.2333-8504.2012.tb02291.x
  • Martin, M. O., & Mullis, I. V. S. (Eds.). (2012). Methods and procedures in TIMSS and PIRLS 2011. Chestnut Hill, MA: Boston College and International Association for the Evaluation of Educational Achievement.
  • Martin, M. O., Mullis, I. V. S., Arora, A., & Preuschoff, C. (2014). Context questionnaire scales in TIMSS and PIRLS 2011. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 299–316). Boca Raton, FL: CRC Press.
  • Martin, M. O., Mullis, I. V. S., Hooper, M., Yin, L., Foy, P., & Palazzo, L. (2016). Creating and interpreting the TIMSS 2015 context questionnaire scales. In M. O. Martin, I. V. S. Mullis, & M. Hooper (Eds.), Methods and procedures in TIMSS 2015 (pp. 15.1–15.312). Retrieved from Boston College, TIMSS & PIRLS International Study Center website: http://timss.bc.edu/publications/timss/2015-methods/chapter-15.html
  • Monseur, C., Sibberns, H., & Hastedt, D. (2007). Equating errors in international surveys in education. In Proceedings of the IRC-2006: Vol. 2 (pp. 61–66). Amsterdam, The Netherlands: International Association for the Evaluation of Educational Achievement (IEA). Retrieved from http://orbi.ulg.ac.be/handle/2268/121430
  • Muthén, B. O., Kaplan, D., & Hollis, M. (1987). On structural equation modeling with data that are not missing completely at random. Psychometrika, 52, 431–462.
  • Muthén, L. K., & Muthén, B. O. (1998–2012). Mplus user’s guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
  • Neuschmidt, O., Barth, J., & Hastedt, D. (2008). Trends in gender differences in mathematics and science (TIMSS 1995-2003). Studies in Educational Evaluation, 34, 56–72. doi:10.1016/j.stueduc.2008.04.002
  • Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D., & R Core Team. (2014). nlme: Linear and nonlinear mixed effects models (R package version 3.1-118).
  • Rizopoulos, D. (2006). ltm: An R package for latent variable modelling and item response theory analyses. Journal of Statistical Software, 17(5), 1–25.
  • Rutkowski, L., Gonzalez, E., Joncas, M., & von Davier, M. (2010). International large-scale assessment data: Issues in secondary analysis and reporting. Educational Researcher, 39, 142–151. doi:10.3102/0013189X10363170
  • Schagen, I., Twist, L., & Rutt, S. (2008, September). Estimating trends in national performance from international surveys, with a focus on PIRLS results for England. Paper presented at 3rd IEA International Research Conference, Taipei, Chinese Taipei.
  • Shen, C., & Tam, H. P. (2008). The paradoxical relationship between student achievement and self-perception: A cross-national analysis based on three waves of TIMSS data. Educational Research and Evaluation, 14, 87–100. doi:10.1080/13803610801896653
  • Stocking, M. L., & Lord, F. M. (1983). Developing a common metric in item response theory. Applied Psychological Measurement, 7, 201–210. doi:10.1177/014662168300700208
  • Tanner, M. J. (2014). Digital vs. print: Reading comprehension and the future of the book. School of Information Student Research Journal, 4(2). Retrieved from http://scholarworks.sjsu.edu/slissrj/vol4/iss2/6
  • Tveit, Å. K., & Mangen, A. (2014). A joker in the class: Teenage readers’ attitudes and preferences to reading on different devices. Library and Information Science Research, 36, 179–184. doi:10.1016/j.lisr.2014.08.001
  • U.S. Department of Education. (1996). NAEP 1994 trends in academic progress. Washington, DC: U.S. Government Printing Office.
  • Van den Heuvel-Panhuizen, M., Robitzsch, A., Treffers, A., & Köller, O. (2009). Large-scale assessment of change in student achievement: Dutch primary school students’ results on written division in 1997 and 2004 as an example. Psychometrika, 74, 351–365. doi:10.1007/S11336-009-9110-7
  • Wagemaker, H. (2014). International large-scale assessments: From research to policy. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 11–36). Boca Raton, FL: CRC Press.
  • Weeks, J. P. (2010). plink: An R package for linking mixed-format tests using IRT-based methods. Journal of Statistical Software, 35(12), 1–33.
  • Wells, C. S., Subkoviak, M. J., & Serlin, R. C. (2002). The effect of item parameter drift on examinee ability estimates. Applied Psychological Measurement, 26, 77–87. doi:10.1177/0146621602261005
  • Yoshino, A. (2012). The relationship between self-concept and achievement in TIMSS 2007: A comparison between American and Japanese students. International Review of Education, 58, 199–219. doi:10.1007/s11159-012-9283-7
