Modeling Local Item Dependence Due to Common Test Format With a Multidimensional Rasch Model


