Diagnosing Linguistic Problems in English Academic Writing of University Students: An Item Bank Approach
