ABSTRACT
As our understanding of the construct of oral communication (OC) has evolved, so have the possibilities for computer technology to deliver tests that measure this ability. It is paramount to understand the extent to which such developments lead to accurate, comprehensive, and useful assessment of OC. In this paper, we discuss five models of technology-delivered OC assessment that have appeared over the past three decades. We evaluate these models in terms of how well their respective methods aid in assessing OC. To achieve this aim, we use a framework that takes into account a contemporary view of OC ability, including the call for incorporating English as a lingua franca (ELF) considerations into English language assessment. The evaluation of the five models suggests strengths and weaknesses of each that should be considered when determining which model to use for a particular purpose.
Notes on contributors
Gary J. Ockey
Gary J. Ockey, a professor at Iowa State University in the Applied Linguistics and Technology program, received his Ph.D. in applied linguistics from the University of California, Los Angeles. He investigates second language assessments with a focus on the use of technology and quantitative methods to better measure oral communication. He has served as the Editor of the TOEFL Research Report Series and is currently a Co-editor of Language Assessment Quarterly.
Reza Neiriz is a Ph.D. student in the Applied Linguistics and Technology program at Iowa State University. His research interests include technology-mediated language assessment with a focus on oral proficiency. He investigates the measurement of different aspects of oral proficiency through computer-delivered tests and spoken dialog systems as well as automated scoring of oral communication.