ABSTRACT
L2 test developers often use scripted spoken texts in their L2 listening tests because it is efficient and practical to create scripted spoken texts that meet predetermined test specifications. However, because scripted spoken texts differ in a number of fundamental ways from unscripted spoken language, relying exclusively on scripted spoken texts to assess test-takers’ communicative competence poses potential threats to validity. The present study seeks, first, to examine the process of “authenticating” scripted texts (modifying them to give them more of the lexico-grammatical, phonological, and speech-rate characteristics of unscripted spoken language) and, second, to compare how test-takers perform on tests with “authenticated” versus scripted spoken texts. A total of 111 ESL and EFL participants took the listening section of the General English Proficiency Test (GEPT). Within each language group, half of the participants heard the actual (scripted) GEPT spoken texts, and half heard an authenticated version of the same texts. An analysis of group means using a two-way ANCOVA indicated that, in both the ESL and EFL groups, test-takers who heard the scripted texts scored higher than those who heard the authenticated version, and marked differences between the two text types were found.
Acknowledgements
We would like to thank the Language Training and Testing Center (LTTC) for the Language Teaching & Testing Research Grant. The research for this paper used two forms of the GEPT High-Intermediate Level Listening Test from the LTTC. Any opinions, findings, conclusions, or recommendations expressed in this paper are ours and do not necessarily reflect the views of the LTTC, its related entities, or its partners. We also thank the anonymous reviewers and the editor of this issue for their insightful and constructive comments, and all the participants involved in this research.
Disclosure statement
No potential conflict of interest was reported by the authors.