ABSTRACT
An assumption underlying speaking tests is that scores reflect the ability to produce online, non-rehearsed speech. Speech produced in testing situations may, however, be less spontaneous if extensive test preparation takes place, resulting in memorized or rehearsed responses. If raters detect these patterns, they may perceive the speech as inauthentic. To date, no studies have investigated raters’ perceptions of, or viewpoints on, authenticity. In this exploratory study, 58 raters rated eight speech samples: one set of four recorded by test takers who had been exposed to the test prompt one week beforehand, and a second set of four recorded by test takers who had not been exposed. The raters scored the samples on five continuous speech-production authenticity indicators and four continuous proficiency indicators. Seven raters additionally participated in a stimulated verbal recall. The raters were able to differentiate authenticity across the two exposure sets, and raters with experience working in China (n = 42), an educational context prone to cram-style test preparation, were even better able to do so. The stimulated recall revealed a range of criteria that raters used in their judgements of authenticity. I discuss these findings and how this hidden facet may play a role in the rating of spoken performance tests.
Acknowledgments
I would like to thank Professor Luke Harding at Lancaster University for his guidance and supervision during this project. I would also like to thank Dr. Paula Winke for her expertise in helping me craft this manuscript in its current form. Finally, I would like to thank my colleagues at the British Council for their participation in and support of this study.
Disclosure statement
No potential conflict of interest was reported by the author.
Supplementary material
Supplemental data for this article can be accessed on the publisher’s website.