Abstract
The process of constructing assessment scales for performance testing is complex and multi-dimensional. As a result, a number of different approaches, both empirically and intuitively based, are open to developers. In this paper we outline the approach taken in the revision of a set of assessment scales used with speaking tests, and demonstrate the value of combining methodologies to inform and refine scale development. We set the process in the context of the growing influence of the Common European Framework of Reference (Council of Europe 2001) and outline a number of stages in terms of the procedures followed and outcomes produced. The findings describe a range of data that were collected and analysed through a number of phases and used to inform the revision of the scales, including consultation with experts and data-driven qualitative and quantitative research studies. The overall aim of the paper is to illustrate the importance of combining intuitive and data-driven scale construction methodologies, and to suggest a usable scale construction model for application or adaptation in a variety of contexts.
Keywords:
Acknowledgements
The authors would like to thank Lynda Taylor, Paul Newton, Michelle Meadows and the two anonymous reviewers for providing valuable feedback on an earlier draft of this article.