Building the foundations for measuring learning gain in higher education: a conceptual framework and measurement instrument

Pages 266-301 | Received 29 Nov 2017, Accepted 31 May 2018, Published online: 06 Sep 2018
 

ABSTRACT

In this paper, we set out the first step towards the measurement of learning gain in higher education by putting forward a conceptual framework for understanding learning gain that is relevant across disciplines. We then introduce the operationalisation of this conceptual framework into a new set of measurement tools. With the use of data from a large-scale survey of 11 English universities and over 4,500 students, we test the reliability and validity of the measurement instrument empirically. We find support in the data for the reliability of most of the measurement scales we put forward, as well as for the validity of the conceptual framework. Based on these results, we reflect on the conceptual framework and associated measurement tools in the context of at-scale deployment and the potential implications for policy and practice in higher education.

Acknowledgments

We would like to acknowledge the funding from the Office for Students (Higher Education Funding Council for England at the time of the award) under the Learning Gain Pilots scheme. We thank Prof Christina Hughes for her leadership of the LEGACY project, and all our LEGACY colleagues. The survey tool we introduce in this paper draws in part on existing published research and we would like to thank the following authors for their permission to use or modify their instruments, or for making them publicly available to other researchers: Prof Marlene Schommer-Aikins, Prof Kirsti Lonka, Prof Jennifer Fredricks, Prof Angela Duckworth, and The International Cognitive Ability Resource. Finally, we are indebted to the leaders, staff, and students of the 11 universities who gave their time to facilitate, and participate in, our study.

Disclosure statement

All authors declare that they have no financial interest or benefit arising from the direct application of this research.

Ethical approval

This research sought and received ethical approval from the Faculty of Education, University of Cambridge. Explicit consent was obtained from all participants whose data are reported in this paper.

Notes

1. The risk of artificially low internal consistency coefficients (such as Cronbach’s alpha) increases for scales made up of small numbers of items. For the original critical processing scale, this means that the 0.733 internal consistency coefficient may have been downwardly biased by the small number of items, but it remains within acceptable margins.

2. For robustness, we have also computed Guttman’s λ2 for all our scales, and the results indicate good internal consistency for all of them.
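For readers who wish to replicate these reliability checks, both coefficients can be computed directly from a respondents-by-items score matrix using their standard formulas. The sketch below (function and variable names are our own illustration, not the authors' code) uses NumPy; Cronbach’s alpha follows the usual variance-ratio form, and Guttman’s λ2 adds the square root of the averaged squared off-diagonal covariances, which is why λ2 is always at least as large as alpha.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def guttman_lambda2(items: np.ndarray) -> float:
    """Guttman's lambda-2 for the same respondents x items matrix."""
    k = items.shape[1]
    cov = np.cov(items, rowvar=False)              # item covariance matrix
    total_var = cov.sum()                          # variance of scale total
    # sum of squared off-diagonal covariances
    off_diag_sq = (cov ** 2).sum() - (np.diag(cov) ** 2).sum()
    return 1 - (np.trace(cov) - np.sqrt(k / (k - 1) * off_diag_sq)) / total_var
```

As a quick check on simulated data with one common factor, λ2 comes out greater than or equal to alpha, consistent with its use as a robustness check for alpha on short scales.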

3. See our discussion above in relation to the Cronbach’s alpha coefficient falling below 0.7.

4. Not computable.

Additional information

Funding

This work was supported by the Office for Students (formerly the Higher Education Funding Council for England, HEFCE) under the Piloting and Evaluating Measures of Learning Gain Programme (lead grantee institution: University of Warwick).