Abstract
Given the varied quality of massive open online courses (MOOCs), a MOOC evaluation framework is needed to guide teachers’ online teaching practices. Drawing on the Quality Matters online course evaluation framework and studies of effective online teaching practices, this study constructed a MOOC evaluation framework through literature analysis and expert interviews, resulting in eight dimensions and 42 secondary indicators. To examine its validity, five MOOCs were evaluated with the framework. Results show that the framework is helpful for evaluating MOOCs and offers implications for online instructors. The evaluations revealed common limitations of MOOCs: reliance on a single method of knowledge presentation, traditional assessment methods, insufficient attention to student initiative, and inadequate attention to copyright. Corresponding suggestions are provided to improve the quality of MOOCs. Theoretical and practical implications for the improvement of MOOCs are discussed, along with avenues for further research.
Acknowledgments
The author would like to thank the experts, instructors, and students who contributed to this study.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data that support the findings of this study are available from the author upon reasonable request.
Additional information
Funding
Notes on contributors
Yang Wang
Yang Wang is an assistant professor in the School of Educational Science at Nanjing Normal University and a visiting scholar at the Ohio State University. Her research interests include instructors’ teaching presence and learning analytics.