ABSTRACT
The global demand for digital proficiency has placed increasing pressure on institutions to “massify” education. As practical digital skills development becomes more important, there is a need to design accurate and timely performance feedback systems that can scale to large numbers of learners. This paper contributes meta-requirements and design principles for a socio-technical artefact that addresses the general problem of providing performance feedback at scale. The artefact evaluation yields evidence for achieving three objectives: a) scalability to a large number of learners, b) validity and reliability of the feedback, and c) positive impact on learners’ behaviour and engagement with the feedback system. These results are obtained through the synergistic contribution of pedagogical prioritisation (i.e., what skills to cover), assignment design (i.e., what tasks to use to evaluate mastery), and automated measurement (i.e., grading engine functionalities for error detection).
Disclosure statement
No potential conflict of interest was reported by the authors.
Correction Statement
This article has been republished with minor changes. These changes do not impact the academic content of the article.
Notes
1. The code for the grading engine is available at: https://github.com/digitaldatastreams/grader.
2. AWS Lambda is a cloud-based service that enables event-driven code execution. The Lambda service is based on a serverless architecture that automatically manages and provisions the computing resources required to run the code, thus enabling cloud-native application development and deployment.
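To illustrate the event-driven execution model described in Note 2, the following is a minimal sketch of an AWS Lambda handler in Python. The handler signature (`event`, `context`) follows the standard Lambda runtime convention; the `submission_id` payload field and the grading response shape are hypothetical illustrations, not details drawn from the paper's grading engine.

```python
import json

def handler(event, context):
    # The Lambda runtime invokes this function with the triggering
    # event (e.g. a learner submission upload) and a runtime context.
    # "submission_id" is a hypothetical payload key for illustration.
    submission_id = event.get("submission_id", "unknown")

    # In a grading engine, error-detection and scoring logic would
    # run here before the result is returned to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"graded": submission_id}),
    }
```

Locally, the handler can be exercised by calling it directly with a sample event and `None` for the context; in production, provisioning and scaling of the underlying compute are managed entirely by the Lambda service.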