ABSTRACT
Machine learning (ML) is an emergent computerised technology whose algorithms are built by ‘learning’ from training data rather than by explicit ‘instruction’; it holds great potential to revolutionise science assessment. This study systematically reviewed 49 articles on ML-based science assessment through a triangle framework with technical, validity, and pedagogical features on its three vertices. We found that a majority of the studies focused on the validity vertex rather than the other two. The existing studies primarily involve text recognition, classification, and scoring, with an emphasis on constructing scientific explanations, and report a wide range of human-machine agreement measures. To establish these agreement measures, most studies employed cross-validation rather than self- or split-validation. ML allows complex assessments to be used by teachers without the burden of human scoring, saving both time and cost. Most studies used supervised ML, which relies on extracting attributes from student work first coded by humans to achieve automaticity, rather than semi-supervised or unsupervised ML. We found that 24 studies were explicitly embedded in science learning activities, such as scientific inquiry and argumentation, to provide feedback or learning guidance. This study identifies existing research gaps and suggests that all three vertices of the ML triangle should be addressed in future assessment studies, with an emphasis on the pedagogical and technical features.
Disclosure Statement
No potential conflict of interest was reported by the authors.
Notes
1. We originally found 47 papers: 34 papers from the literature search, 8 papers from the citation search, and 5 papers solicited from authors. After completing the first-round analysis, we solicited one more paper from authors, which in turn yielded one additional paper through citation search. In sum, there are 49 papers.
Additional information
Funding
Notes on contributors
Xiaoming Zhai
Xiaoming Zhai is an Assistant Professor in Science Education. He is interested in developing and applying innovative assessments for science teaching and learning.
Yue Yin
Yue Yin is an Associate Professor in Educational Psychology whose research interests and expertise include assessment, science education, research design, survey design, and applied measurement/statistics.
James W. Pellegrino
James W. Pellegrino is Liberal Arts and Sciences Distinguished Professor and Distinguished Professor of Education at the University of Illinois at Chicago. He is interested in cognition, instruction, and assessment. He is an AERA fellow, a lifetime National Associate of the National Academy of Sciences, and a member of the National Academy of Education.
Kevin C. Haudek
Kevin C. Haudek is an Assistant Professor. He is interested in the application of computerised tools to evaluate student writing, focusing on short, content-rich responses in science education.
Lehong Shi
Lehong Shi is an educator and researcher in education.