ABSTRACT
Children’s oral language proficiency (OLP) is integral to developing literacy skills. Storytelling and retelling are often used by parents and educators to elicit children’s oral language, yet they are less commonly used for assessment purposes. Leveraging natural language processing and machine learning, this study examined the extent to which computational linguistic and acoustic indices predict human ratings of children’s (n = 184, aged 9 to 11) OLP using two story retell stimuli presented in written and aural forms. Human raters scored children’s OLP on five oral proficiency criteria — vocabulary, grammar, idea development, task fulfilment, and speech delivery — using a 4-point scale, and linguistic and acoustic features were used to predict each criterion. Results demonstrated the efficacy of automated indices in predicting human scores of children’s OLP. This study calls attention to discrepancies between human and machine speech delivery scores and to stimulus effects on story retelling performance among children of different language backgrounds.
Ethics approval
This study was reviewed and approved by the University of Toronto’s Social Sciences, Humanities, and Education Research Ethics Board (REB#: 34203) as well as the local school board’s External Research Review Committee (ERRC#: 2017-2018-38). Parents and teachers provided written informed consent prior to commencing the study. After acquiring parental and teacher consent, we carefully explained the study to each student and obtained their assent. Students then left their classrooms individually to complete the study.
Acknowledgments
The project was supported by the Social Sciences and Humanities Research Council, Canada.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Supplementary material
Supplemental data for this article can be accessed here.
Additional information
Notes on contributors
Melissa R. Hunte
Melissa R. Hunte is a doctoral candidate in the department of Applied Psychology and Human Development at the Ontario Institute for Studies in Education (OISE), University of Toronto. Melissa’s research examines the use of natural language processing and machine learning to assess students’ oral language proficiency and predict their internal beliefs about learning. She also specialises in psychometrics, program evaluation, and assessment reform through machine learning applications. Melissa works collaboratively with peers at the IDELA Lab to create digital assessment tools (BalanceAI) that evaluate language, literacy, cognition, and psychological traits.
Samantha McCormick
Samantha McCormick is a doctoral student in the department of Applied Psychology and Human Development at OISE. She is studying School and Clinical Child Psychology. Samantha is Anishinaabe and English and was born and raised in Northern Ontario. She has experience in digital language and literacy assessment and has been an active member of the BalanceAI project since 2016. Her specific research interest is in Indigenous language assessment, maintenance, and promotion, and how speaking an Indigenous language supports health, decolonial resistance, and well-being.
Maitree Shah
Maitree Shah is a master’s student in the department of Electrical and Computer Engineering at the University of Toronto. She specialises in deep learning, and her work involves creating state-of-the-art NLP models and applying transfer learning. She also has strong knowledge of deep neural architectures (RNNs, CNNs, GANs), with expertise in transformer architectures for text summarisation.
Clarissa Lau
Clarissa Lau is a doctoral candidate in the Developmental Psychology and Education program at OISE, University of Toronto. She completed her Bachelor of Science at McMaster University and went on to attain her Master of Science specialising in Speech and Language Sciences at Western University. She has been involved in various research settings including applied linguistics, neuropsychiatry, and rehabilitation sciences, and served as a research consultant for the Ministry of Education as well as private industries in areas of language development and assessment. Her research interests are in examining novel assessment methodologies and analytic techniques to understand learners’ cognitive, metacognitive, psychological, and affective traits.
Eunice Eunhee Jang
Eunice Eunhee Jang is Professor in the Department of Applied Psychology and Human Development at OISE, University of Toronto. With specialisations in diagnostic language assessment, technology-rich learning and assessment, mixed methods research, and program evaluation, Dr. Jang has led high-impact provincial, national, and international research with various stakeholders. She is the author of the book Focus on Assessment (Oxford University Press, 2014), which provides evidence-based assessment guidelines for K-12 language teachers. Her current BalanceAI project examines ways to promote students’ cognitive and metacognitive development through innovative learning-oriented assessments based on machine learning applications.