Abstract
The goal of this article is to reconsider the types of dependent variables used in the formal evaluation of technology-based learning programs. It focuses specifically on the measures, metrics, and indicators applied to the evaluation of technology-based learning programs and interventions. Its deeper intent is to support the use of evaluation results in decisions to improve the effectiveness of current or future learning-focused implementations. The article describes three classifications of dependent variables appropriate for evaluating technological and other learning innovations. Beginning with a treatment of evaluation, which is central to the application of outcomes, the article presents a model of the relationships and key attributes of measures, metrics, and indicators, with the goal of clarifying their meaning. Ways of developing measures are provided, contrasting the historical development of construct-oriented measures with criterion-referenced measures. An example of a criterion-referenced framework developed for Navy training is given, including procedures for developing domain-independent measures for use across disparate content. Metrics are described, emphasizing different ways of giving meaning to scores, and norm-referenced and criterion-referenced approaches are contrasted. The use of indicators (combinations of relevant metrics) is discussed for larger policy and managerial purposes.
Acknowledgements
The findings and opinions expressed in this article are those of the authors and do not necessarily reflect the positions or policies of the Office of Personnel Management or PowerTrain.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Funding
Notes on contributors
Eva L. Baker
Eva L. Baker is a Distinguished Professor Emerita at UCLA. She researches the design and validation of multipurpose training and assessment systems, currently developing games, evaluations, simulations, and scenario-based assessments for the U.S. Navy. She served as President of the American and World Educational Research Associations, and is the Founding Director of CRESST.
Harold F. O’Neil
Harold F. O’Neil is a Professor of Educational Psychology at USC. His research interests include the effectiveness of computer games and simulations for teaching and assessment. His most recent book is Theoretical Issues of Using Simulations and Games in Educational Assessment (2022).