Research Articles

Measures, Metrics, and Indicators for Evaluating Technology-Based Interventions

Eva L. Baker & Harold F. O’Neil
Pages 2199-2210 | Received 09 Nov 2021, Accepted 27 Jan 2023, Published online: 22 Feb 2023

Abstract

The goal of this article is to reconsider the types of dependent variables used in the formal evaluation of technology-based learning programs and interventions, focusing specifically on measures, metrics, and indicators. Its deeper intent is to support the use of evaluation results in decisions to improve the effectiveness of current or future learning-focused implementations. The article describes three classifications of dependent variables appropriate for evaluating technological and other learning innovations. Beginning with a treatment of evaluation, which is central to the application of outcomes, it presents a model of the relationships and key attributes of measures, metrics, and indicators, with the goal of clarifying their meaning. Ways of developing measures are described, contrasting the historical development of construct-oriented measures with criterion-referenced measures. An example of a criterion-referenced framework developed for Navy training is given, including procedures for developing domain-independent measures for use across disparate content. Metrics are described, emphasizing different ways of giving meaning to scores, and norm-referenced and criterion-referenced approaches are contrasted. Finally, the use of indicators (combinations of relevant metrics) is discussed for larger policy and managerial purposes.

Acknowledgements

The findings and opinions expressed in this article are those of the authors and do not necessarily reflect the positions or policies of the Office of Personnel Management or PowerTrain.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This article was supported in part by contract numbers OPM2615D0001 [PO 180815-1 and 190801-04] and 24361820D0001 [PO 210907-F0308-01 and 220123-F0036-30] from the Office of Personnel Management via PowerTrain Inc. to the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles, and a subcontract to the University of Southern California.

Notes on contributors

Eva L. Baker

Eva L. Baker is a Distinguished Professor Emerita at UCLA. She researches the design and validation of multipurpose training and assessment systems, currently developing games, evaluations, simulations, and scenario-based assessments for the U.S. Navy. She served as President of the American and World Educational Research Associations and is the Founding Director of CRESST.

Harold F. O’Neil

Harold F. O’Neil is a Professor of Educational Psychology at USC. His research interests include the effectiveness of computer games and simulations for teaching and assessment. His most recent book is Theoretical Issues of Using Simulations and Games in Educational Assessment (2022).

