Research Articles

The Effect of Task Fidelity on Learning Curves: A Synthetic Analysis

Frank E. Ritter, Martin K. Yeh, Sarah J. Stager, Ashley F. McDermott & Peter W. Weyhrauch
Pages 2253-2267 | Received 12 Oct 2021, Accepted 18 Aug 2022, Published online: 29 Jan 2023
 

Abstract

The value of fidelity has been debated since simulation-based training systems were first created. A primary question, which has yet to be fully answered, is what effect the level of simulation fidelity has on learning a target task. We present a new analysis method and use it for several analyses of a training simulation for an electronic maintenance task with two levels of fidelity: a high-fidelity simulation that takes essentially as much time as the real-world task, and a low-fidelity simulation with minimal delays and many actions removed or reduced in fidelity and time. The analyses are based on the Keystroke-Level Model (KLM) and the power law of learning. The analyses predict that performance on the low-fidelity simulation initially takes between one quarter and one eighth of the time of the high-fidelity simulation, and thus starts out providing between four and eight times as many practice trials in a given time period. The low-fidelity curve has a lower intercept and a steeper slope. Learners who move from low to high fidelity appear not to be adversely affected. For a small number of practice trials, this makes a significant difference. We also explore the effect of missing subtasks in the low-fidelity simulation. This effect varies with the tasks included: if the low-fidelity simulation does not train an important subtask, learners can be slower when they transfer. We also analyze a simulation that we have built and are studying. These analyses demonstrate that lower-fidelity training situations help most where there is less time to practice, and that with extensive time to practice, full fidelity has nearly the same outcome (but perhaps not the same costs or risks). We discuss how this analysis approach can help choose the level of fidelity of future training simulations.
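The comparison the abstract describes can be sketched with the power law of learning, under which the time for the nth practice trial is T(n) = a·n^(−b). The intercepts, exponent, and time budget below are hypothetical stand-ins for illustration, not the paper's fitted KLM values; the sketch shows only why a lower intercept yields many more trials within a fixed practice period.

```python
# Illustrative sketch of the power-law comparison described in the
# abstract. Parameters here are hypothetical, not the paper's values.

def trial_time(n, intercept, exponent):
    """Power law of learning: time (in seconds) for the nth trial."""
    return intercept * n ** (-exponent)

def trials_in_budget(budget, intercept, exponent):
    """Count how many trials fit into a fixed practice-time budget."""
    n, elapsed = 0, 0.0
    while True:
        t = trial_time(n + 1, intercept, exponent)
        if elapsed + t > budget:
            return n, elapsed
        n += 1
        elapsed += t

# High fidelity: a long first trial (assumed 600 s here).
# Low fidelity: an initial trial roughly one sixth as long.
hi_n, _ = trials_in_budget(3600.0, intercept=600.0, exponent=0.4)
lo_n, _ = trials_in_budget(3600.0, intercept=100.0, exponent=0.4)

print(f"High fidelity: {hi_n} trials in one hour of practice")
print(f"Low fidelity:  {lo_n} trials in one hour of practice")
```

With these assumed parameters, the low-fidelity condition packs many times more trials into the same hour, which is the mechanism behind the four-to-eight-fold advantage the abstract reports for early practice.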

Acknowledgements

Grace Good, Jake Graham, Jong Kim, Jacob Oury, Ray Perez, Clare Robson, Fred Ryans, Shan Wang, Steve Zimmerman, and two anonymous reviewers provided helpful comments. James Niehaus suggested how to improve the schematic. This work was supported by ONR, N00014-18-C-7015 and N00014-15-1-2275. A previous version was published at the International Conference on Cognitive Modeling, and the reviewers there provided useful comments.

Disclosure statement

Frank Ritter is required by Pennsylvania State University [sic] to include this paragraph: “Frank E. Ritter, the co-author of this article, have financial interest with Charles River Analytics Inc.; a company in which Frank E. Ritter provides consulting services and could potentially benefit from by the results of this research. The interest has been reviewed and is being monitored by the Pennsylvania State University in accordance with its individual Conflict of Interest policy, for the purpose of maintaining the objectivity of research at the Pennsylvania State University” [sic].

Notes

1 Maintenance Enhancement with Next Generation-Development of Skills.

Additional information

Notes on contributors

Frank E. Ritter

Frank E. Ritter researches the development, application, and methodology of cognitive models, particularly applied to interface design, behavioral moderators, and understanding learning. He contributed to a National Research Council report on how to use cognitive models to improve human-system design (Pew & Mavor, Eds., 2007).

Martin K. Yeh

Martin K. Yeh is an Associate Professor of Information Sciences and Technology. His research interests include understanding the role of computer technology in learning environments and applying new technology in Human-Computer Interaction. He also enjoys learning, teaching, and practicing software engineering and computer security.

Sarah J. Stager

Sarah J. Stager is interested in extending learning theory to be inclusive of individual differences. Her research facilitates and evaluates the role of technology in teaching and learning.

Ashley F. McDermott

Ashley F. McDermott is a Cognitive Scientist with interests in the role of technology in reshaping cognitive processes and behaviors, and how to develop learning experiences that optimize different learning technologies. She holds a Ph.D. in Brain and Cognitive Sciences from the University of Rochester.

Peter W. Weyhrauch

Peter W. Weyhrauch is a Principal Scientist at Charles River Analytics researching technologies and intelligent applications that model human activity, motivations, and performance. His career has taken him into areas such as intelligent adaptive training and operations, cognitive modeling, and computational interactive narrative, a field that he helped establish.
