ABSTRACT
In a human reliability analysis (HRA) study, a systematic analytical process is being developed by applying a macro-cognitive model to HRA in order to obtain consistent results. In this application, the operator's cognitive process is decomposed, categorized, and formalized as a macro-cognitive process. The formalized cognitive model is then applied to each task to identify the performance influencing factors (PIFs). Once the PIFs are identified for each cognitive model of each task, human error probabilities are calculated by integrating the human error rates associated with the individual influencing factors. However, it remains difficult to correctly estimate the PIFs for each task through a desktop study alone. Therefore, in this study, empirical performance data collected during simulator training are used to supplement the desktop studies. When collecting performance data during simulator training, it is challenging to manually capture performance data at the cognitive level for each task because many tasks are performed in a short period of time. A measurement method for obtaining operators' performance data at the cognitive level is therefore investigated, and automatic performance evaluation methods are developed. An on-site evaluation of the prototype system was performed, and the effectiveness of the approach was confirmed.
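As a rough illustration of the "integration" step described above, the following sketch shows one common way such calculations are structured in HRA practice (e.g., SPAR-H-style multiplier models); the cognitive function names, nominal rates, and combination rule here are illustrative assumptions, not the authors' actual model.

```python
# Illustrative sketch (NOT the authors' model): scale a nominal error
# rate for each macro-cognitive function by its PIF multiplier, then
# combine across functions to obtain a task-level HEP.

# Hypothetical nominal error rates per macro-cognitive function.
NOMINAL_HEP = {
    "detection": 0.001,
    "understanding": 0.01,
    "decision": 0.01,
    "action": 0.001,
}

def task_hep(pif_multipliers: dict[str, float]) -> float:
    """Combine per-function HEPs, each scaled by its PIF multiplier.

    pif_multipliers maps a cognitive function to the product of its
    PIF multipliers (1.0 = nominal conditions).
    """
    p_success = 1.0
    for func, nominal in NOMINAL_HEP.items():
        p_fail = min(nominal * pif_multipliers.get(func, 1.0), 1.0)
        p_success *= (1.0 - p_fail)  # all functions must succeed
    return 1.0 - p_success           # task fails if any function fails

# Example: a degraded PIF (e.g., time pressure) raises the
# decision-related multiplier to 10, increasing the task HEP.
print(task_hep({"decision": 10.0}))
```

The point of the sketch is only the structure: per-function nominal rates, PIF multipliers estimated per task, and a combination rule; the empirical simulator data discussed in the abstract would inform the multiplier estimates.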
Acknowledgments
This paper was supported by the collaborative research and development project of Mitsubishi Heavy Industries, Ltd., Okayama University, and Tottori University. The authors express their sincere appreciation to the organizations and members involved in this study.
The authors thank Jeffrey C. Joe, Senior Scientist and Advisor, Human Factors & Human Reliability Department, Idaho National Laboratory, U.S.A., for his great support in providing both technical and editorial reviews.
Disclosure statement
No potential conflict of interest was reported by the author(s).