
Continuous Error Timing in Automation: The Peak-End Effect on Human-Automation Trust

Kexin Wang, Jianan Lu, Shuyi Ruan & Yue Qi
Pages 1832-1844 | Received 25 Sep 2022, Accepted 06 Jun 2023, Published online: 20 Jun 2023
 

Abstract

This study developed an experimental paradigm (a CAPTCHA recognition task) with high ecological validity to investigate how continuous errors in an automated system, and the timing of their occurrence, affect human-automation trust. The continuous system errors were manipulated to appear in one of four timing conditions: the early stage, middle stage, or late stage of the task, or not at all. Our research found that continuous errors undermine trust in automated systems. More importantly, even with the same average system reliability, overall trust decreases significantly when continuous errors occur. Human-automation trust is significantly lower in the late continuous error condition than in the no continuous error condition, indicating that trust in automated systems accords with the peak-end rule: user trust is mainly affected by the peak and end values of system reliability. This study provides new suggestions for trustworthy artificial intelligence design. Although system errors cannot be eliminated entirely, developers can minimize their impact on human-automation trust by avoiding continuous errors and preventing them from occurring during the late stage of interaction.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research is supported by the National Natural Science Foundation of China [32000771]; the Fundamental Research Funds for the Central Universities and the Research Funds of Renmin University of China [21XNLG13]; and the fund for building world-class universities (disciplines) of Renmin University of China, Project No. 2018 [RUCPSY0007].

Notes on contributors

Kexin Wang

Kexin Wang received her bachelor’s degree in Applied Psychology in 2021. She is currently a graduate student in the Department of Psychology at Renmin University of China. Her research primarily revolves around the concept of trust, encompassing interpersonal trust, human-machine trust, and facial trustworthiness.

Jianan Lu

Jianan Lu has over 10 years of experience in software design and product operation. He has an integrated background, with a Bachelor of Engineering and a Master of Psychology.

Shuyi Ruan

Shuyi Ruan received her bachelor's degree in Management in 2021 and then pursued Applied Psychology at Renmin University of China. She is now immersed in user research and product design, applying her cross-disciplinary knowledge to uncover genuine user needs and translate them into tangible design solutions.

Yue Qi

Yue Qi received the Ph.D. degree in psychology from the Institute of Psychology, Chinese Academy of Sciences, in 2015. She is currently an Associate Professor in the Department of Psychology, and a Research Fellow of the Metaverse Research Center at Renmin University of China. Her research focuses on interpersonal and human-computer interaction.
