Continuous Error Timing in Automation: The Peak-End Effect on Human-Automation Trust

Pages 1832-1844 | Received 25 Sep 2022, Accepted 06 Jun 2023, Published online: 20 Jun 2023

References

  • Abbass, H. A., Scholz, J., & Reid, D. J. (Eds.) (2018). Trustworthiness of autonomous systems. In Foundations of trusted autonomy (Vol. 117, pp. 161–180). Springer International Publishing. https://doi.org/10.1007/978-3-319-64816-3
  • Alaybek, B., Dalal, R. S., Fyffe, S., Aitken, J. A., Zhou, Y., Qu, X., Roman, A., & Baines, J. I. (2022). All’s well that ends (and peaks) well? A meta-analysis of the peak-end rule and duration neglect. Organizational Behavior and Human Decision Processes, 170, 104149. https://doi.org/10.1016/j.obhdp.2022.104149
  • Aliasghari, P., Ghafurian, M., Nehaniv, C. L., & Dautenhahn, K. (2021). Effect of domestic trainee robots’ errors on human teachers’ trust. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 81–88). IEEE. https://doi.org/10.1109/RO-MAN50785.2021.9515510
  • Asch, S. E. (1946). Forming impressions of personality. The Journal of Abnormal and Social Psychology, 41(3), 258–290. https://doi.org/10.1037/h0055756
  • Bahrin, M. A. K., Othman, M. F., Nor Azli, N. H., & Talib, M. F. (2016). Industry 4.0: A review on industrial automation and robotic. Jurnal Teknologi, 78(6–13), 137–143. https://doi.org/10.11113/jt.v78.9285
  • Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. https://doi.org/10.1037/1089-2680.5.4.323
  • Brooks, D. J. (2017). A human-centric approach to autonomous robot failures. [Ph.D. dissertation]. Department of Computer Science, University.
  • Chancey, E. T., Bliss, J. P., Yamani, Y., & Handley, H. A. H. (2017). Trust and the compliance–reliance paradigm: The effects of risk, error bias, and reliability on trust and dependence. Human Factors, 59(3), 333–345. https://doi.org/10.1177/0018720816682648
  • Chavaillaz, A., Wastell, D., & Sauer, J. (2016). System reliability, performance and trust in adaptable automation. Applied Ergonomics, 52, 333–342. https://doi.org/10.1016/j.apergo.2015.07.012
  • Chen, H.-J., & Ho, M.-X. (2021). Integrating the peak-end rule and the Kano model in assessing the product-use process. International Journal of Affective Engineering, 20(3), 171–179. https://doi.org/10.5057/ijae.IJAE-D-21-00003
  • Chen, J., Mishler, S., & Hu, B. (2021). Automation error type and methods of communicating automation reliability affect trust and performance: An empirical study in the cyber domain. IEEE Transactions on Human-Machine Systems, 51(5), 463–473. https://doi.org/10.1109/THMS.2021.3051137
  • Cockburn, A., Quinn, P., & Gutwin, C. (2015). Examining the peak-end effects of subjective experience. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 357–366). ACM. https://doi.org/10.1145/2702123.2702139
  • Costabile, K. A., & Klein, S. B. (2005). Finishing strong: Recency effects in juror judgments. Basic and Applied Social Psychology, 27(1), 47–58. https://doi.org/10.1207/s15324834basp2701_5
  • Davenport, R. B., & Bustamante, E. A. (2010). Effects of false-alarm vs. miss-prone automation and likelihood alarm technology on trust, reliance, and compliance in a miss-prone task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54(19), 1513–1517. https://doi.org/10.1177/154193121005401933
  • de Visser, E., & Parasuraman, R. (2011). Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 5(2), 209–231. https://doi.org/10.1177/1555343411410160
  • Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot failures and feedback on real-time trust. 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Japan (pp. 251–258). IEEE. https://doi.org/10.1109/HRI.2013.6483596
  • Desai, M., Medvedev, M., Vázquez, M., McSheehy, S., Gadea-Omelchenko, S., Bruggeman, C., Steinfeld, A., & Yanco, H. (2012). Effects of changing reliability on trust of robot systems. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI ’12 (p. 73). ACM. https://doi.org/10.1145/2157689.2157702
  • Dixon, S. R., & Wickens, C. D. (2006). Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence in high workload. Human Factors: The Journal of the Human Factors and Ergonomics Society, 48(3), 474–486. https://doi.org/10.1518/001872006778606822
  • Doi, T., Doi, S., & Yamaoka, T. (2022). The peak–end rule in evaluating product user experience: The chronological evaluation of past impressive episodes on overall satisfaction. Human Factors and Ergonomics in Manufacturing & Service Industries, 32(3), 256–267. https://doi.org/10.1002/hfm.20951
  • Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
  • Dzindolet, M. T., Pierce, L. G., Beck, H. P., & Dawe, L. A. (2002). The perceived utility of human and automated aids in a visual detection task. Human Factors: The Journal of the Human Factors and Ergonomics Society, 44(1), 79–94. https://doi.org/10.1518/0018720024494856
  • Flook, R., Shrinah, A., Wijnen, L., Eder, K., Melhuish, C., & Lemaignan, S. (2019). On the impact of different types of errors on trust in human-robot interaction: Are laboratory-based HRI experiments trustworthy? Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, 20(3), 455–486. https://doi.org/10.1075/is.18067.flo
  • Foroughi, C. K., Devlin, S., Pak, R., Brown, N. L., Sibley, C., & Coyne, J. T. (2023). Near-perfect automation: Investigating performance, trust, and visual attention allocation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 65(4), 546–561. https://doi.org/10.1177/00187208211032889
  • Freedy, A., DeVisser, E., Weltman, G., & Coeyman, N. (2007). Measurement of trust in human-robot collaboration. 2007 International Symposium on Collaborative Technologies and Systems, USA (pp. 106–114). IEEE. https://doi.org/10.1109/CTS.2007.4621745
  • Geng, X., Chen, Z., Lam, W., & Zheng, Q. (2013). Hedonic evaluation over short and long retention intervals: The mechanism of the peak-end rule. Journal of Behavioral Decision Making, 26(3), 225–236. https://doi.org/10.1002/bdm.1755
  • Greenlees, I., Dicks, M., Holder, T., & Thelwell, R. (2007). Order effects in sport: Examining the impact of order of information presentation on attributions of ability. Psychology of Sport and Exercise, 8(4), 477–489. https://doi.org/10.1016/j.psychsport.2006.07.004
  • Gutwin, C., Rooke, C., Cockburn, A., Mandryk, R. L., & Lafreniere, B. (2016). Peak-end effects on player experience in casual games. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5608–5619). https://doi.org/10.1145/2858036.2858419
  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
  • Hennessy, D. A., Jakubowski, R. D., & Leo, B. (2016). The impact of primacy/recency effects and hazard monitoring on attributions of other drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 39, 43–53. https://doi.org/10.1016/j.trf.2016.03.001
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04
  • Jiang, L., Yin, D., & Liu, D. (2019). Can joy buy you money? The impact of the strength, duration, and phases of an entrepreneur’s peak displayed joy on funding performance. Academy of Management Journal, 62(6), 1848–1871. https://doi.org/10.5465/amj.2017.1423
  • Johnson, J. D., Sanchez, J., Fisk, A. D., & Rogers, W. A. (2004). Type of automation failure: The effects on trust and reliance in automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(18), 2163–2167. https://doi.org/10.1177/154193120404801807
  • Jones, E. E., Rock, L., Shaver, K. G., Goethals, G. R., & Ward, L. M. (1968). Pattern of performance and ability attribution: An unexpected primacy effect. Journal of Personality and Social Psychology, 10(4), 317–340. https://doi.org/10.1037/h0026818
  • Kahneman, D. (2000). Evaluation by moments: Past and future. In D. Kahneman & A. Tversky (Eds.), Choices, values, and frames (1st ed., pp. 693–708). Cambridge University Press. https://doi.org/10.1017/CBO9780511803475.039
  • Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2023). Trust in artificial intelligence: Meta-analytic findings. Human Factors: The Journal of the Human Factors and Ergonomics Society, 65(2), 337–359. https://doi.org/10.1177/00187208211013988
  • Kerstholt, J. H., & Jackson, J. L. (1998). Judicial decision making: Order of evidence presentation and availability of background information. Applied Cognitive Psychology, 12(5), 445–454. https://doi.org/10.1002/(SICI)1099-0720(199810)12:5<445::AID-ACP518>3.0.CO;2-8
  • Khastgir, S., Birrell, S., Dhadyalla, G., & Jennings, P. (2017). Calibrating trust to increase the use of automated systems in a vehicle. In N. A. Stanton, S. Landry, G. Di Bucchianico, & A. Vallicelli (Eds.), Advances in human aspects of transportation (Vol. 484, pp. 535–546). Springer International Publishing. https://doi.org/10.1007/978-3-319-41682-3_45
  • Kraus, J. M., Forster, Y., Hergeth, S., & Baumann, M. (2019). Two routes to trust calibration: Effects of reliability and brand information on trust in automation. International Journal of Mobile Human Computer Interaction, 11(3), 1–17. https://doi.org/10.4018/IJMHCI.2019070101
  • Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies, 40(1), 153–184. https://doi.org/10.1006/ijhc.1994.1007
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Li, Y., Choi, D., Chung, J., Kushman, N., Schrittwieser, J., Leblond, R., Eccles, T., Keeling, J., Gimeno, F., Dal Lago, A., Hubert, T., Choy, P., de Masson d‘Autume, C., Babuschkin, I., Chen, X., Huang, P.-S., Welbl, J., Gowal, S., Cherepanov, A., … Vinyals, O. (2022). Competition-level code generation with AlphaCode. Science (New York, N.Y.), 378(6624), 1092–1097. https://doi.org/10.1126/science.abq1158
  • Lopez, J., Watkins, H., & Pak, R. (2023). Enhancing component-specific trust with consumer automated systems through humanness design. Ergonomics, 66(2), 291–302. https://doi.org/10.1080/00140139.2022.2079728
  • Lount, R. B., Zhong, C.-B., Sivanathan, N., & Murnighan, J. K. (2008). Getting off on the wrong foot: The timing of a breach and the restoration of trust. Personality & Social Psychology Bulletin, 34(12), 1601–1612. https://doi.org/10.1177/0146167208324512
  • Lucas, G. M., Boberg, J., Traum, D., Artstein, R., Gratch, J., Gainer, A., Johnson, E., Leuski, A., & Nakano, M. (2017). The role of social dialogue and errors in robots. Proceedings of the 5th International Conference on Human Agent Interaction, Germany (pp. 431–433). Association for Computing Machinery. https://doi.org/10.1145/3125739.3132617
  • Lucas, G. M., Boberg, J., Traum, D., Artstein, R., Gratch, J., Gainer, A., Johnson, E., Leuski, A., & Nakano, M. (2018). Getting to know each other: The role of social dialogue in recovery from errors in social robots. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, USA (pp. 344–351). Association for Computing Machinery. https://doi.org/10.1145/3171221.3171258
  • Luchins, A. S. (1958). Definitiveness of impression and primacy-recency in communications. The Journal of Social Psychology, 48(2), 275–290. https://doi.org/10.1080/00224545.1958.9919292
  • Lyons, J. B., Ho, N. T., Van Abel, A. L., Hoffmann, L. C., Sadler, G. G., Fergueson, W. E., Grigsby, M. A., & Wilkins, M. (2017). Comparing trust in auto-GCAS between experienced and novice air force pilots. Ergonomics in Design: The Quarterly of Human Factors Applications, 25(4), 4–9. https://doi.org/10.1177/1064804617716612
  • Madhavan, P., Wiegmann, D. A., & Lacson, F. C. (2006). Automation failures on tasks easily performed by operators undermine trust in automated aids. Human Factors: The Journal of the Human Factors and Ergonomics Society, 48(2), 241–256. https://doi.org/10.1518/001872006777724408
  • Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of automated decision aids: The impact of degree of automation and system experience. Journal of Cognitive Engineering and Decision Making, 6(1), 57–87. https://doi.org/10.1177/1555343411433844
  • Moray, N., Inagaki, T., & Itoh, M. (2000). Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. Journal of Experimental Psychology: Applied, 6(1), 44–58. https://doi.org/10.1037/1076-898X.6.1.44
  • Neuschatz, J. S., Lampinen, J. M., Preston, E. L., Hawkins, E. R., & Toglia, M. P. (2002). The effect of memory schemata on memory and the phenomenological experience of naturalistic situations. Applied Cognitive Psychology, 16(6), 687–708. https://doi.org/10.1002/acp.824
  • Nourani, M., King, J. T., & Ragan, E. D. (2020). The role of domain expertise in user trust and the impact of first impressions with intelligent systems. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1), 112–121. https://doi.org/10.1609/hcomp.v8i1.7469
  • Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
  • Petrak, B., Weitz, K., Aslan, I., & Andre, E. (2019). Let me show you your new home: Studying the effect of proxemic-awareness of robots on users’ first impressions. 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), India (pp. 1–7). IEEE. https://doi.org/10.1109/RO-MAN46459.2019.8956463
  • Pezdek, K., Whetstone, T., Reynolds, K., Askari, N., & Dougherty, T. (1989). Memory for real-world scenes: The role of consistency with schema expectation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(4), 587–595. https://doi.org/10.1037/0278-7393.15.4.587
  • Qiao, H., Zhang, J., Zhang, L., Li, Y., & Loft, S. (2022). Exploring the peak-end effects in air traffic controllers’ mental workload ratings. Human Factors: The Journal of the Human Factors and Ergonomics Society, 64(8), 1292–1305. https://doi.org/10.1177/0018720821994355
  • Raghavendra, U., Acharya, U. R., & Adeli, H. (2019). Artificial intelligence techniques for automated diagnosis of neurological disorders. European Neurology, 82(1–3), 41–64. https://doi.org/10.1159/000504292
  • Redelmeier, D. A., & Kahneman, D. (1996). Patients’ memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures. Pain, 66(1), 3–8. https://doi.org/10.1016/0304-3959(96)02994-6
  • Robinson, E. S., & Brown, M. A. (1926). Effect of serial position upon memorization. The American Journal of Psychology, 37(4), 538. https://doi.org/10.2307/1414914
  • Ross, J. M. (2008). Moderators of trust and reliance across multiple decision aids [Doctoral dissertation]. University of Central Florida.
  • Rossi, A., Dautenhahn, K., Koay, K. L., & Walters, M. L. (2017). How the timing and magnitude of robot errors influence peoples’ trust of robots in an emergency scenario. In A. Kheddar, E. Yoshida, S. S. Ge, K. Suzuki, J.-J. Cabibihan, F. Eyssel, & H. He (Eds.), Social robotics (pp. 42–52). Springer International Publishing.
  • Rovira, E., McGarry, K., & Parasuraman, R. (2007). Effects of imperfect automation on decision making in a simulated command and control task. Human Factors, 49(1), 76–87. https://doi.org/10.1518/001872007779598082
  • Sanchez, J., Fisk, A. D., & Rogers, W. A. (2004). Reliability and age-related effects on trust and reliance of a decision support aid. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(3), 586–589. https://doi.org/10.1177/154193120404800366
  • Sarkar, S., Araiza-Illan, D., & Eder, K. (2017). Effects of faults, experience, and personality on trust in a robot co-worker. arXiv preprint arXiv:1703.02335. https://doi.org/10.48550/ARXIV.1703.02335
  • Sato, T., Yamani, Y., Liechty, M., & Chancey, E. T. (2020). Automation trust increases under high-workload multitasking scenarios involving risk. Cognition, Technology & Work, 22(2), 399–407. https://doi.org/10.1007/s10111-019-00580-5
  • Schreiber, C. A., & Kahneman, D. (2000). Determinants of the remembered utility of aversive sounds. Journal of Experimental Psychology: General, 129(1), 27–42. https://doi.org/10.1037/0096-3445.129.1.27
  • Tolmeijer, S., Gadiraju, U., Ghantasala, R., Gupta, A., & Bernstein, A. (2021). Second chance for a first impression? Trust development in intelligent system interaction. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 77–87). ACM. https://doi.org/10.1145/3450613.3456817
  • Washburn, A., Adeleye, A., An, T., & Riek, L. D. (2020). Robot errors in proximate HRI: How functionality framing affects perceived reliability and trust. ACM Transactions on Human-Robot Interaction, 9(3), 1–21. https://doi.org/10.1145/3380783
  • Wright, J. L., Chen, J. Y. C., & Lakhmani, S. G. (2020). Agent transparency and reliability in human–robot interaction: The influence on user confidence and perceived reliability. IEEE Transactions on Human-Machine Systems, 50(3), 254–263. https://doi.org/10.1109/THMS.2019.2925717
