Evaluating Effects of Enhanced Autonomy Transparency on Trust, Dependence, and Human-Autonomy Team Performance over Time

Pages 1962–1971 | Received 31 Mar 2021, Accepted 01 Apr 2022, Published online: 13 Jul 2022

References

  • Bagheri, N., & Jamieson, G. A. (2004). The impact of context-related reliability on automation failure detection and scanning behaviour [Paper presentation]. 2004 IEEE International Conference on Systems, Man and Cybernetics (pp. 212–217). IEEE. https://doi.org/10.1109/ICSMC.2004.1398299
  • Beller, J., Heesen, M., & Vollrath, M. (2013). Improving the driver–automation interaction: An approach using automation uncertainty. Human Factors, 55(6), 1130–1141. https://doi.org/10.1177/0018720813482327
  • Bhat, S., Lyons, J. B., Shi, C., & Yang, X. J. (To appear). Clustering trust dynamics in a human-robot sequential decision-making task. IEEE Robotics and Automation Letters.
  • Chen, J. Y. C., & Barnes, M. J. (2014). Human-agent teaming for multirobot control: A review of human factors issues. IEEE Transactions on Human-Machine Systems, 44(1), 13–29. https://doi.org/10.1109/THMS.2013.2293535
  • Chen, J. Y. C., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259–282. https://doi.org/10.1080/1463922X.2017.1315750
  • Chen, J. Y. C., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. J. (2014). Situation awareness-based agent transparency (Tech. Rep. No. ARL-TR-6905). U.S. Army Research Laboratory, Aberdeen Proving Ground.
  • de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: The importance of trust repair in human–machine interaction. Ergonomics, 61(10), 1409–1427. https://doi.org/10.1080/00140139.2018.1457725
  • de Visser, E. J., Peeters, M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12(2), 459–478. https://doi.org/10.1007/s12369-019-00596-x
  • Dixon, W. (1953). Processing data for outliers. Biometrics, 9(1), 74–89. https://doi.org/10.2307/3001634
  • Du, N., Haspiel, J., Zhang, Q., Tilbury, D., Pradhan, A. K., Yang, X. J., & Robert, L. P. (2019). Look who’s talking now: Implications of AV’s explanations on driver’s trust, AV preference, anxiety and mental workload. Transportation Research Part C: Emerging Technologies, 104, 428–442. https://doi.org/10.1016/j.trc.2019.05.025
  • Du, N., Huang, K. Y., & Yang, X. J. (2020). Not all information is equal: Effects of disclosing different types of likelihood information on trust, compliance and reliance, and task performance in human-automation teaming. Human Factors, 62(6), 987–1001. https://doi.org/10.1177/0018720819862916
  • Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
  • Endsley, M. R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
  • Fletcher, K. I., Bartlett, M. L., Cockshell, S. J., & McCarley, J. S. (2017). Visualizing probability of detection to aid sonar operator performance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 302–306. https://doi.org/10.1177/1541931213601556
  • Forster, Y., Naujoks, F., Neukum, A., & Huestegge, L. (2017). Driver compliance to take-over requests with different auditory outputs in conditional automation. Accident Analysis & Prevention, 109, 18–28. https://doi.org/10.1016/j.aap.2017.09.019
  • Guo, Y., Shi, C., & Yang, X. J. (2021). Reverse psychology in trust-aware human-robot interaction. IEEE Robotics and Automation Letters, 6(3), 4851–4858. https://doi.org/10.1109/LRA.2021.3067626
  • Guo, Y., & Yang, X. J. (2021). Modeling and predicting trust dynamics in human-robot teaming: A Bayesian inference approach. International Journal of Social Robotics, 13(8), 1899–1909. https://doi.org/10.1007/s12369-020-00703-3
  • Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04
  • Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2015). Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing, 9(4), 269–275. https://doi.org/10.1007/s12008-014-0227-2
  • Koo, J., Shin, D., Steinert, M., & Leifer, L. (2016). Understanding driver responses to voice alerts of autonomous car operations. International Journal of Vehicle Design, 70(4), 317–377. https://doi.org/10.1504/IJVD.2016.076740
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lyons, J. B., & Havig, P. R. (2014). Transparency in a human-machine context: Approaches for fostering shared awareness/intent. In R. Shumaker & S. Lackey (Eds.), Virtual, augmented and mixed reality: Designing and developing virtual and augmented environments (pp. 181–190). Springer International Publishing.
  • Lyons, J. B., Ho, N. T., Koltai, K. S., Masequesmay, G., Skoog, M., Cacanindin, A., & Johnson, W. W. (2016). Trust-based analysis of an air force collision avoidance system. Ergonomics in Design: The Quarterly of Human Factors Applications, 24(1), 9–12. https://doi.org/10.1177/1064804615611274
  • MacLean, A., Young, R. M., Bellotti, V. M. E., & Moran, T. P. (1991). Questions, options, and criteria: Elements of design space analysis. Human-Computer Interaction, 6(3), 201–250. https://doi.org/10.1207/s15327051hci0603&4_2
  • Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of automated decision aids: The impact of degree of automation and system experience. Journal of Cognitive Engineering and Decision Making, 6(1), 57–87. https://doi.org/10.1177/1555343411433844
  • McGuirl, J. M., & Sarter, N. B. (2006). Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors, 48(4), 656–665. https://doi.org/10.1518/001872006779166334
  • Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
  • Miller, C. A. (2018). Displaced interactions in human-automation relationships: Transparency over time. In D. Harris (Ed.), Engineering psychology and cognitive ergonomics (pp. 191–203). Springer International Publishing.
  • Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Pearson.
  • Seong, Y., & Bisantz, A. M. (2008). The impact of cognitive feedback on judgment performance and trust with decision aids. International Journal of Industrial Ergonomics, 38(7–8), 608–625. https://doi.org/10.1016/j.ergon.2008.01.007
  • Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators [Tech. Rep.]. Man-Machine Systems Laboratory, MIT.
  • Theodorou, A., Wortham, R. H., & Bryson, J. J. (2017). Designing and implementing transparency for real time inspection of autonomous robots. Connection Science, 29(3), 230–241. https://doi.org/10.1080/09540091.2017.1310182
  • Walliser, J. C., de Visser, E. J., & Shaw, T. H. (2016). Application of a system-wide trust strategy when supervising multiple autonomous agents. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 133–137. https://doi.org/10.1177/1541931213601031
  • Wang, L., Jamieson, G. A., & Hollands, J. G. (2009). Trust and reliance on an automated combat identification system. Human Factors, 51(3), 281–291. https://doi.org/10.1177/0018720809338842
  • Yang, X. J., Schemanske, C., & Searle, C. (2021). Toward quantifying trust dynamics: How people adjust their trust after moment-to-moment interaction with automation. Human Factors. Advance online publication. https://doi.org/10.1177/00187208211034716
  • Yang, X. J., Unhelkar, V. V., Li, K., & Shah, J. A. (2017). Evaluating effects of user experience and system transparency on trust in automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’17) (pp. 408–416). ACM.
