
Using Agent Features to Influence User Trust, Decision Making and Task Outcome during Human-Agent Collaboration

Pages 1740-1761 | Received 28 Jun 2021, Accepted 30 Oct 2022, Published online: 11 Jan 2023

