
Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems

Pages 310-334 | Received 06 Mar 2022, Accepted 02 Jun 2022, Published online: 25 Jun 2022

References

  • Adadi, Amina, and Mohammed Berrada. 2018. “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI).” IEEE Access 6: 52138–52160. doi:10.1109/ACCESS.2018.2870052.
  • Akash, Kumar, Griffon McMahon, Tahira Reid, and Neera Jain. 2020. “Human Trust-Based Feedback Control: Dynamically Varying Automation Transparency to Optimize Human-Machine Interactions.” IEEE Control Systems 40 (6): 98–116. doi:10.1109/MCS.2020.3019151.
  • Bell, Suzanne T., Shanique G. Brown, Anthony Colaneri, and Neal Outland. 2018. “Team Composition and the ABCs of Teamwork.” American Psychologist 73: 349.
  • Bhaskara, Adella, Michael Skinner, and Shayne Loft. 2020. “Agent Transparency: A Review of Current Theory and Evidence.” IEEE Transactions on Human-Machine Systems 50 (3): 215. doi:10.1109/THMS.2020.2965529.
  • Bobko, P. 2001. Correlation and Regression. Rev. ed. Thousand Oaks, CA: Sage Press.
  • Bobko, Philip, Alex J. Barelka, and Leanne M. Hirshfield. 2014. “The Construct of State-Level Suspicion: A Model and Research Agenda for Automated and Information Technology (IT) Contexts.” Human Factors 56 (3): 489–508. doi:10.1177/0018720813497052.
  • Calhoun, Christopher S., Philip Bobko, Jennie J. Gallimore, and Joseph B. Lyons. 2019. “Linking Precursors of Interpersonal Trust to Human-Automation Trust: An Expanded Typology and Exploratory Experiment.” Journal of Trust Research 9: 28–46.
  • Cassidy, Simon, and Peter Eachus. 2002. “Developing the Computer User Self-Efficacy (CUSE) Scale: Investigating the Relationship Between Computer Self-Efficacy, Gender and Experience with Computers.” Journal of Educational Computing Research 26: 133–153.
  • Chancey, Eric T., James P. Bliss, Alexandra B. Proaps, and Poornima Madhavan. 2015. “The Role of Trust as A Mediator between System Characteristics and Response Behaviors.” Human Factors 57: 947–958.
  • Chen, Jessie Y. C., Michael J. Barnes, and Caitlin Kenny. 2011. “Effects of Unreliable Automation and Individual Differences on Supervisory Control of Multiple Ground Robots.” In 371–78. IEEE.
  • Chen, Jessie Y. C., and Michael J. Barnes. 2014. “Human–Agent Teaming for Multirobot Control: A Review of Human Factors Issues.” IEEE Transactions on Human-Machine Systems 44 (1): 13–29. doi:10.1109/THMS.2013.2293535.
  • Chiou, Erin K., and John D. Lee. 2021. “Trusting Automation: Designing for Responsivity and Resilience.” Human Factors: The Journal of the Human Factors and Ergonomics Society, 1–29. doi:10.1177/00187208211009995.
  • Crisp, C. Brad, and Sirkka L. Jarvenpaa. 2013. “Swift Trust in Global Virtual Teams.” Journal of Personnel Psychology.
  • Curnin, Steven, Christine Owen, Douglas Paton, Cain Trist, and David Parsons. 2015. “Role Clarity, Swift Trust and Multi-Agency Coordination.” Journal of Contingencies and Crisis Management 23 (1): 29–35. doi:10.1111/1468-5973.12072.
  • DeCostanza, Arwen H., Amar R. Marathe, Addison Bohannon, A. William Evans, Edward T. Palazzolo, Jason S. Metcalfe, and Kaleb McDowell. 2018. Enhancing Human–Agent Teaming with Individualized, Adaptive Technologies: A Discussion of Critical Scientific Questions. Aberdeen Proving Ground, MD: US Army Research Laboratory.
  • Dehais, Frederic, Alex Lafont, Raphaelle Roy, and Stephen Fairclough. 2020. “A Neuroergonomics Approach to Mental Workload, Engagement and Human Performance.” Frontiers in Neuroscience 14: 268. doi:10.3389/fnins.2020.00268.
  • De Jong, Bart A., Kurt T. Dirks, and Nicole Gillespie. 2016. “Trust and Team Performance: A Meta-Analysis of Main Effects, Moderators, and Covariates.” Journal of Applied Psychology 101: 1134. doi:10.1037/apl0000110.
  • Driskell, James E., Eduardo Salas, and Tripp Driskell. 2018. “Foundations of Teamwork and Collaboration.” American Psychologist 73: 334.
  • Eloy, Lucca, Angela E. B. Stewart, Mary Jean Amon, Caroline Reinhardt, Amanda Michaels, Chen Sun, Valerie Shute, Nicholas D. Duran, and Sidney D’Mello. 2019. “Modeling Team-Level Multimodal Dynamics during Multiparty Collaboration.” 2019 International Conference on Multimodal Interaction. doi:10.1145/3340555.3353748.
  • Fahim, Md Abdullah Al, Mohammad Maifi Hassan Khan, Theodore Jensen, Yusuf Albayram, and Emil Coman. 2021. “Do Integral Emotions Affect Trust? The Mediating Effect of Emotions on Trust in the Context of Human-Agent Interaction.” Designing Interactive Systems Conference 2021: 1492–1503. doi:10.1145/3461778.3461997.
  • Garrison, D. Randy, and Zehra Akyol. 2015. “Toward the Development of a Metacognition Construct for Communities of Inquiry.” The Internet and Higher Education 24: 66–71.
  • Glikson, Ella, and Anita Williams Woolley. 2020. “Human Trust in Artificial Intelligence: Review of Empirical Research.” Academy of Management Annals 14 (2): 627–660. doi:10.5465/annals.2018.0057.
  • Goodwin, Gerald F., Nikki Blacksmith, and Meredith R. Coats. 2018. “The Science of Teams in the Military: Contributions from Over 60 Years of Research.” American Psychologist 73: 322. doi:10.1037/amp0000259.
  • Hagras, Hani. 2018. “Toward Human-Understandable, Explainable AI.” Computer 51: 28–36.
  • Hancock, Peter A., Richard J. Jagacinski, Raja Parasuraman, Christopher D. Wickens, Glenn F. Wilson, and David B. Kaber. 2013. “Human-Automation Interaction Research.” Ergonomics in Design: The Quarterly of Human Factors Applications 21 (2): 9–14. doi:10.1177/1064804613477099.
  • Harmon-Jones, Cindy, Brock Bastian, and Eddie Harmon-Jones. 2016. “The Discrete Emotions Questionnaire: A New Tool for Measuring State Self-Reported Emotions.” PLoS One 11: e0159915.
  • Helldin, Tove. 2014. “Transparency for Future Semi-Automated Systems: Effects of Transparency on Operator Performance, Workload and Trust.” PhD Thesis, Örebro Universitet.
  • Hirshfield, Leanne, Philip Bobko, Alex Barelka, Natalie Sommer, and Senem Velipasalar. 2019. “Toward Interfaces That Help Users Identify Misinformation Online: Using fNIRS to Measure Suspicion.” Augmented Human Research 4 (1): 1–13. doi:10.1007/s41133-019-0011-8.
  • Hoff, Kevin Anthony, and Masooda Bashir. 2015. “Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust.” Human Factors 57: 407–434.
  • Hussain, M. S., Omar Al Zoubi, Rafael A. Calvo, and Sidney K. D’Mello. 2011. “Affect Detection from Multichannel Physiology during Learning Sessions with AutoTutor.” Lecture Notes in Computer Science 6738: 131–138. doi:10.1007/978-3-642-21869-9_19.
  • Hussein, Aya, Sondoss Elsawah, and Hussein A. Abbass. 2020. “The Reliability and Transparency Bases of Trust in Human-Swarm Interaction: Principles and Implications.” Ergonomics 63 (9): 1116–1132. doi:10.1080/00140139.2020.1764112.
  • Ignat, Claudia-Lavinia, Quang-Vinh Dang, and Valerie L. Shalin. 2019. “The Influence of Trust Score on Cooperative Behavior.” ACM Transactions on Internet Technology (TOIT) 19: 1–22.
  • Jennings, Nicholas R., Luc Moreau, David Nicholson, Sarvapali Ramchurn, Stephen Roberts, Tom Rodden, and Alex Rogers. 2014. “Human-Agent Collectives.” Communications of the ACM 57: 80–88.
  • Jessup, Sarah A., Tamera R. Schneider, Gene M. Alarcon, Tyler J. Ryan, and August Capiola. 2019. “The Measurement of the Propensity to Trust Automation.” In International Conference on Human-Computer Interaction, 476–489. Springer.
  • Keiser, Nathanael L., and Winfred Arthur. 2021. “A Meta-Analysis of the Effectiveness of the after-Action Review (or Debrief) and Factors That Influence Its Effectiveness.” The Journal of Applied Psychology 106 (7): 1007–1032. doi:10.1037/apl0000821.
  • Khalid, Halimahtun M., Martin G. Helander, and Mei-Hua Lin. 2021. “Determinants of Trust in Human-Robot Interaction: Modeling, Measuring, and Predicting.” In Trust in Human-Robot Interaction. Elsevier.
  • Knight, W. 2017. “The Dark Secret at the Heart of AI.” MIT Technology Review 120 (3): 55–63.
  • Kozlowski, Steve W. J. 2015. “Advancing Research on Team Process Dynamics.” Organizational Psychology Review 5 (4): 270–299. doi:10.1177/2041386614533586.
  • Krueger, Frank, Kevin McCabe, Jorge Moll, Nikolaus Kriegeskorte, Roland Zahn, Maren Strenziok, Armin Heinecke, and Jordan Grafman. 2007. “Neural Correlates of Trust.” Proceedings of the National Academy of Sciences of the United States of America 104 (50): 20084–20089. doi:10.1073/pnas.0710103104.
  • Krueger, Frank, and Andreas Meyer-Lindenberg. 2019. “Toward a Model of Interpersonal Trust Drawn from Neuroscience, Psychology, and Economics.” Trends in Neurosciences 42 (2): 92–101. doi:10.1016/j.tins.2018.10.004.
  • Kunze, Alexander, Stephen J. Summerskill, Russell Marshall, and Ashleigh J. Filtness. 2019. “Automation Transparency: Implications of Uncertainty Communication for Human-Automation Interaction and Interfaces.” Ergonomics 62: 345–360.
  • Lankton, Nancy, D. Harrison McKnight, and John Tripp. 2015. “Technology, Humanness, and Trust: Rethinking Trust in Technology.” Journal of the Association for Information Systems 16 (10): 880–918. doi:10.17705/1jais.00411.
  • Lee, Chang-Shing, Mei-Hui Wang, Zong-Han Ciou, Rin-Pin Chang, Chun-Hao Tsai, Shen-Chien Chen, Tzong-Xiang Huang, Eri Sato-Shimokawara, and Toru Yamaguchi. 2021. “Robotic Assistant Agent for Student and Machine Colearning on AI-FML Practice with AIoT Application.” In 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–6. IEEE.
  • Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46 (1): 50–80. doi:10.1518/hfes.46.1.50_30392.
  • Madhavan, Poornima, and Douglas A. Wiegmann. 2007. “Effects of Information Source, Pedigree, and Reliability on Operator Interaction with Decision Support Systems.” Human Factors 49 (5): 773–785. doi:10.1518/001872007X230154.
  • Marks, Michelle A., John E. Mathieu, and Stephen J. Zaccaro. 2001. “A Temporally Based Framework and Taxonomy of Team Processes.” Academy of Management Review 26 (3): 356–376. doi:10.5465/amr.2001.4845785.
  • Mathieu, John E., Margaret M. Luciano, and Leslie A. DeChurch. 2018. “Multiteam Systems: The Next Chapter.” The SAGE Handbook of Industrial, Work & Organizational Psychology 2: 333–353.
  • Mayer, Roger C., James H. Davis, and F. David Schoorman. 1995. “An Integrative Model of Organizational Trust.” Academy of Management Review 20: 709–734.
  • McAllister, Daniel J. 1995. “Affect- and Cognition-Based Trust as Foundations for Interpersonal Cooperation in Organizations.” Academy of Management Journal 38 (1): 24–59. doi:10.5465/256727.
  • Mckinney, Earl H., James R. Barker, Kevin J. Davis, and Daryl Smith. 2005. “How Swift Starting Action Teams Get off the Ground.” Management Communication Quarterly 19 (2): 198–237. doi:10.1177/0893318905278539.
  • Mercado, Joseph E., Michael A. Rupp, Jessie Y. C. Chen, Michael J. Barnes, Daniel Barber, and Katelyn Procci. 2016. “Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management.” Human Factors 58 (3): 401–415. doi:10.1177/0018720815621206.
  • Merritt, Stephanie M. 2011. “Affective Processes in Human–Automation Interactions.” Human Factors 53: 356–370. doi:10.1177/0018720811411912.
  • Meyerson, Debra, Karl E. Weick, and Roderick M. Kramer. 1996. “Swift Trust and Temporary Groups.” In Trust in Organizations: Frontiers of Theory and Research, 166–195.
  • Miller, Christopher A. 2021. “Trust, Transparency, Explanation, and Planning: Why We Need a Lifecycle Perspective on Human-Automation Interaction.” In Trust in Human-Robot Interaction, edited by Chang S. Nam and Joseph B. Lyons, 233–257. Academic Press.
  • Oprihory, J.-L. 2020. “USAF Brings Pilot Training Next to Regular Training in Experimental Curriculum.” Air Force Magazine. https://www.airforcemag.com/usaf-brings-pilot-training-next-to-regular-training-in-experimental-curriculum/
  • Parasuraman, Raja, Robert Molloy, and Indramani L. Singh. 1993. “Performance Consequences of Automation-Induced Complacency.” The International Journal of Aviation Psychology 3: 1–23.
  • Parasuraman, Raja, Thomas B. Sheridan, and Christopher D. Wickens. 2008. “Situation Awareness, Mental Workload, and Trust in Automation: Viable, Empirically Supported Cognitive Engineering Constructs.” Journal of Cognitive Engineering and Decision Making 2: 140–160.
  • Parasuraman, Raja, and Victor Riley. 1997. “Humans and Automation: Use, Misuse, Disuse, Abuse.” Human Factors 39: 230–253.
  • Rammstedt, Beatrice, and Oliver P. John. 2007. “Measuring Personality in One Minute or Less: A 10-Item Short Version of the Big Five Inventory in English and German.” Journal of Research in Personality 41: 203–212.
  • Rousseau, Denise M., Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. 1998. “Not so Different After All: A Cross-Discipline View of Trust.” Academy of Management Review 23: 393–404.
  • Russell, James A. 1980. “A Circumplex Model of Affect.” Journal of Personality and Social Psychology 39 (6): 1161–1178. doi:10.1037/h0077714.
  • Seeber, Isabella, Eva Bittner, Robert O. Briggs, Triparna De Vreede, Gert-Jan De Vreede, Aaron Elkins, Ronald Maier, Alexander B. Merz, Sarah Oeste-Reiß, and Nils Randrup. 2020. “Machines as Teammates: A Research Agenda on AI in Team Collaboration.” Information & Management 57: 103174.
  • Sherer, Mark, James E. Maddux, Blaise Mercandante, Steven Prentice-Dunn, Beth Jacobs, and Ronald W. Rogers. 1982. “The Self-Efficacy Scale: Construction and Validation.” Psychological Reports 51: 663–671.
  • Sheridan, Thomas B., and William L. Verplank. 1978. “Human and Computer Control of Undersea Teleoperators.” https://apps.dtic.mil/sti/citations/ADA057655.
  • Stowers, Kimberly, Nicholas Kasdaglis, Michael Rupp, Olivia B. Newton, Jessie Y. C. Chen, and Michael J. Barnes. 2020. “The IMPACT of Agent Transparency on Human Performance.” IEEE Transactions on Human-Machine Systems 50 (3): 245–253. doi:10.1109/THMS.2020.2978041.
  • Stanford, Matthew S., Charles W. Mathias, Donald M. Dougherty, Sarah L. Lake, Nathaniel E. Anderson, and Jim H. Patton. 2009. “Fifty Years of the Barratt Impulsiveness Scale: An Update and Review.” Personality and Individual Differences 47: 385–395.
  • Sun, Chen, Valerie J. Shute, Angela Stewart, Jade Yonehiro, Nicholas Duran, and Sidney D’Mello. 2020. “Towards a Generalized Competency Model of Collaborative Problem Solving.” Computers & Education 143: 103672.
  • Thai, Mai Thanh, Phuoc Thien Phan, Trung Thien Hoang, Shing Wong, Nigel H. Lovell, and Thanh Nho Do. 2020. “Advanced Intelligent Systems for Surgical Robotics.” Advanced Intelligent Systems 2 (8): 1900138. doi:10.1002/aisy.201900138.
  • Wang, Ning, David V. Pynadath, and Susan G. Hill. 2016. “Trust Calibration Within a Human-Robot Team: Comparing Automatically Generated Explanations.” In 109–16. IEEE.
  • Wildman, Jessica L., Marissa L. Shuffler, Elizabeth H. Lazzara, Stephen M. Fiore, C. Shawn Burke, Eduardo Salas, and Sena Garven. 2012. “Trust Development in Swift Starting Action Teams: A Multilevel Framework.” Group & Organization Management 37: 137–170.
  • Wright, Julia L., Jessie Y. C. Chen, and Shan G. Lakhmani. 2020. “Agent Transparency and Reliability in Human–Robot Interaction: The Influence on User Confidence and Perceived Reliability.” IEEE Transactions on Human-Machine Systems 50 (3): 254–263. doi:10.1109/THMS.2019.2925717.
  • Wynne, Kevin T., and Joseph B. Lyons. 2018. “An Integrative Model of Autonomous Agent Teammate-Likeness.” Theoretical Issues in Ergonomics Science 19 (3): 353–374. doi:10.1080/1463922X.2016.1260181.
  • Zhang, Yunfeng, Q. Vera Liao, and Rachel K. E. Bellamy. 2020. “Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making.” In 295–305.
