
The reliability and transparency bases of trust in human-swarm interaction: principles and implications

Pages 1116–1132 | Received 17 Sep 2019, Accepted 13 Apr 2020, Published online: 13 May 2020

References

  • Adams, J. A., J. Y. Chen, and M. A. Goodrich. 2018. “Swarm Transparency.” Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 45–46. ACM. doi:10.1145/3173386.3177008.
  • Atoyan, H., J.-R. Duquet, and J.-M. Robert. 2006. “Trust in New Decision Aid Systems.” In Proceedings of the 18th Conference on l’Interaction Homme-Machine, edited by J.-M. Robert, 115–122. New York, NY: ACM.
  • Bustamante, E. A. 2009. “A Reexamination of the Mediating Effect of Trust among Alarm Systems’ Characteristics and Human Compliance and Reliance.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 53 (4): 249–253. doi:10.1177/154193120905300419.
  • Chancey, E. T., J. P. Bliss, A. B. Proaps, and P. Madhavan. 2015. “The Role of Trust as a Mediator between System Characteristics and Response Behaviors.” Human Factors: The Journal of the Human Factors and Ergonomics Society 57 (6): 947–958. doi:10.1177/0018720815582261.
  • Charness, G., U. Gneezy, and M. A. Kuhn. 2012. “Experimental Methods: Between-Subject and Within-Subject Design.” Journal of Economic Behavior & Organization 81 (1): 1–8. doi:10.1016/j.jebo.2011.08.009.
  • Chen, J. Y., K. Procci, M. Boyce, J. Wright, A. Garcia, and M. Barnes. 2014. “Situation Awareness-Based Agent Transparency.” Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, Tech. Rep.
  • Chen, J. Y., M. J. Barnes, A. R. Selkowitz, and K. Stowers. 2016. “Effects of Agent Transparency on Human-Autonomy Teaming Effectiveness.” 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE.
  • Chen, M., S. Nikolaidis, H. Soh, D. Hsu, and S. Srinivasa. 2018. “Planning with Trust for Human-Robot Collaboration.” Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 307–315.
  • Clare, A. S. 2013. “Modeling Real-Time Human-Automation Collaborative Scheduling of Unmanned Vehicles.” Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, Cambridge, MA, Tech. Rep.
  • Desai, M. 2012. “Modeling Trust to Improve Human-Robot Interaction.” Ph.D. diss., University of Massachusetts Lowell.
  • Desai, M., P. Kaniarasu, M. Medvedev, A. Steinfeld, and H. Yanco. 2013. “Impact of Robot Failures and Feedback on Real-Time Trust.” In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, edited by H. Kuzuoka, 251–258. Piscataway, NJ: IEEE Press.
  • Dixon, S. R., and C. D. Wickens. 2006. “Automation Reliability in Unmanned Aerial Vehicle Control: A Reliance-Compliance Model of Automation Dependence in High Workload.” Human Factors: The Journal of the Human Factors and Ergonomics Society 48 (3): 474–486. doi:10.1518/001872006778606822.
  • Dorneich, M. C., R. Dudley, E. Letsu-Dake, W. Rogers, S. D. Whitlow, M. C. Dillard, and E. Nelson. 2017. “Interaction of Automation Visibility and Information Quality in Flight Deck Information Automation.” IEEE Transactions on Human-Machine Systems 47 (6): 915–926. doi:10.1109/THMS.2017.2717939.
  • Ekman, F., M. Johansson, and J. Sochor. 2018. “Creating Appropriate Trust in Automated Vehicle Systems: A Framework for HMI Design.” IEEE Transactions on Human-Machine Systems 48 (1): 95–101. doi:10.1109/THMS.2017.2776209.
  • Gao, F., A. S. Clare, J. C. Macbeth, and M. Cummings. 2013. “Modeling the Impact of Operator Trust on Performance in Multiple Robot Control.” 2013 AAAI Spring Symposium Series. AAAI.
  • Gao, J., and J. D. Lee. 2006. “Extending the Decision Field Theory to Model Operators’ Reliance on Automation in Supervisory Control Situations.” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 36 (5): 943–959. doi:10.1109/TSMCA.2005.855783.
  • Göritzlehner, R., C. Borst, J. Ellerbroek, C. Westin, M. M. van Paassen, and M. Mulder. 2014. “Effects of Transparency on the Acceptance of Automated Resolution Advisories.” 2014 IEEE International Conference on Systems, Man and Cybernetics (SMC), 2965–2970. Piscataway, NJ: IEEE.
  • Hancock, P. A., D. R. Billings, K. E. Schaefer, J. Y. Chen, E. J. De Visser, and R. Parasuraman. 2011. “A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction.” Human Factors: The Journal of the Human Factors and Ergonomics Society 53 (5): 517–527. doi:10.1177/0018720811417254.
  • Helldin, T. 2014. “Transparency for Future Semi-Automated Systems: Effects of Transparency on Operator Performance, Workload and Trust.” Ph.D. diss., Örebro Universitet.
  • Helldin, T., U. Ohlander, G. Falkman, and M. Riveiro. 2014. “Transparency of Automated Combat Classification.” International Conference on Engineering Psychology and Cognitive Ergonomics, 22–33. Springer.
  • Hoff, K. A., and M. Bashir. 2015. “Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust.” Human Factors: The Journal of the Human Factors and Ergonomics Society 57 (3): 407–434. doi:10.1177/0018720814547570.
  • Hunt, S. M. 2013. “The Impact of Trajectory Prediction Uncertainty on Reliance Strategy and Trust Attitude in an Automated Air Traffic Management Environment.” Master’s thesis, San Jose State University.
  • Hussein, A., S. Elsawah, and H. A. Abbass. 2019. “Trust Mediating Reliability-Reliance Relationship in Supervisory Control of Human-Swarm Interactions.” Human Factors.
  • Ibrahim, M., and P. M. Ribbers. 2009. “The Impacts of Competence-Trust and Openness-Trust on Interorganizational Systems.” European Journal of Information Systems 18 (3): 223–234. doi:10.1057/ejis.2009.17.
  • Jian, J.-Y., A. M. Bisantz, and C. G. Drury. 2000. “Foundations for an Empirically Determined Scale of Trust in Automated Systems.” International Journal of Cognitive Ergonomics 4 (1): 53–71. doi:10.1207/S15327566IJCE0401_04.
  • Khasawneh, M. T., S. R. Bowling, X. Jiang, A. K. Gramopadhye, and B. J. Melloy. 2003. “A Model for Predicting Human Trust in Automated Systems.” Proceedings of the 8th Annual International Conference on Industrial Engineering – Theory, Applications and Practice, Las Vegas, Nevada, USA, November 10–12.
  • Kunze, A., S. J. Summerskill, R. Marshall, and A. J. Filtness. 2019. “Automation Transparency: Implications of Uncertainty Communication for Human-Automation Interaction and Interfaces.” Ergonomics 62 (3): 345–360. PMID: 30501566. doi:10.1080/00140139.2018.1547842.
  • Lee, J. D. 2000. “Cognitive Engineering Challenges of Managing Swarms of Self-Organizing Agent-Based Automation.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44 (6): 568–571. doi:10.1177/154193120004400606.
  • Lee, J. D. 2001. “Emerging Challenges in Cognitive Ergonomics: Managing Swarms of Self-Organizing Agent-Based Automation.” Theoretical Issues in Ergonomics Science 2 (3): 238–250. doi:10.1080/14639220110104925.
  • Lee, J. D., and K. A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors: The Journal of the Human Factors and Ergonomics Society 46 (1): 50–80. doi:10.1518/hfes.46.1.50.30392.
  • Lee, J., and N. Moray. 1992. “Trust, Control Strategies and Allocation of Function in Human-Machine Systems.” Ergonomics 35 (10): 1243–1270. doi:10.1080/00140139208967392.
  • Li, H. 2020. “A Computational Model of Human Trust in Supervisory Control of Robotic Swarms.” Ph.D. diss., University of Pittsburgh.
  • Liu, R., F. Jia, W. Luo, M. Chandarana, C. Nam, M. Lewis, and K. Sycara. 2019. “Trust-Aware Behavior Reflection for Robot Swarm Self-Healing.” Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 122–130. International Foundation for Autonomous Agents and Multiagent Systems.
  • Liu, R., Z. Cai, M. Lewis, J. Lyons, and K. Sycara. 2019. “Trust Repair in Human-Swarm Teams.” 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1–6. IEEE. doi:10.1109/RO-MAN46459.2019.8956420.
  • Lyons, J. B. 2013. “Being Transparent about Transparency.” AAAI Spring Symposium. AAAI.
  • McBride, S. E., W. A. Rogers, and A. D. Fisk. 2014. “Understanding Human Management of Automation Errors.” Theoretical Issues in Ergonomics Science 15 (6): 545–577. doi:10.1080/1463922X.2013.817625.
  • Mercado, J. E., M. A. Rupp, J. Y. Chen, M. J. Barnes, D. Barber, and K. Procci. 2016. “Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management.” Human Factors: The Journal of the Human Factors and Ergonomics Society 58 (3): 401–415. doi:10.1177/0018720815621206.
  • Merritt, S. M., and D. R. Ilgen. 2008. “Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions.” Human Factors: The Journal of the Human Factors and Ergonomics Society 50 (2): 194–210. doi:10.1518/001872008X288574.
  • Merritt, S. M., D. Lee, J. L. Unnerstall, and K. Huber. 2015. “Are Well-Calibrated Users Effective Users? Associations between Calibration of Trust and Performance on an Automation-Aided Task.” Human Factors: The Journal of the Human Factors and Ergonomics Society 57 (1): 34–47. doi:10.1177/0018720814561675.
  • Miller, D., M. Johns, B. Mok, N. Gowda, D. Sirkin, K. Lee, and W. Ju. 2016. “Behavioral Measurement of Trust in Automation: The Trust Fall.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60 (1): 1849–1853. doi:10.1177/1541931213601422.
  • Mishra, A. K. 1996. “Organizational Responses to Crisis.” In Trust in Organizations: Frontiers of Theory and Research, edited by R. Kramer and T. Tyle, 261. Thousand Oaks: Sage.
  • Nam, C., P. Walker, H. Li, M. Lewis, and K. Sycara. 2019. “Models of Trust in Human Control of Swarms with Varied Levels of Autonomy.” IEEE Transactions on Human-Machine Systems: 1–11.
  • Nam, C., P. Walker, M. Lewis, and K. Sycara. 2017. “Predicting Trust in Human Control of Swarms via Inverse Reinforcement Learning.” 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE. doi:10.1109/ROMAN.2017.8172353.
  • Nunes, I., and D. Jannach. 2017. “A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems.” User Modeling and User-Adapted Interaction 27 (3–5): 393–444. doi:10.1007/s11257-017-9195-0.
  • Okamura, K., and S. Yamada. 2018. “Adaptive Trust Calibration for Supervised Autonomous Vehicles.” Adjunct Proceedings of The 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 92–97. ACM.
  • Oleson, R. R. 2003. “Mathematical Robotic Algorithms for Mowing a Field.” Master’s thesis, University of Central Florida.
  • Ososky, S., T. Sanders, F. Jentsch, P. Hancock, and J. Y. Chen. 2014. “Determinants of System Transparency and Its Influence on Trust in and Reliance on Unmanned Robotic Systems.” Unmanned Systems Technology XVI, vol. 9084. International Society for Optics and Photonics, 90840E. doi:10.1117/12.2050622.
  • Reynolds, C. W. 1987. “Flocks, Herds and Schools: A Distributed Behavioral Model.” Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, 25–34. doi:10.1145/37401.37406.
  • Roundtree, K. A., M. D. Manning, and J. A. Adams. 2018. “Analysis of Human-Swarm Visualizations.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62 (1): 287–291. doi:10.1177/1541931218621066.
  • Rovira, E., K. McGarry, and R. Parasuraman. 2007. “Effects of Imperfect Automation on Decision Making in a Simulated Command and Control Task.” Human Factors: The Journal of the Human Factors and Ergonomics Society 49 (1): 76–87. doi:10.1518/001872007779598082.
  • Rubenstein, M., A. Cabrera, J. Werfel, G. Habibi, J. McLurkin, and R. Nagpal. 2013. “Collective Transport of Complex Objects by Simple Robots: Theory and Experiments.” Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, Richland County, South Carolina, 47–54.
  • Rusnock, C. F., M. E. Miller, and J. M. Bindewald. 2017. “Observations on Trust, Reliance, and Performance Measurement in Human-Automation Team Assessment.” IIE Annual Conference. Proceedings. Institute of Industrial and Systems Engineers (IISE), 368–373.
  • Sadler, G., H. Battiste, N. Ho, L. Hoffmann, W. Johnson, R. Shively, J. Lyons, and D. Smith. 2016. “Effects of Transparency on Pilot Trust and Agreement in the Autonomous Constrained Flight Planner.” 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), 1–9. Piscataway, NJ: IEEE.
  • Sanchez, J., W. A. Rogers, A. D. Fisk, and E. Rovira. 2014. “Understanding Reliance on Automation: Effects of Error Type, Error Distribution, Age and Experience.” Theoretical Issues in Ergonomics Science 15 (2): 134–160. doi:10.1080/1463922X.2011.611269.
  • Schaefer, K. E. 2016. “Measuring Trust in Human Robot Interactions: Development of the Trust Perception Scale-HRI.” In Robust Intelligence and Trust in Autonomous Systems, edited by R. Mittu, D. Sofge, A. Wagner, W. F. Lawless, 191–218. Boston, MA: Springer.
  • Schaefer, K. E., J. Y. Chen, J. L. Szalma, and P. A. Hancock. 2016. “A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.” Human Factors: The Journal of the Human Factors and Ergonomics Society 58 (3): 377–400. doi:10.1177/0018720816634228.
  • Valentini, G., H. Hamann, and M. Dorigo. 2015. “Efficient Decision-Making in a Self-Organizing Robot Swarm: On the Speed Versus Accuracy Trade-Off.” Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, Richland County, South Carolina, 1305–1314.
  • Walker, P., M. Lewis, and K. Sycara. 2016. “The Effect of Display Type on Operator Prediction of Future Swarm States.” 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 002521–002526. IEEE. doi:10.1109/SMC.2016.7844619.
  • Wang, N., D. V. Pynadath, and S. G. Hill. 2016. “Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations.” In The Eleventh ACM/IEEE International Conference on Human Robot Interaction, edited by C. Bartneck, 109–116. Piscataway, NJ: IEEE Press.
  • Wang, N., D. V. Pynadath, S. G. Hill, and C. Merchant. 2017. “The Dynamics of Human-Agent Trust with POMDP-Generated Explanations.” International Conference on Intelligent Virtual Agents, 459–462. Springer.
  • Wickens, C. D., and S. R. Dixon. 2007. “The Benefits of Imperfect Diagnostic Automation: A Synthesis of the Literature.” Theoretical Issues in Ergonomics Science 8 (3): 201–212. doi:10.1080/14639220500370105.
  • Wickens, C., S. Dixon, J. Goh, and B. Hammer. 2005. “Pilot Dependence on Imperfect Diagnostic Automation in Simulated UAV Flights: An Attentional Visual Scanning Analysis.” University of Illinois at Urbana-Champaign, Savoy, IL, Tech. Rep.
  • Wiczorek, R., and D. Manzey. 2010. “Is Operators’ Compliance with Alarm Systems a Product of Rational Consideration?” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 54 (19): 1722–1726. doi:10.1177/154193121005401976.
  • Xu, A., and G. Dudek. 2012. “Trust-Driven Interactive Visual Navigation for Autonomous Robots.” 2012 IEEE International Conference on Robotics and Automation, 3922–3929. Piscataway, NJ: IEEE.
  • Yang, X. J., V. V. Unhelkar, K. Li, and J. A. Shah. 2017. “Evaluating Effects of User Experience and System Transparency on Trust in Automation.” Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI), 408–416. doi:10.1145/2909824.3020230.
