Bidirectional Communications in Human-Agent Teaming: The Effects of Communication Style and Feedback

Pages 1972-1985 | Received 12 Apr 2021, Accepted 18 Apr 2022, Published online: 03 May 2022
