
Robot Transparency and Team Orientation Effects on Human–Robot Teaming

, , , , , & show all

