Review Article

Human trust in otherware – a systematic literature review bringing all antecedents together

Pages 976–998 | Received 05 Nov 2021, Accepted 28 Aug 2022, Published online: 14 Sep 2022

References

  • Abbass, H. A. 2019. “Social Integration of Artificial Intelligence: Functions, Automation Allocation Logic and Human-Autonomy Trust.” Cognitive Computation 11 (2): 159–171. doi:10.1007/s12559-018-9619-0.
  • Akash, Kumar, Griffon McMahon, Tahira Reid, and Neera Jain. 2020. Human Trust-Based Feedback Control: Dynamically Varying Automation Transparency to Optimize Human-Machine Interactions. Ithaca: Cornell University Library.
  • Alfonseca, Manuel, Manuel Cebrian, Antonio Fernandez Anta, Lorenzo Coviello, Andrés Abeliuk, and Iyad Rahwan. 2021. “Superintelligence Cannot Be Contained: Lessons from Computability Theory.” Journal of Artificial Intelligence Research 70: 65–76. doi:10.1613/jair.1.12202.
  • Arts, E., S. Zörner, K. Bhatia, G. Mir, F. Schmalzl, A. Srivastava, B. Vasiljevic, T. Alpay, A. Peters, E. Strahl, and A. Wermter. 2020. “Exploring Human-Robot Trust through the Investment Game.” In: M. Obaid, ed. Proceedings of the 8th International Conference on Human-Agent Interaction. New York, NY, United States: Association for Computing Machinery, 121–130.
  • Asan, O., A. E. Bayrak, and A. Choudhury. 2020. “Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians.” Journal of Medical Internet Research 22 (6): e15154. doi:10.2196/15154.
  • Ashby, W. R. 1964. An Introduction to Cybernetics. New York: Wiley.
  • Baier, A. 1986. “Trust and Antitrust.” Ethics 96 (2): 231–260. doi:10.1086/292745.
  • Baker, Anthony L., Elizabeth K. Phillips, Daniel Ullman, and Joseph R. Keebler. 2018. “Toward an Understanding of Trust Repair in Human-Robot Interaction.” ACM Transactions on Interactive Intelligent Systems 8 (4): 1–30. doi:10.1145/3181671.
  • Balfe, N., S. Sharples, and J. R. Wilson. 2018. “Understanding is Key: An Analysis of Factors Pertaining to Trust in a Real-World Automation System.” Human Factors 60 (4): 477–495. doi:10.1177/0018720818761256.
  • Bastian, M., S. Heymann, and M. Jacomy. 2009. “Gephi: An Open Source Software for Exploring and Manipulating Networks.” In: International AAAI Conference on Weblogs and Social Media.
  • Benbasat, I, and W. Wang. 2005. “Trust in and Adoption of Online Recommendation Agents.” Journal of the Association for Information Systems 6 (3): 72–101. doi:10.17705/1jais.00065.
  • Bigman, Yochanan E., Adam Waytz, Ron Alterovitz, and Kurt Gray. 2019. “Holding Robots Responsible: The Elements of Machine Morality.” Trends in Cognitive Sciences 23 (5): 365–368. doi:10.1016/j.tics.2019.02.008.
  • Biros, D. P., M. Daly, and G. Gunsch. 2004. “The Influence of Task Load and Automation Trust on Deception Detection.” Group Decision and Negotiation 13 (2): 173–189. doi:10.1023/B:GRUP.0000021840.85686.57.
  • Bitkina, Olga V., Heejin Jeong, Byung Cheol Lee, Jangwoon Park, Jaehyun Park, and Hyun K. Kim. 2020. “Perceived Trust in Artificial Intelligence Technologies: A Preliminary Study.” Human Factors and Ergonomics in Manufacturing & Service Industries. Available from: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2020-17740-001&lang=de&site=ehost-live.
  • Booth, A. 2006. “‘Brimful of STARLITE’: Toward Standards for Reporting Literature Searches.” Journal of the Medical Library Association 94 (4): 421–429, e205.
  • Boyce, Michael W., Jessie Y. C. Chen, Anthony R. Selkowitz, and Shan G. Lakhmani. 2015. “Effects of Agent Transparency on Operator Trust.” In: J. A. Adams eds. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts – HRI'15 Extended Abstracts. New York, New York, USA: ACM Press, 179–180.
  • Brown, M. W. 2020. “Developing Readiness to Trust Artificial Intelligence within Warfighting Teams.” Military Review 100 (1): 36–44. Available from: http://search.ebscohost.com/login.aspx?direct=true&db=pbh&AN=141188317&lang=de&site=ehost-live.
  • Burggräf, P., J. Wagner, and T. M. Saßmannshausen. 2021. “Sustainable Interaction of Human and Artificial Intelligence in Cyber Production Management Systems.” In: J. P. Wulfsberg eds. Production at the Leading Edge of Technology: Proceedings of the 10th Congress of the German Academic Association for Production Technology (WGP). Berlin: Springer, 508–517.
  • Chen, J. Y. C, and M. J. Barnes. 2014. “Human–Agent Teaming for Multirobot Control: A Review of Human Factors Issues.” IEEE Transactions on Human-Machine Systems 44 (1): 13–29. doi:10.1109/THMS.2013.2293535.
  • Christensen, J. C, and J. B. Lyons. 2017. “Trust between Humans and Learning Machines: Developing the Gray Box.” Mechanical Engineering 139 (06): S9–S13. doi:10.1115/1.2017-Jun-5.
  • Cooper, H. M. 1988. “Organizing Knowledge Syntheses: A Taxonomy of Literature Reviews.” Knowledge in Society 1 (1): 104–126. doi:10.1007/BF03177550.
  • Culley, K. E, and P. Madhavan. 2013. “Trust in Automation and Automation Designers: Implications for HCI and HMI.” Computers in Human Behavior 29 (6): 2208–2210. doi:10.1016/j.chb.2013.04.032.
  • Dai, L, and Y. Wu. 2015. “Trust Maintenance and Trust Repair.” Psychology 06 (06): 767–772. doi:10.4236/psych.2015.66075.
  • Daronnat, S. 2020. “Human-Agent Trust Relationships in a Real-Time Collaborative Game.” In: P. Mirza-Babaei, ed. Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play. New York, NY, United States: Association for Computing Machinery, 18–20. doi:10.1145/3383668.3419953.
  • Davis, G. F. 2019. “How to Communicate Large-Scale Social Challenges: The Problem of the Disappearing American Corporation.” Proceedings of the National Academy of Sciences of the United States of America 116 (16): 7698–7702. doi:10.1073/pnas.1805867115.
  • de Visser, Ewart J., Frank Krueger, Patrick McKnight, Steven Scheid, Melissa Smith, Stephanie Chalk, and Raja Parasuraman. 2012. “The World is Not Enough: Trust in Cognitive Agents.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 56 (1): 263–267. doi:10.1177/1071181312561062.
  • de Visser, Ewart J., Marvin Cohen, Amos Freedy, and Raja Parasuraman. 2014. “A Design Methodology for Trust Cue Calibration in Cognitive Agents.” In: D. Hutchison eds. Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments. Cham: Springer International Publishing, 251–262.
  • de Visser, Ewart J., Marieke M. M. Peeters, Malte F. Jung, Spencer Kohn, Tyler H. Shaw, Richard Pak, and Mark A. Neerincx. 2019. “Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams.” International Journal of Social Robotics 8 (3): 1393.
  • de Visser, E. J., R. Pak, and T. H. Shaw. 2018. “From ‘Automation’ to ‘Autonomy’: The Importance of Trust Repair in Human-Machine Interaction.” Ergonomics 61 (10): 1409–1427. doi:10.1080/00140139.2018.1457725.
  • de Winter, J. C. F, and P. A. Hancock. 2021. “Why Human Factors Science is Demonstrably Necessary: Historical and Evolutionary Foundations.” Ergonomics 64 (9): 1115–1131. doi:10.1080/00140139.2021.1905882.
  • Dekker, S. W. A, and D. D. Woods. 2002. “MABA-MABA or Abracadabra? Progress on Human-Automation Co-Ordination.” Cognition, Technology & Work 4 (4): 240–244. doi:10.1007/s101110200022.
  • Deley, T, and E. Dubois. 2020. “Assessing Trust versus Reliance for Technology Platforms by Systematic Literature Review.” Social Media + Society 6 (2): 2056305120913883. doi:10.1177/2056305120913883.
  • Deligianis, Christopher, Christopher John Stanton, Craig McGarty, and Catherine J. Stevens. 2017. “The Impact of Intergroup Bias on Trust and Approach Behaviour towards a Humanoid Robot.” Journal of Human-Robot Interaction 6 (3): 4. doi:10.5898/JHRI.6.3.Deligianis.
  • Dellermann, Dominik, Philipp Ebel, Matthias Söllner, and Jan Marco Leimeister. 2019. “Hybrid Intelligence.” Business & Information Systems Engineering 61 (5): 637–643. doi:10.1007/s12599-019-00595-2.
  • Detweiler, C, and J. Broekens. 2009. “Trust in Online Technology: Towards Practical Guidelines Based on Experimentally Verified Theory.” In: J. A. Jacko, ed. Human-Computer Interaction. Ambient, Ubiquitous and Intelligent Interaction. Berlin, Heidelberg: Springer Berlin Heidelberg, 605–614.
  • Deutsch, M. 1958. “Trust and Suspicion.” Journal of Conflict Resolution 2 (4): 265–279. doi:10.1177/002200275800200401.
  • Deutsch, M. 1962. “Cooperation and Trust: Some Theoretical Notes.” In: M. R. Jones, ed. Nebraska Symposium on Motivation. Nebraska: University of Nebraska Press, 275–320.
  • Di Dio, Cinzia, Federico Manzi, Giulia Peretti, Angelo Cangelosi, Paul L. Harris, Davide Massaro, and Antonella Marchetti. 2020. “Shall I Trust You? From Child-Robot Interaction to Trusting Relationships.” Frontiers in Psychology 11: 469. doi:10.3389/fpsyg.2020.00469.
  • Ding, Yi, Yaqin Cao, Vincent G. Duffy, Yi Wang, and Xuefeng Zhang. 2020. “Measurement and Identification of Mental Workload during Simulated Computer Tasks with Multimodal Methods and Machine Learning.” Ergonomics 63 (7): 896–908.
  • Drnec, Kim, Amar R. Marathe, Jamie R. Lukos, and Jason S. Metcalfe. 2016. “From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction.” Frontiers in Human Neuroscience 10: 290.
  • Ehsan, Upol, Pradyumna Tambwekar, Larry Chan, Brent Harrison, and Mark O. Riedl. 2019. “Automated Rationale Generation: A Technique for Explainable AI and Its Effects on Human Perceptions.” In: Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI), 263–274 (accessed May 13, 2021). http://arxiv.org/pdf/1901.03729v1.
  • Farooq, U, and J. Grudin. 2016. “Human-Computer Integration.” Interactions 23 (6): 26–32. doi:10.1145/3001896.
  • Fischer, K., H. M. Weigelin, and L. Bodenhagen. 2018. “Increasing Trust in Human–Robot Medical Interactions: Effects of Transparency and Adaptability.” Paladyn, Journal of Behavioral Robotics 9 (1): 95–109. doi:10.1515/pjbr-2018-0007.
  • Fjeld, J., N. Achten, H. Hilligoss, A. Nagy, and M. Srikumar. 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center at Harvard University (accessed Jun 6, 2020). http://nrs.harvard.edu/urn-3:HUL.InstRepos:42160420.
  • Gabriel, G. 2012. “Geltung Und Genese Als Grundlagenproblem.” Erwägen Wissen Ethik 23 (4): 475–486.
  • Glikson, E, and A. W. Woolley. 2020. “Human Trust in Artificial Intelligence: Review of Empirical Research.” Academy of Management Annals 14 (2): 627–660. doi:10.5465/annals.2018.0057.
  • Grudin, J. 2009. “AI and HCI: Two Fields Divided by a Common Focus.” AI Magazine 30 (4): 48. doi:10.1609/aimag.v30i4.2271.
  • Gulati, S., S. Sousa, and D. Lamas. 2018. “Modelling Trust in Human-like Technologies.” In: Proceedings of the 9th International Conference on HCI, IndiaHCI 2018. New York, New York, USA: ACM Press, 1–10.
  • Guo, Y, and X. J. Yang. 2020. “Modeling and Predicting Trust Dynamics in Human–Robot Teaming: A Bayesian Inference Approach.” International Journal of Social Robotics. Available from: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2020-75017-001&lang=de&site=ehost-live.
  • Hancock, Peter A., Deborah R. Billings, Kristin E. Schaefer, Jessie Y C. Chen, Ewart J. de Visser, and Raja Parasuraman. 2011. “A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction.” Human Factors 53 (5): 517–527.
  • Hancock, Peter A., Theresa T. Kessler, Alexandra D. Kaplan, John C. Brill, and James L. Szalma. 2021. “Evolving Trust in Robots: Specification through Sequential and Comparative Meta-Analyses.” Human Factors 63 (7): 1196–1229.
  • Hancock, P. A., K. L. Stowers, and T. T. Kessler. 2019. “Can We Trust Autonomous Systems?” In: H. Ayaz and F. Dehais, eds. Neuroergonomics: The Brain at Work and in Everyday Life. London: Academic Press, 199.
  • Hashemian, M., Raul Paradeda, Carla Guerra, and Ana Paiva. 2019. “Do You Trust Me? Investigating the Formation of Trust in Social Robots.” In: P. Moura Oliveira, P. Novais, and L. P. Reis, eds. Progress in Artificial Intelligence. Cham: Springer International Publishing, 357–369.
  • Haslam, N. 2006. “Dehumanization: An Integrative Review.” Personality and Social Psychology Review 10 (3): 252–264. doi:10.1207/s15327957pspr1003_4.
  • Hass, N. C. 2020. “‘Can I Get a Second Opinion?’ How User Characteristics Impact Trust in Automation in a Medical Screening Task.” Dissertation. University of Missouri, US.
  • Hassenzahl, Marc, Jan Borchers, Susanne Boll, Astrid Rosenthal-von der Pütten, and Volker Wulf. 2021. “Otherware: How to Best Interact with Autonomous Systems.” Interactions 28 (1): 54–57. doi:10.1145/3436942.
  • High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI (accessed Jun 6, 2020). Available from: https://ai.bsa.org/wp-content/uploads/2019/09/AIHLEG_EthicsGuidelinesforTrustworthyAI-ENpdf.pdf.
  • Hoff, K. A, and M. Bashir. 2013. “A Theoretical Model for Trust in Automated Systems.” In: W. Mackay, S. Brewster, and S. Bødker, eds. CHI 2013: Extended Abstracts of the 31st Annual CHI Conference on Human Factors in Computing Systems: 27 April – 2 May 2013. Paris, France. New York: ACM, 115.
  • Hoff, K. A, and M. Bashir. 2015. “Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust.” Human Factors 57 (3): 407–434.
  • Holton, R. 1994. “Deciding to Trust, Coming to Believe.” Australasian Journal of Philosophy 72 (1): 63–76. doi:10.1080/00048409412345881.
  • Honeycutt, D. R., M. Nourani, and E. D. Ragan. 2020. Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy. Ithaca: Cornell University Library, arXiv.org. Available from: https://www.proquest.com/docview/2438804585?accountid=14644.
  • Huang, M.-H., R. Rust, and V. Maksimovic. 2019. “The Feeling Economy: Managing in the Next Generation of Artificial Intelligence (AI).” California Management Review 61 (4): 43–65. doi:10.1177/0008125619863436.
  • Kaplan, Alexandra D., Theresa T. Kessler, J. Christopher Brill, and P. A. Hancock. 2021. “Trust in Artificial Intelligence: Meta-Analytic Findings.” Human Factors. doi:10.1177/00187208211013988.
  • Ketchen, D. J. Jr., and C. L. Shook. 1996. “The Application of Cluster Analysis in Strategic Management Research: An Analysis and Critique.” Strategic Management Journal 17 (6): 441–458. doi:10.1002/(SICI)1097-0266(199606)17:6<441::AID-SMJ819>3.0.CO;2-G.
  • Körber, M., E. Baseler, and K. Bengler. 2018. “Introduction Matters: Manipulating Trust in Automation and Reliance in Automated Driving.” Applied Ergonomics 66: 18–31.
  • Kramer, R. M. 1999. “Trust and Distrust in Organizations: Emerging Perspectives, Enduring Questions.” Annual Review of Psychology 50: 569–598.
  • Laschke, Matthias, Robin Neuhaus, Judith Dörrenbächer, Marc Hassenzahl, Volker Wulf, Astrid Rosenthal-von der Pütten, Jan Borchers, and Susanne Boll. 2020. “Otherware Needs Otherness: Understanding and Designing Artificial Counterparts.” In: Proceedings of the 11th Nordic Conference on Human-Computer Interaction. Tallinn, Estonia, 25–29.10.2020, 1–4.
  • Lee, H. R, and S. Sabanović. 2014. “Culturally Variable Preferences for Robot Design and Use in South Korea, Turkey, and the United States.” In: G. Sagerer eds. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot interaction – HRI '14. New York, New York, USA: ACM Press, 17–24.
  • Lee, J. D, and N. Moray. 1992. “Trust, Control Strategies and Allocation of Function in Human-Machine Systems.” Ergonomics 35 (10): 1243–1270.
  • Lee, J. D, and N. Moray. 1994. “Trust, Self-Confidence, and Operators’ Adaptation to Automation.” International Journal of Human-Computer Studies 40 (1): 153–184. doi:10.1006/ijhc.1994.1007.
  • Lee, J. D, and K. A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46 (1): 50–80.
  • Lewandowsky, S., M. Mundy, and G. P. A. Tan. 2000. “The Dynamics of Trust: Comparing Humans to Automation.” Journal of Experimental Psychology. Applied 6 (2): 104–123.
  • Lu, Yuqian, Juvenal Sastre Adrados, Saahil Shivneel Chand, and Lihui Wang. 2021. “Humans Are Not Machines – Anthropocentric Human–Machine Symbiosis for Ultra-Flexible Smart Manufacturing.” Engineering 7 (6): 734–737. doi:10.1016/j.eng.2020.09.018.
  • Luhmann, N. 1979. Trust and Power. Chichester [West Sussex]: Wiley.
  • Marsh, S, and M. R. Dibben. 2005. “The Role of Trust in Information Science and Technology.” Annual Review of Information Science and Technology 37 (1): 465–498. doi:10.1002/aris.1440370111.
  • Mayer, R. C., J. H. Davis, and F. D. Schoorman. 1995. “An Integrative Model of Organizational Trust.” Academy of Management Review 20 (3): 709–734. doi:10.2307/258792.
  • McAfee, A, and E. Brynjolfsson. 2017. Machine, Platform, Crowd: Harnessing Our Digital Future. New York, London: W.W. Norton & Company.
  • McKnight, D. Harrison, Michelle Carter, Jason Bennett Thatcher, and Paul F. Clay. 2011. “Trust in a Specific Technology: An Investigation of Its Components and Measures.” ACM Transactions on Management Information Systems 2 (2): 1–25. doi:10.1145/1985347.1985353.
  • McKnight, D. H., L. L. Cummings, and N. L. Chervany. 1998. “Initial Trust Formation in New Organizational Relationships.” Academy of Management Review 23 (3): 473–490. doi:10.5465/amr.1998.926622.
  • McNeese, Nathan, Mustafa Demir, Erin Chiou, Nancy Cooke, and Giovanni Yanikian. 2019. “Understanding the Role of Trust in Human-Autonomy Teaming.” In: T. Bui, ed. Proceedings of the 52nd Hawaii International Conference on System Sciences: Hawaii International Conference on System Sciences.
  • Merritt, Stephanie M., Heather Heimbaugh, Jennifer LaChapell, and Deborah Lee. 2013. “I Trust It, but I Don’t Know Why: Effects of Implicit Attitudes toward Automation on Trust in an Automated System.” Human Factors: The Journal of the Human Factors and Ergonomics Society 55 (3): 520–534. doi:10.1177/0018720812465081.
  • Merritt, Stephanie M., Deborah Lee, Jennifer L. Unnerstall, and Kelli Huber. 2015. “Are Well-Calibrated Users Effective Users? Associations between Calibration of Trust and Performance on an Automation-Aided Task.” Human Factors: The Journal of the Human Factors and Ergonomics Society 57 (1): 34–47. doi:10.1177/0018720814561675.
  • Merritt, S. M, and D. R. Ilgen. 2008. “Not All Trust is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions.” Human Factors: The Journal of the Human Factors and Ergonomics Society 50 (2): 194–210. doi:10.1518/001872008X288574.
  • Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning. arXiv.org.
  • Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. “Human-Level Control through Deep Reinforcement Learning.” Nature 518 (7540): 529–533.
  • Moher, David, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G. Altman. 2009. “Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement.” Annals of Internal Medicine 151 (4): 264–269, W64. doi:10.7326/0003-4819-151-4-200908180-00135.
  • Mori, M., K. MacDorman, and N. Kageki. 2012. “The Uncanny Valley.” IEEE Robotics & Automation Magazine 19 (2): 98–100. doi:10.1109/MRA.2012.2192811.
  • Morita, P. P, and C. M. Burns. 2014. “Understanding ‘Interpersonal Trust’ from a Human Factors Perspective: Insights from Situation Awareness and the Lens Model.” Theoretical Issues in Ergonomics Science 15 (1): 88–110. doi:10.1080/1463922X.2012.691184.
  • Muir, B. M. 1987. “Trust between Humans and Machines, and the Design of Decision Aids.” International Journal of Man-Machine Studies 27 (5-6): 527–539. doi:10.1016/S0020-7373(87)80013-5.
  • Muir, B. M. 1994. “Trust in Automation: Part I. Theoretical Issues in the Study of Trust and Human Intervention in Automated Systems.” Ergonomics 37 (11): 1905–1922. doi:10.1080/00140139408964957.
  • Muir, B. M, and N. Moray. 1996. “Trust in Automation. Part II. Experimental Studies of Trust and Human Intervention in a Process Control Simulation.” Ergonomics 39 (3): 429–460. doi:10.1080/00140139608964474.
  • Nam, Changjoo, Phillip Walker, Huao Li, Michael Lewis, and Katia Sycara. 2020. “Models of Trust in Human Control of Swarms with Varied Levels of Autonomy.” IEEE Transactions on Human-Machine Systems 50 (3): 194–204. doi:10.1109/THMS.2019.2896845.
  • Nass, Clifford, Youngme Moon, Brian J. Fogg, Byron Reeves, and D. Christopher Dryer. 1995. “Can Computer Personalities Be Human Personalities?” In: J. Miller eds. Conference Companion on Human Factors in Computing Systems – CHI '95. New York, New York, USA: ACM Press, 228–229.
  • Nass, C., B. J. Fogg, and Y. Moon. 1996. “Can Computers Be Teammates?” International Journal of Human-Computer Studies 45 (6): 669–678. doi:10.1006/ijhc.1996.0073.
  • Nass, C, and K. M. Lee. 2001. “Does Computer-Synthesized Speech Manifest Personality? Experimental Tests of Recognition, Similarity-Attraction, and Consistency-Attraction.” Journal of Experimental Psychology. Applied 7 (3): 171–181.
  • Nass, C, and Y. Moon. 2000. “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56 (1): 81–103. doi:10.1111/0022-4537.00153.
  • Norman, D. A., A. Ortony, and D. M. Russell. 2003. “Affect and Machine Design: Lessons for the Development of Autonomous Machines.” IBM Systems Journal 42 (1): 38–44. doi:10.1147/sj.421.0038.
  • Oksanen, Atte, Nina Savela, Rita Latikka, and Aki Koivula. 2020. “Trust toward Robots and Artificial Intelligence: An Experimental Approach to Human-Technology Interactions Online.” Frontiers in Psychology 11: 568256.
  • Parasuraman, R, and C. A. Miller. 2004. “Trust and Etiquette in High-Criticality Automated Systems.” Communications of the ACM 47 (4): 51–55. doi:10.1145/975817.975844.
  • Parasuraman, R., R. Molloy, and I. L. Singh. 1993. “Performance Consequences of Automation-Induced ‘Complacency’.” The International Journal of Aviation Psychology 3 (1): 1–23. doi:10.1207/s15327108ijap0301_1.
  • Parasuraman, R, and V. Riley. 1997. “Humans and Automation: Use, Misuse, Disuse, Abuse.” Human Factors: The Journal of the Human Factors and Ergonomics Society 39 (2): 230–253. doi:10.1518/001872097778543886.
  • Paré, Guy, Marie-Claude Trudel, Mirou Jaana, and Spyros Kitsiou. 2015. “Synthesizing Information Systems Knowledge: A Typology of Literature Reviews.” Information & Management 52 (2): 183–199. doi:10.1016/j.im.2014.08.008.
  • Peruzzini, M., F. Grandi, and M. Pellicciari. 2020. “Exploring the Potential of Operator 4.0 Interface and Monitoring.” Computers & Industrial Engineering 139: 105600. doi:10.1016/j.cie.2018.12.047.
  • Pinto, Ana, Sonia Sousa, Cristóvão Silva, and Pedro Coelho. 2020. “Adaptation and Validation of the HCTM Scale into Human-Robot Interaction Portuguese Context.” In: Proceedings of the 11th Nordic Conference on Human-Computer Interaction. Tallinn, Estonia, 25–29.10.2020, 1–4.
  • Qin, F., K. Li, and J. Yan. 2020. “Understanding User Trust in Artificial Intelligence-Based Educational Systems: Evidence from China.” British Journal of Educational Technology 51 (5): 1693–1710. doi:10.1111/bjet.12994.
  • Rheu, Minjin, Ji Youn Shin, Wei Peng, and Jina Huh-Yoo. 2020. “Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design.” International Journal of Human-Computer Interaction. Available from: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2020-66126-001&lang=de&site=ehost-live.
  • Riedl, M. O. 2019. “Human‐Centered Artificial Intelligence and Machine Learning.” Human Behavior and Emerging Technologies 1 (1): 33–36. doi:10.1002/hbe2.117.
  • Robinette, P., A. Howard, and A. R. Wagner. 2017. “Conceptualizing Overtrust in Robots: Why Do People Trust a Robot That Previously Failed?” In: W. F. Lawless eds. Autonomy and Artificial Intelligence: A Threat or Savior? Cham: Springer International Publishing, 129–155.
  • Rosenbrock, H. H. 1990. Machines with a Purpose. Oxford, New York: Oxford University Press.
  • Rossi, A., K. Dautenhahn, K. Lee Koay, and J. Saunders. 2017. “Investigating Human Perceptions of Trust in Robots for Safe HRI in Home Environments.” In: B. Mutlu eds. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction – HRI '17. New York, New York, USA: ACM Press, 375–376.
  • Rousseau, Denise M., Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. 1998. “Not so Different after All: A Cross-Discipline View of Trust.” Academy of Management Review 23 (3): 393–404. doi:10.5465/amr.1998.926617.
  • Rovira, E., R. Pak, and A. McLaughlin. 2017. “Effects of Individual Differences in Working Memory on Performance and Trust with Various Degrees of Automation.” Theoretical Issues in Ergonomics Science 18 (6): 573–591. doi:10.1080/1463922X.2016.1252806.
  • Sanneman, L, and J. A. Shah. 2020. Trust Considerations for Explainable Robots: A Human Factors Perspective. Ithaca: Cornell University Library, arXiv.org. Available from: https://www.proquest.com/docview/2402196722?accountid=14644.
  • Saßmannshausen, Till, Peter Burggräf, Johannes Wagner, Marc Hassenzahl, Thomas Heupel, and Fabian Steinberg. 2021. “Trust in Artificial Intelligence within Production Management – an Exploration of Antecedents.” Ergonomics 64 (10): 1333–1350.
  • Saßmannshausen, T. M. 2022. “Data from an Extensive Literature Review on Factors Influencing Human Trust in Artificial Intelligence.” Mendeley Data. doi:10.17632/cf78cr8xyx.3.
  • Sato, Tetsuya, Yusuke Yamani, Molly Liechty, and Eric T. Chancey. 2020. “Automation Trust Increases under High-Workload Multitasking Scenarios Involving Risk.” Cognition, Technology & Work 22 (2): 399–407. doi:10.1007/s10111-019-00580-5.
  • Schaefer, Kristin E., Jessie Y C. Chen, James L. Szalma, and P A. Hancock. 2016. “A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.” Human Factors 58 (3): 377–400.
  • Schmidt, P., F. Biessmann, and T. Teubner. 2020. “Transparency and Trust in Artificial Intelligence Systems.” Journal of Decision Systems 29 (4): 260–278. doi:10.1080/12460125.2020.1819094.
  • Sebo, S. S., P. Krishnamurthi, and B. Scassellati. 2019. “I Don’t Believe You: Investigating the Effects of Robot Trust Violation and Repair.” In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI): IEEE, 57–65.
  • Sethumadhavan, A. 2019. “Trust in Artificial Intelligence.” Ergonomics in Design: The Quarterly of Human Factors Applications 27 (2): 34–34. doi:10.1177/1064804618818592.
  • Sharan, N. N, and D. M. Romano. 2020. “The Effects of Personality and Locus of Control on Trust in Humans versus Artificial Intelligence.” Heliyon 6 (8): e04572. doi:10.1016/j.heliyon.2020.e04572.
  • Shneiderman, B. 2020. “Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy.” International Journal of Human–Computer Interaction 36 (6): 495–504. doi:10.1080/10447318.2020.1741118.
  • Siau, K, and W. Wang. 2018. “Building Trust in Artificial Intelligence, Machine Learning, and Robotics.” Cutter Business Technology Journal 31 (2): 47–53.
  • Söllner, Matthias, Axel Hoffmann, Holger Hoffmann, Arno Wacker, and Jan Marco Leimeister. 2012. “Understanding the Formation of Trust in IT Artifacts.” In: Proceedings of the International Conference on Information Systems (ICIS), Orlando Florida (USA), 1–18.
  • Söllner, M, and P. A. Pavlou. 2016. “A Longitudinal Perspective on Trust in IT Artefacts.” European Conference on Information Systems (ECIS). Istanbul (Turkey): AIS eLibrary, 1–17.
  • Song, Y, and Y. Luximon. 2020. “Trust in AI Agent: A Systematic Review of Facial Anthropomorphic Trustworthiness for Social Robot Design.” Sensors 20 (18): 5087. doi:10.3390/s20185087.
  • Sowa, K., A. Przegalinska, and L. Ciechanowski. 2021. “Cobots in Knowledge Work.” Journal of Business Research 125: 135–142. doi:10.1016/j.jbusres.2020.11.038.
  • Stuck, R. E, and W. A. Rogers. 2018. “Older Adults’ Perceptions of Supporting Factors of Trust in a Robot Care Provider.” Journal of Robotics 2018: 1–11. doi:10.1155/2018/6519713.
  • Sullivan, Y., M. de Bourmont, and M. Dunaway. 2022. “Appraisals of Harms and Injustice Trigger an Eerie Feeling That Decreases Trust in Artificial Intelligence Systems.” Annals of Operations Research 308 (1): 525–548. doi:10.1007/s10479-020-03702-9.
  • Sutrop, M. 2019. “Should we Trust Artificial Intelligence?” Trames. Journal of the Humanities and Social Sciences 23 (4): 499–522. doi:10.3176/tr.2019.4.07.
  • Swanson, L. R., J. L. Bellanca, and J. Helton. 2019. “Automated Systems and Trust: Mineworkers’ Trust in Proximity Detection Systems for Mobile Machines.” Safety and Health at Work 10 (4): 461–469.
  • Thiebes, S., S. Lins, and A. Sunyaev. 2021. “Trustworthy Artificial Intelligence.” Electronic Markets 31 (2): 447–464. doi:10.1007/s12525-020-00441-4.
  • Tolmeijer, Suzanne, Astrid Weiss, Marc Hanheide, Felix Lindner, Thomas M. Powers, Clare Dixon, and Myrthe L. Tielman. 2020. “Taxonomy of Trust-Relevant Failures and Mitigation Strategies.” In: T. Belpaeme, ed. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, United States: Association for Computing Machinery, 3–12.
  • Toreini, E., M. Aitken, K. Coopamootoo, K. Elliott, C. Gonzalez Zelaya, and A. van Moorsel. 2019. The Relationship between Trust in AI and Trustworthy Machine Learning Technologies. Ithaca: Cornell University Library, arXiv.org. Available from: http://arxiv.org/pdf/1912.00782v2.
  • Torraco, R. J. 2005. “Writing Integrative Literature Reviews: Guidelines and Examples.” Human Resource Development Review 4 (3): 356–367. doi:10.1177/1534484305278283.
  • Tussyadiah, I. P., F. J. Zach, and J. Wang. 2020. “Do Travelers Trust Intelligent Service Robots?” Annals of Tourism Research 81: 102886. doi:10.1016/j.annals.2020.102886.
  • Ullman, D, and B. F. Malle. 2017. “Human-Robot Trust: Just a Button Press Away.” In: B. Mutlu eds. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction – HRI '17. New York, New York, USA: ACM Press, 309–310.
  • Volante, William G., Janine Sosna, Theresa Kessler, Tracy Sanders, and P A. Hancock. 2019. “Social Conformity Effects on Trust in Simulation-Based Human-Robot Interaction.” Human Factors 61 (5): 805–815.
  • vom Brocke, Jan, Alexander Simons, Bjoern Niehaves, Kai Riemer, Ralf Plattfaut, and Anne Cleven. 2009. “Reconstructing the Giant: On the Importance of Rigour in Documenting the Literature Search Process.” European Conference on Information Systems (ECIS) Proceedings 161: 2206–2217.
  • Wagner, A. R., P. Robinette, and A. Howard. 2018. “Modeling the Human-Robot Trust Phenomenon.” ACM Transactions on Interactive Intelligent Systems 8 (4): 1–24. doi:10.1145/3152890.
  • Walterbusch, M., M. Gräuler, and F. Teuteberg. 2014. “How Trust is Defined: A Qualitative and Quantitative Analysis of Scientific Literature.” 20th Americas Conference on Information Systems, AMCIS-0524-2014.R1.
  • Weißer, Tim, Till Saßmannshausen, Dennis Ohrndorf, Peter Burggräf, and Johannes Wagner. 2020. “A Clustering Approach for Topic Filtering within Systematic Literature Reviews.” MethodsX 7: 100831.
  • Wong, A., A. Xu, and G. Dudek. 2019. “Investigating Trust Factors in Human-Robot Shared Control: Implicit Gender Bias around Robot Voice.” In: 2019 16th Conference on Computer and Robot Vision (CRV): IEEE, 195–200. doi:10.1109/CRV.2019.00034.
  • World Economic Forum. 2015. Deep Shift: Technology Tipping Points and Societal Impact: Survey Report, September 2015. Geneva: World Economic Forum.
  • Wright, W. E. 1974. “An Axiomatic Specification of Euclidean Analysis.” The Computer Journal 17 (4): 355–364. doi:10.1093/comjnl/17.4.355.
  • Xie, Heng, Gayle Prybutok, Xianghui Peng, and Victor Prybutok. 2020. “Determinants of Trust in Health Information Technology: An Empirical Investigation in the Context of an Online Clinic Appointment System.” International Journal of Human-Computer Interaction. Available from: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2020-03466-001&lang=de&site=ehost-live.
  • Xu, A, and G. Dudek. 2015. “Towards Efficient Collaborations with Trust-Seeking Adaptive Robots.” In: J. A. Adams eds. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts – HRI'15 Extended Abstracts. New York, New York, USA: ACM Press, 221–222.
  • Yagoda, R. E, and D. J. Gillan. 2012. “You Want Me to Trust a ROBOT? The Development of a Human–Robot Interaction Trust Scale.” International Journal of Social Robotics 4 (3): 235–248. doi:10.1007/s12369-012-0144-0.
  • Zhou, J., S. Luo, and F. Chen. 2020. “Effects of Personality Traits on User Trust in Human-Machine Collaborations.” Journal on Multimodal User Interfaces 14 (4): 387–400. doi:10.1007/s12193-020-00329-9.
