
Warmth, Competence, and the Determinants of Trust in Artificial Intelligence: A Cross-Sectional Survey from China

Received 17 Dec 2023, Accepted 13 May 2024, Published online: 28 May 2024

References

  • Ahn, J., Kim, J., & Sung, Y. (2022). The effect of gender stereotypes on artificial intelligence recommendations. Journal of Business Research, 141, 50–59. https://doi.org/10.1016/j.jbusres.2021.12.007
  • High-Level Expert Group on Artificial Intelligence [AI HLEG]. (2019, April). Ethics guidelines for trustworthy AI [European Commission report]. https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1
  • Ajzen, I., & Fishbein, M. (2000). Attitudes and the attitude-behavior relation: Reasoned and automatic processes. European Review of Social Psychology, 11(1), 1–33. https://doi.org/10.1080/14792779943000116
  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
  • Calhoun, C. S., Bobko, P., Gallimore, J. J., & Lyons, J. B. (2019). Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. Journal of Trust Research, 9(1), 28–46. https://doi.org/10.1080/21515581.2019.1579730
  • Canals, J., & Heukamp, F. (2020). The future of management in an AI world. Palgrave Macmillan.
  • Chao, C. M. (2019). Factors determining the behavioral intention to use mobile learning: An application and extension of the UTAUT model. Frontiers in Psychology, 10, 1652. https://doi.org/10.3389/fpsyg.2019.01652
  • Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543
  • Christian, B. (2020). The alignment problem: Machine learning and human values. W. W. Norton & Company.
  • Christoforakos, L., Gallucci, A., Surmava-Große, T., Ullrich, D., & Diefenbach, S. (2021). Can robots earn our trust the same way humans do? A systematic exploration of competence, warmth, and anthropomorphism as determinants of trust development in HRI. Frontiers in Robotics and AI, 8, 640444. https://doi.org/10.3389/frobt.2021.640444
  • Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. The Journal of Applied Psychology, 92(4), 909–927. https://doi.org/10.1037/0021-9010.92.4.909
  • Cuddy, A. J., Fiske, S. T., & Glick, P. (2008). Warmth and competence as universal dimensions of social perception: The stereotype content model and the BIAS map. Advances in Experimental Social Psychology, 40, 61–149. https://doi.org/10.1016/S0065-2601(07)00002-0
  • Delgosha, M. S., & Hajiheydari, N. (2021). How human users engage with consumer robots? A dual model of psychological ownership and trust to explain post-adoption behaviours. Computers in Human Behavior, 117, 106660. https://doi.org/10.1016/j.chb.2020.106660
  • Eyssel, F., & Hegel, F. (2012). (S)he’s got the look: Gender stereotyping of robots. Journal of Applied Social Psychology, 42(9), 2213–2230. https://doi.org/10.1111/j.1559-1816.2012.00937.x
  • Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 2053951719860542. https://doi.org/10.1177/2053951719860542
  • Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878–902. https://doi.org/10.1037/0022-3514.82.6.878
  • Flores, F., & Solomon, R. C. (1998). Creating trust. Business Ethics Quarterly, 8(2), 205–232. https://doi.org/10.2307/3857326
  • Frank, D. A., & Otterbring, T. (2023). Being seen… by human or machine? Acknowledgment effects on customer responses differ between human and robotic service workers. Technological Forecasting and Social Change, 189, 122345. https://doi.org/10.1016/j.techfore.2023.122345
  • Frey, R., Pedroni, A., Mata, R., Rieskamp, J., & Hertwig, R. (2017). Risk preference shares the psychometric structure of major psychological traits. Science Advances, 3(10), e1701381. https://doi.org/10.1126/sciadv.1701381
  • Frischknecht, R. (2021). A social cognition perspective on autonomous technology. Computers in Human Behavior, 122, 106815. https://doi.org/10.1016/j.chb.2021.106815
  • Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519
  • Gefen, D., & Straub, D. W. (2004). Consumer trust in B2C e-Commerce and the importance of social presence: Experiments in e-Products and e-Services. Omega, 32(6), 407–424. https://doi.org/10.1016/j.omega.2004.01.006
  • Gigerenzer, G., Reb, J., & Luan, S. (2022). Smart heuristics for individuals, teams, and organizations. Annual Review of Organizational Psychology and Organizational Behavior, 9(1), 171–198. https://doi.org/10.1146/annurev-orgpsych-012420-090506
  • Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in artificial intelligence: A global study. The University of Queensland and KPMG Australia. https://doi.org/10.14264/00d3c94
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Guo, X., Zhang, X., & Sun, Y. (2016). The privacy–personalization paradox in mHealth services acceptance of different age groups. Electronic Commerce Research and Applications, 16, 55–65. https://doi.org/10.1016/j.elerap.2015.11.001
  • Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
  • Harris-Watson, A. M., Larson, L. E., Lauharatanahirun, N., DeChurch, L. A., & Contractor, N. S. (2023). Social perception in human-AI teams: Warmth and competence predict receptivity to AI teammates. Computers in Human Behavior, 145, 107765. https://doi.org/10.1016/j.chb.2023.107765
  • Hofeditz, L., Mirbabaie, M., & Ortmann, M. (2023). Ethical challenges for human–agent interaction in virtual collaboration at work. International Journal of Human–Computer Interaction, 40, 1–17. https://doi.org/10.1080/10447318.2023.2279400
  • James, G., Witten, D., Hastie, T., Tibshirani, R., & Taylor, J. (2023). An introduction to statistical learning: With applications in Python. Springer Nature.
  • Ji, Z., Yang, Y., Fan, X., Wang, Y., Xu, Q., & Chen, Q. W. (2021). Stereotypes of social groups in mainland China in terms of warmth and competence: Evidence from a large undergraduate sample. International Journal of Environmental Research and Public Health, 18(7), 3559. https://doi.org/10.3390/ijerph18073559
  • Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04
  • Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2021). Trust in artificial intelligence: Meta-analytic findings. Human Factors, 65(2), 337–359. https://doi.org/10.1177/00187208211013988
  • Khalilzadeh, J., Ozturk, A. B., & Bilgihan, A. (2017). Security-related factors in extended UTAUT model for NFC based mobile payment in the restaurant industry. Computers in Human Behavior, 70, 460–474. https://doi.org/10.1016/j.chb.2017.01.001
  • Khan, G. F., Swar, B., & Lee, S. K. (2014). Social media risks and benefits: A public sector perspective. Social Science Computer Review, 32(5), 606–627. https://doi.org/10.1177/0894439314524701
  • Kim, D. J., Ferrin, D. L., & Rao, H. R. (2008). A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decision Support Systems, 44(2), 544–564. https://doi.org/10.1016/j.dss.2007.07.001
  • Komiak, S. X., & Benbasat, I. (2004). Understanding customer trust in agent-mediated electronic commerce, web-mediated electronic commerce, and traditional commerce. Information Technology and Management, 5(1/2), 181–207. https://doi.org/10.1023/B:ITEM.0000008081.55563.d4
  • Kong, D. T. (2018). Trust toward a group of strangers as a function of stereotype-based social identification. Personality and Individual Differences, 120, 265–270. https://doi.org/10.1016/j.paid.2017.03.031
  • Kulms, P., & Kopp, S. (2018). A social cognition perspective on human–computer trust: The effect of perceived warmth and competence on trust in decision-making with computers. Frontiers in Digital Humanities, 5. https://doi.org/10.3389/fdigh.2018.00014
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lee, J. G., Kim, K. J., Lee, S., & Shin, D. H. (2015). Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems. International Journal of Human–Computer Interaction, 31(10), 682–691. https://doi.org/10.1080/10447318.2015.1070547
  • Leeman, D., & Reynolds, D. (2012). Trust and outsourcing: Do perceptions of trust influence the retention of outsourcing providers in the hospitality industry? International Journal of Hospitality Management, 31(2), 601–608. https://doi.org/10.1016/j.ijhm.2011.08.006
  • Li, C., Wang, C., & Chau, P. Y. K. (2022). Revealing the black box: Understanding how prior self-disclosure affects privacy concern in the on-demand services. International Journal of Information Management, 67, 102547. https://doi.org/10.1016/j.ijinfomgt.2022.102547
  • Li, Y., Wu, B., Huang, Y., & Luan, S. (2024). Developing trustworthy artificial intelligence: Insights from research on interpersonal, human-automation, and human-AI trust. Frontiers in Psychology, 15, 1382693. https://doi.org/10.3389/fpsyg.2024.1382693
  • Lockey, S., Gillespie, N., & Curtis, C. (2020). Trust in artificial intelligence: Australian insights. The University of Queensland and KPMG. https://doi.org/10.14264/b32f129
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
  • McKnight, D. H., Cummings, L. L., & Chervany, N. L. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23(3), 473–490. https://doi.org/10.5465/amr.1998.926622
  • Merritt, S. M. (2011). Affective processes in human–automation interactions. Human Factors, 53(4), 356–370. https://doi.org/10.1177/0018720811411912
  • Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, & G. Olson (Eds.), Proceedings of the SIGCHI conference on human factors in computing systems (pp. 72–78). Association for Computing Machinery.
  • Nielsen, Y. A., Thielmann, I., Zettler, I., & Pfattheicher, S. (2022). Sharing money with humans versus computers: On the role of honesty-humility and (non-) social preferences. Social Psychological and Personality Science, 13(6), 1058–1068. https://doi.org/10.1177/19485506211055622
  • Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., Falco, G., Fiore, S. M., Garibay, I., Grieman, K., Havens, J. C., Jirotka, M., Kacorri, H., Karwowski, W., Kider, J., Konstan, J., Koon, S., Lopez-Gonzalez, M., Maifeld-Carucci, I., … Xu, W. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, 39(3), 391–437. https://doi.org/10.1080/10447318.2022.2153320
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
  • Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37(1), 81–96. https://doi.org/10.1080/10447318.2020.1807710
  • Russo, P. A., Duradoni, M., & Guazzini, A. (2021). How self-perceived reputation affects fairness towards humans and artificial intelligence. Computers in Human Behavior, 124, 106920. https://doi.org/10.1016/j.chb.2021.106920
  • Salisbury, W. D., Pearson, R. A., Pearson, A. W., & Miller, D. W. (2001). Perceived security and World Wide Web purchase intention. Industrial Management & Data Systems, 101(4), 165–177. https://doi.org/10.1108/02635570110390071
  • Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057–1084. https://doi.org/10.1007/s13347-021-00450-x
  • Schepman, A., & Rodway, P. (2023). The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory validation and associations with personality, corporate distrust, and general trust. International Journal of Human–Computer Interaction, 39(13), 2724–2741. https://doi.org/10.1080/10447318.2022.2085400
  • Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
  • Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
  • Simon, J. C., Styczynski, N., & Gutsell, J. N. (2020). Social perceptions of warmth and competence influence behavioral intentions and neural processing. Cognitive, Affective, & Behavioral Neuroscience, 20(2), 265–275. https://doi.org/10.3758/s13415-019-00767-3
  • Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans with our personal information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Article 538). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300768
  • Sykes, T. A., Venkatesh, V., & Gosain, S. (2009). Model of acceptance with peer support: A social network perspective to understand employees’ system use. MIS Quarterly, 33(2), 371–393. https://doi.org/10.2307/20650296
  • Troshani, I., Hill, S. R., Sherman, C., & Arthur, D. (2021). Do we trust in AI? Role of anthropomorphism and intelligence. Journal of Computer Information Systems, 61(5), 481–491. https://doi.org/10.1080/08874417.2020.1788473
  • Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
  • Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
  • The White House. (2023, October 30). Fact sheet: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
  • Xue, J., Zhou, Z., Zhang, L., & Majeed, S. (2020). Do brand competence and warmth always influence purchase intention? The moderating role of gender. Frontiers in Psychology, 11, 248. https://doi.org/10.3389/fpsyg.2020.00248
  • Ye, C., Hofacker, C. F., Peloza, J., & Allen, A. (2020). How online trust evolves over time: The role of social perception. Psychology & Marketing, 37(11), 1539–1553. https://doi.org/10.1002/mar.21400
  • Zell, E., Strickhouser, J. E., & Krizan, Z. (2018). Subjective social status and health: A meta-analysis of community and society ladders. Health Psychology, 37(10), 979–987. https://doi.org/10.1037/hea0000667
  • Zhang, J., Luximon, Y., & Li, Q. (2022). Seeking medical advice in mobile applications: How social cue design and privacy concerns influence trust and behavioral intention in impersonal patient–physician interactions. Computers in Human Behavior, 130, 107178. https://doi.org/10.1016/j.chb.2021.107178