Research Article

The emotional impact of generative AI: negative emotions and perception of threat

Received 13 Nov 2023, Accepted 18 Mar 2024, Published online: 26 Mar 2024
