PERSPECTIVES

A Cogitation on the ChatGPT Craze from the Perspective of Psychological Algorithm Aversion and Appreciation

Pages 3837-3844 | Received 17 Jul 2023, Accepted 08 Sep 2023, Published online: 13 Sep 2023

References

  • Yu H. Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front Psychol. 2023;14. doi:10.3389/fpsyg.2023.1181712
  • Cheng K, Guo Q, He Y, Lu Y, Gu S, Wu H. Exploring the potential of GPT-4 in biomedical engineering: the dawn of a new era. Ann Biomed Eng. 2023;51(8):1645–1653. doi:10.1007/s10439-023-03221-1
  • Cheng K, Li Z, Guo Q, Sun Z, Wu H, Li C. Emergency surgery in the era of artificial intelligence: ChatGPT could be the doctor’s right-hand man. Int J Surg. 2023;109(6):1816–1818. doi:10.1097/JS9.0000000000000410
  • Carayon P, Hoonakker PL, Hundt AS, et al. Application of human factors to improve usability of clinical decision support for diagnostic decision-making: a scenario-based simulation study. BMJ Qual Saf. 2019;29(4):329–340. doi:10.1136/bmjqs-2019-009857
  • Surameery NM, Shakor MY. Use Chat GPT to solve programming bugs. Int J Inform Technol Comput Eng. 2023;31:17–22. doi:10.55529/ijitc.31.17.22
  • Grove WM. Clinical versus statistical prediction: the contribution of Paul E. Meehl. J Clin Psychol. 2005;61(10):1233–1243. doi:10.1002/jclp.20179
  • Prasad K, Vaidya R, Mangipudic MR. Effect of occupational stress and remote working on psychological well-being of employees: an empirical analysis during covid-19 pandemic concerning information technology industry in Hyderabad. Indian J Commerce Manag Stud. 2020;XI(2):1. doi:10.18843/ijcms/v11i2/01
  • Dietvorst BJ, Simmons JP, Massey C. Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen. 2015;144(1):114–126. doi:10.1037/xge0000033
  • Thurman NJ, Moeller J, Helberger N, Trilling D. My friends, editors, algorithms, and I. Digit Journalism. 2018;7:447–469. doi:10.1080/21670811.2018.1493936
  • Logg JM, Minson JA, Moore DA. Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process. 2019;151:90–103. doi:10.1016/j.obhdp.2018.12.005
  • Ge H, Ge Z, Sun L, Wang Y. Enhancing cooperation by cognition differences and consistent representation in multi-agent reinforcement learning. Appl Intell. 2022;52(9):9701–9716. doi:10.1007/s10489-021-02873-7
  • Porter E, Murphy M, O’Connor C. Chat GPT in dermatology: progressive or problematic? J Eur Acad Dermatol Venereol. 2023;37(7). doi:10.1111/jdv.19174
  • Johnson PD, Smith MB, Wallace JC, Hill AD, Baron RA. A review of multilevel regulatory focus in organizations. J Manage. 2015;41(5):1501–1529. doi:10.1177/0149206315575552
  • Yu H, Guo Y. Generative artificial intelligence empowers educational reform: current status, issues, and prospects. Frontiers Educ. 2023;8. doi:10.3389/feduc.2023.1183162
  • Wouters LT, Zwart DL, Erkelens DC, et al. Tinkering and overruling the computer decision support system: working strategies of telephone triage-nurses who assess the urgency of callers suspected of having an acute cardiac event. J Clin Nurs. 2019;29(7–8):1175–1186.
  • Larkin C, Drummond Otten C, Arvai JL. Paging Dr. JARVIS! Will people accept advice from artificial intelligence for consequential risk management decisions? J Risk Res. 2021;25(4):407–422. doi:10.1080/13669877.2021.1958047
  • de Achaval S, Fraenkel L, Volk RJ, Cox V, Suarez‐Almazor ME. Impact of educational and patient decision aids on decisional conflict associated with total knee arthroplasty. Arthritis Care Res. 2012;64(2):229–237.
  • Castelo N, Bos MW, Lehmann DR. Task-dependent algorithm aversion. J Market Res. 2019;56(5):809–825. doi:10.1177/0022243719851788
  • Mahmud H, Islam A, Ahmed S, Smolander K. What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technol Forecast Soc Change. 2022;175:121390. doi:10.1016/j.techfore.2021.121390
  • Chiesa M, Kamisiński A, Rak J, Rétvári G, Schmid S. A survey of fast-recovery mechanisms in packet-switched networks. IEEE Commun Surv Tutor. 2021;23(2):1253–1301. doi:10.1109/COMST.2021.3063980
  • Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–478. doi:10.2307/30036540
  • Efendić E, van de Calseyde P, Evans AM, Madrian BC. Slow response times undermine trust in algorithmic (but not human) predictions. Organ Behav Hum Decis Process. 2019;163:6–16. doi:10.1016/j.obhdp.2019.02.001
  • Madhavan P, Wiegmann DA. Similarities and differences between human–human and human–automation trust: an integrative review. Theoret Issues Ergonom Sci. 2007;8(4):277–301. doi:10.1080/14639220500337708
  • Prahl A, Swol LM. Understanding algorithm aversion: when is advice from automation discounted? J Forecast. 2017;36(6):691–702. doi:10.1002/for.2464
  • Liu P, Du Y, Xu Z. Machines versus humans: people’s biased responses to traffic accidents involving self-driving vehicles. Accid Anal Prev. 2019;125:232–240. doi:10.1016/j.aap.2019.02.012
  • Epley N, Waytz A, Cacioppo JT. On seeing human: a three-factor theory of anthropomorphism. Psychol Rev. 2007;114(4):864–886. doi:10.1037/0033-295X.114.4.864
  • Karra S, Nguyen S, Tulabandhula T. AI personification: estimating the personality of language models. ArXiv. 2022. doi:10.48550/arXiv.2204.12000
  • Rhim JS, Kwak M, Gong Y, Gweon G. Application of humanization to survey chatbots: change in chatbot perception, interaction experience, and survey data quality. Comput Hum Behav. 2022;126:107034. doi:10.1016/j.chb.2021.107034
  • Go E, Sundar SS. Humanizing chatbots: the effects of visual, identity and conversational cues on humanness perceptions. Comput Hum Behav. 2019;97:304–316. doi:10.1016/j.chb.2019.01.020
  • Lee S, Lee N, Sah YJ. Perceiving a mind in a chatbot: effect of mind perception and social cues on co-presence, closeness, and intention to use. Int J Hum Comput Interact. 2020;36(10):930–940.
  • Kim J, Im I. Anthropomorphic response: understanding interactions between humans and artificial intelligence agents. Comput Hum Behav. 2022;139:107512. doi:10.1016/j.chb.2022.107512
  • Yen C, Chiang M. Trust me, if you can: a study on the factors that influence consumers’ purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behav Inf Technol. 2020;40(11):1177–1194. doi:10.1080/0144929X.2020.1743362
  • Gu D, Shi F, Hua R, et al. An artificial‐intelligence‐based age‐specific template construction framework for brain structural analysis using magnetic resonance images. Hum Brain Mapp. 2022;44:861–875.
  • Kätsyri J, Förger K, Mäkäräinen M, Takala T. A review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to The Valley of eeriness. Front Psychol. 2015;6. doi:10.3389/fpsyg.2015.00390
  • Stokel-Walker C, van Noorden R. What ChatGPT and generative AI mean for science. Nature. 2023;614(7947):214–216. doi:10.1038/d41586-023-00340-6
  • Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi:10.1126/science.adg7879
  • Wilson GF. Operator functional state assessment for adaptive automation implementation. SPIE Defense + Commercial Sensing; 2005.
  • Alexander V, Blinder C, Zak PJ. Why trust an algorithm? Performance, cognition, and neurophysiology. Comput Hum Behav. 2018;89:279–288. doi:10.1016/j.chb.2018.07.026
  • Harari YN. Homo Deus: A Brief History of Tomorrow. Random House; 2016.
  • Dietvorst BJ, Simmons JP, Massey C. Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them. Manage Sci. 2018;64(3):1155–1170. doi:10.1287/mnsc.2016.2643
  • Kranzfelder M, Staub C, Fiolka A, et al. Toward increased autonomy in the surgical OR: needs, requests, and expectations. Surg Endosc. 2013;27(5):1681–1688. doi:10.1007/s00464-012-2656-y
  • Cheng K, Sun Z, He Y, Gu S, Wu H. The potential impact of ChatGPT/GPT-4 on surgery: will it topple the profession of surgeons? Int J Surg. 2023;109(5):1545–1547. doi:10.1097/JS9.0000000000000388