Original Articles

When communicative AIs are cooperative actors: a prisoner’s dilemma experiment on human–communicative artificial intelligence cooperation

Pages 2141-2151 | Received 21 Dec 2021, Accepted 29 Jul 2022, Published online: 09 Aug 2022

References

  • Axelrod, R. 1980a. “Effective Choice in the Prisoner’s Dilemma.” Journal of Conflict Resolution 24: 3–25. doi:10.1177/002200278002400101.
  • Axelrod, R. 1980b. “More Effective Choice in the Prisoner’s Dilemma.” Journal of Conflict Resolution 24: 379–403. doi:10.1177/002200278002400301.
  • Axelrod, R., and W. Hamilton. 1981. “The Evolution of Cooperation.” Science 211: 1390–1396. doi:10.1126/science.7466396.
  • Baker, F., and H. Rachlin. 2002. “Teaching and Learning in a Probabilistic Prisoner’s Dilemma.” Behavioural Processes 57: 211–226. doi:10.1016/S0376-6357(02)00015-3.
  • Balliet, D. 2010. “Communication and Cooperation in Social Dilemmas: A Meta-Analytic Review.” Journal of Conflict Resolution 54: 39–57. doi:10.1177/0022002709352443.
  • Balliet, D., L. B. Mulder, and P. A. M. Van Lange. 2011. “Reward, Punishment, and Cooperation: A Meta-Analysis.” Psychological Bulletin 137: 594–615. doi:10.1037/a0023489.
  • Balliet, D., and P. A. M. Van Lange. 2013. “Trust, Conflict, and Cooperation: A Meta-Analysis.” Psychological Bulletin 139: 1090–1112. doi:10.1037/a0030939.
  • Buhrmester, M., T. Kwang, and S. D. Gosling. 2011. “Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?” Perspectives on Psychological Science 6: 3–5. doi:10.1177/1745691610393980.
  • Buhrmester, M., S. Talaifar, and S. Gosling. 2018. “An Evaluation of Amazon’s Mechanical Turk, Its Rapid Rise, and Its Effective Use.” Perspectives on Psychological Science 13: 149–154. doi:10.1177/1745691617706516.
  • Chattopadhyay, P., D. Yadav, V. Prabhu, A. Chandrasekaran, A. Das, S. Lee, D. Batra, and D. Parikh. 2017. “Evaluating Visual Conversational Agents Via Cooperative Human-AI Games.” Proceedings of the Fifth Conference on Human Computation and Crowdsourcing, 2–10. http://arxiv.org/abs/1708.05122.
  • Cohen, J. 1992. “A Power Primer.” Psychological Bulletin 112: 155–159. doi:10.1037/0033-2909.112.1.155.
  • Cohen, T. R., T. Wildschut, and C. A. Insko. 2010. “How Communication Increases Interpersonal Cooperation in Mixed-Motive Situations.” Journal of Experimental Social Psychology 46: 39–50. doi:10.1016/j.jesp.2009.09.009.
  • Colpaert, L., D. Muller, M.-P. Fayant, and F. Butera. 2015. “A Mindset of Competition Versus Cooperation Moderates the Impact of Social Comparison on Self-Evaluation.” Frontiers in Psychology 6: 1–10. doi:10.3389/fpsyg.2015.01337.
  • Cooper, H., and M. Findley. 1982. “Expected Effect Sizes: Estimates for Statistical Power Analysis in Social Psychology.” Personality and Social Psychology Bulletin 8: 168–173. doi:10.1177/014616728281026.
  • Crandall, J. W., M. Oudah, Tennom, F. Ishowo-Oloko, S. Abdallah, J.-F. Bonnefon, M. Cebrian, A. Shariff, M. A. Goodrich, and I. Rahwan. 2018. “Cooperating with Machines.” Nature Communications 9: 1–12. doi:10.1038/s41467-017-02597-8.
  • Dawes, R. M. 1980. “Social Dilemmas.” Annual Review of Psychology 31: 169–193. doi:10.1146/annurev.ps.31.020180.001125.
  • Dawes, R. M., A. J. C. Van De Kragt, and J. M. Orbell. 1990. “Cooperation for the Benefit of Us – Not Me, or My Conscience.” In Beyond Self Interest, edited by J. Mansbridge, 97–110. Chicago: University of Chicago Press.
  • de Melo, C. M., S. Marsella, and J. Gratch. 2019. “Human Cooperation When Acting Through Autonomous Machines.” Proceedings of the National Academy of Sciences 116: 3482–3487. doi:10.1073/pnas.1817656116.
  • de Melo, C. M., and K. Terada. 2019. “Cooperation with Autonomous Machines Through Culture and Emotion.” PLOS ONE 14: 1–12. doi:10.1371/journal.pone.0224758.
  • Edwards, C., A. Edwards, B. Stoll, X. Lin, and N. Massey. 2019. “Evaluations of an Artificial Intelligence Instructor’s Voice: Social Identity Theory in Human-Robot Interactions.” Computers in Human Behavior 90: 357–362. doi:10.1016/j.chb.2018.08.027.
  • Faul, F., E. Erdfelder, A. Buchner, and A.-G. Lang. 2009. “Statistical Power Analyses Using G*Power 3.1: Tests for Correlation and Regression Analyses.” Behavior Research Methods 41: 1149–1160. doi:10.3758/BRM.41.4.1149.
  • Feine, J., U. Gnewuch, S. Morana, and A. Maedche. 2019. “A Taxonomy of Social Cues for Conversational Agents.” International Journal of Human-Computer Studies 132: 138–161. doi:10.1016/j.ijhcs.2019.07.009.
  • Franke, T., C. Attig, and D. Wessel. 2019. “A Personal Resource for Technology Interaction: Development and Validation of the Affinity for Technology Interaction (ATI) Scale.” International Journal of Human–Computer Interaction 35: 456–467. doi:10.1080/10447318.2018.1456150.
  • Frankish, K., and W. M. Ramsey. 2014. The Cambridge Handbook of Artificial Intelligence. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.
  • Guzman, A. L. 2018. “What is Human-Machine Communication, Anyway?” In Human-Machine Communication: Rethinking Communication, Technology, and Ourselves, edited by A. L. Guzman, 1–28. New York: Peter Lang. doi:10.3726/b14399.
  • Guzman, A. L. 2019. “Voices in and of the Machine: Source Orientation Toward Mobile Virtual Assistants.” Computers in Human Behavior 90: 343–350. doi:10.1016/j.chb.2018.08.009.
  • Guzman, A. L., and S. C. Lewis. 2020. “Artificial Intelligence and Communication: A Human–Machine Communication Research Agenda.” New Media & Society 22: 70–86. doi:10.1177/1461444819858691.
  • Han, T. A., C. Perret, and S. T. Powers. 2021. “When to (or Not to) Trust Intelligent Machines: Insights from an Evolutionary Game Theory Analysis of Trust in Repeated Games.” Cognitive Systems Research 68: 111–124. doi:10.1016/j.cogsys.2021.02.003.
  • Hardin, G. 1968. “The Tragedy of the Commons.” Science 162: 1243–1248. doi:10.1126/science.162.3859.1243.
  • Hong, J.-W., S. Choi, and D. Williams. 2020. “Sexist AI: An Experiment Integrating CASA and ELM.” International Journal of Human–Computer Interaction 36: 1928–1941. doi:10.1080/10447318.2020.1801226.
  • Ishowo-Oloko, F., J.-F. Bonnefon, Z. Soroye, J. Crandall, I. Rahwan, and T. Rahwan. 2019. “Behavioural Evidence for a Transparency–Efficiency Tradeoff in Human–Machine Cooperation.” Nature Machine Intelligence 1: 517–521. doi:10.1038/s42256-019-0113-5.
  • Katzenbach, C., and L. Ulbricht. 2019. “Algorithmic Governance.” Internet Policy Review 8: 1–18. doi:10.14763/2019.4.1424.
  • Kerr, N. L., and C. M. Kaufman-Gilliland. 1994. “Communication, Commitment, and Cooperation in Social Dilemmas.” Journal of Personality and Social Psychology 66: 513–529. doi:10.1037/0022-3514.66.3.513.
  • Kiesler, S., L. Sproull, and K. Waters. 1996. “A Prisoner’s Dilemma Experiment on Cooperation with People and Human-Like Computers.” Journal of Personality and Social Psychology 70: 47–65. doi:10.1037/0022-3514.70.1.47.
  • Kollock, P. 1998. “Social Dilemmas: The Anatomy of Cooperation.” Annual Review of Sociology 24: 183–214. doi:10.1146/annurev.soc.24.1.183.
  • Komorita, S. S., and C. D. Parks. 1995. “Interpersonal Relations: Mixed-Motive Interaction.” Annual Review of Psychology 46: 183–207. doi:10.1146/annurev.ps.46.020195.001151.
  • Kreps, D. M., P. Milgrom, J. Roberts, and R. Wilson. 1982. “Rational Cooperation in the Finitely Repeated Prisoners’ Dilemma.” Journal of Economic Theory 27: 245–252. doi:10.1016/0022-0531(82)90029-1.
  • Kucherbaev, P., A. Bozzon, and G.-J. Houben. 2018. “Human-aided Bots.” IEEE Internet Computing 22: 36–43. doi:10.1109/MIC.2018.252095348.
  • Macy, M. 1996. “Natural Selection and Social Learning in Prisoner’s Dilemma: Coadaptation with Genetic Algorithms and Artificial Neural Networks.” Sociological Methods & Research 25: 103–137. doi:10.1177/0049124196025001004.
  • March, C. 2019. “The Behavioral Economics of Artificial Intelligence: Lessons from Experiments with Computer Players.” CESifo Working Paper No. 7926.
  • McClure, P. K. 2018. “‘You’re Fired,’ Says the Robot: The Rise of Automation in the Workplace, Technophobes, and Fears of Unemployment.” Social Science Computer Review 36: 139–156. doi:10.1177/0894439317698637.
  • Miwa, K., and H. Terai. 2012. “Impact of Two Types of Partner, Perceived or Actual, in Human–Human and Human–Agent Interaction.” Computers in Human Behavior 28: 1286–1297. doi:10.1016/j.chb.2012.02.012.
  • Moon, Y., and C. Nass. 1996. “How ‘Real’ Are Computer Personalities? Psychological Responses to Personality Types in Human-Computer Interaction.” Communication Research 23: 651–674. doi:10.1177/009365096023006002.
  • Mulford, M., J. Jackson, and H. Svedsäter. 2008. “Encouraging Cooperation: Revisiting Solidarity and Commitment Effects in Prisoner’s Dilemma Games.” Journal of Applied Social Psychology 38: 2964–2989. doi:10.1111/j.1559-1816.2008.00421.x.
  • Nass, C., B. J. Fogg, and Y. Moon. 1996. “Can Computers be Teammates?” International Journal of Human-Computer Studies 45: 669–678. doi:10.1006/ijhc.1996.0073.
  • Nass, C., and Y. Moon. 2000. “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56: 81–103. doi:10.1111/0022-4537.00153.
  • Nass, C., Y. Moon, and P. Carney. 1999. “Are People Polite to Computers? Responses to Computer-Based Interviewing Systems.” Journal of Applied Social Psychology 29: 1093–1109. doi:10.1111/j.1559-1816.1999.tb00142.x.
  • Nass, C., J. Steuer, and E. R. Tauber. 1994. “Computers are Social Actors.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Celebrating Interdependence – CHI ’94, 72–78. doi:10.1145/191666.191703.
  • Oskamp, S. 1971. “Effects of Programmed Strategies on Cooperation in the Prisoner’s Dilemma and Other Mixed-Motive Games.” Journal of Conflict Resolution 15: 225–259. doi:10.1177/002200277101500207.
  • Paay, J., J. Kjeldskov, K. M. Hansen, T. Jørgensen, and K. L. Overgaard. 2020. “Digital Ethnography of Home Use of Digital Personal Assistants.” Behaviour & Information Technology, 1–19. doi:10.1080/0144929X.2020.1834620.
  • Paolacci, G., and J. Chandler. 2014. “Inside the Turk: Understanding Mechanical Turk as a Participant Pool.” Current Directions in Psychological Science 23: 184–188. doi:10.1177/0963721414531598.
  • Parise, S., S. Kiesler, L. Sproull, and K. Waters. 1999. “Cooperating with Life-Like Interface Agents.” Computers in Human Behavior 15: 123–142. doi:10.1016/S0747-5632(98)00035-1.
  • Pitardi, V., and H. R. Marriott. 2021. “Alexa, She’s Not Human but … Unveiling the Drivers of Consumers’ Trust in Voice-Based Artificial Intelligence.” Psychology & Marketing 38: 626–642. doi:10.1002/mar.21457.
  • Rahwan, I., M. Cebrian, N. Obradovich, J. Bongard, J.-F. Bonnefon, C. Breazeal, J. W. Crandall, et al. 2019. “Machine Behaviour.” Nature 568: 477–486. doi:10.1038/s41586-019-1138-y.
  • Rapoport, A., and A. Chammah. 1965. Prisoner’s Dilemma. Ann Arbor: University of Michigan Press. doi:10.3998/mpub.20269.
  • Reeves, J. 2016. “Automatic for the People: The Automation of Communicative Labor.” Communication and Critical/Cultural Studies 13: 150–165. doi:10.1080/14791420.2015.1108450.
  • Reeves, B., and C. Nass. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. New York: Cambridge University Press.
  • Ruckenstein, M., and J. Granroth. 2020. “Algorithms, Advertising and the Intimacy of Surveillance.” Journal of Cultural Economy 13: 12–24. doi:10.1080/17530350.2019.1574866.
  • Sandoval, E. B., J. Brandstetter, M. Obaid, and C. Bartneck. 2016. “Reciprocity in Human-Robot Interaction: A Quantitative Approach Through the Prisoner’s Dilemma and the Ultimatum Game.” International Journal of Social Robotics 8: 303–317. doi:10.1007/s12369-015-0323-x.
  • Sedlmeier, P., and G. Gigerenzer. 1989. “Do Studies of Statistical Power Have an Effect on the Power of Studies?” Psychological Bulletin 105: 309–316. doi:10.1037/0033-2909.105.2.309.
  • Shank, D. B., and A. DeSanti. 2018. “Attributions of Morality and Mind to Artificial Intelligence After Real-World Moral Violations.” Computers in Human Behavior 86: 401–411. doi:10.1016/j.chb.2018.05.014.
  • Shank, D. B., A. DeSanti, and T. Maninger. 2019. “When are Artificial Intelligence Versus Human Agents Faulted for Wrongdoing? Moral Attributions After Individual and Joint Decisions.” Information, Communication & Society 22: 648–663. doi:10.1080/1369118X.2019.1568515.
  • Smith, A. 2016. Public Predictions for the Future of Workforce Automation. Pew Research Center. https://www.pewresearch.org/internet/2016/03/10/public-predictions-for-the-future-of-workforce-automation/.
  • Strømland, E., S. Tjøtta, and G. Torsvik. 2018. “Mutual Choice of Partner and Communication in a Repeated Prisoner’s Dilemma.” Journal of Behavioral and Experimental Economics 75: 12–23. doi:10.1016/j.socec.2018.05.002.
  • Tan, S.-M., and T. W. Liew. 2020. “Designing Embodied Virtual Agents as Product Specialists in a Multi-Product Category e-Commerce: The Roles of Source Credibility and Social Presence.” International Journal of Human–Computer Interaction 36: 1136–1149. doi:10.1080/10447318.2020.1722399.
  • Velez, J. A., T. Loof, C. A. Smith, J. M. Jordan, J. A. Villarreal, and D. R. Ewoldsen. 2019. “Switching Schemas: Do Effects of Mindless Interactions with Agents Carry Over to Humans and Vice Versa?” Journal of Computer-Mediated Communication 24: 335–352. doi:10.1093/jcmc/zmz016.
  • Xu, K., and M. Lombard. 2017. “Persuasive Computing: Feeling Peer Pressure from Multiple Computer Agents.” Computers in Human Behavior 74: 152–162. doi:10.1016/j.chb.2017.04.043.
