References
- Berlo, D. K. (1960). The process of communication. An introduction to theory and practice. Holt, Rinehart and Winston.
- Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. https://doi.org/10.1007/s11747-020-00762-y
- Booth, A., Sutton, A., Clowes, M., & Martyn-St James, M. (2022). Systematic approaches to a successful literature review (3rd ed.). SAGE.
- Borchers, N. S. (2019). Social media influencers in strategic communication. International Journal of Strategic Communication, 13(4), 255–260. https://doi.org/10.1080/1553118X.2019.1634075
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
- Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328–352. https://doi.org/10.1080/14780887.2020.1769238
- Braun, V., Clarke, V., Hayfield, N., & Terry, G. (2019). Thematic analysis. In P. Liamputtong (Ed.), Handbook of research methods in health social sciences (pp. 843–860). Springer. https://doi.org/10.1007/978-981-10-5251-4_103
- Budhwar, P., Malik, A., De Silva, M., & Thevisuthan, P. (2022). Artificial intelligence – challenges and opportunities for international HRM: A review and research agenda. The International Journal of Human Resource Management, 33(6), 1065–1097. https://doi.org/10.1080/09585192.2022.2035161
- Burgoon, J. K. (1978). A communication model of personal space violations: Explication and an initial test. Human Communication Research, 4(2), 129–142. https://doi.org/10.1111/j.1468-2958.1978.tb00603.x
- Burgoon, J. K., & Jones, S. B. (1976). Toward a theory of personal space expectations and their violations. Human Communication Research, 2(2), 131–146. https://doi.org/10.1111/j.1468-2958.1976.tb00706.x
- Byun, K. J., & Ahn, S. J. (2023). A systematic review of virtual influencers: Similarities and differences between human and virtual influencers in interactive advertising. Journal of Interactive Advertising, 23(4), 293–306. https://doi.org/10.1080/15252019.2023.2236102
- Chaffee, S. H. (1991). Explication. SAGE Publications.
- Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820. https://doi.org/10.2139/ssrn.3213954
- Cho, H., Lee, D., & Lee, J. G. (2023). User acceptance on content optimization algorithms: Predicting filter bubbles in conversational AI services. Universal Access in the Information Society, 22(4), 1325–1338. https://doi.org/10.1007/s10209-022-00913-8
- Coeckelbergh, M. (2022). Three responses to anthropomorphism in social robotics: Towards a critical, relational, and hermeneutic approach. International Journal of Social Robotics, 14(10), 2049–2061. https://doi.org/10.1007/s12369-021-00770-0
- Da Silva Oliveira, A. B., & Chimenti, P. (2021). “Humanized robots”: A proposition of categories to understand virtual influencers. Australasian Journal of Information Systems, 25, 1–27. https://doi.org/10.3127/ajis.v25i0.3223
- Danzon-Chambaud, S. (2021). A systematic review of automated journalism scholarship: Guidelines and suggestions for future research [version 1; peer review: 2 approved]. Open Research Europe, 1, Article 4. https://doi.org/10.12688/openreseurope.13096.1
- Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
- Davis, J., Mengersen, K., Bennett, S., & Mazerolle, L. (2014). Viewing systematic reviews and meta-analysis in social research through different lenses. SpringerPlus, 3(1), 1–9. https://doi.org/10.1186/2193-1801-3-511
- Dehnert, M., & Mongeau, P. A. (2022). Persuasion in the age of artificial intelligence (AI): Theories and complications of AI-based persuasion. Human Communication Research, 48(3), 386–403. https://doi.org/10.1093/hcr/hqac006
- Dörr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722. https://doi.org/10.1080/21670811.2015.1096748
- Esposito, E. (2022). Artificial communication: How algorithms produce social intelligence. The MIT Press.
- Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. https://doi.org/10.1145/2818717
- Galloway, C., & Swiatek, L. (2018). Public relations and artificial intelligence: It’s not (just) about robots. Public Relations Review, 44(5), 734–740. https://doi.org/10.1016/j.pubrev.2018.10.008
- Gambino, A., Fox, J., & Ratan, R. A. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1(1), 71–86. https://doi.org/10.30658/hmc.1.5
- García-Orosa, B., Canavilhas, J., & Vázquez-Herrero, J. (2023). Algoritmos y comunicación: Revisión sistematizada de la literatura [Algorithms and communication: A systematized literature review]. Comunicar, 31(74), 9–21. https://doi.org/10.3916/C74-2023-01
- Geise, S., Klinger, U., Magin, M., Müller, K. F., Nitsch, C., Riesmeyer, C., Rothenberger, L., Schumann, C., Sehl, A., Wallner, C., & Zillich, A. F. (2022). The normativity of communication research: A content analysis of normative claims in peer-reviewed journal articles (1970–2014). Mass Communication & Society, 25(4), 528–553. https://doi.org/10.1080/15205436.2021.1987474
- Graefe, A. (2016). Guide to automated journalism. Tow Center for Digital Journalism, Columbia University. https://doi.org/10.7916/D80G3XDJ
- Graefe, A., & Bohlken, N. (2020). Automated journalism: A meta-analysis of readers’ perceptions of human-written in comparison to automated news. Media and Communication, 8(3), 50–59. https://doi.org/10.17645/mac.v8i3.3019
- Gunkel, D. J. (2012). Communication and artificial intelligence: Opportunities and challenges for the 21st century. Communication+1, 1(1), 1–26. https://doi.org/10.7275/R5QJ7F7R
- Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society, 22(1), 70–86. https://doi.org/10.1177/1461444819858691
- Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022
- Heide, M., von Platen, S., Simonsson, C., & Falkheimer, J. (2018). Expanding the scope of strategic communication: Towards a holistic understanding of organizational complexity. International Journal of Strategic Communication, 12(4), 452–468. https://doi.org/10.1080/1553118X.2018.1456434
- Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M., Puschmann, C., & Schulz, W. (2023). ChatGPT, LaMDA, and the hype around communicative AI: The automation of communication as a field of research in media and communication studies. Human-Machine Communication, 6, 41–63. https://doi.org/10.30658/hmc.6.4
- Hoy, M. B. (2018). Alexa, Siri, Cortana, and more: An introduction to voice assistants. Medical Reference Services Quarterly, 37(1), 81–88. https://doi.org/10.1080/02763869.2018.1404391
- Hwang, G. J., & Chang, C. Y. (2023). A review of opportunities and challenges of chatbots in education. Interactive Learning Environments, 31(7), 4099–4112. https://doi.org/10.1080/10494820.2021.1952615
- Jacobs, S., & Liebrecht, C. (2023). Responding to online complaints in webcare by public organizations: The impact on continuance intention and reputation. Journal of Communication Management, 27(1), 1–20. https://doi.org/10.1108/JCOM-11-2021-0132
- Jenneboer, L., Herrando, C., & Constantinides, E. (2022). The impact of chatbots on customer loyalty: A systematic literature review. Journal of Theoretical & Applied Electronic Commerce Research, 17(1), 212–229. https://doi.org/10.3390/jtaer17010011
- Johnson, D. G., & Verdicchio, M. (2017). AI anxiety. Journal of the Association for Information Science and Technology, 68(9), 2267–2270. https://doi.org/10.1002/asi.23867
- Kalpokas, I. (2021). Problematising reality: The promises and perils of synthetic media. SN Social Sciences, 1(1), 1–11. https://doi.org/10.1007/s43545-020-00010-8
- Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. Public Opinion Quarterly, 37(4), 509–523. https://doi.org/10.1086/268109
- Keller, K. L. (2001). Mastering the marketing communications mix: Micro and macro perspectives on integrated marketing communication programs. Journal of Marketing Management, 17(7–8), 819–847. https://doi.org/10.1362/026725701323366836
- Kietzmann, J., Mills, A. J., & Plangger, K. (2021). Deepfakes: Perspectives on the future “reality” of advertising and branding. International Journal of Advertising, 40(3), 473–485. https://doi.org/10.1080/02650487.2020.1834211
- Klaus, P., & Zaichkowsky, J. (2020). AI voice bots: A services marketing research agenda. The Journal of Services Marketing, 34(3), 389–398. https://doi.org/10.1108/JSM-01-2019-0043
- Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212
- Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). SAGE Publications.
- Laszkiewicz, A., & Kalinska-Kula, M. (2023). Virtual influencers as an emerging marketing theory: A systematic literature review. International Journal of Consumer Studies, 47(6), 2479–2494. https://doi.org/10.1111/ijcs.12956
- Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford University Press.
- Lebeuf, C., Zagalsky, A., Foucault, M., & Storey, M. A. (2019). Defining and classifying software bots: A faceted taxonomy. Proceedings of the 2019 IEEE/ACM 1st International Workshop on Bots in Software Engineering (pp. 1–6). IEEE. https://doi.org/10.1109/BotSE.2019.00008
- Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, journalism, and human-machine communication: Rethinking roles and relationships of humans and machines in news. Digital Journalism, 7(4), 409–427. https://doi.org/10.1080/21670811.2019.1577147
- Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. International Journal of Management Education, 21(2), Article 100790, 1–13. https://doi.org/10.1016/j.ijme.2023.100790
- Lim, W. M., Kumar, S., Verma, S., & Chaturvedi, R. (2022). Alexa, what do we know about conversational commerce? Insights from a systematic literature review. Psychology & Marketing, 39(6), 1129–1155. https://doi.org/10.1002/mar.21654
- Ling, E. C., Tussyadiah, I., Tuomi, A., & Stienmetz, J. (2021). Factors influencing users’ adoption and use of conversational agents: A systematic review. Psychology & Marketing, 38(7), 1021–1051. https://doi.org/10.1002/mar.21491
- Lock, I. (2019). Explicating communicative organization-stakeholder relationships in the digital age: A systematic review and research agenda. Public Relations Review, 45(4), 1–13. https://doi.org/10.1016/j.pubrev.2019.101829
- Lock, I. (2023). Conserving complexity: A complex systems paradigm and framework to study public relations’ contribution to grand challenges. Public Relations Review, 49(2), 1–11. https://doi.org/10.1016/j.pubrev.2023.102310
- Lock, I., & Giani, S. (2021). Finding more needles in more haystacks: Rigorous literature searching for systematic reviews and meta-analyses in management and organization studies. SocArXiv. https://doi.org/10.31235/osf.io/8ek6h
- Lock, I., Seele, P., & Heath, R. L. (2016). Where grass has no roots: The concept of ‘shared strategic communication’ as an answer to unethical astroturf lobbying. International Journal of Strategic Communication, 10(2), 87–100. https://doi.org/10.1080/1553118X.2015.1116002
- Lock, I., Wonneberger, A., Verhoeven, P., & Hellsten, I. (2020). Back to the roots? The applications of communication science theories in strategic communication research. International Journal of Strategic Communication, 14(1), 1–24. https://doi.org/10.1080/1553118X.2019.1666398
- Löffler, N., Röttger, U., & Wiencierz, C. (2021). Data-based strategic communication as a mediator of trust: Recipients’ perception of an NPO’s automated posts. In B. Blöbaum (Ed.), Trust and communication (pp. 115–134). Springer. https://doi.org/10.1007/978-3-030-72945-5_6
- López Jiménez, E. A., & Ouariachi, T. (2021). An exploration of the impact of artificial intelligence (AI) and automation for communication professionals. Journal of Information, Communication and Ethics in Society, 19(2), 249–267. https://doi.org/10.1108/JICES-03-2020-0034
- Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7(4), 33–35. https://cir.nii.ac.jp/crid/1370013168736887425
- Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. The Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
- Natale, S. (2021). Communicating through or communicating with: Approaching artificial intelligence from a communication and media studies perspective. Communication Theory, 31(4), 905–910. https://doi.org/10.1093/ct/qtaa022
- Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews, 10(1), Article 89. https://doi.org/10.1186/s13643-021-01626-4
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
- Pawelec, M. (2022). Deepfakes and democracy (theory): How synthetic audio-visual media for disinformation and hate speech threaten core democratic functions. Digital Society, 1(2), 1–37. https://doi.org/10.1007/s44206-022-00010-6
- Primo, A., & Zago, G. (2015). Who and what do journalism? An actor-network perspective. Digital Journalism, 3(1), 38–52. https://doi.org/10.1080/21670811.2014.927987
- Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
- Riffe, D., Lacy, S., & Fico, F. (2014). Analyzing media messages: Using quantitative content analysis in research (3rd ed.). Routledge.
- Roy, D., & Dutta, M. (2022). A systematic review and research perspective on recommender systems. Journal of Big Data, 9(1), 1–36. https://doi.org/10.1186/s40537-022-00592-5
- Schölzel, H., & Nothhaft, H. (2016). The establishment of facts in public discourse: Actor-network-theory as a methodological approach in PR-research. Public Relations Inquiry, 5(1), 53–69. https://doi.org/10.1177/2046147X15625711
- Seele, P., & Schultz, M. D. (2022). From greenwashing to machinewashing: A model and future directions derived from reasoning by analogy. Journal of Business Ethics, 178(4), 1063–1089. https://doi.org/10.1007/s10551-022-05054-9
- Shawar, B. A., & Atwell, E. (2007). Chatbots: Are they really useful? Journal for Language Technology and Computational Linguistics, 22(1), 29–49. https://doi.org/10.21248/jlcl.22.2007.88
- Siitonen, M., Laajalahti, A., & Venäläinen, P. (2024). Mapping automation in journalism studies 2010–2019: A literature review. Journalism Studies, 25(3), 299–318. https://doi.org/10.1080/1461670X.2023.2296034
- Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104(1), 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039
- Somerville, I. (2021). Public relations and actor-network-theory. In C. Valentini (Ed.), Public relations (pp. 525–540). De Gruyter Mouton. https://doi.org/10.1515/9783110554250-027
- Stenbom, A., Wiggberg, M., & Norlund, T. (2023). Exploring communicative AI: Reflections from a Swedish newsroom. Digital Journalism, 11(9), 1622–1640. https://doi.org/10.1080/21670811.2021.2007781
- Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73–100). The MIT Press. https://doi.org/10.1162/dmal.9780262562324.073
- Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026
- Syvänen, S., & Valentini, C. (2020). Conversational agents in online organization–stakeholder interactions: A state-of-the-art analysis and implications for further research. Journal of Communication Management, 24(4), 339–362. https://doi.org/10.1108/JCOM-11-2019-0145
- Vagia, M., Transeth, A. A., & Fjerdingen, S. A. (2016). A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed? Applied Ergonomics, 53, 190–202. https://doi.org/10.1016/j.apergo.2015.09.013
- van der Kaa, H., & Krahmer, E. J. (2014). Journalist versus news consumer: The perceived credibility of machine written news [Paper presentation]. Computation + Journalism Symposium 2014, New York, NY, United States.
- van Pinxteren, M., Pluymaekers, M., & Lemmink, J. (2020). Human-like communication in conversational agents: A literature review and research agenda. Journal of Service Management, 31(2), 203–225. https://doi.org/10.1108/JOSM-06-2019-0175
- Vasist, P. N., & Krishnan, S. (2022). Deepfakes: An integrative review of the literature and an agenda for future research. Communications of the Association for Information Systems, 51(1), Article 14, 556–602. https://doi.org/10.17705/1CAIS.05126
- Verhoeven, P. (2018). On Latour: Actor-networks, modes of existence and public relations. In Ø. Ihlen & M. Fredriksson (Eds.), Public relations and social theory (2nd ed., pp. 99–116). Routledge.
- Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52. https://doi.org/10.22215/timreview/1282
- Wiesenberg, M., Zerfass, A., & Moreno, A. (2017). Big data and automation in strategic communication. International Journal of Strategic Communication, 11(2), 95–114. https://doi.org/10.1080/1553118X.2017.1285770
- Wu, S., Wong, P. W., Tandoc, E. C., & Salmon, C. T. (2023). Reconfiguring human-machine relations in the automation age: An actor-network analysis on automation’s takeover of the advertising media planning industry. Journal of Business Research, 168, 1–8. https://doi.org/10.1016/j.jbusres.2023.114234
- Zerfass, A., Verčič, D., Nothhaft, H., & Werder, K. P. (2018). Strategic communication: Defining the field and its contribution to research and practice. International Journal of Strategic Communication, 12(4), 487–505. https://doi.org/10.1080/1553118X.2018.1493485
Articles included in the systematic review
- Ahmad, N., Haque, S., & Ibahrine, M. (2023). The news ecosystem in the age of AI: Evidence from the UAE. Journal of Broadcasting & Electronic Media, 67(3), 323–352. https://doi.org/10.1080/08838151.2023.2173197
- Ahmed, S. (2023). Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism. New Media & Society, 25(5), 1108–1129. https://doi.org/10.1177/14614448211019198
- Alboqami, H. (2023). Trust me, I’m an influencer! Causal recipes for customer trust in artificial intelligence influencers in the retail industry. Journal of Retailing and Consumer Services, 72, Article 103242, 1–12. https://doi.org/10.1016/j.jretconser.2022.103242
- Arango, L., Singaraju, S. P., & Niininen, O. (2023). Consumer responses to AI-generated charitable giving ads. Journal of Advertising, 52(4), 486–503. https://doi.org/10.1080/00913367.2023.2183285
- Arsenyan, J., & Mirowska, A. (2021). Almost human? A comparative case study on the social media presence of virtual influencers. International Journal of Human-Computer Studies, 155, Article 102694, 1–16. https://doi.org/10.1016/j.ijhcs.2021.102694
(Note: Articles are cited as they appeared at the time of the analysis. Some have since been assigned to print issues, so publication years and pagination may differ.)
- Campbell, C., Plangger, K., Sands, S., Kietzmann, J., & Bates, K. (2022). How deepfakes and artificial intelligence could reshape the advertising industry. Journal of Advertising Research, 62(4), 241–251. https://doi.org/10.2501/JAR-2022-017
- Carlson, M. (2015). The robotic reporter: Automated journalism and the redefinition of labor, compositional forms, and journalistic authority. Digital Journalism, 3(3), 416–431. https://doi.org/10.1080/21670811.2014.976412
- Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To be or not to be … human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005. https://doi.org/10.1080/07421222.2022.2127441
- Cheng, Y., & Jiang, H. (2020). How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use. Journal of Broadcasting & Electronic Media, 64(4), 592–614. https://doi.org/10.1080/08838151.2020.1834296
- Clerwall, C. (2014). Enter the robot journalist: Users’ perceptions of automated content. Journalism Practice, 8(5), 519–531. https://doi.org/10.1080/17512786.2014.883116
- Cloudy, J., Banks, J., & Bowman, N. D. (2021). The str(AI)ght scoop: Artificial intelligence cues reduce perceptions of hostile media bias. Digital Journalism, 1–20. https://doi.org/10.1080/21670811.2021.1969974
- Cover, R. (2022). Deepfake culture: The emergence of audio-video deception as an object of social anxiety and regulation. Continuum: Journal of Media & Cultural Studies, 36(4), 609–621. https://doi.org/10.1080/10304312.2022.2084039
- Danzon-Chambaud, S., & Cornia, A. (2023). Changing or reinforcing the “rules of the game”: A field theory perspective on the impacts of automated journalism on media practitioners. Journalism Practice, 17(2), 174–188. https://doi.org/10.1080/17512786.2021.1919179
- Diakopoulos, N., & Johnson, D. (2021). Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society, 23(7), 2072–2098. https://doi.org/10.1177/1461444820925811
- Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
- Dörr, K. N., & Hollnbuchner, K. (2017). Ethical challenges of algorithmic journalism. Digital Journalism, 5(4), 404–419. https://doi.org/10.1080/21670811.2016.1167612
- Edwards, C., Edwards, A., Spence, P. R., & Shelton, A. K. (2014). Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior, 33, 372–376. https://doi.org/10.1016/j.chb.2013.08.013
- Flavián, C., Akdim, K., & Casaló, L. V. (2023). Effects of voice assistant recommendations on consumer behavior. Psychology & Marketing, 40(2), 328–346. https://doi.org/10.1002/mar.21765
- Franke, C., Groeppel-Klein, A., & Müller, K. (2023). Consumers’ responses to virtual influencers as advertising endorsers: Novel and effective or uncanny and deceiving? Journal of Advertising, 52(4), 523–539. https://doi.org/10.1080/00913367.2022.2154721
- Garvey, A. M., Kim, T. W., & Duhachek, A. (2023). Bad news? Send an AI. Good news? Send a human. Journal of Marketing, 87(1), 10–25. https://doi.org/10.1177/00222429211066972
- Graefe, A., Haim, M., Haarmann, B., & Brosius, H. B. (2018). Readers’ perception of computer-generated news: Credibility, expertise, and readability. Journalism, 19(5), 595–610. https://doi.org/10.1177/1464884916641269
- Haim, M., & Graefe, A. (2017). Automated news. Better than expected? Digital Journalism, 5(8), 1044–1059. https://doi.org/10.1080/21670811.2017.1345643
- Ham, J., Li, S., Shah, P., & Eastin, M. S. (2023). The “mixed” reality of virtual brand endorsers: Understanding the effect of brand engagement and social cues on technological perceptions and advertising effectiveness. Journal of Interactive Advertising, 23(2), 98–113. https://doi.org/10.1080/15252019.2023.2185557
- Hameleers, M., van der Meer, T., & Dobber, T. (2022). You won’t believe what they just said! The effects of political deepfakes embedded as vox populi on social media. Social Media + Society, 8(3), 1–12. https://doi.org/10.1177/20563051221116346
- Hepp, A. (2020). Artificial companions, social bots and work bots: Communicative robots as research objects of media and communication studies. Media Culture & Society, 42(7–8), 1410–1426. https://doi.org/10.1177/0163443720916412
- Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. https://doi.org/10.1093/joc/jqy026
- Huang, R., Kim, M., & Lennon, S. (2022). Trust as a second-order construct: Investigating the relationship between consumers and virtual agents. Telematics and Informatics, 70, Article 101811, 1–12. https://doi.org/10.1016/j.tele.2022.101811
- Illia, L., Colleoni, E., & Zyglidopoulos, S. (2023). Ethical implications of text generation in the age of artificial intelligence. Business Ethics, the Environment & Responsibility, 32(1), 201–210. https://doi.org/10.1111/beer.12479
- Jamil, S. (2021). Automated journalism and the freedom of media: Understanding legal and ethical implications in competitive authoritarian regime. Journalism Practice, 1–24. https://doi.org/10.1080/17512786.2021.1981148
- Jang, W., Chun, J. W., Kim, S., & Kang, Y. W. (2023). The effects of anthropomorphism on how people evaluate algorithm-written news. Digital Journalism, 11(1), 103–124. https://doi.org/10.1080/21670811.2021.1976064
- Jia, C. (2020). Chinese automated journalism: A comparison between expectations and perceived quality. International Journal of Communication, 14, 2611–2632. https://ijoc.org/index.php/ijoc/article/view/13334/3083
- Jia, C., & Johnson, T. J. (2021). Source credibility matters: Does automated journalism inspire selective exposure? International Journal of Communication, 15, 3760–3781. https://ijoc.org/index.php/ijoc/article/view/16546
- Jung, J., Song, H., Kim, Y., Im, H., & Oh, S. (2017). Intrusion of software robots into journalism: The public’s and journalists’ perceptions of news written by algorithms and human journalists. Computers in Human Behavior, 71, 291–298. https://doi.org/10.1016/j.chb.2017.02.022
- Jungherr, A., & Schroeder, R. (2023). Artificial intelligence and the public arena. Communication Theory, 33(2–3), 164–173. https://doi.org/10.1093/ct/qtad006
- Keller, T. R., & Klinger, U. (2019). Social bots in election campaigns: Theoretical, empirical, and methodological implications. Political Communication, 36(1), 171–189. https://doi.org/10.1080/10584609.2018.1526238
- Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104–117. https://doi.org/10.1017/XPS.2020.37
- Lee, Y. A., Huang, K. T., Blom, R., Schriner, R., & Ciccarelli, C. A. (2021). To believe or not to believe: Framing analysis of content and audience response on top 10 deepfake videos on YouTube. Cyberpsychology, Behavior and Social Networking, 24(3), 1–6. https://doi.org/10.1089/cyber.2020.0176
- Lermann Henestrosa, A., Greving, H., & Kimmerle, J. (2023). Automated journalism: The effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior, 138, Article 107445, 1–11. https://doi.org/10.1016/j.chb.2022.107445
- Li, J., Kizilcec, R., Bailenson, J., & Ju, W. (2016). Social robots and virtual agents as lecturers for video instruction. Computers in Human Behavior, 55, 1222–1230. https://doi.org/10.1016/j.chb.2015.04.005
- Liu, B., & Wei, L. (2019). Machine authorship in situ. Digital Journalism, 7(5), 635–657. https://doi.org/10.1080/21670811.2018.1510740
- Lou, C., Kiew, S., Chen, T., Lee, T., Ong, J., & Phua, Z. (2022). Authentically fake? How consumers respond to the influence of virtual influencers. Journal of Advertising, 1–18. https://doi.org/10.1080/00913367.2022.2149641
- Magno, F., & Dossena, G. (2022). The effects of chatbots’ attributes on customer relationships with brands: PLS-SEM and importance–performance map analysis. The TQM Journal, 1–14. https://doi.org/10.1108/TQM-02-2022-0080
- Mirowska, A., & Arsenyan, J. (2023). Sweet escape: The role of empathy in social media engagement with human versus virtual influencers. International Journal of Human-Computer Studies, 174, Article 103008, 1–18. https://doi.org/10.1016/j.ijhcs.2023.103008
- Mohd Rahim, N. I., Iahad, N. A., Yusof, A. F., & Al-Sharafi, M. A. (2022). AI-based chatbots adoption model for higher-education institutions: A hybrid PLS-SEM neural network modelling approach. Sustainability, 14(19), Article 12726, 1–22. https://doi.org/10.3390/su141912726
- Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Authorship, bylines and full disclosure in automated journalism. Digital Journalism, 5(7), 829–849. https://doi.org/10.1080/21670811.2016.1209083
- Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72, 432–440. https://doi.org/10.1016/j.chb.2017.02.067
- Neo, M. (2022). The Merlin project: Malaysian students’ acceptance of an AI chatbot in their learning process. Turkish Online Journal of Distance Education, 23(3), 31–48. https://doi.org/10.17718/tojde.1137122
- Noain-Sánchez, A. (2022). Addressing the impact of artificial intelligence on journalism: The perception of experts, journalists and academics. Communication & Society, 35(3), 105–121. https://doi.org/10.15581/003.35.3.105-121
- Rajaobelina, L., Prom Tep, S., Arcand, M., & Ricard, L. (2021). Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychology & Marketing, 38(12), 2339–2356. https://doi.org/10.1002/mar.21548
- Rese, A., Ganster, L., & Baier, D. (2020). Chatbots in retailers’ customer communication: How to measure their acceptance? Journal of Retailing and Consumer Services, 56, Article 102176, 1–14. https://doi.org/10.1016/j.jretconser.2020.102176
- Sánchez Gonzales, H. M., & Sánchez González, M. (2020). Conversational bots used in political news from the point of view of the user’s experience: Politibot. Communication & Society, 33(4), 155–168. https://doi.org/10.15581/003.33.4.155-168
- Sands, S., Campbell, C. L., Plangger, K., & Ferraro, C. (2022). Unreal influence: Leveraging AI in influencer marketing. European Journal of Marketing, 56(6), 1721–1747. https://doi.org/10.1108/EJM-12-2019-0949
- Sands, S., Campbell, C., Plangger, K., & Pitt, L. (2022). Buffer bots: The role of virtual service agents in mitigating negative effects when service fails. Psychology & Marketing, 39(11), 2039–2054. https://doi.org/10.1002/mar.21723
- Sands, S., Ferraro, C., Demsar, V., & Chandler, G. (2022). False idols: Unpacking the opportunities and challenges of falsity in the context of virtual influencers. Business Horizons, 65(6), 777–788. https://doi.org/10.1016/j.bushor.2022.08.002
- Schapals, A. K., & Porlezza, C. (2020). Assistance or resistance? Evaluating the intersection of automated journalism and journalistic role conceptions. Media and Communication, 8(3), 16–26. https://doi.org/10.17645/mac.v8i3.3054
- Schultz, B., & Sheffer, M. L. (2017). Newspaper trust and credibility in the age of robot reporters. Journal of Applied Journalism & Media Studies, 6(2), 339–355. https://doi.org/10.1386/ajms.6.2.339_1
- Söderlund, M., Oikarinen, E. L., & Tan, T. M. (2022). The hard-working virtual agent in the service encounter boosts customer satisfaction. The International Review of Retail, Distribution and Consumer Research, 32(4), 388–404. https://doi.org/10.1080/09593969.2022.2042715
- Tandoc, E. C., Jr., Yao, L. J., & Wu, S. (2020). Man vs. machine? The impact of algorithm authorship on news credibility. Digital Journalism, 8(4), 548–562. https://doi.org/10.1080/21670811.2020.1762102
- Thäsler-Kordonouri, S., & Barling, K. (2023). Automated journalism in UK local newsrooms: Attitudes, integration, impact. Journalism Practice. Advance online publication. https://doi.org/10.1080/17512786.2023.2184413
- Thomas, V. L., & Fowler, K. (2021). Close encounters of the AI kind: Use of AI influencers as brand endorsers. Journal of Advertising, 50(1), 11–25. https://doi.org/10.1080/00913367.2020.1810595
- Tsai, W. H., Liu, Y., & Chuan, C. H. (2021). How chatbots’ social presence communication enhances consumer engagement: The mediating role of parasocial interaction and dialogue. Journal of Research in Interactive Marketing, 15(3), 460–482. https://doi.org/10.1108/JRIM-12-2019-0200
- Túñez-López, J. M., Fieiras Ceide, C., & Vaz-Álvarez, M. (2021). Impact of artificial intelligence on journalism: Transformations in the company, products, contents and professional profile. Communication & Society, 34(1), 177–193. https://doi.org/10.15581/003.34.1.177-193
- van Dalen, A. (2012). The algorithms behind the headlines. How machine-written news redefines the core skills of human journalists. Journalism Practice, 6(5–6), 648–658. https://doi.org/10.1080/17512786.2012.667268
- van Pinxteren, M., Pluymaekers, M., Lemmink, J., & Krispin, A. (2023). Effects of communication style on relational outcomes in interactions between customers and embodied conversational agents. Psychology & Marketing, 40(5), 938–953. https://doi.org/10.1002/mar.21792
- Waddell, T. F. (2018). A robot wrote this? How perceived machine authorship affects news credibility. Digital Journalism, 6(2), 236–255. https://doi.org/10.1080/21670811.2017.1384319
- Waddell, T. F. (2019a). Attribution practices for the man-machine marriage: How perceived human intervention, automation metaphors, and byline location affect the perceived bias and credibility of purportedly automated content. Journalism Practice, 13(10), 1255–1272. https://doi.org/10.1080/17512786.2019.1585197
- Waddell, T. F. (2019b). Can an algorithm reduce the perceived bias of news? Testing the effect of machine attribution on news readers’ evaluations of bias, anthropomorphism, and credibility. Journalism & Mass Communication Quarterly, 96(1), 82–100. https://doi.org/10.1177/1077699018815891
- Wang, J., Li, S., Xue, K., & Chen, L. (2022). What is the competence boundary of algorithms? An institutional perspective on AI-based video generation. Displays, 73, Article 102240. https://doi.org/10.1016/j.displa.2022.102240
- Wang, J., & Tanes-Ehle, Z. (2023). Examining the effects of conversational chatbots on changing conspiracy beliefs about science: The paradox of interactivity. Journal of Broadcasting & Electronic Media, 67(1), 68–89. https://doi.org/10.1080/08838151.2022.2153842
- Whittaker, L., Kietzmann, J., Letheren, K., Mulcahy, R., & Russell-Bennett, R. (2023). Brace yourself! Why managers should adopt a synthetic media incident response playbook in an age of falsity and synthetic media. Business Horizons, 66(2), 277–290. https://doi.org/10.1016/j.bushor.2022.07.004
- Wiesenberg, M., & Tench, R. (2020). Deep strategic mediatization: Organizational leaders’ knowledge and usage of social bots in an era of disinformation. International Journal of Information Management, 51, Article 102042. https://doi.org/10.1016/j.ijinfomgt.2019.102042
- Wölker, A., & Powell, T. E. (2021). Algorithms in the newsroom? News readers’ perceived credibility and selection of automated journalism. Journalism, 22(1), 86–103. https://doi.org/10.1177/1464884918757072
- Wu, Y. (2020). Is automated journalistic writing less biased? An experimental test of auto-written and human-written news stories. Journalism Practice, 14(8), 1008–1028. https://doi.org/10.1080/17512786.2019.1682940
- Wu, Y., Mou, Y., Li, Z., & Xu, K. (2020). Investigating American and Chinese subjects’ explicit and implicit perceptions of AI-generated artistic work. Computers in Human Behavior, 104, Article 106186. https://doi.org/10.1016/j.chb.2019.106186
- Yang, J., Chuenterawong, P., Lee, H., & Chock, T. M. (2022). Anthropomorphism in CSR endorsement: A comparative study on human- vs. cartoon-like virtual influencers’ climate change messaging. Journal of Promotion Management. Advance online publication. https://doi.org/10.2139/ssrn.4145550
- Zheng, Y., Zhong, B., & Yang, F. (2018). When algorithms meet journalism: The user perception to automated news in a cross-cultural context. Computers in Human Behavior, 86, 266–275. https://doi.org/10.1016/j.chb.2018.04.046