Research Article

Legal implications of using generative AI in the media

Published online: 01 Jul 2024

ABSTRACT

Generative AI is increasingly used across many sectors, including the professional media industry. This article provides an overview of the main ethical and legal challenges regarding content generated by, or with the help of, generative AI. Among the ethical challenges, special attention is devoted to the development of journalistic standards. The legal discussion highlights questions of civil and criminal responsibility for content, with a primary focus on the attribution of liability. It scrutinises the roles of deployers, assessing their contributions to the final product, and suggests that all human participants’ contributions should be acknowledged accordingly, potentially necessitating novel categories such as hybrid authorship and shared responsibility.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 A Hepp and others, ‘ChatGPT, LaMDA, and the Hype Around Communicative AI: The Automation of Communication as a Field of Research in Media and Communication Studies’ (2023) 6(1) Human-Machine Communication 4.

2 T Conraths, ‘Wege zum Roboterjournalismus? – Wie verändert KI die journalistische Arbeit?’ (2023) ZUM 574; Sachita Nishal and Nicholas Diakopoulos, ‘Envisioning the Applications and Implications of Generative AI for News Media’ (CHI ’23 Generative AI and HCI Workshop, 23–28 April 2023, Hamburg, Germany), citing Nicholas Diakopoulos, Automating the News: How Algorithms Are Rewriting the Media (Harvard University Press, 2019).

3 Barbara Gruber, ‘Facts, Fakes and Figures: How AI is Influencing Journalism’ Kulturtechniken 4.0 <https://www.goethe.de/prj/k40/en/lan/aij.html> accessed 10 April 2024: ‘AP’s newsroom AI technology automatically generates roughly 40,000 stories a year’.

4 Valeria Resendez and others, ‘Hey Google, What is in the News? The Influence of Conversational Agents on Issue Salience’ (2023) Digital Journalism 1, doi:10.1080/21670811.2023.2234953.

5 The AI Act is a landmark framework regulation adopted by the European Union in March 2024. At the time of writing, the final text has not yet been officially published and carries no official number; publication in the Official Journal is expected in May or June 2024. This article refers to the final text, accepted as a political compromise in December 2023, of which no officially accessible version yet exists. The original proposal can be found under the title ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’, COM/2021/206 final. Detailed information is available at <https://artificialintelligenceact.eu/documents/> accessed 15 April 2024. To find the official text later, search for the title of the proposed act at <https://eur-lex.europa.eu/homepage.html> or enter the title together with ‘Eur-Lex’ in a search engine.

6 BERT (Bidirectional Encoder Representations from Transformers) is an open-source language model.

7 DALL-E is an image generative AI model by OpenAI.

8 ChatGPT was the second chatbot that passed the Turing test <https://www.mlyearning.org/chatgpt-passes-turing-test/> accessed 4 March 2024. See more on the Turing test: <https://www.techtarget.com/searchenterpriseai/definition/Turing-test> accessed 4 March 2024.

9 This event has been reported so widely that a reference appears unnecessary. Google it, or try it yourself: have a chat with ChatGPT at <chat.openai.com> accessed 4 March 2024.

10 <chat.openai.com> accessed 4 March 2024.

11 For example, see Sunday Times v UK (App no 6538/74) (26/04/1980); Observer and Guardian v UK (App no 13585/88) (26/11/1991), etc.

12 A. Sajó, Freedom of Expression (Institute of Public Affairs, 2004).

13 E Barendt, Freedom of Speech (OUP, 2005); A Meiklejohn, ‘Everything Worth Saying Should Be Said; An Educator Says We Talk of Free Speech, but Hedge that Freedom with Too Many Reservations’ The New York Times (18 July 1948) <https://www.nytimes.com/1948/07/18/archives/everything-worth-saying-should-be-said-an-educator-says-we-talk-of.html> accessed 4 March 2024; also: R Post, ‘Participatory Democracy and Free Speech’ (2011) 97 Virginia Law Review 485.

14 Article 5 German Basic Law, Section 1 Sentence 2; BVerfGE 62, page 230, 243.

15 See: F Fechner, Medienrecht (Mohr Siebeck, 2016) ch 8 para 29.

16 Sections 2 and 3 Digital Services Act. See also M Schaper and O Wolters, ‘Plattformen, Sperrung von Inhalten’ in M Ebers (ed), StichwortKommentar Legal Tech (Nomos, 2023) 927; see also Judit Bayer, ‘Procedural Rights as Safeguard for Human Rights in Platform Regulation’ (2022) Policy & Internet 1 <https://doi.org/10.1002/poi3.298> accessed 4 March 2024.

17 TW Dornis, ‘Die “Schöpfung ohne Schöpfer” – Klarstellungen zur “KI-Autonomie” im Urheber- und Patentrecht’ (2021) GRUR 784.

18 Conraths (n 2) 574; Martin Ebers and others, Künstliche Intelligenz und Robotik (Beck, 2020), § 29, Rn. 26 ff.

19 Article 6 AI Act, see also: Jürgen Kühling, ‘Der Einsatz von Künstlicher Intelligenz durch Unternehmen und Aufsichtsbehörden bei der Bekämpfung von Hassrede’ (2023) ZUM 566.

20 Helberger and Diakopoulos suggested giving generative AI and general-purpose AI a category of their own; Natali Helberger and Nicholas Diakopoulos, ‘ChatGPT and the AI Act’ (2023) 12(1) Internet Policy Review 6 <https://doi.org/10.14763/2023.1.1682>.

21 Stanford University CRFM – HAI, ‘On the Opportunities and Risks of Foundation Models’ (2021) <https://fsi.stanford.edu/publication/opportunities-and-risks-foundation-models> cited by Luca Bertuzzi, ‘AI Act: MEPs Close in on Rules for General Purpose AI, Foundation Models’ Euractiv (20 April 2023) <https://www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-close-in-on-rules-for-general-purpose-ai-foundation-models/> accessed 10 April 2024.

22 AI Act EP Mandate for the Trilogue. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain union legislative acts. 2021/0106(COD) DRAFT 20-06-2023 at 16h53. <https://www.kaizenner.eu> accessed 3 March 2024.

23 Article 3 (44b) AI Act.

24 Recital 60(n) AI Act, see also: Philipp Hacker, ‘What’s Missing from the EU AI Act: Addressing the Four Key Challenges of Large Language Models’ Verfassungsblog (13 December 2023) <https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/> accessed 4 March 2024, doi: 10.59704/3f4921d4a3fbeeee.

25 Article 52a AI Act.

26 Article 52d AI Act.

27 Article 52 AI Act.

28 Gruber (n 3).

29 European Federation of Journalists, ‘AI Act: Journalists and Creative Workers Call for a Human-Centric Approach to Regulating AI’ (26 September 2023) <https://europeanjournalists.org/blog/2023/09/26/ai-act-journalists-and-creative-workers-call-for-a-human-centric-approach-to-regulating-ai/> accessed 4 March 2024; Partnership on AI, ‘PAI’s Responsible Practices for Synthetic Media’ <https://syntheticmedia.partnershiponai.org/#read_the_framework> accessed 4 March 2024; Digwatch, ‘Ethical Challenges of Integrating AI in Media: Trust, Technology, and Rights’; Council of Europe (CDMSI), ‘Guidelines on the Responsible Implementation of Artificial Intelligence Systems in Journalism’ (30 November 2023) <https://rm.coe.int/cdmsi-2023-014-guidelines-on-the-responsible-implementation-of-artific/1680adb4c6> accessed 3 March 2024.

30 Article 52 AI Act.

31 The final AI Act uses the term ‘deployer’ for what was earlier called the ‘user’: ‘[D]eployer means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity’, Article 3(4) AI Act. The term ‘user’ is not retained in the Act.

32 Ch Johannisbauer, ‘ChatGPT im Rechtsbereich’ (2023) MMR-Aktuell 455537.

33 Helberger and Diakopoulos (n 20) 4.

34 AL Opdahl and others, ‘Trustworthy Journalism through AI’ (2023) Data & Knowledge Engineering 146.

35 R Bommasani and others, ‘On the Opportunities and Risks of Foundation Models’ (2021) arXiv preprint arXiv:2108.07258.

36 K Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).

37 CAHAI Feasibility Study, CAHAI(2020)23, point 99 <https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da> accessed 3 March 2024.

38 P Hacker and others, ‘Regulating ChatGPT and Other Large Generative AI Models’ (Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, June 2023, 1112–23).

39 Recital 70 AI Act.

40 Chiara Longoni and others, ‘News from Generative Artificial Intelligence Is Believed Less’ (2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ‘22), June 21–24, 2022, Seoul, Republic of Korea. ACM, New York, NY, USA, 10 pages). https://doi.org/10.1145/3531146.3533077.

41 For example, WIRED disclosed a detailed AI policy: <https://www.wired.com/about/generative-ai-policy/> accessed 3 March 2024.

43 The European Federation of Journalists has joined the International Committee for an AI Charter in the Media: <https://europeanjournalists.org/blog/2023/09/11/efj-joined-international-committee-for-an-ai-charter-in-media/> accessed 3 March 2024.

44 See n 39.

45 F Marconi, Newsmakers: Artificial Intelligence and the Future of Journalism (Columbia University Press, 2020).

46 AP released guidelines advising journalists to treat GenAI-generated content as unvetted source material rather than publishing it uncritically. <https://www.theverge.com/2023/8/16/23834586/associated-press-ai-guidelines-journalists-openai> accessed 4 March 2024.

47 Jeff Israely, ‘How Will Journalists Use ChatGPT? Clues from a Newsroom that’s been Using AI for Years’ Niemanlab (2023) <https://www.niemanlab.org/2023/03/how-will-journalists-use-chatgpt-clues-from-a-newsroom-thats-been-using-ai-for-years/> accessed 3 March 2024.

48 §6 LPressG NRW (State Press Law), or §3 Abs 2 BlnPrG (Berlin Press Law): ‘Die Presse hat alle Nachrichten vor ihrer Verbreitung mit der nach den Umständen gebotenen Sorgfalt auf Inhalt, Wahrheit und Herkunft zu prüfen.’ (‘Before dissemination, the press must examine all news for content, truth, and origin with the diligence required by the circumstances.’) See also: HC Gräfe and J Kahl, ‘KI-Systeme zur automatischen Texterstellung’ (2020) MMR 121.

49 Conraths (n 2) 574.

50 Its parent organisation, the IFJ, adopted the IFJ Declaration of Principles on the Conduct of Journalists in 1954. See the principles here: <https://www.ifj.org/who/rules-and-policy/global-charter-of-ethics-for-journalists> accessed 3 March 2024.

51 <https://ethicaljournalismnetwork.org/who-we-are#Mission> accessed 3 March 2024. ‘Our five core principles of ethical journalism’.

52 D Bomhard and J Siglmüller, ‘Europäische KI-Haftungsrichtlinie’ (2022) RDi 506.

53 RSF, ‘Paris Charter on AI and Journalism’ (10 November 2023) <https://rsf.org/sites/default/files/medias/file/2023/11/Paris%20Charter%20on%20AI%20and%20Journalism.pdf> accessed 3 March 2024. EFJ and 16 partners support the Paris Charter on AI and Journalism: <https://europeanjournalists.org/blog/2023/11/14/efj-and-16-partners-support-paris-charter-on-ai-and-journalism/> accessed 3 March 2024.

54 K Wach and others, ‘The Dark Side of Generative Artificial Intelligence: A Critical Analysis of Controversies and Risks of ChatGPT’ (2023) 11(2) Entrepreneurial Business and Economics Review 7.

55 Art. 52 (3) AI Act.

56 Instances of ChatGPT ‘hallucinations’ described and categorised in: H Alkaissi and SI McFarlane, ‘Artificial Hallucinations in ChatGPT: Implications in Scientific Writing’ (2023) 15(2) Cureus.

57 The errors are explained here: <https://futurism.com/cnet-ai-errors>; James Vincent, ‘Sports Illustrated’s Publisher is Using AI to Generate Fitness Advice’ The Verge (2023) <https://www.theverge.com/2023/2/3/23584305/ai-language-tools-media-use-arena-group-sports-illustrated-mens-journal> accessed 3 March 2024; Paul Farhi, ‘A News Site Used AI to Write Articles. It was a Journalistic Disaster’ The Washington Post <https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/> all links accessed 3 March 2024.

58 C Wardle and H Derakhshan, ‘Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking’ (2017) 27 Council of Europe 1.

59 But see: FM Simon, S Altay and H Mercier, ‘Misinformation Reloaded? Fears about the Impact of Generative AI on Misinformation are Overblown’ (2023) 4(5) Harvard Kennedy School Misinformation Review. The authors argue that the consumption of misinformation is limited mostly by demand, not by supply, and that concerns about GenAI amplifying the impact of dis- and misinformation are therefore overblown.

60 In contrast to Opdahl et al., I regard these two as closely correlated. AL Opdahl and others, ‘Trustworthy Journalism Through AI’ (2023) 146 Data & Knowledge Engineering 102182.

61 Article 52d AI Act.

62 Council of Europe (CDMSI), ‘Guidelines on the Responsible Implementation of Artificial Intelligence Systems in Journalism’ (30 November 2023) <https://rm.coe.int/cdmsi-2023-014-guidelines-on-the-responsible-implementation-of-artific/1680adb4c6> accessed 3 March 2024.

63 Michael Kalbfus and Andreas Schöberle, ‘Arbeitsrechtliche Fragen beim Einsatz von KI am Beispiel von ChatGPT’ (2023) ArbRAktuell 251.

64 See specifically the guidelines of AP and Wired (nn 46 and 41).

65 About this in detail, see: Graziana Kastl-Riemann, ‘Algorithmen und Künstliche Intelligenz im Äußerungsrecht’ (2023) ZUM 578.

66 JE Schirmer, ‘Artificial Intelligence and Legal Personality: Introducing “Teilrechtsfähigkeit”: A Partial Legal Status Made in Germany’ (2020) Regulating Artificial Intelligence 123. See also: P Čerka, J Grigienė, and G Sirbikytė, ‘Is It Possible to Grant Legal Personality to Artificial Intelligence Software Systems?’ (2017) 33(5) Computer Law & Security Review 685; but against it: S Chesterman, ‘Artificial Intelligence and the Limits of Legal Personality’ (2020) 69(4) International & Comparative Law Quarterly 819.

67 Ernest Kenneth-Southworth and Yahong Li, ‘AI Inventors: Deference for Legal Personality Without Respect for Innovation?’ (2023) 18(1) Journal of Intellectual Property Law & Practice 58 <https://doi.org/10.1093/jiplp/jpac111>.

68 Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society OJ L 167, 22.6.2001, 10–19.

69 Legal liability presupposes moral liability, which in turn presupposes consciousness. See also: Chesterman (n 66).

70 Even if such reasoning differed from technological ‘explainability’, it would need to provide a logical chain of arguments that allows and explains the conclusion.

71 Eliza Mik, ‘AI as a Legal Person?’, in Lee Jyh-An, Hilty Reto, and Liu Kung-Chung (eds), Artificial Intelligence and Intellectual Property (online edn, Oxford Academic, 22 Apr. 2021) <https://doi.org/10.1093/oso/9780198870944.003.0020> accessed 27 October 2023.

72 Resolution of the European Parliament of 20 October 2020, 2020/2014(INL).

73 W Davis, ‘AI Companies have All Kinds of Arguments Against Paying for Copyrighted Content’ The Verge (2023) <https://www.theverge.com/2023/11/4/23946353/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai> accessed 3 March 2024.

74 Arts 3 and 4 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC OJ L 130, 17.5.2019 (DSM Directive) 92–125. See: A Strowel, ‘ChatGPT and Generative AI Tools: Theft of Intellectual Labor?’ (2023) IIC 491.

75 See among others: J Siglmüller and D Gassner, ‘Softwareentwicklung durch Open-Source-trainierte KI – Schutz und Haftung’ (2023) RDi 124.

76 According to Recital 14 of the DSM Directive, ‘Lawful access should be understood as covering access to content based on an open access policy or through contractual arrangements between rightholders and research organisations or cultural heritage institutions, such as subscriptions, or through other lawful means. For instance, in the case of subscriptions taken by research organisations or cultural heritage institutions, the persons attached thereto and covered by those subscriptions should be deemed to have lawful access. Lawful access should also cover access to content that is freely available online.’ Article 3(1) DSM Directive and § 44b(2) of the German Copyright Act (UrhG).

77 The US Computer Fraud and Abuse Act criminalises accessing a server without authorization, which might eventually lead to a court precedent that bars web scraping or defines its conditions; see Van Buren v United States, 141 S Ct 1648 (2021). See also: Bommasani (n 35).

78 J Quang, ‘Does Training AI Violate Copyright Law?’ (2021) 36 Berkeley Tech. LJ 1407.

79 Sara Fischer, ‘Exclusive: AP Strikes News-Sharing and Tech Deal with OpenAI’ Axios (13 July 2023) <https://www.axios.com/2023/07/13/ap-openai-news-sharing-tech-deal> accessed 3 March 2024.

80 Max Tani, ‘New York Times Drops Out of AI Coalition’ Semafor (2023) <https://www.semafor.com/article/08/13/2023/new-york-times-drops-out-of-ai-coalition> accessed 3 March 2024.

81 Bobby Allyn, ‘“New York Times” Considers Legal Action Against OpenAI as Copyright Tensions Swirl’ NPR (2023) <https://www.npr.org/2023/08/16/1194202562/new-york-times-considers-legal-action-against-openai-as-copyright-tensions-swirl> accessed 3 March 2024.

82 See: J Vincent, ‘Getty Images Sues AI Art Generator Stable Diffusion in the US for Copyright Infringement’ The Verge (6 February 2023) <https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion> accessed 3 March 2024.

83 See: M O’Brien, ‘Photo Giant Getty Took a Leading AI Image-Maker to Court. Now It’s also Embracing the Technology’ (25 September 2023) <https://apnews.com/article/getty-images-artificial-intelligence-ai-image-generator-stable-diffusion-a98eeaaeb2bf13c5e8874ceb6a8ce196> accessed 3 March 2024.

84 Ashley Belanger, ‘Artists May “Poison” AI Models Before Copyright Office Can Issue Guidance’ Ars Technica (2023) <https://arstechnica.com/tech-policy/2023/11/artists-may-poison-ai-models-before-copyright-office-can-issue-guidance> accessed 3 March 2024.

85 Best-selling writers have therefore sued OpenAI. See: NYT, ‘Franzen, Grisham and Other Prominent Authors Sue OpenAI’ (2023) <https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-copyright.html> accessed 3 March 2024.

86 Strowel (n 74).

87 Sarah Andersen et al v Stability AI Ltd et al, Case No 3:23-cv-00201-WHO.

88 Angela Watercutter, ‘Hollywood Actors Strike Ends With a Deal That Will Impact AI and Streaming for Decades’ Wired (8 November 2023) <https://www.wired.com/story/hollywood-actors-strike-ends-ai-streaming/> accessed 3 March 2024.

89 See: Chang v. Virgin Mobile USA, L.L.C., 2009 WL 111570, 2009 U.S. Dist. LEXIS 3051 (N.D. Tex. 2009).

90 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119, 4.5.2016, 1–88 (General Data Protection Regulation, GDPR).

91 See for example: M von Welser, ‘ChatGPT und Urheberrecht’ (2023) GRUR-Prax 57; Thomas Hoeren, ‘“Geistiges Eigentum” ist tot – lang lebe ChatGPT’ (2023) MMR 81; Gräfe and Kahl (n 48).

92 Stated by AH Woerlein, ‘ChatGPT – “Fortschritt” durch Künstliche Intelligenz auf Kosten des Datenschutz- und Urheberrechts’ (2023) ZD-Aktuell 01205.

93 Felix Krone, ‘Urheberrechtlicher Schutz von ChatGPT-Texten?’ (2023) RDi 117, 120.

94 U Ehinger and A Grünberg, ‘Der Schutz von Erzeugnissen künstlicher Kreativität im Urheberrecht’ (2019) K&R 232 (233).

95 JC Ginsburg and LA Budiardjo, ‘Authors and Machines’ (2019) 34 Berkeley Tech LJ 343. Ginsburg and Budiardjo consider only the creator of the programme and the user (the creator of the final content). With the new general-purpose models, training and fine-tuning can also add relevant value to the tool.

96 Woerlein (n 92).

97 ibid.

98 S Séjourné, ‘Draft Report on Intellectual Property Rights for the Development of Artificial Intelligence Technologies’ (European Parliament, Committee on Legal Affairs, 2020/2015(INI), 24 April 2020) paras 9–10.

99 Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (World Intellectual Property Organisation, WIPO/IP/AI/2/GE/20/1 REV, 21 May 2020) para 23.

100 But see a contrary opinion by: von Welser (n 91).

101 AI Act, AI Liability Directive. See also: Bomhard and Siglmüller (n 52).

102 D Elgesem, ‘The AI Act and the Risks Posed by Generative AI Models’ (CEUR Workshop Proceedings 2023 Symposium of the Norwegian AI Society, June 14–15, 2023, Bergen, Norway (Vol. 3431)) 3.

103 Bommasani (n 35).

104 Helberger and Diakopoulos (n 20) 3.

105 ibid 4.

106 Fechner (n 15) ch 4 para 112.

107 Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) COM/2022/496 final.

108 Redaktion MMR-Aktuell, ‘FTC: Untersuchung von OpenAI wegen Menschen-Verleumdung durch ChatGPT’ (2023) MMR-Aktuell 458423.

109 B Jones, E Luger and R Jones, ‘Generative AI & Journalism: A Rapid Risk-Based Review’ (2023) <https://www.pure.ed.ac.uk/ws/portalfiles/portal/372212564/GenAI_Journalism_Rapid_Risk_Review_June_2023_BJ_RJ_EL.pdf> accessed 3 March 2024; J Turley, ‘ChatGPT Falsely Accused Me of Sexually Harassing My Students. Can We Really Trust AI?’ USA Today (2023) <https://eu.usatoday.com/story/opinion/columnist/2023/04/03/chatgpt-misinformation-bias-flaws-ai-chatbot/11571830002/> accessed 3 March 2024.

110 Article 52 (3) AI Act.

111 Alexander Heinze, ‘§186 Üble Nachrede’ in Heribert Schumann, Andreas Mosbacher, and Stefan König (eds), MedienStrafRecht (NomosKommentar, 2023).

112 GDPR (n 90).

113 Article 8 European Convention on Human Rights.

114 Judit Bayer and others, ‘Disinformation and Propaganda – Impact on the Functioning of the Rule of Law in the EU and Its Member States’ (2019) Study for the European Parliament Policy Department C: Citizens’ Rights and Constitutional Affairs.

115 P Beuth, ‘Wie sich ChatGPT mit Worten hacken lässt’ Der Spiegel (12 January 2023).

116 Article 16 DSA.

117 Ph Hacker, ‘Die Regulierung von ChatGPT et al. – ein europäisches Trauerspiel’ (2023) GRUR 289.

118 See the deep fake provisions of the AI Act, Article 52(3).

119 A Androniceanu, I Georgescu and OM Sabie, ‘The Impact of Digitalization on Public Administration, Economic Development, and Well-Being in the EU Countries’ (2022) 20(1) Central European Public Administration Review 7.

120 Recital (60v) AI Act.

121 Sachita Nishal and Nicholas Diakopoulos, ‘Envisioning the Applications and Implications of Generative AI for News Media’ (CHI ‘23 Generative AI and HCI Workshop, April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA), 7 pages, page 3.

122 Supervision by the developer can prevent damage where unsafe prompts are recognised remotely, as ChatGPT currently does.

123 Jones (n 109).
