
Towards a Normative Perspective on Journalistic AI: Embracing the Messy Reality of Normative Ideals


Abstract

Few would disagree that AI systems and applications need to be “responsible,” but what does “responsible” mean, and how do we answer that question? Answering it requires a normative perspective on the role of journalistic AI and the values it shall serve. Such a perspective needs to be grounded in a broader normative framework and a thorough understanding of the dynamics and complexities of journalistic AI at the level of people, newsrooms and media markets. This special issue aims to develop such a normative perspective on the use of AI-driven tools in journalism and the role of digital journalism studies in advancing that perspective. The contributions in this special issue combine conceptual, organizational and empirical angles to study the challenges involved in actively using AI to promote editorial values, the powers at play, the role of economic and regulatory conditions, and ways of bridging academic ideals and the messy reality of the real world. This editorial brings the different contributions into conversation, situates them in the broader digital journalism studies scholarship and identifies seven key take-aways.

Introduction

Technology does not overcome us. It is our task as a society, as professional users, academics and developers, to shape technologies as part of the kind of society we wish to live in. In order to be able to do so, however, the development and implementation of digital solutions must be guided by a vision of the values and fundamental freedoms we, as a society, want to see realized. Such a vision is also needed for journalistic Artificial Intelligence (AI). AI-driven tools play an increasingly important role at many levels of the process of making and distributing news: from smart tools that assist journalists in producing their stories to the fully automated production of news stories (robot journalism), and from audience analytics that inform editorial decisions to the AI-driven recommendation of content to users. As such, journalistic AI-driven tools are more than simple tools. They are part of a structural transformation of making news and engaging with the audience.

The integration of AI-driven tools into the journalistic process raises a host of challenging professional, technical and organizational questions. Intense debates about filter bubbles, privacy, shifting power dynamics, gatekeeping, editorial independence and the metrification of journalistic values and fundamental rights also touch upon the legal, ethical, societal and democratic implications of AI in the media. Morozov, well-known for his powerful argument that technology companies with their promise of frictionless problem-solving lure us into an ethos of technological quick-fixes, warned in an interview that the news media, too, “have embraced the digital rhetoric too eagerly, and have not articulated their own value to the public” (Morozov Citation2014). This concern has been echoed repeatedly by scholars criticizing that much of the development of AI in journalism practice so far is incidental and driven by a technocentric “Shiny New Things” syndrome rather than by a strategic vision of how AI can, realistically, contribute to the societal role of journalism (Posetti Citation2018; Kueng Citation2017; Thurman, Lewis, and Kunert Citation2019; Broussard et al. Citation2019). Without a clear vision of where to (not) implement AI-driven solutions, news publishers are easy prey for the sellers of technological fixes that cost a lot but do not make the media better. Finally, a vision of where to go with journalistic AI is not only the secret ingredient of successful innovation (Kueng Citation2017). It also empowers the media and society to make informed decisions about how far to go along with the vision of a handful of large technology corporations that have their own definitions of what good and responsible AI is.

Normativity à la Silicon Valley is driven by corporate and shareholder interests rather than by political theories of democracy or fundamental rights (Webb Citation2019). For the journalistic media, however, the matter is more complicated. In recognition of the societal and democratic role that journalism plays, it enjoys special protection and privileges under the human rights framework. Theories of freedom of expression or free speech have been constitutive for how journalism is organized and embedded into society, the protection journalism enjoys from state interference, and the obligations of states to ensure the conditions journalism needs to function. This special status, however, also comes with duties and responsibilities and a commitment to the democratic and societal role of the media. Put differently, developing a vision of how AI can contribute to the societal role of journalism also requires identifying and solving ethical challenges and potential conflicts with human rights.

This special issue aims to develop a normative perspective on the use of AI-driven tools in journalism and the role of digital journalism studies (Steensen and Westlund Citation2021) in advancing such a perspective. What does it mean to “use AI responsibly” in the media and journalism, and how do we answer that question? We understand journalistic Artificial Intelligence as an umbrella term for a range of technologies, from machine learning to automated decision-making, that are used along the entire production chain. We also acknowledge that, as a notion, AI is helpful and unhelpful at the same time. On the one hand, “AI” as a term helps to signify and draw attention to a particularly impactful stretch in journalism’s digital turn: the next level of sophistication in making sense of enormous amounts of data, powered by machine learning and new forms of automation in the research, production and distribution of content. On the other hand, AI as a notion is also fraught with myths, political connotations and emotional responses that stand in the way of an informed debate on AI, within and outside newsrooms.

This special issue is part of the ERC-funded PersoNews project and concludes five years of research into the societal, ethical and legal implications of using journalistic AI, and news recommendation algorithms in particular. The original project plan was to organize a conference and write a book. Then came the Covid pandemic; the conference turned into an online workshop, and the book into this special issue. During the five years of the PersoNews project, we, the team and conveners of this special issue, have learned greatly from and engaged with the growing digital journalism studies community, making Digital Journalism a natural home for this special issue. We are therefore also immensely grateful for the support of the excellent DJ editorial team and to all the reviewers who invested so much time and effort in helping to get the best out of all the contributions. Finally, a shout-out to all the authors in this special issue, many of whom we have read, admired and followed throughout the project.

The technological change AI brings may be inevitable; how AI reshapes journalism is not. There is a recognition in this special issue that the media is responsible for determining how it uses AI. The individual contributions tease out what that responsibility entails and whether the media is in a position to exercise it. This special issue studies the challenges involved in actively using AI to promote editorial values from different angles: the conceptual angle of how to define, understand and operationalize normative values; the organizational angle of who defines what responsible use of AI in journalism is and what is necessary to be able to do so; the empirical angle of how we can measure AI’s impact on editorial values; as well as the difficulty of bridging academic ideals and the messy reality of the real world.

In the following, we will reflect on these contributions and, more generally, on what exactly it means to develop a normative perspective on journalistic AI along four main themes: the values we want journalistic AI to realize (both in the sense of a normative vision and the need to reconcile this vision with the empirical reality of AI on the ground); the people and power structures that determine which values AI impacts and how; AI's role in the digital information infrastructure; and, last but not least, the importance of (and lack of attention to) AI governance and regulation in this debate.

A Much-Needed Vision

Right now, a significant part of the scholarly debate on where journalism should evolve with AI and how it should internalize and reflect on the potential consequences for users and society has concentrated on identifying core ethical principles and professional values. Some studies take an empirical, bottom-up approach by investigating practitioners’ perspectives. One interview study with media professionals, for example, has confirmed for the context of news recommendation algorithms the importance of traditional professional values, such as transparency, diversity and autonomy, next to newer, more user-centric values, such as personal relevance or usability (Bastian, Helberger, and Makhortykh Citation2021). Another study, relating more generally to the use of AI in journalism, stressed similar and additional values, such as consistency with editorial criteria, respect for diversity and the promotion of a thriving public sphere, the importance of monitoring and data quality to avoid bias, and the media’s responsibility for safeguarding user privacy, but also the realization that quality journalism means emphasizing the human factor and journalistic independence (Pocino Citation2021). Yet another study, also based on interviews with news professionals, identified issues around bias, disinformation, ways of enhancing editorial decision-making and transparency, balancing AI and human intelligence, and the role of technology companies as the most important issues (Beckett Citation2019). Scholarly investigations often still concentrate on the perceptions of journalists or editors, but some studies have begun to turn to the perspective of technical staff and developers (Belair-Gagnon and Holton Citation2018; Ananny and Crawford Citation2015) or the diverse perspectives of editors, technologists and businesspeople inside organizations (Lewis and Westlund Citation2015). For future research, it would be worthwhile to bring these professionals into a more in-depth conversation and to include other relevant actors, such as the legal department, Human Resources, marketing, funders and shareholders. Some studies take a more theoretical approach by structuring the debate and placing journalistic algorithmic ethics into the broader context of AI ethics (Dörr and Hollnbuchner Citation2017), investigating the ethics of individual instances of journalistic AI (Danaher Citation2018) or engaging in a meta-analysis of the burgeoning field of ethical AI codes that has developed over the years (Fjeld et al. Citation2020; Hagendorff Citation2020).

This work is important and useful input for a debate on what kind of journalistic AI media and society should strive for. However, a normative vision of journalistic AI requires more than identifying lists of ethical values impacted by AI. A normative framework helps scholars and practitioners to decide why these are the values and ethical requirements we should strive for and to which end. Without such a framework, it will be very difficult to fill concepts such as diversity, explainability or public service value with meaning, to decide how to balance them vis-à-vis conflicting values, or to assess afterwards whether technology has indeed succeeded in advancing these values. Finally, a broader normative framework can function as an important anchor point, a point on the horizon that helps us find our way through a discouragingly long super-list of ethical AI principles.

The article by Lin & Lewis in this special issue aims to do exactly that: carve out such a normative framework as a point of departure for answering their question of “what should journalistic AI do to serve journalism’s broader democratic norms?” (Lin and Lewis Citation2022, 1627–1649). Instead of launching head-on into yet another enumeration of all the things digital journalism should do, the authors dedicate the first part of their article to grounding their normative perspective in theories of journalism and democracy. Building on the works of Schudson (Citation2008), Baker (Citation2002), Christians et al. (Citation2009), Strömbäck (Citation2005) and Nielsen (Citation2017), Lin & Lewis propose that “[t]he one thing journalistic AI might do for democracy” is to contribute to providing people with accurate, accessible, diverse, relevant and timely information. In the second part of their paper, the authors then elaborate on what that could mean, very concretely, for how researchers might approach these concepts and what responsible use of journalistic AI might entail.

In another special issue contribution, Marijn Sax reminds the reader how rich the field of democratic theory is (Sax Citation2022, 1650–1670). While much current scholarly work on journalism and democratic theory focuses on liberal and deliberative theories (see also Karppinen Citation2013), Sax’s article shows us how turning towards less commonly explored theories can open up entirely new and exciting vantage points and insights, for example by paying more attention to how conflicts can be made productive, how the power to define metrics can lead to the exclusion of certain groups, and how procedures for defining metrics can be made more open to contestation. Sax “adds agonism to the mix” and demonstrates how studying news recommendation algorithms through the lens of agonistic theory can compel us to engage more critically with the kind of metrics news recommenders could or should be optimized for. Importantly, Sax also invites us to look beyond metrics and measures. Taking agonism seriously, he argues, also means asking critical questions about the role of news recommenders as technological tools and instruments of power in the digital media ecosystem. The author challenges us to pay more attention to the question of how the processes, actors and their power to shape the design of recommendation algorithms can be critically questioned and made contestable (Sax Citation2022, 1650–1670).

Another possible framework that has received far less attention in digital journalism studies is the human rights framework. We already mentioned that press operations are deeply rooted in and shaped by the freedom of expression doctrine. As the European Court of Human Rights explains, “freedom of expression, as secured in paragraph one of Article 10 (art. 10-1), constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and each individual’s self-fulfilment. … These principles are of particular importance as far as the press is concerned. Whilst the press must not overstep the bounds set, inter alia, for the ‘protection of the reputation of others,’ it is nevertheless incumbent on it to impart information and ideas on political issues just as on those in other areas of public interest” (emphasis added by the authors).Footnote1 Based on the right to freedom of expression, the court has defined the democratic task of the press as the imparting of information and ideas that the public has a right to receive,Footnote2 and as being a critical observer and public watchdogFootnote3 of all affairs that concern the public and the democratic process. Also outside Europe, for example in the US, freedom of expression (or speech) doctrine is closely associated with the functioning of the media in a democracy (Balkin Citation2016; Post Citation1990).Footnote4 It would go far beyond the scope of this article to elaborate in more detail on how freedom of expression doctrine can inform what the media ought or ought not to do (Ash Citation2017; Tambini Citation2021; Helberger et al. Citation2020; Balkin Citation2016; Post Citation1990; Meiklejohn Citation1948), or to broaden the investigation to other potentially very relevant human rights, such as the right to privacy, non-discrimination and equality, political participation and human dignity. For now, it is useful to point out that the right to freedom of expression also extends to the communication technologies used,Footnote5 and that states, at least under the European human rights doctrine, should create the conditions so that the media can play its role while refraining from unlawful interference with the rights of the media and the audience. Departing from the freedom of expression doctrine or from democratic theories of the media can result in similar conclusions, such as acknowledging the importance of pluralism and diversity or the information function of the media. Nevertheless, the human rights framework, its instrumental approach, its emphasis on balancing competing rights and interests, as well as the responsibilities that come with the protection of human rights, can provide digital journalism scholars with new useful concepts and a language to develop a normative perspective on the use of journalistic AI.

The contribution by Vermeulen in this special issue is a good example of this. It zooms in on one aspect that Lin and Lewis (Citation2022, 1627–1649) addressed in more general terms, but from a freedom of expression perspective. Vermeulen asks what journalistic AI can do to serve media diversity (Vermeulen Citation2022, 1671–1690). She shows how media diversity flows from freedom of expression as a central condition and how diversity in the media relates to the audience’s freedom to receive information and ideas. Taking the freedom of expression perspective, as Vermeulen does, teaches us that diversity is not a goal in itself but a value that serves important societal goals, such as the societal integration of citizens, allowing them to participate in public debates. The freedom of the media cannot be seen separately from the rights and freedoms of the audience. After all, the freedom to receive information includes the freedom not to receive information (Eskens, Helberger, and Moeller Citation2017) and the freedom of users to decide for themselves which ideas and opinions they wish to receive. In this sense, even the most well-meaning diverse recommender can interfere with the freedom of the audience to receive information and hold an opinion. Vermeulen, in her contribution, highlights in particular the aspects of autonomy and freedom of choice, and argues that responsible, diverse recommender design also involves mechanisms that give users agency in the personalization process. This focus on user agency resonates with a growing body of scholarship that acknowledges the role of users as active agents in the news personalization process (Thorson and Wells Citation2016; Monzer et al. Citation2020; Harambam, Helberger, and Van Hoboken Citation2018; Eskens, Helberger, and Moeller Citation2017; Hendrickx Citation2022; Swart et al. Citation2022).

More generally, the contributions by Sax (Citation2022), Lin and Lewis (Citation2022) and Vermeulen (Citation2022) show that identifying relevant values and grounding them in a normative framework is important but that doing so is only the first step towards more responsible use of AI. The next challenge lies in the more concrete conceptualization, balancing conflicting values and a critical assessment (and improvement) of the decision-making processes and how they can be made more inclusive and more contestable.

The Messy Reality of Normative Ideals

Another important challenge in carving out a normative framework for journalistic AI is finding a way to reconcile grand theories and normative ideals with the messy reality of digital journalism on the ground. This is a point that Lin & Lewis also make very prominently in their contribution, which echoes criticism of the discrepancy between ideal theory and actual performance in real-life situations elsewhere in digital journalism scholarship (Nielsen Citation2017; Beckett Citation2019). While grand theories are important as an aspirational North Star to inspire visions of where journalistic AI ought to go, theories on paper do not make society a better place or journalistic AI more useful to society. In the worst case, too ideal an ambition can have the opposite effect. In light of the paralyzing number of ethical guidelines on the responsible use of AI and the ever-growing list of ethical requirements, it is difficult not to feel sympathy with the scholar or practitioner who ends up overwhelmed and discouraged.

The discussion around diversity-sensitive recommender design is a good example of why it is so important to acknowledge and embrace the messy reality of normative ideals. In response to the concerns about filter bubbles and echo chambers that have framed much of the scholarly discourse about the societal impact of AI-powered recommender algorithms over the past decade, a growing body of scholarship has started to investigate the potential of translating media diversity, as a democratic goal and normative ideal, into recommender design (Vrijenhoek et al. Citation2021; Helberger, Karppinen, and D’Acunto Citation2018; Bernstein et al. Citation2021). This body of literature forms a counterweight to the computer science literature that often approaches diversity as a mathematical problem (Kunaver and Požrl Citation2017; see also the excellent overview in Loecherbach et al. Citation2020). Meanwhile, a growing number of media organizations, public and private, also experiment with ways of developing more diverse recommendation metrics and algorithms. For those involved in this line of research, it soon becomes very clear that an abstract normative ideal, for instance increasing the representation of marginalized voices in a recommendation, sounds intuitive in theory (Helberger Citation2019) but is far more difficult to implement in practice. In order to instruct a recommendation algorithm to increase the representation of marginalized voices, a definition of a marginalized voice is needed. In reality, there is typically no metadata that labels content or a voice as marginalized, nor are there any computable lists of minorities or marginalized voices in society, and for good reason. Not only could such lists result in dangerous instances of discrimination, exclusion and stereotyping; even drawing up such a list could be unethical in itself and infringe upon important legal principles, like the ban on the processing of sensitive data under Article 9 of the European General Data Protection Regulation (GDPR). Does that mean we should stop trying to make recommenders more inclusive? From a societal perspective, the answer is a clear ‘no’. The practical implementation may fall far behind the normative ideal, for example by concentrating on increasing the representation of voices that are less popular (Abdollahpouri, Burke, and Mobasher Citation2017). Even so, the result would be a recommendation algorithm that at least aspires to be more inclusive and diverse. As such, it could already be an improvement over purely engagement-oriented metrics.
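To make this compromise concrete, the following is a minimal sketch of what a popularity-complement re-ranker might look like. It is a simplified illustration in the spirit of the long-tail approaches cited above, not the exact method of Abdollahpouri, Burke, and Mobasher (Citation2017); the data fields, function names and weighting scheme are invented for the purpose of illustration.

```python
from dataclasses import dataclass

@dataclass
class Article:
    article_id: str
    relevance: float   # model-predicted relevance for this user, in [0, 1]
    popularity: float  # normalized click share across all users, in [0, 1]

def rerank_for_long_tail(candidates: list[Article], alpha: float = 0.3) -> list[Article]:
    """Blend predicted relevance with a bonus for less popular
    ("long tail") items. alpha = 0 reproduces the pure relevance
    ranking; higher values push unpopular items up the list."""
    def score(a: Article) -> float:
        return (1 - alpha) * a.relevance + alpha * (1 - a.popularity)
    return sorted(candidates, key=score, reverse=True)

# A niche local-politics piece overtakes a slightly more "relevant"
# but highly popular celebrity item once the long-tail bonus applies.
candidates = [
    Article("celebrity-story", relevance=0.80, popularity=0.95),
    Article("local-politics", relevance=0.75, popularity=0.10),
]
for article in rerank_for_long_tail(candidates):
    print(article.article_id)  # local-politics first, then celebrity-story
```

Even such a crude blend illustrates the compromise discussed above: the system has no notion of who is marginalized, but it can systematically counteract the pull of popularity.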

In translating theory into practice, many nuances get lost, and the formalization of normative ideals takes patience, time, experimentation, and even more compromises between ideal and reality. Some values, or dimensions of values, may not lend themselves to formalization in code at all, or may require broader organizational, institutional and procedural approaches in the phases before and after the actual technical design (see also below). However, each step toward realizing the ideal of a diversity-sensitive recommender unlocks new insights, perspectives and methods to build on.Footnote6 The messy, imperfect phase of learning to build AI that promotes editorial values must be embraced rather than avoided: it is key to putting normative theories into practice.

Measuring the effects and impact of value-sensitive design is yet another important challenge, as doing so is critical to evaluating, learning and improving. Measurement or “scrutability” (Komatsu et al. Citation2020) comes with its own load of methodological challenges. The study by Heitz and co-authors in this special issue is one important example of how research can play a role here too and, in so doing, contribute to methodological innovation. In order to measure the effects of a diversity-sensitive recommender, the team built a dedicated news app that allowed the authors to study how users engage with news recommendations (Heitz et al. Citation2022, 1710–1730). Recommendations were diversified by including articles popular among users with different political leanings. One of the findings from that study is that, from the perspective of users, a diversity-optimized recommendation algorithm can match an accuracy-optimized algorithm. This also means that diversity-enhancing recommendation algorithms can be attractive from both a user and a media industry point of view (see also Bodó Citation2019). Heitz et al.'s research can also be an important starting point for further research into evaluating diversity-sensitive design, and their method could be used to explore other dimensions of diversity, like optimizing for content that caters to the interests of more marginalized or minority voices in society. Most importantly, the study demonstrates the importance of developing approaches to measuring the effects of value-sensitive design. Doing so can also contribute to exciting new research.
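How such effects are quantified is itself a methodological choice. As a purely illustrative sketch, and not the metric used by Heitz et al. (Citation2022), one generic way to score the exposure diversity of a single recommendation list is the normalized entropy of the political leanings it contains; the leaning labels and function name below are hypothetical.

```python
import math
from collections import Counter

def exposure_diversity(recommended_leanings: list[str]) -> float:
    """Normalized Shannon entropy of the political leanings in one
    recommendation list: 0.0 if every item shares a single leaning,
    1.0 if all observed leanings are uniformly represented."""
    counts = Counter(recommended_leanings)
    n = len(recommended_leanings)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

print(exposure_diversity(["left", "left", "left", "left"]))     # 0.0
print(exposure_diversity(["left", "left", "right", "centre"]))  # ~0.95
print(exposure_diversity(["left", "right", "centre"]))          # 1.0
```

Entropy-style scores capture only one narrow dimension of diversity; Vrijenhoek et al. (Citation2021), cited above, discuss richer, value-specific metrics for news recommendations.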

The measurability of journalistic values is also an important theme in Rolandsson and colleagues’ contribution (Rolandsson, Widholm, and Rahm-Skågeby Citation2022). Interestingly, their contribution takes a very different perspective: that of the management of a news organization (here: a public service media provider). They study how datafication and managerialism intersect in the process of news-valuing and, more generally, in reorganizing and restructuring internal work processes. The article describes ethnographic work at the Swedish public service media organization SR and its News Values project, a project to automate news sorting with the help of algorithms. The News Values project also generated internal data to measure and ultimately steer the production and distribution of content that lives up to SR’s values. The article raises fascinating questions about the quantifiability of something as abstract as news values and how datafication can set new processes of standardization and control into motion. The research also shows that defining ethical values for AI is not a single action but an iterative process.

Behind Ideal and Less-Ideal Visions of Technology, There Are People

Transforming normative ideals on paper into responsible technologies in the real world is the work of the people who decide to implement, build, use and engage with the technology. This also means that using AI in line with normative ideals requires that those involved in the process know their power to shape the technology and are prepared to take control. Moran and Shaikh’s contribution to this special issue is a powerful wake-up call. Moran and Shaikh (Citation2022, 1756–1774) engage in a meta-journalistic exploration of how journalists report about AI in journalism and how the rhetoric they use and the concerns they bring to the fore reveal a deeper critical engagement with journalism’s role in the digital age and with what, or who, being a journalist entails. Their contribution shows that different actors in the process push for different narratives. While newsroom leaders and funders tend to highlight the opportunities that new AI-driven tools provide, many journalists still grapple with the question of where exactly the boundaries of being a journalist and doing journalistic work lie and how to position human work vis-à-vis automation. The authors conclude their investigation by observing a lack of critical engagement with the issue of editorial responsibility and with how journalists and editors can use AI-driven tools to optimize journalism.

Having said that, the contributions in this special issue add an important caveat to the observation that editors and journalists should engage critically with AI and assume their responsibilities when using it. Several of the contributions raise the important question of whether the decision makers in newsrooms are even in a position to do so and have the capacity for it. In interviews with newsroom professionals in the Netherlands, De Haan and colleagues (de Haan et al. Citation2022, 1775–1793) found that one possible explanation of why newsroom professionals fail to engage with their responsibility vis-à-vis journalistic AI is that often they are not even aware that they engage with algorithmic tools and applications. To the extent that the journalists interviewed were aware of the impact of AI on their daily work, they defined AI as a relatively autonomous force. This finding is very important for the larger theme of this special issue. Developing a normative perspective on journalistic AI, a vision of how to use it and for which goals or values, requires a sense of agency. It is difficult to develop a sense of agency over something that is seen as an “invisible hand” or “looming threat” over which one has no control.

The disconnect between the increasingly central role of journalistic AI and the level of understanding of newsroom professionals is also a key theme in the contribution by Jones and Jones (Citation2022, 1731–1755), who studied the ability of journalists to understand and engage with journalistic AI at the BBC. In reflecting on their findings, the authors explain that a poor understanding of journalistic AI, and of AI in a more general sense, creates risks of abuse or underuse of the technology at the individual level, or of a failure to engage critically with the technology. This, in turn, can result in risks at the organizational level by affecting the role that journalism has in society and harming the reputation and legitimacy of journalism as an institution. The situation can also hinder the emergence of professional standards for the responsible use of AI or result in the inability to report adequately about AI's impact on society.

The articles by Jones & Jones, De Haan and colleagues, and Moran and Shaikh also highlight the need and urgency of developing the kind of AI literacy for journalism that Mark Deuze and Charlie Beckett call for in their commentary (Deuze and Beckett Citation2022, 1913–1918). As Deuze and Beckett explain, “[a]rtificial intelligence literacy is not simply knowing about AI, but also understanding and appreciating its normative dimension, as much as it is linked to impact and action” (Deuze and Beckett Citation2022, 1913–1918). Reflecting on these suggestions, Jones & Jones propose that literacy strategies must be designed to provoke critical reflection, foster the articulation of journalistic values and generate a sense of agency (Jones and Jones Citation2022, 1731–1755). However, as Jones & Jones also point out, AI literacy for journalists is about educating journalists about AI and ways of using it responsibly as much as it is about creating the necessary supporting conditions (Jones and Jones Citation2022, 1731–1755). The authors highlight in particular the aspect of AI visibility and the fact that an important challenge for newsroom professionals and scholars alike is to make journalistic AI intelligible, visible and contextualizable.

The need to take control and decide how to square digital tools with journalism’s professional ideology is also the dominant theme in Møller’s contribution (Møller Citation2022, 1794–1812). More specifically, Møller conceptualizes personalization technologies through the perspective of journalism’s professional ideology. His study adds a theoretical perspective to the empirical findings by Moran and Shaikh (Citation2022, 1756–1774) and de Haan et al. (Citation2022, 1775–1793). Møller helpfully reminds us that the struggles of developing the journalistic identity, of navigating the professional and ethical norms that define journalists and journalism, and of carving out the boundaries with external actors such as the audience or technology developers are not new struggles. To some extent, these struggles are inherent in the societal role of journalism. If journalism serves society, it is bound to change with the technologies that shape it (Pavlik Citation2000). To the extent that AI-driven tools become part of the editorial process, they fall under the editorial control of the media organization. As Møller’s contribution confirms, assuming control rather than being controlled also requires a vision of where to take technology development and how to identify and balance relevant and conflicting values.

Power and Decision-Makers

In the previous section, we emphasized the importance of having a sense of agency and the power to develop and implement technology according to normative ideals, an editorial mission or professional values. However, being able to exercise agency also requires having a real choice and the autonomy to act according to one’s convictions. In contrast, many, if not most, accounts of normative theories of AI are surprisingly agnostic about the complex ecology of decision-making. Ethical principles and normative theories describe what the responsible use of journalistic AI ought to look like, and say far less about who is responsible for ensuring that it lives up to those ideals. The first principle of the European Ethics Guidelines for Trustworthy AI may illustrate that point: “Develop, deploy and use AI systems in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness and explicability. Acknowledge and address the potential tensions between these principles.”Footnote7 Grand concepts aside, the principle is directed at nobody in particular. There is an ominous gap between the principle and the reality of institutional decision-making.

To begin with, those who develop, deploy and use an AI system are different actors. Developing a technology as complex as, for instance, a news recommendation algorithm is teamwork, and the team working on that task is often composed of a diversity of experts with different disciplinary backgrounds, ideally representing different parts of a news organization (Belair-Gagnon and Holton Citation2018; Lindblom, Lindell, and Gidlund Citation2022). All these experts will likely approach the development of a recommender from very different perspectives, vocabularies and epistemological traditions. From each perspective, a notion such as fairness or diversity can look very different. From the perspective of journalism scholars and practitioners, fairness is often associated with dimensions of objectivity in the sense of reporting in an impartial, truthful way (Bastian, Helberger, and Makhortykh Citation2021; Deuze Citation2005), whereas fairness from a computer science perspective is often discussed in the context of debiasing and non-discrimination (Burke, Sonboli, and Ordonez-Gauger Citation2018). If someone were to ask the legal department, an entirely different understanding of fairness, in the sense of distributional justice or the quality of transactions, would come to the fore (Valcke, Graef, and Clifford Citation2018). Second, those who develop an algorithm are not necessarily those who use it, and the users can again be distinct from those deciding to build and implement the algorithm or fund the project. All those actors come with their own motivations, values and KPIs (Ananny and Crawford Citation2015; Bodó Citation2019; Diakopoulos Citation2019). Third, none of those who decide to implement, develop or use journalistic AI may be autonomous in their decision to use the technology in line with their professional values and editorial aspirations. This raises the question of the extent to which the digitization of the media results in new incidental or even structural dependencies that can threaten the editorial autonomy and mission (Pickard Citation2020; Balkin Citation2017; Schjøtt Hansen and Hartley Citation2021; Helberger Citation2020; Bell Citation2018).

The articles in this special issue make an important contribution to advancing these three dimensions and pointing out directions for further research. The contribution by Moran and Shaikh explains how the initial decision to implement AI is often not made by journalists but by editors, funders and managers (Moran and Shaikh Citation2022, 1756–1774; Rolandsson, Widholm, and Rahm-Skågeby Citation2022, 1691–1709). The authors found that this situation results in a sense of inevitability and fuels “a sense of technological determinism despite the realities of technology being increasingly shaped by, and designed for journalism” (Moran and Shaikh Citation2022, 1756–1774). If users of algorithmic applications feel overrun by technology that they neither fully understand nor decided to adopt, the resulting sense of resistance sits uneasily with the demands of using the technology ethically and responsibly. After all, doing so requires a sense of agency and choice. One possible conclusion to draw from this, also for future research, is that the responsible use of journalistic AI not only involves developing a concrete idea of what “responsible” means, but also a better understanding of the conditions under which professional users are prepared for, and capable of, assuming agency and responsibility. For example, what are the effects of anthropomorphizing machines and speaking about “robot journalists” on editors’ or journalists’ sense of control? What kind and level of technological knowledge is needed to feel confident enough to exercise human oversight? How can journalistic AI be designed in a way that allows for control and agency?

Smets, Hendrickx and Ballon highlight another aspect of institutional decision-making (Smets, Hendrickx, and Ballon Citation2022, 1813–1831). Their contribution explores news recommenders from the perspective of multi-stakeholder analysis (Abdollahpouri et al. Citation2020; Milano, Taddeo, and Floridi Citation2021; Heitz et al. Citation2022, 1710–1730) and provides a framework to make visible the complexity of players involved in the decision to implement, develop and use recommendation algorithms. Smets et al. describe the different interests involved and how the negotiation between these stakeholders and the broader socio-economic context ultimately shapes the technology and its place in an organization. The multi-stakeholder perspective aligns well with the socio-technical approach to digital technology that looks beyond the technologies themselves and considers what Westlund and Lewis refer to as the “actors, actants, audiences and activities” behind journalistic AI (Westlund and Lewis Citation2014). An important observation in the article by Smets and colleagues concerns the role of a product or problem owner, typically an employee within a media organization whose task it is to steer strategic alignment between the different interests.

The multi-stakeholder approach to journalistic AI opens up various fruitful directions for research. It mirrors similar calls for a better understanding of the complexity of the curation of information flows from the user side (Thorson and Wells Citation2016). A multi-stakeholder approach to journalistic AI could shift the perspective of scholars from “unlocking the black box” to the question “Why is the box black, and in whose interest is that?” Developing a normative perspective on AI also means acknowledging that the question of what journalistic AI should or should not do is a question that involves different parties, different values and different interests. The way the technology operates is a result of negotiations between those interests. If we, as a society, want to steer technology in particular directions, we must recognize the need to better understand the different stakeholders, their powers and their perspectives.

Future research could go in both breadth and depth. In terms of breadth, the existing literature has only begun to map the range of stakeholders involved. Abdollahpouri and colleagues focus on consumers, providers and systems (Abdollahpouri et al. Citation2020), and Milano and colleagues add society as a further stakeholder (Milano, Taddeo, and Floridi Citation2021). The research done in the context of this special issue adds three other important stakeholders to the discussion around journalistic AI. Smets, Hendrickx and Ballon introduce the journalist and the product owner as further important stakeholders (Smets, Hendrickx, and Ballon Citation2022, 1813–1831). And Rolandsson and co-authors’ contribution explores the role of the management layer in a news organization and how datafication is influencing the role and visions of managers for the future of news making (Rolandsson, Widholm, and Rahm-Skågeby Citation2022, 1691–1709). These are all important stakeholders in recommender design. However, there are many more, and they, too, come with their own interests, values and powers to determine what the technology should do. Other stakeholders that we will discuss in more depth are law and policymakers (see below). Moreover, given the amount of data and the complexity of the systems involved, it is not uncommon to migrate processing capacity into the cloud, making cloud solution providers and the computational infrastructures they provide another important stakeholder in the game (Kulynych et al. Citation2020).

In addition to expanding the range of stakeholders to examine, there is also an important role for research in examining the relationships between different stakeholders in more depth. Simon’s investigation into how social media and technology platforms leverage their technological advantage in their relationship with media organizations is an excellent example of why this approach is so important (Simon Citation2022, 1832–1854). Simon’s article takes the infrastructure perspective and demonstrates how control over data, cloud infrastructures or algorithms can result in deep-seated structural dependencies, or what Simon refers to as “infrastructure capture” (Simon Citation2022). His study creates a typology of the overlap between platform companies’ services and news organizations, showing how platform companies are involved in developing and using journalistic AI at all stages of the news production process. This investigation is also a reminder that there are situations in which a particular media organization is not in the position to develop its own recommendation algorithm, making it necessary to procure one from external technology providers (Bodó Citation2019; Smets, Hendrickx, and Ballon Citation2022, 1813–1831). Alexandra Borchardt, in her commentary, develops this aspect further and sketches the broader implications from the perspective of local journalism (Borchardt Citation2022, 1919–1924). Her compelling conclusion is that “if local journalism is to survive to stabilize democracies from the bottom up, affordable, accessible, and easy to use AI solutions for local news publishers are needed – and fast” (Borchardt Citation2022, 1919–1924).

Structural dependencies, however, do not only exist at the level of the digital communication infrastructure. Insightful in this respect is the investigation, by Olga Dovbysh, Mariëlle Wijermars and Mykola Makhortykh in this special issue, of the way the Russian technology company Yandex not only controls a significant part of the distribution infrastructure but uses this control to shape the content of journalistic reporting (Dovbysh, Wijermars, and Makhortykh Citation2022, 1855–1874). Yandex’s program for content producers, “Nirvana,” stipulates in detail which content qualifies as quality content and is allowed on the influential Russian Yandex platform. Dovbysh et al.'s investigation demonstrates how Yandex leverages its influence over the organization of journalistic routines (writing for the platform) and even over what is being written. This dynamic affects journalistic autonomy and could have a broader societal impact in the form of strategic depoliticization and the problematization of news agendas (Dovbysh, Wijermars, and Makhortykh Citation2022). Remarkably, another study in this special issue arrived at a very similar conclusion regarding the influence of the Chinese Opera Newshub on journalism in Nigeria (Umejei Citation2022, 1875–1892). For the Opera Newshub, too, Umejei finds tensions between the ranking criteria of the Hub’s algorithm and the traditional news values of Nigerian journalists, particularly regarding the question of what may be considered newsworthy and a tendency to privilege soft news over hard news. Overall, the study finds indications that the Chinese Opera Newshub platform weakens the journalistic autonomy and authority of traditional journalists in sub-Saharan Africa, though, interestingly, Umejei also found that at least some journalists were actively devising ways to game the algorithms as a form of professional resistance (Umejei Citation2022, 1875–1892).

These studies are powerful reminders of how dangerous it can be to blindly assume that social media platforms are politically neutral and have no stake in the game of influencing users as citizens (Helberger Citation2020) or journalists as agenda setters. The two studies also raise important questions for future (comparative) research about the effects of platformisation on media ecosystems across different regions, and about the role of additional factors in protecting editorial autonomy and resilience in journalism, such as regulation, the sustainability of the broader media landscape, the working conditions of journalists or professional values.

From Mere Tools to Gears in the Digital Information Infrastructure of the Public

Interestingly, Smets and colleagues found in their interviews with news organizations that, despite much emphasis in spirit on the important role of end-users, end-users, besides their role as data producers (Petre Citation2022), are hardly given a voice in the discussion of the values and visions that should guide the use of AI. Their role is a datafied one: through the metrics that measure their engagement with media products and through user studies that gather their responses to the tools the media develop (Smets, Hendrickx, and Ballon Citation2022, 1813–1831). This finding reflects the complexity of the relationship between the media and its audiences. While digital technologies in many ways open up entirely new dimensions of interactivity and of being truly responsive to the needs and interests of the audience (Hindman Citation2017), following the dictate of audience metrics can sit uneasily with perceptions of professional autonomy (Ferrer-Conill and Tandoc Citation2018). There is, however, another question that is no less relevant but hardly ever asked: to what extent is the implementation of data-driven targeting and distribution strategies still solely a matter and privilege of the professional autonomy of the media?

In the previous section, we explained how the relationship with external technology providers can limit the autonomy of the media. Other active agents and decision-makers are users, their friends and the friends of those friends. They, too, shape the news flow through their work: through the data they generate, the feedback they give, and their role as important agents in the algorithmic feedback loop (Bodó Citation2019). Thorson and Wells speak in this context of “curated flows” (Thorson and Wells Citation2016). Furthermore, to the extent that algorithmic content moderation and distribution tools morph into elements of the broader digital communication infrastructure, these tools also become part of the users’ own digital communication infrastructure. From here, it is only a small step to acknowledging the right of users to also have a say in the design of that infrastructure and in the way digital technology changes their relationship with the media. In other words, developing a normative perspective on journalistic AI also requires a better understanding of the citizens’ perspective on journalistic AI. The citizens’ perspective is distinct from the consumer perspective in that citizens, too, should have a voice in developing a normative perspective on the technologies that affect the democratic discourse.

Taking the citizens’ perspective on journalistic AI seriously can open up new and interesting research directions. For example, while much of the focus on responsible recommender design has so far been on the need to integrate aspects of diversity into algorithmic design, far less attention has been paid to the question of how democratic and inclusive the broader process around the decision to implement and use algorithmic tools should be. To take the example of news diversity as a public value again: diversity is not only a matter of developing diversity metrics or nudging users to consume more diverse news, but also of better understanding how users and their interests can be incorporated into the decision-making process of defining metrics and measures in the first place (Mattis et al. Citation2022).

Here also lies a clear challenge for research: so far, much of the research has concentrated on studying the impact of digital technology on users’ passive consumption of news, on users as producers of information (Hendrickx Citation2022), on conceptualising datafied users as clicks on newsroom dashboards (Ferrer-Conill and Tandoc Citation2018; Petre Citation2022), or on understanding the role of users in the algorithmic feedback loop more generally (Hendrickx Citation2022; Swart et al. Citation2022; Monzer et al. Citation2020; Thorson and Wells Citation2016). Research has been far less interested in understanding users in their more political role as problem-owners and citizens. The decision about the design of the digital infrastructures of tomorrow is not simply a managerial decision of where and how to invest in innovation. Together, digital news recommenders and content moderation systems form the communication infrastructure of the future and, as such, are also a matter of public interest. Such a public infrastructure perspective challenges us to understand better how users and their interests can be meaningfully involved in the decisions about developing and programming the next-generation digital infrastructure. Users become citizens, and questions of engagement turn into questions of representation and, maybe even more importantly, of whose perspectives are not yet represented but should be included. To return to the example of diverse recommendations: diversity in recommendations is not simply a question of how to make services more responsive and diverse, but of how to design those design processes to include a diversity of perspectives.

Between Normative Ideals and Law and Governance

In many discussions around “responsible AI,” the relative absence of another important stakeholder must be noted: law and policymakers. Without the law, many innovations would never have occurred (Blind Citation2012; Martin et al. Citation2019). The rules about data protection, unfair competition and consumer protection have an important standard-setting function. With intellectual property law, many innovations could move out of the lab context. Moreover, to speak in the words of Victor Pickard: the idea that the government is an unwanted interloper in the media sector is a “libertarian fantasy” (Pickard Citation2020).

An important element of the fundamental rights protection of the press under the European fundamental rights doctrine, for instance, is that states have a positive obligation to create the conditions for the media to function. This positive obligation of states has resulted in funding schemes for journalism, media concentration rules to promote fair competition in the marketplace of ideas, and press-specific exemptions in data protection law. It is also this positive obligation that can serve as a basis for states to rein in the power of big tech (as laws like the Digital Services Act and the Digital Markets Act have begun to do), ease some of the structural dependencies that Simon warns about in this special issue (Simon Citation2022, 1832–1854), and ensure that the ability to use AI responsibly is not reserved only for the media companies with the largest budgets. Many of the challenges to realizing a vision for journalistic AI that serves a democratic society pit individual media companies against powerful actors like platforms and technology providers, and make optimizing for popularity the path of least resistance. States can play a role in helping the media overcome the structural dependencies and power imbalances they face in the digital media system. By setting the rules of the game, regulation can be a decisive factor for innovation within newsrooms, between newsrooms, and for creating competitive conditions in the broader marketplace of ideas.

Indeed, the article by Kuai and co-authors is a fascinating case study of how powerful the role of law, here intellectual property law, can be in shaping the future development of journalistic AI and journalism as an institution more generally (Kuai, Ferrer-Conill, and Karlsson Citation2022, 1893–1912). The question of whether AI could or even should be considered the author of AI-generated content is relevant in terms of innovation and economic sustainability but can also touch deeply upon questions of professional identity and autonomy. Kuai, Ferrer-Conill, and Karlsson’s (Citation2022, 1893–1912) case study of Chinese copyright law shows how powerful a law can be not only in influencing the dynamics within the newsroom but also in affecting the distribution of power in the overall media system. Their analysis is a compelling example of why it can be essential to study the underlying regulatory frameworks, and to do so for different cultural and political contexts. Finally, the study is also a very instructive example of how laws and regulations can be the subject of study by digital journalism scholars and how the analysis of laws and regulations can benefit from methods beyond traditional doctrinal legal analysis. In this respect, legal and non-legal scholars can learn a lot from each other.

For future research, the European Union is another worthwhile case study. In recent years, the European Commission has increasingly asserted itself as a digital media legislator. With the Digital Services Act, the Digital Markets Act and the European Data Strategy, the Commission has taken an important step towards changing the competitive dynamics of European media markets. The upcoming AI Act may add additional requirements for transparency and human oversight (Helberger and Diakopoulos Citation2022). And the proposed European Media Freedom Act raises anew a question that has traditionally been core to freedom of expression protection: what limits does media freedom impose on the state’s ability to interfere with editorial decision-making in the context of AI? The ambitious European regulatory framework will offer a host of new interdisciplinary research opportunities for scholars working on digital journalism and its normative aspects.

Towards a Normative Perspective on Journalistic AI: Key Take-Aways

When we think of normative perspectives on the use of AI, we often think of identifying catalogues or even checklists of ethical or public values that technologies must comply with. The first take-away from working on and with this special issue is that identifying values and principles of responsible AI is an important starting point, nothing less and nothing more. Truly responsible use of journalistic AI is less about lists and more about the responsible organization of processes: the processes that result in the identification of relevant values, but also ways to concretize, contest, formalize, implement, measure and continuously improve the way journalistic AI lives up to these values.

The second take-away is that these processes need to be grounded in a sense of agency and responsibility for developing a forward-looking vision of the role of journalistic AI and the values and goals it shall serve. Such a vision should be grounded in a broader normative framework – an editorial mission, fundamental rights, democratic theories of the role of the media – in order to look beyond short-term KPIs and to see the broader picture, including the different competing values that must be balanced when deploying journalistic AI. Being able to account for that broader picture is critical because another important, third take-away from this special issue is that the realization of values and fundamental rights is a multi-stakeholder process. Ultimately, it is a question of who should have the power to decide and how concentrated this power should be. There is much debate and research on the explainability of algorithms or AI systems. We need more work on the explainability of the powers and decision-making processes behind these systems: how they shape technologies and, ultimately, also our understanding of values. The contributions in this special issue take an important step in advancing this understanding.

When talking about the responsible use of AI, there is also a tendency to pay much attention to the various values we wish to see realized and less to the humans who realize, conceptualize and practice those values. Take-away four is that demanding a sense of agency and responsibility, as we also do in this special issue, is important, but so is understanding what must be done to enable the journalists, editors, managers, developers, users and others involved in the process to play that role of active and responsible agent. The research done as part of this special issue identified very clearly the urgent need to think about ways to increase the digital literacy of newsroom professionals, to optimize tools and applications not only for values but also for enabling human oversight and agency, and to make decision-making processes around the implementation of journalistic AI more inclusive and contestable. The research also highlights the critical importance of a deeper engagement with concepts of professional responsibility and autonomy, and with how to protect the freedom to make responsible choices in the face of structural dependencies within and outside newsrooms.

Another important take-away, number five, is that we need to move beyond framing and researching journalistic AI as mere tools and do more work on seeing individual applications as the cogs and gears of the public communication infrastructure. The design of that public communication infrastructure concerns all of us – the readers, listeners and watchers of algorithmically mediated communication. Against this background, it is odd that so much attention and research centres on the role of users as data points and consumers of digital information, and so little on their role as citizens. If we take a public infrastructure perspective on AI, it becomes clear that the design of the virtual assistants, content moderation and recommendation algorithms that determine our access to information is not exclusively for managers and Research & Development departments to decide. Here lies a clear challenge: to design decision-making routines so that they become more accountable to the public, more inclusive and cognisant of diverse and underrepresented voices in society, and less dependent on a small number of major technology companies.

Take-away number six is that a debate on the normative perspective on journalistic AI is not complete without scrutinizing the underlying economic conditions and regulatory frameworks that enable, hinder, inform, shape and create the conditions for developing and realizing normative ideals in journalistic AI. The impact of economic factors and regulatory frameworks on the understanding and operationalization of normative ideals can be less direct, more complex and more challenging to study. However, this is exactly why more research and greater efforts to include economic and regulatory perspectives in digital journalism studies are needed.

Finally, the seventh take-away is that there should be nothing ideal about normative ideals – truly ideal are conceptualisations of values that allow us to confidently embrace the messy reality of the real world, because there is room to experiment, fail, learn, improve and agree on how much of a gap between ideal and non-ideal outcomes professionals and society are willing, and in a position, to accept. Technology development is often driven by KPIs, short funding deadlines, economic pressure and the need to develop something that functions within a set time frame. Translating normative values into technology design is anything but straightforward, and truly responsible AI is about creating the experimental space, as well as the capabilities and financial means, to do so. Doing so is a task for media organizations and platforms as much as it is for funders, governments, policymakers and researchers.
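One way to make the "gap between ideal and non-ideal outcomes" tangible is to measure it. The minimal Python sketch below assumes a newsroom has agreed on a target topic-exposure distribution; the target shares, the tolerance threshold and the use of a KL-divergence measure are illustrative assumptions on our part, not a metric proposed in this issue (for richer diversity metrics, see Vrijenhoek et al. Citation2021).

```python
# Hypothetical sketch: quantify the gap between an editorially agreed ("ideal")
# topic-exposure distribution and the exposure readers actually received.
import math

def exposure_gap(actual: dict, target: dict) -> float:
    """KL divergence D(target || actual) over topic exposure shares."""
    return sum(
        p * math.log(p / actual.get(topic, 1e-9))
        for topic, p in target.items() if p > 0
    )

# Assumed target shares and observed exposure -- both invented for illustration.
target = {"politics": 0.4, "economy": 0.3, "culture": 0.2, "sports": 0.1}
actual = {"politics": 0.2, "economy": 0.1, "culture": 0.1, "sports": 0.6}

gap = exposure_gap(actual, target)
ACCEPTABLE_GAP = 0.25  # how much deviation the newsroom agrees to tolerate
print(f"exposure gap: {gap:.2f}",
      "-> revisit ranking" if gap > ACCEPTABLE_GAP else "-> within agreed bounds")
```

The point of the sketch is not the particular metric but the process it enables: the threshold is an editorial agreement that can be contested, measured and revisited, which is exactly the experimental space this take-away calls for.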

Disclosure Statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The research was funded by the European Research Council (grant no. 638514), and was conducted under the PERSONEWS ERC-STG project.

Notes

1 ECtHR, Lingens v. Austria, paras 41 and 42.

2 Sunday Times

3 Thorgeir Thorgeirson v. Iceland, judgment of 25 June 1992, Series A no. 239, p. 27, § 63; Goodwin v. the United Kingdom, judgment of 27 March 1996, Reports 1996-II, p. 500, § 39.

4 See, e.g. the US Supreme Court, Red Lion Broadcasting Co. v. FCC, 395 US 367 (1969).

5 Autronic AG v Switzerland [1990] ECtHR 12726/87 [47].

6 The few sector-specific ethical guidelines on journalistic AI are aspirational rather than idealistic and include a clear commitment to experimentation.

References

  • Abdollahpouri, Himan, Gediminas Adomavicius, Robin Burke, Ido Guy, Dietmar Jannach, Toshihiro Kamishima, Jan Krasnodebski, and Luiz Pizzato. 2020. “Multistakeholder Recommendation: Survey and Research Directions.” User Modeling and User-Adapted Interaction 30 (1): 127–158.
  • Abdollahpouri, Himan, Robin Burke, and Bamshad Mobasher. 2017. “Controlling Popularity Bias in Learning-to-Rank Recommendation.” In Proceedings of the Eleventh ACM Conference on Recommender Systems, 42–46. RecSys ’17. New York: Association for Computing Machinery.
  • Ananny, Mike, and Kate Crawford. 2015. “A Liminal Press.” Digital Journalism 3 (2): 192–208.
  • Ash, Timothy Garton. 2017. Free Speech: Ten Principles for a Connected World. Reprint edition. London, UK: Atlantic Books.
  • Baker, C. Edwin. 2002. Media, Markets, and Democracy. Cambridge: Cambridge University Press.
  • Balkin, Jack M. 2016. “Cultural Democracy and the First Amendment.” Northwestern University Law Review 110 (5): 1053–1095.
  • Balkin, Jack M. 2017. “Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation.” UC Davis Law Review 51: 68, September.
  • Bastian, Mariella, Natali Helberger, and Mykola Makhortykh. 2021. “Safeguarding the Journalistic DNA: Attitudes towards the Role of Professional Values in Algorithmic News Recommender Designs.” Digital Journalism 9 (6): 835–863.
  • Beckett, Charlie. 2019. New Powers, New Responsibilities. A Global Survey of Journalism and Artificial Intelligence. London: LSE. https://www.lse.ac.uk/media-and-communications/polis/JournalismAI/The-report.
  • Belair-Gagnon, Valerie, and Avery E. Holton. 2018. “Boundary Work, Interloper Media, And Analytics In Newsrooms.” Digital Journalism 6 (4): 492–508.
  • Bell, Emily. 2018. “The Dependent Press: How Silicon Valley Threatens Independent Journalism.” In Digital Dominance. The Power of Google, Amazon, Facebook and Apple, edited by Martin Moore and Damian Tambini, 241–261. New York: Oxford University Press.
  • Bernstein, Abraham, Claes De Vreese, Natali Helberger, Wolfgang Schulz, Katharina Zweig, Lucien Heitz, and Suzanne Tolmeijer. 2021. “Diversity in News Recommendation.” In Dagstuhl Manifestos. Vol. 9. Schloss Dagstuhl.
  • Blind, Knut. 2012. “The Influence of Regulations on Innovation: A Quantitative Assessment for OECD Countries.” Research Policy 41 (2): 391–400.
  • Bodó, Balázs. 2019. “Selling News to Audiences – A Qualitative Inquiry into the Emerging Logics of Algorithmic News Personalization in European Quality News Media.” Digital Journalism 7 (8): 1054–1075.
  • Borchardt, Alexandra. 2022. “Go, Robots, Go! The Value and Challenges of Artificial Intelligence for Local Journalism.” Digital Journalism 10 (10): 1919–1924.
  • Broussard, Meredith, Nicholas Diakopoulos, Andrea. L. Guzman, Rediet Abebe, Michel Dupagne, and Ching-Hua Chuan. 2019. “Artificial Intelligence and Journalism.” Journalism & Mass Communication Quarterly 96 (3): 673–695.
  • Burke, Robin, Nasim Sonboli, and Aldo Ordonez-Gauger. 2018. “Balanced Neighborhoods for Multi-Sided Fairness in Recommendation.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Vol. 81, 202–214. PMLR.
  • Christians, Clifford G., Theodore Glasser, Denis McQuail, Kaarle Nordenstreng, and Robert White. 2009. Normative Theories of the Media: Journalism in Democratic Societies. Urbana: University of Illinois Press.
  • Danaher, John. 2018. “Toward an Ethics of AI Assistants: An Initial Framework.” Philosophy & Technology 31 (4): 629–653.
  • de Haan, Yael, Eric van den Berg, Nele Goutier, Sanne Kruikemeier, and Sophie Lecheler. 2022. “Invisible Friend or Foe? How Journalists Use and Perceive Algorithmic-Driven Tools in Their Research Process.” Digital Journalism 10 (10): 1775–1793.
  • Deuze, Mark, and Charlie Beckett. 2022. “Imagination, Algorithms and News: Developing AI Literacy for Journalism.” Digital Journalism 10 (10): 1913–1918.
  • Deuze, Mark. 2005. “What Is Journalism? Professional Identity and Ideology of Journalists Reconsidered.” Journalism 6 (4): 442–464.
  • Diakopoulos, Nicholas. 2019. Automating the News. Cambridge, MA: Harvard University Press.
  • Dörr, Konstantin Nicholas, and Katharina Hollnbuchner. 2017. “Ethical Challenges of Algorithmic Journalism.” Digital Journalism 5 (4): 404–419.
  • Dovbysh, Olga, Mariëlle Wijermars, and Mykola Makhortykh. 2022. “How to Reach Nirvana: Yandex, News Personalisation, and the Future of Russian Journalistic Media.” Digital Journalism 10 (10): 1855–1874.
  • Eskens, Sarah, Natali Helberger, and Judith Moeller. 2017. “Challenged by News Personalisation: Five Perspectives on the Right to Receive Information.” Journal of Media Law 9 (2): 259–284.
  • Ferrer-Conill, Raul, and Edson C. Tandoc. 2018. “The Audience-Oriented Editor.” Digital Journalism 6 (4): 436–453.
  • Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.” In SSRN Scholarly Paper. Rochester, NY.
  • Hagendorff, Thilo. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120.
  • Harambam, Jaron, Natali Helberger, and Joris Van Hoboken. 2018. “Democratizing Algorithmic News Recommenders: How to Materialize Voice in a Technologically Saturated Media Ecosystem.” Philosophical Transactions of the Royal Society 376 (2133): 1–21.
  • Heitz, Lucien, Juliane A. Lischka, Alena Birrer, Bibek Paudel, Suzanne Tolmeijer, Laura Laugwitz, and Abraham Bernstein. 2022. “Benefits of Diverse News Recommendations for Democracy: A User Study.” Digital Journalism 10 (10): 1710–1730.
  • Helberger, Natali, and Nicholas Diakopoulos. 2022. “The European AI Act and How It Matters for Research into AI in Media and Journalism.” Digital Journalism 1–10.
  • Helberger, Natali, Kari Karppinen, and Lucia D’Acunto. 2018. “Exposure Diversity as a Design Principle for Recommender Systems.” Information, Communication & Society 21 (2): 191–207.
  • Helberger, Natali, Max Van Drunen, Sarah Eskens, Mariella Bastian, and Judith Moeller. 2020. “A Freedom of Expression Perspective on AI in the Media – with a Special Focus on Editorial Decision Making on Social Media Platforms and in the News Media.” European Journal of Law and Technology 11 (3): 1–28. https://ejlt.org/index.php/ejlt/article/view/752.
  • Helberger, Natali. 2019. “On the Democratic Role of News Recommenders.” Digital Journalism 7 (8): 993–1012.
  • Helberger, Natali. 2020. “The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power.” Digital Journalism 8 (6): 842–854.
  • Hendrickx, Jonathan. 2022. “Power to the People? Conceptualising Audience Agency for the Digital Journalism Era.” Digital Journalism, 14 June 2022 (online first), https://doi.org/10.1080/21670811.2022.2084432.
  • Hindman, Matthew. 2017. “Journalism Ethics and Digital Audience Data.” In Remaking the News: Essays on the Future of Journalism Scholarship in the Digital Age, edited by P. N. Boczkowski and C. W. Anderson, 177–194. Cambridge, MA: MIT Press.
  • Jones, Bronwyn, and Rhianne Jones. 2022. “AI ‘Everywhere and Nowhere’: Addressing the AI Intelligibility Problem in Public Service Journalism.” Digital Journalism 10 (10): 1731–1755.
  • Karppinen, Kari. 2013. “Uses of Democratic Theory in Media and Communication Studies.” Observatorio 7 (3): 1–17.
  • Komatsu, Tomoko, Marisela Gutierrez Lopez, Stephann Makri, Colin Porlezza, Glenda Cooper, Andrew MacFarlane, and Sondess Missaoui. 2020. “AI Should Embody Our Values: Investigating Journalistic Values to Inform AI Technology Design.” In NordiCHI '20: Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, October 2020, 1–3.
  • Kuai, Joanne, Raul Ferrer-Conill, and Michael Karlsson. 2022. “AI ≥ Journalism: How the Chinese Copyright Law Protects Tech Giants’ AI Innovations and Disrupts the Journalistic Institution.” Digital Journalism 10 (10): 1893–1912.
  • Kueng, Lucy. 2017. Going Digital. A Roadmap for Organisational Transformation. Oxford: Reuters Institute.
  • Kulynych, Bogdan, Rebekah Overdorf, Carmela Troncoso, and Seda Gürses. 2020. “POTs: Protective Optimization Technologies.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 177–188. FAT* ’20. New York: Association for Computing Machinery.
  • Kunaver, Matevž, and Tomaž Požrl. 2017. “Diversity in Recommender Systems – A Survey.” Knowledge-Based Systems 123 (May): 154–162.
  • Lewis, Seth C., and Oscar Westlund. 2015. “Actors, Actants, Audiences, and Activities in Cross-Media News Work.” Digital Journalism 3 (1): 19–37.
  • Lin, Bibo, and Seth C. Lewis. 2022. “The One Thing Journalistic AI Just Might Do for Democracy.” Digital Journalism 10 (10): 1627–1649.
  • Lindblom, Terje, Johan Lindell, and Katarina Gidlund. 2022. “Digitalizing the Journalistic Field: Journalists’ Views on Changes in Journalistic Autonomy, Capital and Habitus.” Digital Journalism 1–20.
  • Loecherbach, Felicia, Judith Moeller, Damian Trilling, and Wouter van Atteveldt. 2020. “The Unified Framework of Media Diversity: A Systematic Literature Review.” Digital Journalism 8 (5): 605–642.
  • Martin, Nicholas, Christian Matt, Crispin Niebel, and Knut Blind. 2019. “How Data Protection Regulation Affects Startup Innovation.” Information Systems Frontiers 21 (6): 1307–1324.
  • Mattis, Nicolas, Philipp Masur, Judith Möller, and Wouter van Atteveldt. 2022. “Nudging towards News Diversity: A Theoretical Framework for Facilitating Diverse News Consumption through Recommender Design.” New Media & Society. 146144482211044.
  • Meiklejohn, Alexander. 1948. Free Speech and Its Relation to Self-Government. New York: New York Harper and Brothers.
  • Milano, Silvia, Mariarosaria Taddeo, and Luciano Floridi. 2021. “Ethical Aspects of Multi-Stakeholder Recommendation Systems.” The Information Society 37 (1): 35–45.
  • Møller, Lynge Asbjørn. 2022. “Between Personal and Public Interest: How Algorithmic News Recommendation Reconciles with Journalism as an Ideology.” Digital Journalism 10 (10): 1794–1812.
  • Monzer, Cristina, Judith Moeller, Natali Helberger, and Sarah Eskens. 2020. “User Perspectives on the News Personalisation Process: Agency, Trust and Utility as Building Blocks.” Digital Journalism 8 (9): 1142–1162.
  • Moran, Rachel E., and Sonia Jawaid Shaikh. 2022. “Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism.” Digital Journalism 10 (10): 1756–1774.
  • Morozov, Evgeny. 2014. To Save Everything, Click Here. The Folly of Technological Solutionism. LaVergne: Ingram Publisher.
  • Nielsen, Rasmus Kleis. 2017. “The One Thing Journalism Just Might Do for Democracy.” Journalism Studies 18 (10): 1251–1262.
  • Pavlik, John. 2000. “The Impact of Technology on Journalism.” Journalism Studies 1 (2): 229–237.
  • Petre, Caitlin. 2022. All the News That’s Fit to Click. Princeton, NJ: Princeton University Press.
  • Pickard, Victor. 2020. “Restructuring Democratic Infrastructures: A Policy Approach to the Journalism Crisis.” Digital Journalism 8 (6): 704–719.
  • Pocino, Patrícia Ventura. 2021. Algorithms in the Newsrooms Challenges and Recommendations for Artificial Intelligence with the Ethical Values of Journalism. Barcelona: Catalan Press Council. https://fcic.periodistes.cat/wp-content/uploads/2022/03/venglishDIGITAL_ALGORITMES-A-LES-REDACCIONS_ENG-1.pdf.
  • Posetti, Julie. 2018. Time to Step Away from the ‘Bright, Shiny Things’? Towards a Sustainable Model of Journalism Innovation in an Era of Perpetual Change. Oxford: Reuters Institute.
  • Post, Robert C. 1990. “The Constitutional Concept of Public Discourse: Outrageous Opinion, Democratic Deliberation, and Hustler Magazine v. Falwell.” Harvard Law Review 103 (3): 601–686.
  • Rolandsson, Torbjörn, Andreas Widholm, and Jörgen Rahm-Skågeby. 2022. “Managing Public Service: The Harmonization of Datafication and Managerialism in the Development of a News-Sorting Algorithm.” Digital Journalism 10 (10): 1691–1709.
  • Sax, Marijn. 2022. “Algorithmic News Diversity and Democratic Theory: Adding Agonism to the Mix.” Digital Journalism 10 (10): 1650–1670.
  • Schjøtt Hansen, Anna, and Jannie Møller Hartley. 2021. “Designing What’s News: An Ethnography of a Personalization Algorithm and the Data-Driven (Re)Assembling of the News.” Digital Journalism 1–19.
  • Schudson, Michael. 2008. Why Democracies Need an Unlovable Press. Cambridge: Polity.
  • Simon, Felix M. 2022. “Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy.” Digital Journalism 10 (10): 1832–1854.
  • Smets, Annelien, Jonathan Hendrickx, and Pieter Ballon. 2022. “We’re in This Together: A Multi-Stakeholder Approach for News Recommenders.” Digital Journalism 10 (10): 1813–1831.
  • Steensen, Steen, and Oscar Westlund. 2021. What Is Digital Journalism Studies? London: Routledge.
  • Strömbäck, Jesper. 2005. “In Search of a Standard: Four Models of Democracy and Their Normative Implications for Journalism.” Journalism Studies 6 (3): 331–345.
  • Swart, Joëlle, Tim Groot Kormelink, Irene Costera Meijer, and Marcel Broersma. 2022. “Advancing a Radical Audience Turn in Journalism. Fundamental Dilemmas for Journalism Studies.” Digital Journalism 10 (1): 8–22.
  • Tambini, Damian. 2021. Media Freedom. 1st ed. Medford: Polity.
  • Thorson, Kjerstin, and Chris Wells. 2016. “Curated Flows: A Framework for Mapping Media Exposure in the Digital Age.” Communication Theory 26 (3): 309–328.
  • Thurman, Neil, Seth C. Lewis, and Jessica Kunert. 2019. “Algorithms, Automation, and News.” Digital Journalism 7 (8): 980–992.
  • Umejei, Emeka. 2022. “Chinese Digital Platforms: ‘We Write What the Algorithm Wants’.” Digital Journalism 10 (10): 1875–1892.
  • Valcke, Peggy, Inge Graef, and Damian Clifford. 2018. “IFairness – Constructing Fairness in IT (and Other Areas of) Law through Intra- and Interdisciplinarity.” Computer Law & Security Review 34 (4): 707–714.
  • Vermeulen, Judith. 2022. “To Nudge or Not to Nudge: News Recommendation as a Tool to Achieve Online Media Pluralism.” Digital Journalism 10 (10): 1671–1690.
  • Vrijenhoek, Sanne, Mesut Kaya, Nadia Metoui, Judith Möller, Daan Odijk, and Natali Helberger. 2021. “Recommenders with a Mission: Assessing Diversity in News Recommendations.” In Proceedings of the 2021 Conference on Human Information Interaction and Retrieval, 173–83. CHIIR ’21. New York: Association for Computing Machinery.
  • Webb, Amy. 2019. The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. New York: PublicAffairs.
  • Westlund, Oscar, and Seth C. Lewis. 2014. “Agents of Media Innovations: Actors, Actants, and Audiences.” The Journal of Media Innovations 1 (2): 10–35.
