Research Article

Escape Me If You Can: How AI Reshapes News Organisations’ Dependency on Platform Companies


Abstract

Platform companies play a crucial role in the creation, dissemination, and business of news. They are also central actors in artificial intelligence (AI), which has led some to argue that the increasing use of AI in journalism may heighten the news industry’s dependence on platform companies. This article evaluates this argument. Drawing on 121 interviews with news workers at 33 leading publishers in the US, UK, and Germany, as well as 31 expert interviews and secondary material, it finds that AI reshapes the dependency of publishers on platform companies by exacerbating existing dependencies in distribution and creating new dependencies in production. News organisations rely on platforms for AI for various reasons, such as high development costs, lack of resources, and varying visions of their mission. The findings show that while increasing dependence on platforms is acknowledged, there is disagreement over its extent and impact. The reliance on platforms’ AI shows isomorphic tendencies and potentially limits publishers’ autonomy.

Introduction

Platform companies such as Google, Amazon, Meta, and Microsoft are central actors in the news. They also dominate the research, development, and deployment of artificial intelligence (Ahmed, Wahed, and Thompson Citation2023), a field that has become more interesting to the news industry in recent years, given its promise to reshape journalistic work in various beneficial ways (Newman Citation2023; Diakopoulos Citation2019). Recent developments around large language models (LLMs) and so-called “generative AI” stand to reinforce platform companies’ dominance in AI, given their strong existing structural advantages in this area and their control of many of the underlying technological infrastructures (Lehdonvirta Citation2023).

With publishers already dependent on platform companies, particularly in the realm of distribution (Nielsen and Ganter Citation2022), it has been argued that the increasing use of AI in journalism may also increase the news industry’s dependence on platform companies and lead to a loss of autonomy and control, given that publishers mostly lack the resources that make these firms the dominant players in AI and are therefore largely at the mercy of these companies when it comes to the adoption of AI (Simon Citation2022). A shortcoming of these arguments, however, is that they have remained largely theoretical.

This paper seeks to probe these claims with evidence. Drawing on 121 interviews with news workers at 33 leading national and international publishers in the United States, the United Kingdom, and Germany, as well as 31 international expert interviews, and secondary material, I show that the introduction of AI reshapes the dependency of publishers on platform companies by exacerbating existing dependencies on the distribution side and introducing new dependencies on the production side, providing platform companies with greater infrastructural control over news organisations. Public service and commercial news organisations in these countries rely on AI provided by platform companies in both direct and indirect ways in the production and distribution of news. Motivating factors are the high costs of developing AI themselves, severe constraints in terms of their own resources (data, expertise, labour, computing power) vis-à-vis platform companies’ structural advantages in the same areas, and different visions about what news organisations’ mission and role should be as regards AI. Publishers acknowledge an increasing dependency on platform companies through AI but there are some disagreements about the extent, whether this dependency matters for their organisations and the news, and if so how and where. I discuss these results in relation to institutional theory and media autonomy.

AI, Platform Companies, and the News

AI in the News

This article will not get into definitional tussles as to what constitutes “true” AI; a good overview of recent debates is provided by Jarrahi, Lutz, and Newlands (Citation2022). Instead, I will define it following Mitchell (Citation2019), whereby AI is the computational simulation of human capabilities in tightly defined areas, most commonly through the application of machine learning approaches, a subset of AI in which machines learn from data or their own performance. In news and journalism, AI commonly refers to various uses of AI-as-a-Service (AIaaS), “where organisations access specific AI capabilities via cloud computing and can range from conversational bots to knowledge mapping, computer vision and speech recognition” (Newlands Citation2021, 1), and uses of machine learning (including deep learning and large language models), natural language processing and generation, as well as computer vision. A growing number of news publishers use these systems and models at various points of the gatekeeping process in the news—the socio-technical process in news organisations that determines what and how information is gathered, evaluated, edited, and shared as news (Shoemaker and Reese Citation2013)—with frequent application in news production (e.g., story discovery, transcription, translation, summarisation), content management (e.g., automated tagging and subtitling, archive management and discovery, re-formatting), content distribution and recommendation, as well as on the business side (e.g., audience analytics and dynamic paywalls), among other things (Beckett Citation2019; Hansen et al. Citation2023; Newman Citation2023).

Looking briefly at the reasons for the adoption of AI in the news, AI is widely regarded as promising greater efficiency, profits, deeper insights, as well as creativity at a time when the news industry continues to face strong business pressures and questions about its role and value in society (Newman Citation2023). At the same time, uncertainty about the future significance of the technology, as well as pressure to innovate, has meant that news organisations copy the actions of “peers”, something that mirrors dynamics from earlier waves of innovation (Christin Citation2020; Kalogeropoulos and Nielsen Citation2018). This form of institutional isomorphism, the “tendency of organisations in a particular field to resemble one another” across various dimensions (DiMaggio and Powell Citation1983; Napoli Citation2014, 351; Caplan and boyd Citation2018), is apparent with AI, too (Becker et al. Citation2023). Uncertainty is often a driver of mimetic isomorphism, as organisations imitate each other to reduce uncertainty and the associated risk. In addition, coercive forces play a role, with pressures from other, dominant organisations, such as those in the technology sector, leading to isomorphic tendencies. Finally, professional networks and their norms and views often induce normative isomorphism. Of course, DiMaggio and Powell remind us that these forces often interact and overlap and can hardly be seen as separate.

Frenemies at the Gate: Platform Companies and Publishers

A large body of scholarship has chronicled the rise of platform companies to central actors in news and journalism. Today, platforms are important gateways for audiences to news content (Newman et al. Citation2022), sources of reach and traffic for news organisations (Diakopoulos Citation2019, 179; Nielsen Citation2018), service providers to publishers (Nechushtai Citation2018), and funders of journalism projects, innovation initiatives, and research (Papaevangelou Citation2023). The relationship between publishers and platforms is marked by a strong asymmetry, with platforms in the more powerful position, and publishers strongly dependent on them in various areas of news production, distribution, and revenue generation (Nielsen and Ganter Citation2022; Chua and Westlund Citation2021). As a result, the relationship between both sides has often been described as fraught.

Nielsen and Ganter (Citation2022) paint a detailed picture of how many publishers see platform companies as “frenemies”—veering between a willingness and need to cooperate with them due to various strong incentives (e.g., opportunities for reach, a fear of missing out) on the one hand, and a wariness on the other hand, because their dependence on these companies exposes them to various business risks in the mid- and long-term (Nielsen and Ganter Citation2022, 69). Publishers’ choices to engage, they write, “over time become structures, as they are institutionalised, embedded in formal and informal norms and routines across much of the news industry, and encoded in technological relations that everyone relies on”, ultimately leading to new forms of publishing driven and shaped by the relationships between both sides. Nielsen and Ganter describe these relationships as sitting on a spectrum of dependency, ranging from platform publishing (where publishers have control over content, and to some extent over distribution) to platformed publishing (full reliance on platforms for distribution) (Nielsen and Ganter Citation2022, 70).

New Dependencies? The Role of Platform Companies in AI in the News

As AI is being adopted in the news, several scholars have hypothesised how the technology and its increasing use might also reshape publishers’ relationship with platform companies, given that the latter are dominant players in the space of foundational AI research, development, and the provision of AI systems, infrastructure, and services, including to the news (Simon Citation2022; Ahmed, Wahed, and Thompson Citation2023). Wu et al. (Citation2019) have provided evidence of how technology firms often “impose their own logics on the journalistic field” as automation in journalism increases, with Whittaker (Citation2020) arguing that platform companies are central in framing AI and shaping the conditions in which news workers work. Jones et al. (Citation2022) have argued that platform companies’ central role in AI and the opaque nature of their services contributes to what they call the intelligibility issue of AI in journalism—“journalists’ ability to understand and engage with AI in ways that do not compromise journalistic norms and values”. Jungherr and Schroeder, on the other hand, point out that platform companies’ control over AI potentially increases their control over the infrastructures that enable the public arena and the information ecosystem (Jungherr and Schroeder Citation2021, Citation2023), with Seipp et al. (Citation2023) arguing that their increasingly central role in shaping the conditions for public opinion formation requires a rethinking of current approaches to media regulation.

Finally, Simon (Citation2022) has argued that the notion put forward by Nielsen and Ganter that platform companies mainly have relational power vis-à-vis publishers, as they “control the means of connection” (especially to audiences) and distribution, has to be expanded in light of their dominance in AI. Platform companies possess various structural advantages (Moore and Tambini Citation2018) that have given them market dominance in AI, creating an oligopoly of a handful of companies and high barriers to entry for other firms. This, Simon argues, makes it difficult for news organisations to compete with or avoid them and has led to a situation where platform companies increasingly provide AI infrastructure, services, and tools that matter for all sides of news organisations’ operations, allowing them to increasingly control both the means of production and connection. Drawing on Nechushtai’s (Citation2018) concept of “infrastructure capture”, Simon argues that publishers’ increasing use of AI provided or enabled by platform companies could lead to a shift in control and increasing dependence, potentially further limiting the news media’s autonomy. However, Simon’s argument relied mainly on theoretical considerations and cursory evidence and called for more empirical evidence to probe these hypotheses. Heeding this call, I ask:

RQ: How does AI reshape the dependency of publishers on platform companies?

It will be readily apparent, however, that this question is too general and hides several more specific ones, as follows:

RQ1: For what do publishers rely on AI provided by platform companies?

RQ2: Why do publishers rely on AI provided by platform companies?

RQ3: How do publishers assess the relationship with platform companies in the space of AI?

Case Selection, Data, and Methods

To investigate these questions, this study draws from a combination of three sources. The primary data comes from 121 qualitative research interviews with news workers in 33 commercial and public service or public-interest news organisations in Germany, the United Kingdom, and the United States. In addition, I conducted 31 interviews with academic and industry experts across Europe and the United States to contextualise findings from the first set of interviews. Additional secondary data sources were observations and background conversations at in-person and virtual industry events, workshops, and conferences in the UK, the Netherlands, Germany, and Italy, as well as public and, where available, internal documents and materials from publishers and platform companies.

Sampling of Cases and Participants

To produce a dataset with some meaningful variation that allows for a limited general analysis, I adopted a multiple-case study design. Both national media systems and organisational differences have been shown to matter in the adoption and use of new technologies (Humprecht and Esser Citation2018; García Avilés et al. Citation2004) and with respect to publishers’ dealings with platform companies (Nielsen and Ganter Citation2022; Bodó Citation2019). Consequently, this study focuses on commercial and public service news organisations in the US, the UK, and Germany, which represent three different media systems in the scheme of Hallin and Mancini (Citation2004).

I identified 45 organisations where AI was in use as sites for recruitment through extensive desk research. I then employed a mixture of purposive sampling followed by snowball sampling to recruit interview participants from these organisations. I reached out to people who a) have worked with or on AI in the broadest sense (as indicated by e.g., their job description or publicly available information), and b) came from editorial, product development, audience, technology, and strategic/management roles, using a mixture of emails, messages on platforms such as LinkedIn and Twitter, letters, and telephone calls.

In total, I contacted 298 news workers and ended up interviewing 121 from 33 of the 45 organisations I approached. A list of the organisations can be found in Table 1. My interviewees held roles such as reporter, data scientist, software engineer, audience analytics manager, or product manager, ranging from junior to senior positions. The age range was 25–63 years. In addition to official interviews, I also had various off-the-record background conversations with the same and additional news workers from these organisations at in-person and virtual industry events, conferences, workshops, and newsroom visits in the UK, the Netherlands, Germany, and Italy, which informed the analysis.

Table 1. List of news organisations.

To increase the robustness of the study, I conducted an additional 31 interviews with academic and industry experts across Europe (Denmark, Finland, Germany, Italy, Netherlands, Sweden, Switzerland, United Kingdom) and the United States who are not directly working with AI in journalism or in a news organisation but have studied and analysed this topic from various disciplinary angles. These interviews helped to validate, triangulate, and cross-check the findings obtained from the other interview sources and to identify potential blind spots. I identified these experts using a mixture of purposive and snowball sampling, reaching out to experts who specialised in artificial intelligence, the news industry, and platform businesses.

Interview Conduct, Analysis of Data and Ethical Considerations

All interviews were conducted in person (5 interviews) or online (147 interviews) between June 2021 and December 2022. Guided by the research questions and the existing literature, I developed a semi-structured interview instrument to address the questions at the heart of this project in a standardised way, while providing enough flexibility to discuss aspects and topics which arose spontaneously. I kept extensive notes and wrote short reflective memos after each interview, both of which formed part of the corpus used for the analysis. Interviews lasted on average 75 min, with the shortest lasting 25 min and the longest 2½ hours. All interviews were recorded, transcribed verbatim, and coded for themes in NVivo, using a mixture of inductive and deductive coding. To increase the validity and reliability of the analysis, I combined investigator triangulation with data triangulation, cross-checking the interviews and analysis with other primary and secondary sources from the organisations (where available) and external, publicly available sources, as well as the expert interviews.

Research ethical approval for the study was granted by Oxford University’s Central University Research Ethics Committee (CUREC, Approval Reference: SSH_OII_CIA_20_71). All participants were provided with an information sheet and signed a consent form. The relationship between publishers and platform companies is often a sensitive and therefore difficult topic to research (Nielsen and Ganter Citation2018, Citation2022). To maintain the trust and confidentiality of my participants, I offered them anonymisation and anonymised them throughout. All illustrative quotes and examples were carefully checked to make sure they do not reveal the organisation or individual (unless the information was already public).

Findings

Where Publishers Rely on Platform Companies for AI in Production and Distribution

Simon (Citation2022) has provided first cursory evidence of platform companies’ involvement in the development and use of AI in the news industry at various stages of the gatekeeping process and has called for a better overview of their activities in this space. Here, I start by addressing this question with data from my sample of 33 organisations. All news organisations in this study are reliant on platform companies in both direct and indirect ways when it comes to the development and use of artificial intelligence in the production and distribution of news. By direct, I refer here to AI technology (in the broad sense defined at the beginning) provided by platform companies.Footnote1 Indirect, on the other hand, refers to cases where platform businesses provide, e.g., infrastructure or data which are not directly AI itself but are required or used to develop or run AI applications.

Direct Uses

In terms of direct uses, this includes the use of text-to-speech and speech-to-text applications such as Amazon Transcribe, Google WaveNet, and Microsoft Azure Cognitive Services Speech. These services are variously used to turn articles into audio (e.g., for use in podcasts or to provide readers with the ability to listen to articles), for automated subtitling, or to aid reporting and provide new products or an improved user experience. Google Translate is also frequently used to translate text. As one executive of a major German newspaper puts it: “We will never be able to develop, say, an AI that reads our texts aloud. To build text-to-speech offerings like that ourselves from scratch, we’d be crazy. So, we buy that”.

News organisations also use APIs and (pre-trained) AI models provided by platform companies, for example in investigating large documents or image sets, in general news production (e.g., to automatically crop and focus images), or in content moderation. One US-based news organisation relies on Amazon Textract for advanced optical character recognition (OCR) to digitise documents for investigations: “AWS has a great service to do that called Textract. It’s not cheap, but it’s really, really good. Kind of mind-blowingly good”, as one of their data scientists explains. Also widely used are Google’s Vision services and its Vision API, which allow users to label and classify images, detect objects within them, and perform OCR, Amazon’s Rekognition Image, and Microsoft Azure’s Computer Vision, a deep learning powered product suite which offers similar services. A growing number of organisations also use Google’s Pinpoint Studio, a programme to help explore and analyse large collections of documents.

On the distribution side, Google Jigsaw’s hate speech detection service Perspective API is used by several organisations in the sample for content moderation in comment sections across online and mobile offerings, while at least six use Google’s language model BERT to identify common themes and topics across what users have consumed to inform prediction and personalisation decisions for news products and paywalls.

Another area where news organisations across all three countries rely on platform companies is software libraries and programming environments which allow them to train or develop their own machine learning models. Frequently named here is TensorFlow, a software library for machine learning which was developed by Google Brain for internal Google use but has since been made open-source and free. A similar application used for computer vision and NLP is PyTorch, originally developed by Facebook/Meta’s AI unit. Several organisations I talked to also rely on Google’s Vertex AIFootnote2 to build or train their own AI models for everything from audience analytics and dynamic paywalls to product features or reporting tools and approaches.

Indirect Uses

While the news organisations I spoke to rely on a number of direct AI services and applications, virtually all of them are dependent on platform companies in indirect ways when it comes to AI. Most important in this context is the use of the cloud hosting services provided by Microsoft (Azure), Google (Cloud) and Amazon (AWS). Often news organisations host their data infrastructure with one provider. As one newspaper editor in the UK explains: “We use the whole shebang, all of our infrastructure is Google owned”. A colleague at a German organisation describes how article scoring for news recommendation is done by relying on a different cloud service and its AI-driven features: “For article scoring, we rely on a complete agile infrastructure, a data warehouse and so forth from one of the big three”. However, in many cases news organisations use cloud services from all three main providers, depending on the specific use-case, the services required, and the price. In the words of another German team lead on the business side of a newspaper:

We actually use all cloud platforms with AWS, Google and Microsoft Azure. And we also use different components and try out their machine learning and ETLFootnote3 processes again and again. We also use Google and Azure’s APIs for keywording, sentiment analysis and similar stuff.

The quote also demonstrates that the delineation between indirect and direct uses is somewhat artificial as in many cases the border can be fluid, depending on the task or process. For example, for text-to-speech it is possible to use an artificial voice developed by e.g., Microsoft or to use the infrastructure of one of the platform companies to host the necessary data and an application from a third-party vendor (as is the case for e.g., one German organisation).

Last, and in line with a wealth of existing research, many news organisations are strongly dependent on platforms’ audience data to make sense of their users. Various machine learning models aimed at a better understanding of audiences are fed with data provided by or (where possible) scraped from platform companies’ services. As a German news executive puts it:

So, we use everything that we can get publicly from the platforms via the interfaces for this work. We have built our own crawlers that scan the interfaces and then pick up the data and write it into our database.

To summarise, publishers depend on platform companies for AI in both direct and indirect ways for news production and distribution (RQ1). As one manager at a large UK news organisation puts it: “You can’t use AI without using those companies in some way”. This dependency extends to infrastructure, services, and tools, with most publishers using more than one platform company. While the dependency is currently more pronounced on the business and distribution side, it is growing on the production side and will likely further increase with the rise of “generative AI”. Even large publishers in my sample were limited in their ability to eschew this and often rely on off-the-shelf solutions provided by platform companies.

Between a Rock and a Hard Place: Motives for Using Platform Companies for AI

The Unholy Trinity: High Costs, Limited Resources, and Structural Advantages

It’s easy to use, it’s more cost-efficient, and we simply don’t have the resources to do it. It is what it is. (Innovation Manager, Germany)

For many, the high costs of developing AI and constraints in terms of the (human) resources they have available are the main reasons why they have to rely on AI products, services, and infrastructure provided by platform companies—and they see the same dynamics at play among their competitors, both nationally and internationally. Owing to the need for vast computing power, extensive data, large server space, and skilled staff, many argued that it is not possible, or too expensive, for them and others to develop their own AI applications. As one IT developer at a UK news organisation puts it:

Cost effectiveness is a tricky one because you know software developers [and] data scientists are really expensive. So, if you spend a year of a few people working on something, you may say at first “oh that’s cheaper” but actually cost the organisation hundreds of thousands of pounds. So how much would you pay per year to use Google’s API for the same? You know, maybe a fraction of that.

Using cloud services or off-the-shelf tools and models provided by these companies, the majority of interviewees agree, is cheaper and more effective than trying to develop similar infrastructure or approaches themselves, allowing them to experiment with AI or build stable applications for day-to-day use: “We’re using those big tech platforms as an enabler rather than a service provider in that way”, in the words of a director of technology in the UK.

Related to the cost issue and the lack of resources are the reinforcing structural advantages and a widening knowledge gap which make platform companies dominant in the space of AI. Describing a reporting project which involved the need for sophisticated image classification and object detection, a German data journalist explains the inevitability of using the services provided by one of the major platform businesses:

You need to train on lots and lots of data, you need known classifications, you need to know that it’s a photo of a palm tree and you need to know that for a million photos. And only an organisation such as Google has this knowledge and that’s why it has the models and why we use them.

A business executive in the US shares a somewhat similar view in response to his organisation’s use of Google’s AI services on the distribution side:

We don’t have Google’s scale. We don’t have Facebook’s scale. And Google can spend billions of dollars to build a certain model, like a language model for example. They can use all the articles they crawled, and they can like run like 1,000 GPU machines for month to build the model. We cannot do that.

Finally, the preference for platform companies’ AI applications is also deeply intertwined with the risk many associate with developing the same in-house and the possibility of any such development going nowhere and causing big financial losses in the process. In the words of one German editorial developer: “So the advantage is when you work with these third-party suppliers is that they have the development work first”, with one US-based executive arguing along similar lines: “I think it’s a good thing that news orgs are not the first movers in this space. That way, we face less financial risk”. However, many express that they are not completely happy with this situation but are often left with few other options, given a difficult economic environment and limited resources. One UK journalist put it this way: “[The problem is] that the industry in general has a money problem and that is very difficult to combat”, while a data scientist at a German news outlet observed that “we can’t do everything ourselves. And if you want to do it, if you want to stay on an island, then you have so little data and so few resources that you’re not getting anywhere”.

Mission Focus and Pragmatism out of Necessity

A media company does not need to reinvent the wheel on everything. (News executive, USA)

A second reason, often mentioned in the same breath as the high development and operating costs, is many publishers’ view that it is ultimately not their mission to develop AI technologies and the necessary infrastructure and that the latter is—in most cases—best left to “those who know better how to do those things”, as one German-based journalist involved in procurement decisions at his organisation puts it. Many say that technical infrastructure, especially, should not be developed by publishers themselves, not only due to the costs but also because it would not deliver the best quality. Commenting on the attempts of some competitors to still do so, one manager at a newspaper in the UK conveys his profound scepticism:

These [news] companies sometimes invest heavily in all kinds of infrastructure when they don’t need to, because the market can create a much better version of what they’re trying to achieve.

Quality in this context is often equated with properties such as efficiency, ease of use, stability, the ability to scale quickly, and longevity. “They make it easy to start with it […] you ingest your data and yeah, if you want to have a proof of concept it’s quick and easy. […] And is AWS still going to be a thing in 10 years? Or very well likely”, explains an IT manager at a British publication. One editor at a US-based publisher prizes the ability to do things quickly and at scale:

They have the expertise in it and if we need more space, we spin it up in an hour, I don’t know. Whereas it used to be we owned our own data centres, or we shared a data centre and you had to find the right person, they had to do it and their reliability wasn’t any better.

A German head of data speaks of long-term stability as a key reason why his publisher is relying on solutions provided by one of the platform companies:

Because ultimately, if you really want to make something out of data, you need consistency, over years and maybe even decades. And that always speaks in favour of somehow committing to a system [they provide].

Many argue that being open-minded in this area is vital and that the news industry must look beyond its own horizon, embrace existing options where problems have already been solved, and focus its energy on problems that are unique to the news and journalism.

In summary, publishers rely on platform companies for AI development (RQ2) due to high costs and resource constraints, including limited data, expertise, labour, and computing power. Platform companies have structural advantages in these areas, making their services cheaper and more effective than in-house development. Risk avoidance, a different mission focus, and the belief that AI should be left to those who can provide better quality in terms of efficiency, ease of use, stability, scalability, and longevity also contribute to the situation.

Publishers’ Views on AI Platform Dependency

Between Incredulity and Inevitability: Conflicting Views on AI Platform Dependency

Relatively few respondents were unconcerned about increased dependency on platform companies in the context of AI. “This [AI dependency] is one of many problems, but one of the smaller ones in my view, if at all”, as one German journalist puts it. A senior manager at a US publisher sees it in terms of basic utilities: “So many of these things are just now, they are like as boring as plumbing. And so you don’t really worry about running water too much”. The main reason these news workers, who ranged from journalists to senior leadership, are not concerned is that they see AI provided by platform companies mainly as a service, with the option of switching providers at any time if necessary. In the words of one executive at a large publisher:

We are currently using the Amazon voice. If that no longer fits, then we’ll use Google’s. If it doesn’t fit any more, we’ll use Microsoft’s. There is already a certain amount of competition with each individual service. It’s the same with cloud hosting solutions. So Google Cloud was our choice. But, uh, if they make stupid moves, then we’d just simply move away again.

Most interviewees across countries, organisations, and departments, however, say that they are concerned about existing dependencies on platform companies increasing due to publishers’ use of AI in both distribution and production. Pointing to many of the structural disadvantages and chokepoints outlined in the previous sections, many see such a development as inevitable, especially if news organisations want to leverage the opportunities arguably provided by AI in different contexts. As one senior manager in the UK puts it:

I think it’s inevitable. Like [our organisation] is probably as good as it gets in terms of the scale of R&D and development resources for this and we are not going to develop not even cutting edge, but sort of mainstream AI from scratch.

A German head of innovation management shares a similar sentiment and paints a picture of do or die: “There is no alternative for us. The alternative at the moment, to be honest, would be to simply not engage with AI at all”. While respondents argued that this dependency is not uniform and varies by task or application, there was broad agreement that the more core infrastructure or complexity is involved, the greater the current and future dependency, especially in the areas of large language models and generative AI.

And then, of course, the big things such as GPT3 or ChatGPT all come from outside. And most of them are of course not accessible to us. There is no chance that we can ever even come close to them. Of course, this has the effect that we always end up at the bottom of the supply chain. (Innovation Manager, Germany)

Perhaps unsurprisingly, many in the group of those concerned about platform dependency around AI also worry that the issue is not being taken seriously enough across the industry and at their own organisations, getting drowned out in a sea of other concerns or failing to register with people as an issue. As a German manager argues:

In daily business, this whole AI strategy is relatively far away. What ends up there are all the tools that we buy in, of course, that are used in production. And that’s something you don’t really think about. So any search functions that are backed by AI. No one thinks about what exactly is behind it, it’s like Google, you type in something and hope for a good result. Most people don’t care about anything else.

Four Reasons for Concern: Why Publishers Are Wary of AI Platform Dependency

The reasons why news publishers are concerned about this dependency vary, but can be broadly grouped into four categories: (1) past experiences with platforms, (2) black boxes and a loss of control, (3) fear of lock-in and arbitrarily changing conditions, and (4) the entrenchment of existing dependencies.

Past Experiences with Platforms

The first reason publishers are concerned about a growing dependency on platform companies around AI is similar experiences with platforms in the past, which are projected into the future. A recurring theme in this context is a lack of trust that platforms will refrain from using their position of power to their advantage. As a German head of R&D puts it:

Actually, I am not only worried, but I expect that we will become more dependent. As I said, this has actually already happened several times, that processes that we still had in our own hands have been taken out of our hands. As I said, in advertising marketing we are already dependent on the Big Five, especially Google and Facebook. The situation is similar in the cloud business because they simply have a technological competence and a size that we cannot match. This can happen with AI in the same way and probably will.

Several participants also explained that, in their view, it was not beyond platform companies to covertly manipulate AI services and infrastructure to publishers’ disadvantage. “Amazon could slow down access to our stories that are critical of Amazon in a way that would be barely noticeable and hard to prove. But we haven’t seen that yet”, one US journalist remarks, with several others pointing to Google’s Project BernankeFootnote4 as an example where the company used secret methods to give its advertising clients an advantage over competitors on other platforms while keeping this hidden from its publisher clients:

Like, I mean I think it’s a real risk that they are leveraging their AI infrastructure at some point down the line to gain an unfair advantage over us, perhaps even tweaking them at the back, you know. Right? I don’t think anyone from the outside would easily detect it. Think of Google’s…what’s the name again…think of, of Google’s Project Bernanke. It’s not beyond them in my opinion. (Deputy Manager, USA)

Black Boxes and a Loss of Control

A second major concern for most respondents is that they are dealing with “black boxes” when it comes to platform companies’ AI, meaning a loss of control over the tools and services they increasingly use or want to use in both news production and distribution. This concern is particularly pronounced when it comes to the question of whether these systems have biases that go undetected and whether they produce errors. Most respondents express concern that they do not know what kind of datasets have been used to train the platform companies’ AI, how this data has been collected, evaluated and cleaned, whether ethical principles have been followed in the process and, finally, what biases may be present in the final applications. This loss of insight and control could play out negatively in the context of story finding and reporting, as one German data journalist at a broadcaster argues:

If I send some images [for analysis] to a Google API and it’s supposed to tell me what’s there, then I don’t know what it was trained with and what bias it might have. And that of course has an influence on what kind of story I might tell from it.

Similarly, a journalist at a British newspaper who uses an AI tool by a platform company for investigations explains his scepticism about the tool, saying that using it was, at worst, like using an unreliable calculator:

I think my main concern is: is the tool missing something, is it a bad tool, is it misinterpreting what I want? And I think if you keep not finding stuff that you expect to, you can do, you know more manual tests.

A British innovation manager expresses concern that values and biases encoded into AI systems could “go against your values and goes against what you want to do as an organisation”. As an example, various respondents pointed to the conditions under which some AI systems were trained, which could put publishers in a bind. One, citing the discovery that Microsoft-linked AI firm OpenAI had used Kenyan workers under allegedly exploitative conditionsFootnote5 to train its large language model, likened the situation to “discovering a sweatshop you didn’t know was there at the end of your supply-chain”. The inability to scrutinise these processes at the source and the constantly evolving nature of AI, many say, make it harder for publishers to check if applications and services are in line with their values and interests:

Machine learning is constantly learning, so if the algorithm is improving, you may investigate it when you first decide to sign up to it, but then are you able to continually do it? Do we even have the resources? I doubt it. (R&D Lead, UK)

Lock-in and Arbitrarily Changing Conditions

A third reason for concern among those who see platform dependencies increasing is the worry of being subject to arbitrarily changing conditions on the side of platforms, over which publishers have no control. In the words of one news worker in an audience team at a large US news publisher:

We are not all that important for them as an industry. They do and don’t really care about news. If we cover them in a negative way, they are upset. But if they decide to take something away that we’ve come to rely on, they couldn’t care less. Same with AI.

A manager at a German broadcaster expresses similar worries: “We don’t want to commit to a tool only to have to change again in a year’s time because XYZ framework conditions change”, they say, mentioning similar instances in the past. This problem extends to fixing faulty tools that might be important for publishers, but lower priority for platforms: “If something goes wrong, we have to wait for them to say in America, OK, yes, that’s worth it now, we have to do a bug fix or we actually have to take a look”.

In this context, many also worry about lock-in effects, pouring cold water on the idea that switching to different infrastructure or services is easily achieved, especially with larger systems. For one, many say that the current oligopoly presents them with a limited set of options when push comes to shove. In the words of one US-based software engineer: “There are three providers on the market and that’s not a f***** lot”. A second issue that people highlight in this context is that switching, while a possibility in theory, is undesirable in practice. “Nobody wants to move”, explains one data manager at a UK publisher, describing it as a very painful undertaking “because the cost of migration is really, really high” and so are the risks. In their words, it is “not like moving a box but more akin to moving an entire factory while production is ongoing”. Such lock-ins could leave publishers at the mercy of platforms’ business decisions, especially in the form of price hikes: “If you are dependent on one of their services, and in some cases we already are, if they want to increase the price, you are the loser”, argues one manager at a German news outlet, with the risk, in the eyes of many participants, increasing the more central the AI service, tool, or infrastructure in question is to their work.

Entrenching Existing Dependencies

Finally, a concern expressed by almost all interviewees who saw increasing dependence is that publishers’ use of AI tools, services, and infrastructure would further empower platform companies in the space of AI and more broadly, strengthen existing winner-takes-all dynamics, and contribute to a further loss of expertise in this field on the side of publishers. This worry is particularly pronounced around providing access to publishers’ data, e.g., audience data, data from content moderation on their products, or archive material. Even if contractual agreements are in place which prevent platform companies from retaining or seeing the data itself, many argue that the data would still help train their systems, as one German archive manager explains:

But their algorithms are of course trained as well. There is no need to give in to this illusion that this is not happening. Of course, it is like that. Of course, they also have an interest in getting beautiful, clean audio produced by us in top quality in order to train their own algorithms.

A recurring concern was that by gaining access to what many see as publishers’ main assets—well-maintained archives of high-quality content, both textual and audiovisual—platforms would be able to further entrench their already dominant positions in the AI space, as structured and cleaned data is a prerequisite for training various AI models that have broad applicability beyond the news context and can thus be used across a platform company’s other services.

With the archive, you have a value in the form of data, and in the end, of course, these companies have an interest in getting access to it. We had many knocking on the door in the past. But then they make their own services better and you are just a user and they can mess with you […] If you work with a start-up, you get the product at the end, you get the code, then you can continue yourself from the time it was completed, but you are not dependent on the services that you helped to develop there. But if you have outsourced expertise and suddenly, they do everything and if you want to continue, you have to buy it. That’s a loss of control. (Product Manager, Germany)

In summary, in their assessment of their relationship with platform companies in the AI space (RQ3), interviewees showed varied perspectives. While all acknowledged dependency, there was disagreement about the extent of this dependency and its significance for their organisations and the news. About a quarter viewed platform companies as mere service providers, akin to utility companies, and were unconcerned about potential risks. The majority, however, expressed unease about increasing reliance on platforms around AI, seeing it as inevitable. Dependency levels were seen to vary depending on the complexity of the tasks or infrastructure involved.

Discussion and Conclusion

News organisations have good reasons for increasingly relying on platform companies’ AI. They are drawn by the advantages offered by their robust data and computing infrastructures, the efficiency and effectiveness of many of their AI services, greater user-friendliness, security, stability, and seamless scalability. Given platforms’ multifaceted business interests and substantial resources, their infrastructures and services are tailored to cater to a wide user base, ensuring consistent upkeep. All this makes them attractive AI providers for publishers who can more strategically allocate their constrained resources, both financial and temporal, while mitigating the hazards often tied to in-house innovations such as high costs, a dearth of specialised knowledge, potential project failure, and protracted development timelines. Embracing platform AI thus affords news organisations a strategic edge, streamlining operations and bolstering their competitive stance in a swiftly evolving media landscape.

But there is no such thing as a free lunch. AI reshapes the dependency of publishers on platform companies by increasing their control over technological infrastructure, exacerbating existing dependencies in distribution, and introducing new dependencies in production, especially as generative AI is making inroads. This development plays out unevenly, both within and between news organisations and depending on the area concerned, with organisational differences more important than national ones and with large publishers likely in a better position to maintain some independence than smaller or local publishers. It is worth briefly dwelling here on the limitations. First, my sample is non-representative and focuses on the Global North and is thus not able to reflect the perspectives of media outlets from the Global South where these dynamics may play out differently (Nielsen and Cherubini Citation2022). Second, I do not consider the aims and motives of platform companies themselves—a fruitful area for further research which was, however, beyond the scope of this work.

One way to interpret the findings described here is through the lens of institutional isomorphism. Faced with uncertainty and the pressure to innovate, news organisations have in the past repeatedly looked towards those they see as “peers” for cues (Christin Citation2020; Kalogeropoulos and Nielsen Citation2018). Uncertainty and the fear of being left behind are also strong motivators to use platform companies’ AI: it is something the most successful among them do, something that delivers results and is therefore seen as worth imitating. The recurring movement of talent and the highly networked nature of these news organisations contribute to these forms of normative and mimetic isomorphism. However, AI itself—if interpreted as part of a large technological system in the vein of Thomas P. Hughes (Citation2012)—can also be seen as a coercive force that leads to more isomorphism across news organisations. With platform companies playing a dominant role in the complex web of AI development and deployment, the incentive structures that drive the general adoption of this technology across industries, and not just the news, almost automatically drive news organisations towards platforms’ AI—because it is the most efficient thing to do, considering that it bestows a competitive advantage on first-movers, which in turn provides incentives for other news organisations to model themselves on the “winners” (Schroeder Citation2020, 159)—but also because they have few realistic outside options. What makes this finding of institutional isomorphism interesting is that it applies across different media systems and various types of organisations. The socio-technical shaping of AI, and the central role of platform companies in this shaping, is largely general, not context dependent.

Nevertheless, it is important to mention that both media systems and organisational factors still act as mediating factors. The platforms discussed here are US-based, and so while institutional isomorphism works across the Western context and the media systems discussed here, this may not apply elsewhere and may also change with regulatory developments and shifting business strategies of both platforms and publishers. Future theoretical development and empirical work will need to take this into account. A second factor is organisational differences between publishers. Various scholars have shown that platform businesses’ power is relational, and that dependence on them varies depending on news organisations’ type, business model, and existing strategy (Nielsen and Ganter Citation2022, 160; Chua and Westlund Citation2022; Poell, Nieborg, and Duffy Citation2022). This extends to AI, too. While a full analysis is beyond the scope here, those publishers in the sample with more resources—technical, financial, human—and strong market positions seem to be somewhat better able to handle AI dependencies than those that lack these features. This is mostly because they can afford to invest heavily in R&D and product development and have been able to dedicate staff early on to investigate the benefits and pitfalls of AI and explore tactics that reduce dependencies. How these dynamics play out exactly and vary is a fruitful avenue for future work.

Institutional theory suggests that if a dominant set of actors can exert control over the conditions of other organisations, this can have far-reaching consequences.Footnote6 Simon (Citation2022) has argued that the central frame here will be news organisations’ autonomy. On the micro level, the main effect on news workers’ autonomy is that the use of platforms’ AI—systems that are often beyond their control—limits their discretionary decision-making ability (Örnebring and Karlsson Citation2019) and contributes to what Jones et al. (Citation2022) have termed the “intelligibility issue of AI in journalism”. A result could be the erosion of journalistic norms, values, and the quality of journalistic output, concerns that stand to intensify with the rise of generative AI. On the macro level, increasing platform control through AI likely further strengthens an already asymmetric power relationship (Chua and Westlund Citation2021), with a handful of platform companies holding outsized control over the direction of the technology and the conditions for its use. While publishers can innovate by using platforms’ AI, they can only do so within the constraints set by these companies, given that platform firms possess artefactual and contractual control over their technology and infrastructure (Simon Citation2022). Likewise, contentions that publishers “can always switch” to other services and thereby escape platform companies’ control are questionable given the heightened risk of vendor lock-in in an oligopolistic market environment, which limits choice and where forms of parallelismFootnote7 could expose publishers to price hikes, arbitrarily changing conditions, or, at worst, potential manipulation. Finally, in line with earlier findings by Wu et al. (Citation2019), AI increases platforms’ capacity to embed their own logics—of greater quantification and commercialisation (Christin Citation2020; Diakopoulos Citation2019)—in the journalistic field.

However, the complexity of the technology makes it difficult to comprehend its full impact. Just as in the parable of the blind men and the elephant, individuals only see limited aspects of the issue, hindering a comprehensive understanding and leading to an overall blurriness in the direction and consequences of AI’s use in the news and the growing dependency of news organisations on the technology and its providers. The greatest risk here is perhaps not so much a loss of control over the conditions of elements of journalistic work, but the structural weakening of the news as an institution (see also Jungherr and Schroeder Citation2021). As the socio-technical chain of gatekeeping gets reshaped by AI, “outsiders” in the form of platform companies gain more control over the technical elements of this process as they increasingly become gatekeepers to the technology, applications, and infrastructures journalism needs in the future. At the same time, the news’ use of their services ultimately also ends up improving these, given the learning that is central to AI, allowing platform companies to build better general-purpose AI products and services, which reinforces their hegemony in the space of AI even further, but also potentially allows them to take on tasks that once were central to the news itself. It is not hard to imagine an AI-informed news service from one of the major platform companies that is built on the back of their large, sophisticated AI infrastructure and is delivered to users through their various products—circumventing publishers entirely when it comes to the provision of basic information, or at least not sending attention and traffic their way any longer. Experiments with generative AI in search at Google and Microsoft hint at where the journey might go.Footnote8

The growing importance of AI mirrors the historical implementation of other large-scale technological infrastructures such as electricity, with the difference that the set of providers is much more limited this time. Not yet “baked in”, AI is rapidly becoming an indispensable component of news work. As such, it will likely further tip the scales in favour of platform companies given the features of the technology itself, which is difficult to master without the structural advantages in terms of technological capabilities and resources these firms have, and which publishers are unlikely to attain in the foreseeable future. Publishers thus will likely become more dependent on platforms for the infrastructure and expertise needed to implement these technologies.

This long-term structural dependency, however, comes at a cost, and the risks will have to be considered by news organisations when making strategic decisions regarding the integration of AI—especially as so-called generative AI gains momentum, a development in which platform companies play a pivotal role (Lehdonvirta Citation2023). Publishers are arguably not fully at the mercy of structural forces beyond their control and have in the past found creative ways of reducing their dependency on platforms, for example by modifying editorial practices or business strategies (Chua and Westlund Citation2022). It stands to reason that there will also be tacit and active resistance to the developments described here. Yet it will likely be highly challenging for publishers to keep up with the pace of innovation and development in AI technology set by platform companies while avoiding dependency on them at the same time. They can try to escape, but for now it is unlikely that they will.

Acknowledgments

This research would not have been possible without all the news workers at various organisations who were willing to lend me their time. I am also greatly indebted to Michelle Disser, Ralph Schroeder and Ekaterina Hertog for helping to shape this article from the onset, taking the time to read various drafts and providing excellent comments. Likewise, I am grateful to Maggie Mustaklem and Isabel Ebert as well as the team at the Reuters Institute for the Study of Journalism, particularly Richard Fletcher, Amy Ross Arguedas, Kirsten Eddy, Camila Mont’Alverne, Waqas Ejaz, and Sayan Banerjee for in-depth feedback on earlier drafts. My thanks further extend to Seth C. Lewis, Grant Blank, Natali Helberger, and Emily Bell who variously provided helpful comments or literature recommendations and Valeria Reséndez for inviting me to present this work at the University of Amsterdam. Finally, my great thanks go out to the anonymous reviewers who helped make this a much better paper.

Disclosure Statement

In the past, the author was employed on projects unrelated to this research at the Reuters Institute for the Study of Journalism which have received funding from Google and Facebook. I sit on the AI and Local News Steering Committee of the Partnership on AI, which is funded by philanthropic and corporate entities and for which I receive an honorarium. I have no conflicts of interest to disclose.

Additional information

Funding

The author would like to thank the Leverhulme Trust and the OII-Dieter Schwarz Scholarship for supporting his doctoral studies. As well, the author gratefully acknowledges support for this article from a Knight News Innovation Fellowship at Columbia University’s Tow Centre for Digital Journalism and the Minderoo-Oxford Challenge Fund in AI Governance. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of these bodies.

Notes

1 Included in this category are also training programmes or funding to develop or use AI in a news-context.

3 ETL stands for extract, transform, and load, the process of combining data from multiple systems into a single database, data store, data warehouse, or data lake.

6 With control broadly defined here “as the ability of an agent A to effect actions (A causes B to do something) and/or modulate actions (A shapes B’s actions) and shape the possibility to act (B can/cannot act without A) of another agent B” (see Simon Citation2022; Beniger Citation1986).

7 A situation where companies independently adopt a common course of behaviour without evidence of direct communication.

References

  • Ahmed, Nur, Muntasir Wahed, and Neil C. Thompson. 2023. “The Growing Influence of Industry in AI Research.” Science 379 (6635): 884–886. https://doi.org/10.1126/science.ade2420
  • Becker, Kim Björn, Felix M. Simon, and Christopher Crum. 2023. “Policies in Parallel? A Comparative Study of Journalistic AI Policies in 52 Global News Organisations.” SocArXiv. https://osf.io/preprints/socarxiv/c4af9/
  • Beckett, Charlie. 2019. New Powers, New Responsibilities. A Global Survey of Journalism and Artificial Intelligence. London: Polis. London School of Economics. https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/.
  • Beniger, James R. 1986. The Control Revolution: Technological and Economic Origins of the Information Society. Cambridge, Massachusetts: Harvard University Press.
  • Bodó, Balázs. 2019. “Selling News to Audiences – A Qualitative Inquiry into the Emerging Logics of Algorithmic News Personalization in European Quality News Media.” Digital Journalism 7 (8): 1054–1075. https://doi.org/10.1080/21670811.2019.1624185
  • Caplan, Robyn, and danah Boyd. 2018. “Isomorphism through Algorithms: Institutional Dependencies in the Case of Facebook.” Big Data & Society 5 (1): 205395171875725. https://doi.org/10.1177/2053951718757253
  • Christin, Angèle. 2020. Metrics at Work: Journalism and the Contested Meaning of Algorithms. Princeton: Princeton University Press.
  • Chua, Sherwin, and Oscar Westlund. 2021. “Advancing Platform Counterbalancing: Examining a Legacy News Publisher’s Practices of Innovation over Time amid an Age of Platforms.” https://gup.ub.gu.se/publication/310802.
  • Chua, Sherwin, and Oscar Westlund. 2022. “Platform Configuration: A Longitudinal Study and Conceptualization of a Legacy News Publisher’s Platform-Related Innovation Practices.” Online Media and Global Communication 1 (1): 60–89. https://doi.org/10.1515/omgc-2022-0003
  • Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, Massachusetts: Harvard University Press.
  • DiMaggio, Paul J., and Walter W. Powell. 1983. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” American Sociological Review 48 (2): 147–160. https://doi.org/10.2307/2095101
  • García Avilés, José Alberto, Bienvenido León, Karen Sanders, and Jackie Harrison. 2004. “Journalists at Digital Television Newsrooms in Britain and Spain: Workflow and Multi-Skilling in a Competitive Environment.” Journalism Studies 5 (1): 87–100. https://doi.org/10.1080/1461670032000174765
  • Hallin, Daniel C., and Paolo Mancini. 2004. Comparing Media Systems: Three Models of Media and Politics. Cambridge: Cambridge University Press.
  • Hansen, Anna Schjøtt, Natali Helberger, Tobias Blanke, and Rasa Bočytė. 2023. “Initial White Paper on the Social, Economic, and Political Impact of Media AI Technologies.” AI4Media - A European Excellence Centre for Media, Society and Democracy. https://www.ai4media.eu/reports/initial-white-paper-on-the-social-economic-and-political-impact-of-media-ai-technologies-2/.
  • Hughes, Thomas P. 2012. “The Evolution of Large Technological Systems.” In The Social Construction of Technological Systems, 45–76. Cambridge, MA: The MIT Press.
  • Humprecht, Edda, and Frank Esser. 2018. “Mapping Digital Journalism: Comparing 48 News Websites from Six Countries.” Journalism 19 (4): 500–518. https://doi.org/10.1177/1464884916667872
  • Jarrahi, Mohammad Hossein, Christoph Lutz, and Gemma Newlands. 2022. “Artificial Intelligence, Human Intelligence and Hybrid Intelligence Based on Mutual Augmentation.” Big Data & Society 9 (2): 205395172211428. https://doi.org/10.1177/20539517221142824
  • Jones, Bronwyn, Rhianne Jones, and Ewa Luger. 2022. “AI “Everywhere and Nowhere”: Addressing the AI Intelligibility Problem in Public Service Journalism.” Digital Journalism 10 (10): 1731–1755. https://doi.org/10.1080/21670811.2022.2145328
  • Jungherr, Andreas, and Ralph Schroeder. 2021. Digital Transformations of the Public Arena. Cambridge Elements. Politics and Communication. Cambridge: Cambridge University Press. https://www.cambridge.org/core/elements/digital-transformations-of-the-public-arena/6E4169B5E1C87B0687190F688AB3866E.
  • Jungherr, Andreas, and Ralph Schroeder. 2023. “Artificial Intelligence and the Public Arena.” Communication Theory 33 (2–3): 164–173.
  • Kalogeropoulos, Antonis, and Rasmus Kleis Nielsen. 2018. “Investing in Online Video News.” Journalism Studies 19 (15): 2207–2224. https://doi.org/10.1080/1461670X.2017.1331709
  • Lehdonvirta, Vili. 2023. “OII | Behind AI, a Massive Infrastructure Is Changing Geopolitics.” Oxford Internet Institute. https://www.oii.ox.ac.uk/news-events/news/behind-ai-a-massive-infrastructure-is-changing-geopolitics.
  • Mitchell, Melanie. 2019. Artificial Intelligence: A Guide for Thinking Humans. London: Pelican.
  • Moore, Martin, and Damian Tambini, eds. 2018. Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. Oxford: Oxford University Press.
  • Napoli, Philip M. 2014. “Automated Media: An Institutional Theory Perspective on Algorithmic Media Production and Consumption.” Communication Theory 24 (3): 340–360. https://doi.org/10.1111/comt.12039
  • Nechushtai, Efrat. 2018. “Could Digital Platforms Capture the Media through Infrastructure?” Journalism 19 (8): 1043–1058. https://doi.org/10.1177/1464884917725163
  • Newlands, Gemma. 2021. “Lifting the Curtain: Strategic Visibility of Human Labour in AI-as-a-Service.” Big Data & Society 8 (1): 205395172110160. https://doi.org/10.1177/20539517211016026
  • Newman, Nic. 2023. “Journalism, Media, and Technology Trends and Predictions 2023.” Reuters Institute Report. Oxford: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2023.
  • Newman, Nic, Richard Fletcher, Craig T. Robertson, Kirsten Eddy, and Rasmus Kleis Nielsen. 2022. “Reuters Institute Digital News Report 2022.” Digital News Report. Oxford: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2022.
  • Nielsen, Rasmus Kleis. 2018. “The Changing Economic Contexts of Journalism.” In Handbook of Journalism Studies, edited by Thomas Hanitzsch and Karin Wahl-Jorgensen, 2nd ed. London: Routledge. https://rasmuskleisnielsen.files.wordpress.com/2018/05/nielsen-the-changing-economic-contexts-of-journalism-v2.pdf.
  • Nielsen, Rasmus Kleis, and Federica Cherubini. 2022. Born in the Fire: What We Can Learn from How Digital Publishers in the Global South Approach Platforms. Reuters Institute Report. Oxford: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/born-fire-what-we-can-learn-how-digital-publishers-global-south-approach-platforms.
  • Nielsen, Rasmus Kleis, and Sarah Anne Ganter. 2018. “Dealing with Digital Intermediaries: A Case Study of the Relations between Publishers and Platforms.” New Media & Society 20 (4): 1600–1617. https://doi.org/10.1177/1461444817701318
  • Nielsen, Rasmus Kleis, and Sarah Anne Ganter. 2022. The Power of Platforms: Shaping Media and Society. Oxford Studies in Digital Politics. Oxford, New York: Oxford University Press.
  • Örnebring, Henrik, and Michael Karlsson. 2019. “Journalistic Autonomy.” In Oxford Research Encyclopedia of Communication, edited by Jon F. Nussbaum. Oxford: Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.829
  • Papaevangelou, Charis. 2023. “Funding Intermediaries: Google and Facebook’s Strategy to Capture Journalism.” Digital Journalism 0 (0): 1–22. https://doi.org/10.1080/21670811.2022.2155206
  • Poell, Thomas, David B. Nieborg, and Brooke Erin Duffy. 2022. “Spaces of Negotiation: Analyzing Platform Power in the News Industry.” Digital Journalism 11 (8): 1391–1409. https://doi.org/10.1080/21670811.2022.2103011
  • Schroeder, Ralph. 2020. “Weberian Social Theory: Rationalization in a Globalized World.” In The Oxford Handbook of Max Weber, edited by Edith Hanke, Lawrence Scaff, and Sam Whimster, 1st ed., 18. Oxford: Oxford University Press.
  • Seipp, Theresa Josephine, Natali Helberger, Claes de Vreese, and Jef Ausloos. 2023. “Dealing with Opinion Power in the Platform World: Why We Really Have to Rethink Media Concentration Law.” Digital Journalism 11 (8): 1542–1567. https://doi.org/10.1080/21670811.2022.2161924
  • Shoemaker, Pamela J., and Stephen D. Reese. 2013. Mediating the Message in the 21st Century: A Media Sociology Perspective. New York: Routledge. https://doi.org/10.4324/9780203930434.
  • Simon, Felix M. 2022. “Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy.” Digital Journalism 10 (10): 1832–1854. https://doi.org/10.1080/21670811.2022.2063150
  • Whittaker, Jason Paul. 2020. Tech Giants, Artificial Intelligence, and the Future of Journalism. Boca Raton, FL: Routledge.
  • Wu, Shangyuan, Edson C. Tandoc, and Charles T. Salmon. 2019. “When Journalism and Automation Intersect: Assessing the Influence of the Technological Field on Contemporary Newsrooms.” Journalism Practice 13 (10): 1238–1254. https://doi.org/10.1080/17512786.2019.1585198