
A Question of Design: Strategies for Embedding AI-Driven Tools into Journalistic Work Routines


Abstract

With the promise of AI, the use of emerging technologies in journalism has gained momentum. However, the question of how such technologies can be interwoven with newsroom practices, values, routines, and socio-cultural experiences is often neglected. This article investigates the ways in which AI-driven tools are permeating newswork and design strategies for blending technological capabilities with editorial requirements. We followed a multi-method approach to investigate the deployment of AI in news production at two London newsrooms: (1) a design ethnography at the BBC with journalists and technologists, and (2) interviews with journalists at The Times.

Our findings show that while journalists are generally open to trying AI-driven technologies that benefit their work, technologists struggle to integrate them into journalistic workflows. The consensus was that human judgement is required to make complex decisions in journalism and that journalistic values should be prioritised in AI tool design. We claim that AI tools need to fit with professional practices and values in journalism in order to be fully accepted as editorial tools. Embedding new technologies into journalistic workflows therefore requires close collaboration between journalists and technologists, and a sociotechnical design that blends in with work routines and values.

1. Introduction

On 8 September 2020, an op-ed column appeared in the UK’s Guardian newspaper. It began: ‘I am not a human. I am a robot. A thinking robot.’ It had been written by a new language generator called GPT-3, developed by OpenAI. The headline included the phrase ‘Are you scared yet, human?’ (GPT-3 2020). While this article was in fact extensively edited by a human editor, it foregrounds the issues that many journalists have with the rise in use of Artificial Intelligence (AI) by media organisations – that mistakes could be made by these tools or that journalists could be replaced altogether.

Increased developments in AI have driven the uptake of emerging technologies in newsrooms. Journalists can automate many of the tasks that make up news production – such as detecting or verifying data, producing graphics, publishing with selected filters, and automatically tagging articles. On the positive side, this could speed up procedures, allow stories to be covered that otherwise would not be, optimise real-time coverage, and build a relationship with the audience through personalised content. Previous research has shown that some journalists are upbeat about these opportunities, as automation could release them from menial tasks and free up their time and resources to conduct in-depth investigations that reflect the skills human journalists embody (Schapals and Porlezza 2020).

Other journalists do not always see the benefits so clearly. In a 2019 report on the use of AI in journalism, 24% of the media organisations surveyed said that cultural resistance was the biggest challenge to adopting AI (Beckett 2019). This resistance matches the rising (ethical) challenges of AI, such as the example of GPT-3 mentioned earlier (GPT-3 2020; Schilder 2020), which has caused controversy because of potential algorithmic bias towards a number of minority groups (Johnson 2021). Thus, even though AI-based tools could help journalists use their scarce resources more efficiently and increase job satisfaction (Lindén 2017), these technologies are still often met with resistance (Thurman, Dörr, and Kunert 2017).

While there are contrasting views on AI-based technologies, they will only become more common in news organisations (Lewis, Guzman, and Schmidt 2019). Some research has gained an understanding of how AI tools are interwoven with newsroom values, routines, and socio-cultural experiences, including the affordances of news automation (Sirén-Heikel et al. 2019) and its impact on ethics (Dörr and Hollnbuchner 2017) and values (Komatsu et al. 2020; Wu, Tandoc, and Salmon 2019). However, we do not yet have a comprehensive understanding of how these technologies are embedded in the everyday work of journalists, nor of the perspectives of those involved in the design of these tools. A rich sociotechnical understanding of how AI technologies influence the journalistic context in which they are situated, and vice versa, can help us explore the complex benefits and drawbacks of AI in newsrooms. We focus on the design and role of AI in the key journalistic task of newsgathering, while acknowledging that these technologies have been used by media organisations for other purposes, in particular the distribution and monetisation of news (e.g., to support personalisation, subscriptions, and moderation of user comments).

Diakopoulos (2019, 8) notes that it is still an open question as to “how […] humans and algorithms [should] be blended together in order to efficiently and effectively produce news information” – eventually leading to what he describes as hybrid journalism. Embedding new technologies into journalistic workflows requires a design that blends algorithms with newswork. This “blend” is difficult to achieve, as there is often a mismatch between the workflow of automated AI systems and the editorial production process. Moreover, there could be a potential mismatch with journalistic values such as transparency, accountability, and responsibility (Komatsu et al. 2020).

As a result, Guzman and Lewis (2020, 8) rightly state that “much has yet to be learned regarding how people conceptualize and interact with these more advanced technologies within the context of their daily lives.” In this regard, this study’s main goal is to increase our understanding of how newsrooms respond to and adopt AI-driven tools and automation processes. Our study draws upon previous work to understand the complex role of technology in newsrooms, in particular HCI work that explores approaches to prototyping new technologies and critically reflects on sociotechnical design challenges for newsrooms (Tolmie et al. 2017; Diakopoulos, De Choudhury, and Naaman 2012; Brehmer et al. 2014; Maiden et al. 2018), and Journalism Studies research that examines the challenges and opportunities for integrating emerging technologies into the unique social context of newsrooms, such as computational news discovery (Diakopoulos 2020) and automatic writing (Thurman, Dörr, and Kunert 2017).

We extend existing work by providing a new perspective on the role of AI in newsgathering from “the inside out”, exploring design challenges from the perspective of the technologists working at newsrooms and contextualising the appropriateness of AI tools from the perspective of journalists’ day-to-day work. This new perspective can inform the design of the next generation of AI technologies that provide greater autonomy and control for journalists. We analyse algorithms as more than computational artefacts or procedures, and as part of complex sociotechnical networks embedded in a particular cultural context (Kitchin 2017; Seaver 2017). Our findings provide an enriched understanding of the entanglements of AI in newsrooms, which is necessary to design systems that strike useful (and situational) human-machine blends. This work is timely due to the increased adoption of AI in newsrooms and the potential risk of inappropriate configurations between journalistic workflows and algorithms, which could reduce both the voice of journalists and their agency in relation to the news stories created (Schapals and Porlezza 2020).

We used a multi-method approach to investigate the role of AI in news production in two London newsrooms: a design ethnography with journalists and technologists at the BBC, and interviews with journalists at The Times. This article therefore presents an empirical account of the experiences of technologists leading the development of new technologies in major UK newsrooms, as well as journalists’ and editors’ ideas about the design and usability requirements of such AI-based tools. Overall, in our study we wanted to answer two research questions:

RQ1: How does reconfiguring design efforts for hybrid journalism assist the effective embedding of AI tools in the journalistic workflow?

RQ2: What are the design strategies for blending technological capabilities and editorial requirements?

This study investigates how journalists respond to and adopt AI-driven tools and automation processes and how journalistic values and experience may be integrated into emerging technologies. In particular, we report the design strategies used for blending technological capabilities and editorial requirements: prioritising editorial voices by opening up new roles for data curation and editing; embedding technology into journalistic workflows by shaping data interventions; and crossing disciplinary boundaries to sustain slow (but perhaps long-term) socio-cultural change. Our findings and reflections consider how we might reconfigure design efforts to allow for hybrid journalism – that is, the effective embedding of AI-based tools in the journalistic workflow.

2. Literature Review

The discussion of emerging technologies and their impact on digital journalism is anything but new: scholarly research ranges from publications emphasising a technologically deterministic perspective (Pavlik 2013) to those stressing the relevance of professional, organisational, economic, and social factors (Anderson 2013; Ekdale et al. 2015). Nowadays, journalism finds itself amidst a new wave of technological innovation, as AI technologies employing Machine Learning (ML) and Natural Language Processing are becoming more pervasive in newsrooms (Thurman, Lewis, and Kunert 2019). However, even if AI plays a crucial role in the production and distribution of digital journalism, we should consider it beyond its technological infrastructure (Zelizer 2019). Questions of how technologies can be interwoven with journalistic routines to ensure an accountable use of algorithms therefore become paramount, because “algorithms are judged, made sense of and explained with reference to existing journalistic values and professional ethics” (Bucher 2018, 129).

Journalism scholars refer to the concept of hybridity to explain ongoing transformations in journalism and the news industry (Witschge et al. 2019), particularly with regard to journalism’s changing work practices in terms of “a profession in a permanent process of becoming” (Deuze and Witschge 2018, 177). Hybridisation as a concept has been applied to several fields of enquiry within journalism studies, such as the blurring boundaries between news and entertainment (Bødker 2017), hackers and hacker culture (Lewis and Usher 2016; Porlezza and Di Salvo 2020), the datafication of journalism (Porlezza 2018), overlapping norms and practices between journalists and activists (Russell 2016), and entrepreneurial journalists, who bring about new ideas, practices, and principles (Ruotsalainen, Hujanen, and Villi 2021). In addition, hybridity can be helpful in understanding new journalist-machine relations, specifically as newsrooms have become sites of human, machine, or combined human-machine news production.

The concept of hybridity is not free from criticism, specifically when it comes to its seeming vagueness. Witschge et al. (2019, 2) criticise hybridity as a one-size-fits-all term that can be applied to “denote everything that is complex as hybrid.” This is one of the reasons why hybridity as a theoretical concept remains underdeveloped. Baym (2017, 12) describes systemic hybridity as a condition in which the “multiple technological affordances, economic agendas, and institutional structures of media production and distribution are melding in uncertain ways.” Similarly, Chadwick (2013, 4) describes hybridity in terms of changing, “complex and ever-evolving relationships based upon adaptation, interdependence and simultaneous concentrations and diffusions of power.” In our contribution, we understand hybridity in a systemic way, as a hybrid social context in which numerous relationships exist and where newswork is influenced by numerous dimensions – be it other journalists, the audience, or technology (Lewis and Westlund 2015). Diakopoulos (2020) focused specifically on this issue by asking the central question of how to blend humans and algorithms – what he dubbed hybrid journalism.

Understanding the design challenges of hybrid journalism is vital, specifically when it comes to the values that are built into AI systems (Diakopoulos 2020; Komatsu et al. 2020). Previous research has pointed out that not many news outlets have a clear strategy for implementing AI tools (Beckett 2019). There is also no clear understanding of how the teams who develop AI-based tools should be composed. In most news organisations, teams seem to be structured in an interdisciplinary way (Beckett 2019, 40), but there is still no clear direction. This is confirmed by a recent study that looked into the deployment of ML in a Swedish newsroom: Stenbom, Wiggberg, and Norlund (2021, 14) state that “there is a pressing need to foster interdisciplinary collaboration around communicative AI, where the skills of stakeholders involved in efforts employing the emerging technologies are shared with and understood by collaborators.”

If interdisciplinarity is not taken into account, it can seriously limit an organisation’s ability to develop and leverage computational approaches in the newsroom. Lewis and Usher (2014, 391) echo this cautionary tale: bringing together journalists and technologists is anything but simple, as it “requires significant, coordinated, and sustained effort,” and “the barriers between each field’s understanding of the other are real.” These authors warn against unlimited enthusiasm and boosterism (see e.g., Marconi 2020), because even if the fusion of technologists and journalists promises interesting innovations, these innovations still need to be implemented in a complex social context where different (sub-)cultures collide.

In addition to interdisciplinarity, another key question is: to what degree do people still have agency in relation to AI technologies (Jones 2019)? Control over a system means that users can solve their problems or undertake a given task, and that the system is both transparent (how things are done) and explainable (why things are done). Users need to feel in control of the system, which means that sufficient information is provided to them. Examples from other areas, such as self-driving cars, demonstrate the importance of transparency and of the driver’s awareness of how well the automation is currently handling the situation (Carsten and Martens 2019).

According to Diakopoulos (2020, 245), a co-evolution between journalism and technology is necessary, as algorithms and people should complement each other to “boost the productivity, quality, and scope of journalistic efforts.” This becomes even more pressing as most AI-driven tools are developed by third parties with no immediate relation to journalism, which entails not only risks of misalignment between the values of the tools and those of journalists (Komatsu et al. 2020), but also – if wrongly used – of bias and detrimental effects for the public sphere (Helberger et al. 2020). Such a co-evolution, however, entails a fundamental re-thinking of organisational structures with regard to innovation (Usher 2016), and a reconceptualisation of the overall design of AI-driven tools so that journalists feel comfortable using them and confident in their outputs. In other words: journalists need to trust the tools to facilitate adoption (Diakopoulos 2020, 247). This study therefore enriches the underdeveloped concept of hybridity, focusing on the central question of design when it comes to the development and implementation of AI-driven tools in the newsroom – an issue that needs further scholarly scrutiny.

3. Methods

We pursued a qualitative, multi-method approach to study the challenges involved in the design and deployment of AI-driven tools, and their fit with journalists’ professional values and practices.

3.1. Fieldwork in Newsrooms

We visited two London newsrooms that provided us with complementary perspectives on British journalism. First, we undertook a design ethnography at the BBC, a public service broadcaster, involving (a) journalists and editors who produced news content (hereafter shorthanded as journalists) and (b) technologists who designed and developed in-house technologies for the newsroom. Second, we organised interviews with journalists at The Times, a privately-owned broadsheet. Overall, we collected a sample of 22 participants, including participatory observations with five journalists and interviews with 17 journalists and technologists. Table 1 provides an overview of our participants and data collection methods.

Table 1. Participant codes with details of data collection and average study duration.

Data collection was guided by a semi-structured protocol that probed participants about their work practices and technology usage, with a focus on how algorithms fitted into newswork (e.g., how do you go about researching stories? What tools do you use and how?). Technologists were also asked about how new technologies fit into newsroom culture and values. We focused on participants’ interactions with a diversity of tools rather than narrowing down the data collection to particular AI methods (e.g., audience data or search engines). This helped us to collate a diversity of opinions and perspectives, and to analyse AI as sociotechnical systems, focusing on entanglements between people, technology, and work practices (Button and Harper 1995; Seaver 2017). Details of the sample and methods applied are described below.

3.1.1. Design Ethnography at the BBC

We conducted a design ethnography at the BBC newsroom in London between September 2019 and April 2020 with 16 participants. Design ethnography is an ethnomethodologically informed method that offers a practical, design-focused approach for “understanding and conveying the sociality of work so that designers might better respond to the challenge” (Crabtree, Rouncefield, and Tolmie 2012, 184). This approach gave us insights into work practices and ecologies around technologies, and kept us focused on interactional work, which is useful for grounding the design of computational systems. We combined participatory observations and on-site interviews to produce a rich account of the day-to-day work at the newsroom. Participants were keen to talk about their work and encouraged the researcher (first author) to “hang around” the newsroom – inviting her to walk around Broadcasting House, introducing her to colleagues, inviting her to join meetings, and having cups of tea. We recruited participants using snowball sampling and divided them into two categories according to their role in the newsroom: journalists and technologists.

As detailed in Table 1, the journalists category included nine journalists and two editors across different BBC news products: news podcast (PE1, PJ2, PJ3), language services (PJ4, PJ8-11), monitoring (PJ6, PJ15) and the news desk (PE7). The sample ranged from novice journalists with a couple of years of experience to seasoned editors with more than 25 years in the business. Five of these participants agreed to take part in participant observations, where the researcher shadowed them and asked questions while they performed work tasks (Laurier 2010) such as writing stories, monitoring media, and planning features. The remaining six participants took part in semi-structured interviews to investigate the fit of technology into their everyday work. The interviews lasted between 30 and 60 minutes and took place at the participant’s workstation, so participants often used their computers to illustrate their responses.

The technologists category included five participants who had a strategic role in leading the design of cutting-edge technologies for the BBC. We chose a small but representative sample, as this organisation pioneers media research and innovation, for both journalists and audiences, as part of its long history of innovation and public service remit (Tambini 2021). We interviewed participants working on audience content recommenders (PT5, editorial lead; PT16, data science lead), language services (PT14, research lead), and audience data products (PT12, PT13, project managers). As each participant had a unique role and background, we asked about their strategies for designing and deploying technologies in the newsroom. We specifically recruited technologists as their responses gave us particularly useful insight into the current and future state of hybrid journalism.

3.1.2. Interviews at The Times

As a second step, we carried out six semi-structured interviews at The Times in London in October 2019 to reflect on the role and design of technology in news production. We interviewed one editor and five seasoned journalists with a variety of specialisms, including leader writing (PJ17), data journalism (PJ20), business news (PE19, PJ22), and investigative journalism (PJ18, PJ21). The interviews lasted around 25 minutes and took place at the newsroom. No observations were carried out at The Times, as permission was only granted for interviews. Participants were recruited with the help of an internal gatekeeper, who arranged the interviews for us. The purpose of these interviews was to extend our understanding of journalistic practices, as we sought more general statements on the design of hybrid technologies. Combined with data from BBC journalists, the evidence gathered at The Times served to complement and contextualise our findings on journalists’ views of AI technology in a reliable and naturalistic way, and provided an additional means of triangulation by ensuring the findings extended beyond the idiosyncratic practices of one newsroom.

Our sample size was decided (1) in terms of pragmatics, as newsrooms are restrictive workplaces and securing access was often complicated, and (2) in terms of “information power” (Malterud, Siersma, and Guassora 2016), where a rough sample size was determined a priori and the final sample size was settled after an initial analysis of The Times interviews, at which point we deemed that a suitable level of insight had been gained to address our research aims. While limited access to participants may, in theory, have restricted our ability to make sampling decisions based on information power, the openness of the available participants and the rich insights gained meant this was not the case.

3.2. Data Collection and Analysis

We captured data via observations (typed up in field notes), photos, and audio material whenever appropriate. The audio recordings were transcribed, and the data was analysed using inductive thematic analysis. This way of analysing data can be seen as a foundational method for qualitative analysis (Braun and Clarke 2013), particularly when it comes to “identifying, analyzing, organizing, describing, and reporting themes found within a data set” (Nowell et al. 2017, 2). We started the analysis by organising the data into rough codes, identifying recurrent subjects in participants’ answers. Afterwards, we divided the codes by sample category. First, we refined the codes that involved responses from technologists. Second, we analysed codes from journalists, integrating BBC and The Times responses and looking for relevant practices and understandings around AI technologies.

Next, we cross-referenced the codes generated from all participants, overlapping the responses of technologists with the practices described by journalists (and vice versa). For instance, a code where PT12, a technologist, mentioned a specific tool for audience analytics was matched with an instance where PJ6, a journalist, demonstrated the same tool in use during the observations. This step allowed us to compare and contrast perspectives between sample categories, as we often merged or split codes. At the end, we had generated a total of 50 codes organised into three groups: 11 unique codes for technologists (e.g., R&D tensions; human autonomy), nine unique codes for journalists (e.g., internal communication; information access), and 30 overlapping codes (e.g., tech training; UIs’ pain points). The overlapping codes between technologists and journalists (both from the BBC and The Times) provided a rich account of the current state of hybrid journalism, allowing us to put together two different yet complementary perspectives. This article concentrates on this overlap, focusing on the challenges and opportunities of designing and deploying AI technologies in the newsroom from the perspective of technologists, complemented and contextualised with the practices reported by journalists.

As a final step in our data analysis, we undertook several in-house rounds of ideation across our interdisciplinary team to look for themes in the groups of codes. The main thematic foci present in the data are (a) prioritising editorial voices over automation, (b) embedding AI into journalistic workflows, and (c) crossing disciplinary boundaries to sustain change. Each of these themes was associated with related sub-themes that represented five design strategies for the creation of AI tools.

3.3. Limitations

While the interviews with journalists at The Times confirmed our findings at the BBC and provided evidence of generalisability, we are cautious in our claims of generalisability overall, as we did not have access to technologists beyond the BBC. Other limitations include: focusing on only two (albeit different) newsrooms, which provides some indication of the generalisability of our findings but does not mean that they apply to all types of newsrooms; conducting a fast-paced design ethnography, which nonetheless yielded rich design-focused data, as suggested by Crabtree, Rouncefield, and Tolmie (2012); and focusing primarily on insight provided by technologists and supported by journalists (rather than the other way around). This last focus proved useful in many respects, providing a much-needed emphasis on the challenges and opportunities of deploying AI technologies in newsrooms, but it may also over-represent the technologists’ views and design strategies compared to those of journalists. We were therefore careful not to assume that the technologists’ views were necessarily shared by the journalists, and weighed up the evidence in the data to provide a faithful analysis that focused primarily on the findings from technologists without neglecting the views of journalists (across both organisations, the BBC and The Times).

4. Findings

Journalists in our sample had similar perceptions of AI, being aware of how algorithms have permeated their workflows but careful not to consider them a replacement for journalistic tasks. This is a notable finding, as we sampled two widely different media organisations in terms of culture and business models, as well as participants with different roles and levels of expertise. However, it was the responses of technologists that provided us with a granular understanding of the challenges and opportunities for AI design in the newsroom – from idea generation to system deployment. Technologists’ responses reflected a broader sociotechnical understanding of journalism and design, focusing on journalistic workflows and values rather than on merely technical or aesthetic considerations. We now present our findings, contextualised with quotes from participants as appropriate, where PEx represents the editors in our sample, PJx the journalists, and PTx the technologists.

4.1. Prioritising Editorial Voices Over Automation

Editors in our sample welcomed AI technologies that support their editorial tasks and help them make more informed decisions. PE1 was optimistic that AI technologies could help overcome one of the biggest challenges at the BBC, which he described as “how do you broaden out the story choice?” As he further explained, while there have been changes when it comes to technology in the newsroom, there have not been changes “in the [journalistic] job that is done.” PE1 explained that editors are required to “make a snap judgement” to decide what “makes a story” based on their knowledge and expertise – but that these decisions are often narrowed down as editors have “preconceived notions of what a story is.” Similarly, PJ17 indicated that she was pushing back against the restrictive editorial tone at The Times, stating that she had “an agenda” for writing more varied leads and thus appealing to a wider readership.

PE1 said that news aggregators could help editors to make these decisions in a way that expands their access to information: “I don’t know anything about algorithms and computers, but I started to conceive of a world of public service algorithms which would genuinely try to open your [filter] bubbles rather than put you more into them.” However, he felt that this challenge was not a “technological problem” but rather a “cultural” one, as it was up to the editors, not the technology, to broaden out their story choices. While journalists were clear about the positive impact of technology on editorial tasks, they were cautious about relinquishing control to algorithms, such as automated UGC verification, and felt that “probably there isn’t a substitute” (PE1) for journalistic expertise, as it was unclear “how a machine could ever do that” (PE7). Technologists agreed with this view – their responses emphasised the strong human element in journalism and were thus opposed to full automation. Instead, they expressed a commitment to preserving editorial voices and values, and saw AI as a means of enhancing decision-making with relevant data. The two design strategies that prioritised editorial voices over automation were (1) keeping editorial voices in the loop and (2) assimilating journalistic values into AI design.

4.1.1. Design Strategy 1. Keeping Editorial Voices “in the Loop”

Technologists emphasised that their aim is to automate carefully selected journalistic tasks – not to relinquish decision-making to the machines. The consensus was that while journalists could use technology to speed up their work, the prioritisation of editorial control makes full automation (e.g., automatic story writing) unlikely at the BBC. PT12 explained that in their audience engagement system there is no automatic decision-making, as “it is up to the human to look at that [data] report and interpret it”, and that deriving actionable insights from data should be considered in great detail, as these could clash with editorial decisions. Similarly, PT5 described content curation as a complex process, mixed with “something of a secret sauce” – what they saw as journalistic experience or instinct. While technologists can “deconstruct” content curation to a certain extent and algorithms can be fine-tuned to editorial guidelines, human judgement is required to make decisions. PT5 characterised ML as a “centaur”, where “all the power and leverage” happens in the horse (machine) part, but the human part makes decisions “up against the most problematic content.” PT14 argued that the “uniqueness” of the BBC in terms of its public service value excluded full automation, as the organisation had a “red line about having a human in the loop.”
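To make this “centaur” division of labour concrete, the sketch below illustrates one common human-in-the-loop pattern: a model scores incoming content, routine items proceed automatically, and anything the model flags as potentially problematic is queued for an editor’s final decision. This is our own illustrative reconstruction, not a description of any BBC system; the Item structure, risk scores, and threshold are hypothetical.

```python
# A minimal, illustrative human-in-the-loop triage sketch (our own
# reconstruction, not a BBC system): a model scores items, and anything
# above a risk threshold is escalated to a human editor for the final call.
from dataclasses import dataclass


@dataclass
class Item:
    headline: str
    risk_score: float  # hypothetical model-estimated risk, in [0, 1]


def triage(items: list[Item], threshold: float = 0.2) -> tuple[list[Item], list[Item]]:
    """Split items into machine-routine and editor-escalated queues."""
    routine = [i for i in items if i.risk_score < threshold]
    escalated = [i for i in items if i.risk_score >= threshold]
    return routine, escalated


batch = [
    Item("Local council approves park renovation", 0.03),
    Item("Unverified claims circulate about election results", 0.87),
]
routine, escalated = triage(batch)
print(f"{len(routine)} routine item(s); {len(escalated)} queued for editorial review")
```

In such a pattern the machine provides “the power and leverage” of scale, while decisions about the most problematic content remain with the human editor.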

4.1.2. Design Strategy 2. Assimilating Journalistic Values into AI Design

Technologists and journalists were aligned in their priority to respect the culture and values of the newsroom. Technologists in our sample fully embraced journalistic values, which, as stated by PT13, are “at the heart of whatever we do.” In particular, technologists expressed their commitment to following the remit of the BBC to provide a public service that informs, entertains, and educates audiences. For technologists, the focus on audiences was a key value in designing technologies that guide journalists with “more data to help enable their creative decisions” (PT13). This was also a reflection of the BBC’s business model and organisational goals, which are based on providing a public service rather than on advertising or optimising traffic. For example, PT16 mentioned that they opted for “less performing” models on a technical level that were “more compliant with BBC values” and policies. This commitment to journalistic values was expressed in practical approaches to AI design. PT16 was working in “very strict collaboration with editorial staff” to define “business rules” for their models and was creating an internal “checklist of items for ML principles in public service” that aimed to “reflect the public service mission of the BBC” in the algorithms.
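As an illustration of what encoding such “business rules” might look like in practice, the sketch below applies editorial constraints as a post-processing filter on a recommender’s candidate list, so that compliance with editorial policy takes precedence over raw model score. This is our own hedged reconstruction, not the BBC’s implementation; the rules, fields, and ranking logic are assumptions for illustration only.

```python
# A hedged sketch of how editorial "business rules" might constrain a
# recommender: hypothetical hard rules filter candidates first, and only
# compliant items are then ranked by model score. Illustrative only.
from dataclasses import dataclass, field


@dataclass
class Candidate:
    title: str
    score: float                      # relevance score from a hypothetical model
    topics: set[str] = field(default_factory=set)
    source_verified: bool = True


# Hypothetical stand-ins for an editorial checklist; each rule returns
# True when a candidate complies with the policy it encodes.
BUSINESS_RULES = [
    lambda c: c.source_verified,                  # provenance must be established
    lambda c: "unvetted-ugc" not in c.topics,     # exclude unmoderated UGC
]


def recommend(candidates: list[Candidate], k: int = 3) -> list[Candidate]:
    """Editorial compliance trumps raw score: filter first, then rank."""
    compliant = [c for c in candidates if all(rule(c) for rule in BUSINESS_RULES)]
    return sorted(compliant, key=lambda c: c.score, reverse=True)[:k]
```

A design of this shape would also explain PT16’s trade-off: a “less performing” model can still be preferred when hard editorial constraints, rather than raw accuracy, define what counts as an acceptable output.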

Technologists felt responsible for respecting journalistic values, but expected the same from journalists. Therefore, when it came to integrating values into newswork, technologists said that it is still journalists (not technology) who are accountable for applying them to stories. Furthermore, technologists felt that it was up to journalists to decide how to use technology in their stories. As mentioned by PT12, journalists decide which stories should be published, trusting their “own gut and skills”. PT14 explained that “when creating technology for editorial colleagues, we are almost always focused on creating a tool to get a specific job done, rather than enshrining editorial judgements such as impartiality in the tool itself.” As an example, he said that while journalists frequently use Google Translate, he expected that translations would be double-checked as part of a culture of “never taking anything at face value.”

4.2. Embedding AI into Journalistic Workflows

When it comes to using AI-driven tools, journalists placed most trust in their own skills for doing journalism, rather than in their technical understanding of algorithms. AI was thus considered a desirable tool but not a replacement for journalistic work. As expressed by PE7, while they “use quite a lot of technology” in news production and it is not “a weird thing” to be using AI support, “nothing still beats the human look across the story.”

Journalists across the two newsrooms used data-led technologies extensively. PJ21 expressed that he used Google Search “every day” and “totally relied on it”, even if he did not know “what the implications were for the algorithms” or the manner (or extent) to which Google Search featured AI. PJ18, an investigative journalist at The Times, also used Google Search regularly but had concerns about how Google’s algorithms might skew his research by “promoting” or “hiding” content as part of its business model. Accordingly, participants argued that Google (and other algorithmic tools) were just a “first check” (PJ18) rather than a substitute for journalistic exercise. As a point of comparison, Factiva, an in-house search engine at The Times, was also referred to as a starting point for research, confirming that it was journalistic skill, not trust in algorithms, that drove their research.

While the use of AI technologies was therefore common, BBC technologists mentioned that journalists mostly stick to tools that they feel comfortable using – as a way of retaining control and saving time – and thus lamented a low adoption rate of new tools. As PT5 mentioned, “very few journalists have enough time to get a login for something and then learn how to use it”. PT14 candidly mentioned that adoption into everyday use is “possibly one of the hardest bits” of his work, as there is frequently resistance to trying out unfamiliar systems. In our study, journalists described both practical and cultural reasons for the low uptake of in-house tools. Practical reasons were related to access to technology. PJ11 considered Google “the most useful and widely used journalism tool” because, compared to other ad-hoc tools (i.e., those developed in-house or by academics), Google was readily available without installing software or needing the “boss’s permission” to download it onto newsroom machines. More than looking for trusted algorithms, journalists focused on using tools that were readily available and helped them perform their journalistic duties.

Our participant observations made evident some cultural reasons behind slow adoption – PJ6 confirmed that lack of time constrains adoption, as these tools often required journalists to develop new skills. She preferred tools that had “an idiot’s guide” (a signposted set of steps for using the tool where “you really can’t go wrong”) rather than those requiring her to “go off the rota for two hours” to attend a training session. PJ6 demonstrated the use of an in-house audience engagement system (developed by PT12’s team), where she found out the number of views a certain story had. However, her team did not do “a lot of analysis” with it, as they felt that deciding how to “change” stories based on more advanced data insights would require “too much” effort for the type of stories they produced. In short, only some of the audience data insights were considered useful by journalists. Furthermore, PJ6 reflected on how sharing data insights within teams raised the question of “what incentives do you have for journalists to give up their scoops to other teams,” expressing that “there is no way on earth they are ever going to do that. I cannot see that happening.”

The role of technologists was to facilitate adoption of emerging technologies in consideration of both practical and cultural constraints. Participants described two design strategies: (3) intervening workflows with relevant data and (4) lowering adoption thresholds.

4.2.1. Design Strategy 3. Intervening Workflows with Relevant Data

In general terms, technologists were confident in the usefulness of their systems, but still struggled with how to embed system outputs into journalists’ workflows. PT12 stated there was a “thirst” for data-led technologies in the newsroom, but questioned how to design them in order to get people’s attention and how to supplement workflows with “meaningful data to help them do their job better.” His design strategies revolved around making journalists familiar with the system outputs (in PT12’s case, audience engagement data) by purposefully inserting data into workflows and facilitating conversations around it. For example, when first introducing a system, PT12 would attend early morning meetings to present audience data – which, in turn, would inform the stories for the day. Afterwards, journalists could access the data through their system. Beyond that, large screens displaying audience data were introduced across the newsroom. In his view, designing interventions around how to deploy AI systems was important for integrating data “into the culture of our newsroom and making it part of the day-to-day and the daily conversation.” However, as hinted by PJ6 in the section above, PT12 acknowledged that it is the usefulness of the data – not the system design in itself – that predicts its adoption.

Similarly, PT14 mentioned that journalists tend to be critical of the potential usefulness of tools in their own workflow (“why do I need this?”, “this is of no use to me”) or wary that they might lose time (“this is going to complicate things”). From PT14’s point of view, what tends to work is when a tool “gains its own momentum,” as journalists find out that it is useful for creating stories. He gave the example of an “extremely simple” transcription tool that got picked up because it was useful and required little effort. This tool became “a really good example of where something quite simple makes a really big difference and people just start using it.”

4.2.2. Design Strategy 4. Lowering Adoption Thresholds

Technologists were conscious that certain design decisions could facilitate or hinder the adoption of AI tools, and thus invested considerable time in user research to figure out the most suitable way to create a system. Design processes often involved interdisciplinary teams and employed human-centred design techniques for identifying the practical and cultural challenges in the newsroom. PT12 and PT13 worked closely with internal engineering and User Experience (UX) teams to discover workarounds to the restrictive digital environments in newsrooms and to gain a systemic understanding of “individual users and what challenges they face” (PT12). According to PT13, the inclusion of UX methods in AI design, such as shadowing and iterative prototyping, helped to “sketch out what a typical day looks like when they make decisions so you can really understand their workflow” and, ultimately, to design tools that solve “a genuine problem for journalists.”

4.3. Crossing Disciplinary Boundaries to Sustain Change

Journalists and technologists agreed on the need to include more data-led technologies to ensure that the newsrooms remained competitive, not only in comparison to other news providers but in the digital landscape more broadly. Participants (from the BBC and The Times) were invested in serving audiences by telling stories that are “deeply researched” (PE19) in an engaging way. A prominent example of AI use at the BBC was using data analytics to understand the audience and enhance story selection based on specific publics. According to PJ11, data can help journalists “make better commissioning decisions” when it comes to competing with digital platforms with “fine-tuned algorithms” that are “ruthlessly pursuing people’s attention,” such as Spotify and Netflix.

Technologists agreed with the view that AI solutions support wider organisational goals on audience engagement. However, rather than brute-forcing adoption or monitoring performance, technologists assured us that their focus was on communicating the value of AI to journalists. Technologists were interested in tackling the challenges associated with deploying (or embedding) AI systems in complex sociotechnical environments rather than the technical challenges of developing them. Journalists were thus involved throughout the AI lifecycle – from conceptual design to deployment – to weave in their needs. One core strategy for this was (5) creating design partnerships to promote value exchange.

4.3.1. Design Strategy 5. Creating Design Partnerships to Promote Value Exchange

The BBC has positioned journalists not only as users but as creators of AI-driven tools, as journalists directly influence and shape their design. Prominently, two of the technologists in our sample, PT13 and PT5, were seasoned journalists who had switched careers from journalistic posts to technical ones, acting as conduits between the needs of journalists and the technical teams. Their role was to become embedded in technical teams to develop a deeper understanding of technology. These participants used their experiences in journalism to mediate conversations with stakeholders around the newsroom, often performing the design activities of translating and interpreting social needs (Zingale 2016). For PT13, an important part of her role was to “translate what the tool does in a really easy to understand and meaningful way so it’s obvious to journalists why this helps them.” Likewise, PT5 acted as a translator with data scientists by attending stand-up meetings in order to build trust and a strong working relationship with technical people.

Another way of crossing boundaries was building partnerships with journalists across the newsroom, approaching AI design as an ongoing conversation between the technical and editorial teams. PT5 mentioned she had “very lively conversations” with key stakeholders, using artefacts such as “explainability documents” to discuss the purpose of tools and how models work. These conversations were supported by embedding content editors into the data science team for a period of time, in order to bring their knowledge into the algorithm design and facilitate later adoption.

5. Discussion

In 2019, only 37% of newsrooms had a dedicated AI strategy (Beckett 2019), which, coupled with ‘automation anxiety’ (Akst 2013), has led to resistance by journalists to technological advances for fear of losing their status or jobs. Context plays a crucial role in adopting new technologies (Broussard et al. 2019; Bastian, Helberger, and Makhortykh 2021); for instance, journalists may be more interested in tools that free them from repetitive tasks than in marketing analytics. In addition, some newsrooms have been more open than others: the BBC, where our research took place, has repeatedly welcomed researchers to study the diffusion of innovation within the corporation over several decades (e.g., Cottle and Ashton 1999; Wallace 2013; Hannaford 2015; Jones and Jones 2019).

Our ethnography in the BBC newsroom found that some of the concerns voiced by journalists can be alleviated when design issues surrounding AI are examined from a broader, sociotechnical perspective. This can help ensure meaningful and useful tool development. Specifically, we examined how journalists respond to and adopt AI-driven tools and automation processes, how journalistic values and experience were integrated into emerging technologies, and the design strategies for blending technological capabilities and editorial requirements. Our findings provide an account of how AI-driven technologies are being implemented in newsrooms, and how these tools have altered newswork more broadly. Below we suggest three empirically informed design directions of best practice to advance the discussion around hybrid journalism and the design of AI-driven tools for the newsroom.

5.1. Support New Editorial Roles

Journalists employ a range of considerations to judge what constitutes news (Gans 1979). These considerations are extremely difficult to operationalise because they vary from news story to news story. Moreover, journalists themselves often find it difficult to articulate such news values beyond instinctive and creative approaches – what they would see as a “self-evident and self-explaining sense of newsworthiness: the journalistic gut feeling” (Schultz 2007, 190). Technologists acknowledged the impossibility of reverse engineering this “secret sauce” approach to journalism. Thus, in the most successful AI tools to assist newswork, they moulded technology to amplify these journalistic approaches, rather than replace or change them, serving to keep editorial voices in the loop (Design Strategy 1). With this in mind, we suggest that journalists should not only play larger roles in shaping the design of technologies, but also assume new editorial roles to retain control, ownership, and authority over data – such as making complex “human” editorial decisions while AI “editors” do the more routine tasks of research and synthesis. For instance, PT5 and PT16 reported partnerships where journalists joined the data science team for a few weeks in order to infuse their knowledge into algorithm development.

These partnerships could facilitate participatory approaches to design thinking and “doing” that could lead to shared ownership of the products created (Flint and Blyth 2021; Gutierrez Lopez et al. 2019). Having participatory approaches to design where editors “supervise” ML algorithms could help to fine-tune them with highly contextualised guidelines. Participatory approaches to design are now increasingly common in HCI practice and are considered a key way of prioritising the voices of potential users of technology over those of developers (Bannon, Bardzell, and Bødker 2018). They have also been used successfully to inform algorithm design (Saxena and Guha 2020), highlighting the feasibility of journalist-data scientist partnerships. We therefore suggest that the common ground for automation between technologists and journalists should be participation and journalistic values that hold news organisations accountable. This could serve to assimilate journalistic values into AI design (Design Strategy 2).

5.2. Stage Data Interventions

Perhaps unsurprisingly, we found that journalists are sceptical of using new technologies for newswork – even those built in-house. To overcome this, technologists at the BBC staged “data interventions” to communicate to journalists how system outputs, like content recommendations or web analytics, could be useful in enhancing their workflows, thereby intervening workflows with relevant data (Design Strategy 3). Our research suggests that journalists are not interested in technology in and of itself, but in what system outputs could do for their work. Previous research has found that innovative collaborations can occur between technologists and journalists when there are shared interests and directions, as in hackathons (Boyles 2020), open-source systems (Lewis and Usher 2014), and blogging (Nielsen 2012). Similarly, data interventions can be useful to signpost how, why, and when data could be embedded into newswork – in other words, ensuring that data is widely available and understandable for the daily work of journalists. These interventions could establish common ground in data integration and everyday use, thus facilitating UI/UX design efforts. We bring forward these interventions as an integral part of the design process for hybrid technologies. In these interventions, where technologists engage with journalists, considerable value exchange may occur (Design Strategy 5). Additionally, we propose a shift from user-centred to data-centred design, where it is not the user interfaces or experiences, but the data outputs and their delivery, that are carefully designed and evaluated in line with professional values, giving users maximum control over how they integrate relevant data into their stories.

5.3. Support Slow Change

The BBC newsroom has fostered organisational change around technology by opening up new interdisciplinary team configurations and enabling knowledge exchange, such as using UX methods to lower AI adoption thresholds (Design Strategy 4). Technology might be reshaping newsroom culture “one tool at a time,” but the most effective change we observed resembled a social process more than a “technological revolution.” Permitting a slow, progressive adoption of technologies, carefully guided by technologists invested in newsroom values (Design Strategy 2 – assimilating journalistic values into AI design), seems to be an effective way of designing hybrid journalism. It is not merely about engineering or data science teams finding innovative solutions, but about creating a shared understanding, establishing common ground, and building acceptance of tools across the newsroom. While interdisciplinary collaboration is not straightforward (Lewis and Usher 2014), the goal of providing a trustworthy public service could guide both journalistic and technological endeavours.

6. Conclusion

The design strategies observed in our study demonstrate the value of close working relationships between technologists and journalists across the AI lifecycle, from conceptual design to deployment. This ensures the design of truly useful AI tools that are driven by high-quality data, promote useful insight, and fit synergistically with both journalists’ and organisations’ workflows. It is only by understanding the complex role of AI newsroom technologies from a sociotechnical perspective, and by exploring their potential for supporting but also undermining journalists’ work, that we can design AI tools that strike a meaningful blend between algorithms and journalists’ expertise.

Future work might focus specifically on whether and how it is possible to create a ‘useful blend’ of human expertise and AI when designing for specific journalism AI contexts (e.g., automated journalism). We advocate informing future design directions by focusing not only on understanding technologists’ design strategies for ensuring useful AI tools, but also on journalists’ approaches to making the most of the AI technologies currently in their newsrooms. It may also be possible to devise new design methods specific to an AI context (e.g., based on participatory and value-sensitive design principles) that incorporate some of the best practice we observed when technologists and journalists work together to inform design.

Acknowledgements

The authors would like to thank the participants for their engagement in the project. We would also like to express our gratitude to the BBC and The Times for supporting our research. Last but not least, we thank the anonymous reviewers for their constructive feedback.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was funded by the Google News Initiative project DMINR.

References

  • Akst, Daniel. 2013. “Automation Anxiety.” The Wilson Quarterly 37 (3). Available at: http://archive.wilsonquarterly.com/sites/default/files/articles/AutomationAnxiety.pdf.
  • Anderson, Chris W. 2013. Rebuilding the News: Metropolitan Journalism in the Digital Age. Philadelphia: Temple University Press.
  • Bannon, Liam, Jeffrey Bardzell, and Susanne Bødker. 2018. “Reimagining Participatory Design.” Interactions 26 (1): 26–32.
  • Bastian, Mariella, Natali Helberger, and Mykola Makhortykh. 2021. “Safeguarding the Journalistic DNA: Attitudes towards the Role of Professional Values in Algorithmic News Recommender Designs.” Digital Journalism 9 (6): 835–863.
  • Baym, Geoffrey. 2017. “Journalism and the Hybrid Condition: Long-Form Television Drama at the Intersections of News and Narrative.” Journalism 18 (1): 11–26.
  • Beckett, Charlie. 2019. New Powers, New Responsibilities. A Global Survey of Journalism and Artificial Intelligence. London: LSE. Available at: https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/.
  • Bødker, Henrik. 2017. “Vice Media Inc.: Youth, Lifestyle – and News.” Journalism 18 (1): 27–43.
  • Boyles, Jan Lauren. 2020. “Laboratories for News? Experimenting with Journalism Hackathons.” Journalism 21 (9): 1338–1354.
  • Braun, Virginia, and Victoria Clarke. 2013. Successful Qualitative Research: A Practical Guide for Beginners. London: SAGE Publications Ltd.
  • Brehmer, Matthew, Stephen Ingram, Jonathan Stray, and Tamara Munzner. 2014. “Overview: The Design, Adoption, and Analysis of a Visual Document Mining Tool for Investigative Journalists.” IEEE Transactions on Visualization and Computer Graphics 20 (12): 2271–2280.
  • Broussard, Meredith, Nicholas Diakopoulos, Andrea L. Guzman, Rediet Abebe, Michel Dupagne, and Ching-Hua Chuan. 2019. “Artificial Intelligence and Journalism.” Journalism & Mass Communication Quarterly 96 (3): 673–695.
  • Bucher, Taina. 2018. If… Then: Algorithmic Power and Politics. Oxford, UK: Oxford University Press.
  • Button, Graham, and Richard Harper. 1995. “The Relevance of ‘Work-Practice’ for Design.” Computer Supported Cooperative Work (CSCW) 4 (4): 263–280.
  • Carsten, Oliver, and Marieke H. Martens. 2019. “How Can Humans Understand Their Automated Cars? HMI Principles, Problems and Solutions.” Cognition, Technology & Work 21 (1): 3–20.
  • Chadwick, Andrew. 2013. The Hybrid Media System: Politics and Power. Oxford, UK: Oxford University Press.
  • Cottle, Simon, and Mark Ashton. 1999. “From BBC Newsroom to BBC Newscentre: On Changing Technology and Journalist Practices.” Convergence: The International Journal of Research into New Media Technologies 5 (3): 22–43.
  • Crabtree, Andrew, Mark Rouncefield, and Peter Tolmie. 2012. Doing Design Ethnography. London, UK: Springer-Verlag London.
  • Deuze, Mark, and Tamara Witschge. 2018. “Beyond Journalism: Theorizing the Transformation of Journalism.” Journalism 19 (2): 165–181.
  • Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA: Harvard University Press.
  • Diakopoulos, Nicholas. 2020. “Computational News Discovery: Towards Design Considerations for Editorial Orientation Algorithms in Journalism.” Digital Journalism 8 (7): 945–967.
  • Diakopoulos, Nicholas, Munmun De Choudhury, and Mor Naaman. 2012. “Finding and Assessing Social Media Information Sources in the Context of Journalism.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 2451–2460.
  • Dörr, Konstantin Nicholas, and Katharina Hollnbuchner. 2017. “Ethical Challenges of Algorithmic Journalism.” Digital Journalism 5 (4): 404–419.
  • Ekdale, Brian, Jane B. Singer, Melissa Tully, and Shawn Harmsen. 2015. “Making Change: Diffusion of Technological, Relational, and Cultural Innovation in the Newsroom.” Journalism & Mass Communication Quarterly 92 (4): 938–958.
  • Flint, Adrian, and Simon Blyth. 2021. “Facilitating Genuine Community Participation: Can Development Learn from Design?” Development Studies Research 8 (1): 63–72.
  • Gans, Herbert J. 1979. “Deciding What’s News: Story Suitability.” Society 16 (3): 65–77.
  • GPT-3. 2020. “A robot wrote this entire article. Are you scared yet, human?” The Guardian, September 8. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
  • Gutierrez Lopez, Marisela, Sondess Missaoui, Stephann Makri, Colin Porlezza, Glenda Cooper, and Andrew MacFarlane. 2019. “Journalists as Design Partners for AI.” Paper presented at the CHI 2019 ACM Conference on Human Factors in Computing Systems, 04–09 May 2019, Glasgow, UK. Available at: https://openaccess.city.ac.uk/id/eprint/22998/1/journalist-hci.pdf.
  • Guzman, Andrea L., and Seth C. Lewis. 2020. “Artificial Intelligence and Communication: A Human–Machine Communication Research Agenda.” New Media & Society 22 (1): 70–86.
  • Hannaford, Liz. 2015. “Computational Journalism in the UK Newsroom.” Journalism Education 4 (1): 6–21.
  • Helberger, Natali, Max Van Drunen, Sarah Eskens, Mariella Bastian, and Judith Moeller. 2020. “A Freedom of Expression Perspective on AI in the Media – With a Special Focus on Editorial Decision Making on Social Media Platforms and in the News Media.” European Journal of Law and Technology 11 (3).
  • Johnson, Khari. 2021. “OpenAI and Stanford researchers call for urgent action to address harms of large language models like GPT-3.” VentureBeat, February 9. https://venturebeat.com/2021/02/09/openai-and-stanford-researchers-call-for-urgent-action-to-address-harms-of-large-language-models-like-gpt-3/
  • Jones, Bronwyn, and Rhianne Jones. 2019. “Public Service Chatbots: Automating Conversation with BBC News.” Digital Journalism 7 (8): 1032–1053.
  • Jones, Steve. 2019. “Untitled, No. 1. (Human Augmentics).” In A Networked Self and Human Augmentics, Artificial Intelligence, Sentience, edited by Z. Papacharissi, vol. 5, 201–205. New York: Routledge.
  • Kitchin, Rob. 2017. “Thinking Critically about and Researching Algorithms.” Information, Communication & Society 20 (1): 14–29.
  • Komatsu, Tomoko, Marisela Gutierrez Lopez, Stephann Makri, Colin Porlezza, Glenda Cooper, Andrew MacFarlane, and Sondess Missaoui. 2020. “AI Should Embody Our Values: Investigating Journalistic Values to Inform AI Technology Design.” In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (NordiCHI '20), October 25–29, 2020, Tallinn, Estonia.
  • Laurier, Eric. 2010. “Participant Observation.” In Key Methods in Geography, edited by N. Clifford, S. French, and G. Valentine, 2nd ed., 116–130. London: SAGE Publications Ltd.
  • Lewis, Seth C., Andrea L. Guzman, and Thomas R. Schmidt. 2019. “Automation, Journalism, and Human–Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News.” Digital Journalism 7 (4): 409–427.
  • Lewis, Seth C., and Nikki Usher. 2014. “Code, Collaboration, and the Future of Journalism.” Digital Journalism 2 (3): 383–393.
  • Lewis, Seth C., and Nikki Usher. 2016. “Trading Zones, Boundary Objects and the Pursuit of News Innovation: A Case Study of Journalists and Programmers.” Convergence: The International Journal of Research into New Media Technologies 22 (5): 543–560.
  • Lewis, Seth C., and Oscar Westlund. 2015. “Actors, Actants, Audiences, and Activities in Cross-Media News Work: A Matrix and a Research Agenda.” Digital Journalism 3 (1): 19–37.
  • Lindén, Carl-Gustav. 2017. “Algorithms for Journalism: The Future of News Work.” The Journal of Media Innovations 4 (1): 60–76.
  • Maiden, Neil, Konstantinos Zachos, Amanda Brown, George Brock, Lars Nyre, Aleksander Nygård Tonheim, Dimitris Apostolou, and Jeremy Evans. 2018. “Making the News: Digital Creativity Support for Journalists.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 21–26 April 2018, Montreal, Canada.
  • Malterud, Kirsti, Volkert D. Siersma, and Ann D. Guassora. 2016. “Sample Size in Qualitative Interview Studies: Guided by Information Power.” Qualitative Health Research 26 (13): 1753–1760.
  • Marconi, Francesco. 2020. Newsmakers: Artificial Intelligence and the Future of Journalism. New York: Columbia University Press.
  • Nielsen, Rasmus Kleis. 2012. “How Newspapers Began to Blog: Recognizing the Role of Technologists in Old Media Organizations’ Development of New Media Technologies.” Information, Communication & Society 15 (6): 959–978.
  • Nowell, Lorelli S., Jim M. Norris, Deborah E. White, and Nancy J. Moules. 2017. “Thematic Analysis: Striving to Meet the Trustworthiness Criteria.” International Journal of Qualitative Methods 16 (1): 1609406917733847.
  • Pavlik, John V. 2013. “Innovation and the Future of Journalism.” Digital Journalism 1 (2): 181–193.
  • Porlezza, Colin. 2018. “Deconstructing Data-Driven Journalism: Reflexivity between the Datafied Society and the Datafication of News Work.” Problemi Dell’Informazione 43 (3): 369–392.
  • Porlezza, Colin, and Philip Di Salvo. 2020. “The Accountability and Transparency of Whistleblowing Platforms: Issues of Networked Journalism and Contested Boundaries.” Journalism Studies 21 (16): 2285–2304.
  • Ruotsalainen, Juho, Jaana Hujanen, and Mikko Villi. 2021. “A Future of Journalism beyond the Objectivity–Dialogue Divide? Hybridity in the News of Entrepreneurial Journalists.” Journalism 22 (9): 2240–2258.
  • Russell, Adrienne. 2016. Journalism as Activism: Recoding Media Power. Cambridge, UK: Polity Press.
  • Saxena, Devansh, and Shion Guha. 2020. “Conducting Participatory Design to Improve Algorithms in Public Services: Lessons and Challenges.” In Companion Publication of the 2020 Conference on Computer Supported Cooperative Work and Social Computing (CSCW '20 Companion), 383–388.
  • Schapals, Aljosha K., and Colin Porlezza. 2020. “Assistance or Resistance? Evaluating the Intersection of Automated Journalism and Journalistic Role Conceptions.” Media and Communication 8 (3): 16–26.
  • Schilder, Frank. 2020. “GPT-3: The good, the bad and the ugly.” Towards Data Science, September 24. https://towardsdatascience.com/gpt-3-the-good-the-bad-and-the-ugly-5e2e5b7f0f66
  • Schultz, Ida. 2007. “The Journalistic Gut Feeling: Journalistic Doxa, News Habitus and Orthodox News Values.” Journalism Practice 1 (2): 190–207.
  • Seaver, Nick. 2017. “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems.” Big Data & Society 4 (2): 2053951717738104.
  • Sirén-Heikel, Stefanie, Leo Leppänen, Carl-Gustav Lindén, and Asta Bäck. 2019. “Unboxing News Automation.” Nordic Journal of Media Studies 1 (1): 47–66.
  • Stenbom, Agnes, Mattias Wiggberg, and Tobias Norlund. 2021. “Exploring Communicative AI: Reflections from a Swedish Newsroom.” Digital Journalism: 1–19. doi:10.1080/21670811.2021.2007781.
  • Tambini, Damian. 2021. Public Service Media Should Be Thinking Long Term When It Comes to AI. London: LSE. Available at: https://blogs.lse.ac.uk/medialse/2021/05/12/public-service-media-should-be-thinking-long-term-when-it-comes-to-ai/.
  • Thurman, Neil J., Konstantin Dörr, and Jessica Kunert. 2017. “When Reporters Get Hands-on with Robo-Writing.” Digital Journalism 5 (10): 1240–1259.
  • Thurman, Neil J., Seth C. Lewis, and Jessica Kunert. 2019. “Algorithms, Automation, and News.” Digital Journalism 7 (8): 980–992.
  • Tolmie, Peter, Rob Procter, David William Randall, Mark Rouncefield, Christian Burger, Geraldine Wong Sak Hoi, Arkaitz Zubiaga, and Maria Liakata. 2017. “Supporting the Use of User Generated Content in Journalistic Practice.” In Proceedings of the Conference on Human Factors in Computing Systems (CHI '17), 3632–3644.
  • Usher, Nikki. 2016. Interactive Journalism: Hackers, Data, and Code. Urbana, IL: University of Illinois Press.
  • Wallace, Sue. 2013. “The Complexities of Convergence: Multiskilled Journalists Working in BBC Regional Multimedia Newsrooms.” International Communication Gazette 75 (1): 99–117.
  • Witschge, Tamara, Chris W. Anderson, David Domingo, and Alfred Hermida. 2019. “Dealing with the Mess (we Made): Unraveling Hybridity, Normativity, and Complexity in Journalism Studies.” Journalism 20 (5): 651–659.
  • Wu, Shangyuan, Edson C. Tandoc, Jr., and Charles T. Salmon. 2019. “When Journalism and Automation Intersect: Assessing the Influence of the Technological Field on Contemporary Newsrooms.” Journalism Practice 13 (10): 1238–1254.
  • Zelizer, Barbie. 2019. “Why Journalism is about More than Digital Technology.” Digital Journalism 7 (3): 343–350.
  • Zingale, Salvatore. 2016. “Design as Translation Activity: A Semiotic Overview.” In Proceedings of DRS2016: Design + Research + Society – Future-Focused Thinking, edited by P. Lloyd and E. Bohemia, 1062–1072. Brighton, UK: Design Research Society.