Research Article

Designing What’s News: An Ethnography of a Personalization Algorithm and the Data-Driven (Re)Assembling of the News


Abstract

This article presents the results of an in-depth ethnographic study of the development of a personalization algorithm in a large regional news organization in Denmark. Drawing on the concept of sociotechnical assemblage, we argue, first, that in the process the news organization moves from distributing news to users as segments of consuming collectives to algorithmically constructing individual users as aggregated data points. Second, we show how personalization disassembles the constitution of “the news” as a finite arrangement of articles, replacing one structural organization and routinization of news distribution with an algorithmic and numeric form of organizing distribution. This disassembling leads to negotiations over loss of control, as editors realize that their publicist and democratic mission is at stake and as they struggle to build news values such as timeliness and localness into the algorithm, thus “translating back” agency from the algorithm to the journalistic staff. Finally, we discuss how the negotiations involved in this concrete case study have far-reaching implications for the future of journalism, as this transformation further emphasizes the economic value of news for the individual while putting at stake the societal value of news journalism and of audiences as democratic collectives.

Vignette

Sitting next to the data scientist, Chris, at one of the desks in the office, we watched as he pulled up an Excel sheet on his screen. The top rows displayed users’ past “clicks” and the bottom rows displayed algorithmic recommendations. He explained how the recommendations came from a sister project with a commercial aim, but the base algorithm was the same as the one they would be using to personalize the news feed at MedieHuset. Considering the content on the screen and comparing past purchases with the recommendations, we attempted to determine whether these recommendations were “good”, which became a guessing game, as we did not know the exact reasoning behind the algorithm’s choices. To the human eye, it is difficult to judge the goodness of the recommendations beyond trying to identify similarities with past clicks. Chris then explained that although the projects were similar, there were things that differed: “For example, we must consider the time horizon. The machine does not consider time. It just makes recommendations from the pool of deals. In contrast to editorial content where the time horizon is negotiable, this one is simple because either a deal is live and can be recommended or it is not.”

The scene above is from an ongoing ethnography following the development process of a personalization algorithm at a large Danish regional news organization, which we will refer to as “MedieHuset” (see Note 1) in this article to protect the identity of the workers involved in the project, who let us into their offices and meetings. While the excerpt from our fieldwork illustrates how the personalization of news brings up the classic journalistic questions of which events become news and how they do so (Tuchman 1973; Epstein [1973] 2000; Gans [1979] 2004; Hartley 2011b; Willig 2011), it also specifically highlights the negotiations and difficulties related to aligning journalistic values with the market-driven aim of being relevant to users in a highly competitive datafied media environment. The key value of the study lies in its portrayal of this rather troublesome journey from excitement and visions of personalization to the realization that the project might put at stake the very identity and mission of the news organization, thereby inducing the implementation of a range of “control” measures to regain agency in the process and ensure that existing values are not fully lost.

The news organization used as a case study is far from alone in experimenting with building algorithms for personalizing news feeds. A 2018 report from the Reuters Institute revealed that almost three-quarters of those surveyed, including news CEOs, editors and digital leaders, were already using or planning to use artificial intelligence as part of their publishing practices. Also, 59% of them said they were using artificial intelligence as a means of improving their content recommendations, and by 2020 the trend had continued to flourish (Newman 2018, 2020). This trend marks a shift from journalism’s traditional focus and orientation on shared importance and the public sphere (Fenton 2010; McNair 2018) to emphasizing highly individualized news experiences, where news distribution is responsive and based on algorithmic surveillance and the interpretation of individuals’ past behaviours (Braun 2015). In newsrooms today, decisions are increasingly made based on large amounts of automatically generated big data relating to the audiences (Napoli 2014; Arsenault 2017; Christin 2020). These data are now being utilized in personalization projects (Bodó 2019), placing this development within the growing datafication of the news, which is described as “the process of rendering into data aspects of the world not previously quantified” (Kennedy, Poell, and van Dijck 2015, 1). The current wave of personalization within the news industry also builds on approaches and algorithmic models used by large commercial platforms, such as Amazon, Google and YouTube (Smith and Linden 2017; Bodó 2019). This illustrates the growing dependency of news organizations on the data infrastructures supplied by commercial platforms, which Van Dijck, Poell, and de Waal (2018) have also highlighted, arguing that we are now facing a “platformization of the news” (Van Dijck, Poell, and de Waal 2018, 49).

Although there seems to be no doubt, in both academic and public debate, that this development will affect the democratic role and contribution of the press, its impact is still unclear and remains debated (Flaxman, Goel, and Rao 2016; Holtzhausen 2016; Helberger 2019). Within the field of journalism studies, discussions revolve around, for example, how this will affect news values and gatekeeping mechanisms (Tandoc 2014; Bodó 2019) as well as the core of journalistic epistemology (i.e., how journalists know what they know [Carlson 2018]). Critics have also expressed concerns about increased audience fragmentation and polarization, as personalization risks creating so-called echo chambers or filter bubbles, in which audiences are exposed to content they are likely to agree with and that strengthens pre-existing beliefs – risking the loss of a shared public sphere (Pariser 2011; Colleoni, Rozza, and Arvidsson 2014; Flaxman, Goel, and Rao 2016). While these claims have remained largely underexplored empirically and have also been debunked in recent scholarship (Bruns 2019; Zuiderveen Borgesius et al. 2016), they continue to play an important discursive role in the way in which personalization is imagined and discussed by practitioners and by the general public. Other critics have focussed on the opacity of algorithmic selection, which has also induced calls for increased transparency to hold algorithms accountable and to understand their effects on the perception of, for example, relevant information or democracy (Gillespie 2014; Diakopoulos 2015; Diakopoulos and Koliska 2017).

On the more optimistic side of the debate, there have been advocates for the potentially positive effects, which include increasing general engagement with news, making news more responsive to audiences, counteracting the negative effects of information overload, and offering new business models that can ensure the survival of an otherwise challenged news industry (Adar et al. 2017; Helberger 2019). This has also induced a growing interest in studying the effects of the more direct use of algorithms in newsrooms and not only the output of algorithms (e.g., the use of metrics) (Thurman 2011; Thurman and Schifferes 2012; Bucher 2017, 2018; Carlson 2018; Bodó 2019; Helberger 2019; Sørensen 2019). These studies have yielded important findings regarding the discourse surrounding algorithms and have produced large comparative studies of news organizations’ endeavours to work with personalization algorithms. However, so far, only limited empirical work has been carried out to extensively study the actual experimentation processes with personalization algorithms in the news industry and how the distribution of news is affected by datafication processes, which is the aim of this article.

Thus, this study contributes to this growing field through an in-depth, empirically informed analysis of how a personalization algorithm – as a sociotechnical assemblage comprised of interdependent relations among human and material actors, visions and values (Callon 2007) – comes into being, and of the negotiations around news values involved in the process. In this study, the algorithmic design process at MedieHuset becomes an analytical window onto how existing news values are translated and transformed when they become part of the new algorithmic system. Following Laurent’s (2016) argument concerning participatory processes as analytical opportunities to study democracy at work, this empirical window also becomes a way of understanding the wider discussion of the democratic role of journalism and its addressed public.

Below, we situate the study in the existing literature in the fields of newsroom studies and science and technology studies (STS). We then briefly present the methodological design before turning to the analysis. In the final section, we discuss the implications of the study, the consideration of which is only becoming more pressing as more processes within the news industry become datafied and intertwined with platforms and data providers through the use of algorithms.

The State of the Art: From Deciding to Designing What’s News

Within the field of journalism studies, there has long been a sociological interest in understanding decision-making in the newsroom. What Simon Cottle (2000) named the first wave of newsroom ethnographies, conducted in the 1970s, is often seen as one of the starting points for this interest. At that time, multiple ethnographers began to study the values, norms and routines that guided news production (White 1950; Breed 1955; Warner 1971; Tunstall 1971; Tuchman 1973; Epstein [1973] 2000; Altheide 1976; Schlesinger 1978; Gans [1979] 2004; Fishman 1988). Overall, these studies highlighted how news work was highly structured around routines and implicit hierarchies in the journalistic field. This was also specified by Ida Willig (2011), who described how newsworthiness is most often considered a “gut feeling” among journalists and seen as predominantly linked to explicit news values (e.g., in Denmark, the predominant factors are timeliness, relevance, identification, conflict and sensation) (Willig 2011, 196). However, in her study of Danish newsrooms, following in the footsteps of the ethnographies of the 1970s, Willig showed that a news story is never newsworthy in itself or newsworthy only in the eyes of the beholder; rather, the newsworthiness of a story is always a question of positioning. This illustrates that the negotiated and relational agency of editors and individual journalists in balancing often contradictory news values in the news-making process depends heavily on questions of power. The relational aspect in Willig’s study centred on the relationship between the agents in the field, journalists and news organizations, and was less concerned with the relationship and negotiated agency between journalism and the audiences.

With the digital revolution beginning in the 1990s, when many media organizations began supplementing their print editions with online news, Cottle (2000) argued that a second wave of news ethnographies was required to understand the impact of these technological developments in the newsroom. One of the major changes explored as part of this second wave related to the increasing datafication of audiences (although it was not named as such in the literature). These studies showed that such technological changes induced changes in gatekeeping mechanisms and editorial decision-making and increased tensions between what the audience wanted to know (represented in metrics) and what the journalists thought they should know (Anderson 2011; Hartley 2011a, 2011b, 2013; Tandoc 2014; Ali and Hassoun 2019; Christin 2020). With algorithmically personalized news, the relation and battle for agency between editors, audiences and news is yet again undergoing transformation, and a possible shift in the news selection process can be detected – playing on the famous phrase by Gans ([1979] 2004) – in that news organizations are moving from deciding what’s news to designing what’s news.

The word “designing” is chosen deliberately in this study as a way of underlining what is significant in this shift: the delegation of agency to new actors (e.g., algorithms, developers and data) in the newsroom, an algorithmic system that is to be designed, and the transformations that occur due to newly acquired negotiation powers. With the implementation of personalization algorithms, previous relational practices of decision-making inside the newsroom are leaving the newsroom to be built into a specific design – an algorithmic system – which is intended to reassemble the social (i.e., what becomes news) in specific ways and to make “the social durable” (Yaneva 2009, 280). It is a durability that is obtained by assembling users and values in the system, for example, by assigning them specific numeric values or by grouping users based on certain data. Design processes therefore become not only a specific practice in which values are negotiated but also the site where those same values are finally settled in a specific, stabilized form. As argued by Bruno Latour (1991), the final design “transcribes and displaces the contradictory interests of people and things” (Latour 1991, 153 in Yaneva 2009, 278).

As these decisions of newsworthiness move out of the newsroom and into the “design room” inhabited by data scientists, data analysts, editors and marketing staff, our approaches to analysis must follow. We found STS, particularly actor network theory (ANT), useful in this study. ANT is a field of study that has focussed much analytical attention on the design of technological artefacts and is a route of enquiry that has also gained increasing attention among media scholars (Anderson and Kreiss 2013; Gillespie 2014; Anderson and De Maeyer 2015; Spöhrer and Ochsner 2016). Therefore, we are contributing to the continued bridging between what has been known as “STS ethnographies” (Hess 2001) and newsroom ethnographies as an alternative route for exploring the ongoing transformation of news and its implications.

Theoretical Framework: Socio-Materiality, Agency and News Assemblage

When analytically approaching the design process, we drew on the concept of “sociotechnical assemblage”, as conceptualized within ANT (Callon 2007), as a heuristic to underline that design – or in this case, the algorithm – is not simply an entity; it is an assemblage comprised of relations between actors, such as editors, technology, data and values, which brings the socio-materiality forward in the analysis. The notion of assemblage or, more broadly, the focus on assembly work, has also been considered in various studies of algorithms as an analytical framework for avoiding the ascription of inherent power to the algorithm and as a way of underlining the fragility and associational dependency of such assemblages (Bucher 2013; Kavanagh, McGarraghy, and Kelly 2015; Neyland 2015, 2019; Neyland and Möllers 2017; Schwennesen 2019). With this understanding, power, in the Latourian sense, has to be “produced, made up, composed” (Latour 2005, 64). Following the general tenets of ANT (Law 2009; Mol 2010), in the analysis we descriptively traced the negotiations, choices and relations made throughout the algorithmic design process. This enabled us not only to understand differently the central question of how an event will become news in the future, and the epistemological and ontological transformations that this entails, but also to see how this might conflict with ideals of what should become news. This also brings to the foreground which actors “make a difference” (Latour 2005, 154) in determining how relations and entities are assembled, illustrating the changing distribution of agency and power shifts.

In comparison with the perhaps more well-known concept of “network”, “assemblage” allows a more processual view in which the ongoing processes of assembling, disassembling and reassembling, which are part of moving from one assemblage to another (i.e., from one way of ordering news distribution and constructing newsworthiness to another), can be followed. Petter Holm (2007) has argued that this move will inevitably involve ontological destabilisations and mutations of known values, entities and relations. This also emphasizes the political nature of the process of building the algorithm. As Moser (2008) has highlighted, following the recent “ontological turn” in ANT (Woolgar and Lezaun 2013), in assembling processes “some worlds-in-progress but not others will be prioritized” (Moser 2008, 99). This is why, we argue, much detailed attention must be paid to what could seem to be mere “technical” design choices: such choices can become transformative across relations.

In tracing these assembling processes, we focussed our analytical attention on moments where the proposed reality of the algorithm underwent “trials of strength”, referring to “the trials in which actors test the resistance that defines the reality of the world surrounding them” (Muniesa, Millo, and Callon 2007, 1). These are the moments when different versions of what constitutes a “good” news recommendation (e.g., regarding localness and timeliness as specific values in news production) are negotiated between data scientists and editors, thereby making them ontologically defining moments in which multiple realities of the news encounter one another and are “coordinated” by, for example, hierarchizing realities (Mol 2002, 2010; Mol and Law 2004). It was in these moments that we analysed how previously important values of journalism were negotiated in relation to, for example, the technical capacities of the algorithm, which then become defining for the future of journalism.

Methods: Online and Offline Ethnography

This study involved a situated enquiry into the development process of a personalization algorithm in a Danish media organization. MedieHuset, as we refer to it in this study, is a classic example of a modern regional media organization, which, following the acquisitional trend of the last decades (Willig 2008), has acquired a range of smaller local newspapers and other media outlets in different parts of Denmark, thereby covering large geographical areas and multiple topics ranging from very localized news to national content.

The majority of the ethnographic fieldwork for this study was carried out from May 2019 to September 2020. The observations were conducted both in person and digitally, as the media organization was spread out over several locations and COVID-19 limited our physical access. During this period, we attended and ethnographically observed physical meetings and workshops as well as digital meetings, and we had full days of observations, particularly at the office where the developers building the algorithm were based. The ethnographic observations were conducted by both authors at different times and at different locations, but we followed the same process of making initial “jot notes” and later taking extensive “thickened” field notes of the visits (Geertz 1973; Dewalt and Dewalt 2011). After the COVID-19 pandemic began in the spring of 2020, all meetings and other activities were made digital, and from that point, we exclusively attended and observed digital meetings held via MS Teams. The meetings, both physical and digital, included weekly status meetings, steering committee meetings and coordination meetings between the different departments involved. Furthermore, we conducted eight in-depth ethnographic interviews with the key actors in continuation of the newsroom ethnography (Spradley 1979). The interviews were accompanied by “design game elements” to enable participants to explicate and reflect on the practices observed in the ethnography and as a way of compensating for the limited physical presence outside meetings (Brandt, Messeter, and Binder 2008). The interviews and audio recordings from the meetings were all transcribed. All field notes and transcriptions were then analysed inductively in NVivo, identifying general themes across the data. Subsequently, the material was revisited and analysed in a concept-driven manner, focussing on specific theoretical concepts (Gibbs 2007).

Analysis

In this section, we present the analysis of our study in (nearly) chronological order, following the process of designing the algorithmic system and zooming in on the “trials of strength” as they occurred. In the first part of our presentation, we show how the entity “the user” is assembled anew, illustrating how personalization entails moving from a construction of the users as a collective consuming news to algorithmically constructed individual news users based on the data that the news organization has access to, which are then organized via the algorithms – a move that (re)configures existing relations and the process of how events become news. Second, we illustrate how the emergence of individual news users disassembles “the news” as a finite arrangement of articles (Carlson 2018, 5) because personalization entails a shift from editorially constructed news sites to algorithmically (and individually) constructed news sites (Thurman 2011). Following this disassembling, negotiations arise over how to “build in” journalistic values of timeliness and localness to ensure that the articles shown still have the “right” newsworthiness and that the media’s democratic mission and identity are not lost in the process (i.e., in the reassembling of the news).

Assembling Individual News and Disassembling the News

Sitting in a small office, the data scientist, Chris, explained that the aim of the algorithmic project, as he saw it, was to be able to present articles on the online news site that the individual would find more relevant: “The editor in a city knows everything about that city. It’s not that the machine is smarter than him, but it plays by different rules because it can offer individual things. If the editor were able to offer individual things to all users in that city, then it would be damn amazing if he knew what they should be. The machine knows them a little.” He further explained how personalization algorithms come in different variants or standardized models, each of which has a different logic and is based on different input data. He explained that “what we have built is a collaborative filtering algorithm. It simply means that you use the behaviour of ‘who has read what’.” He continued, using a simple illustration from a former PowerPoint presentation, describing how the algorithm simply ordered recommendations by finding similarities between users and their reading behaviours: “Someone like you found this article good, and here, ‘like you’ means you have read similar articles.” This, in simple terms, means that if user A reads articles A, B and C, and user B has read A and B, then that person will likely be recommended article C, but in reality, this is a calculation made with thousands of users and complicated linear algebra. (Excerpt from observations and interview)

What this excerpt initially illustrates is how the algorithm takes part in or automates editorial decisions of who gets to see what, as described in the literature (Bodó 2019). However, in doing so, editorial decisions of what is to be presented to whom are assembled in new ways. In what follows, we describe the initial assembling process of newsworthiness at MedieHuset, which ultimately becomes an assembling of individual news users as aggregated data points that are constantly in flux and adapting to how the user engages with the news site.

This assembling process began, as the excerpt illustrates, with the choice of algorithmic model – in this case, a collaborative filtering algorithm (see Note 2) – which initially “acted” in the assembling process by “asking” for specific input data on “who read what”. The operationalization of this request and the choice of input data were delegated to the data scientists placed in the marketing department at MedieHuset, moving decisions of relevance away from editors, who were involved but rarely questioned the choices made by the data scientists. The choice of input data was described by the data scientist, Chris, as “a little off the shelf” (Interview 6), because they employed user data that had already been collected and stored in their data collection systems, such as Google Analytics and Tealium (e.g., past clicks on articles and time spent on the page, which could be connected across the users’ different devices through cookie recognition), foregrounding the dependencies on the data options made available by these systems. This assembly work illustrates the changing distribution of agency to new actors (i.e., the algorithm, data platforms and data scientists), who become demarcating actors in determining how the individual news user – and therefore newsworthiness – will be assembled anew.
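
To make the logic described in the excerpt more concrete, the minimal sketch below illustrates user-based collaborative filtering on “who read what” data of the kind discussed above. It is our own illustration, not MedieHuset’s code: the names and structures are invented, and a production system would typically rely on linear-algebraic methods (e.g., matrix factorization) over thousands of users rather than simple overlap counts.

```python
# Minimal, hypothetical sketch of user-based collaborative filtering on "who read what"
# data. Not MedieHuset's implementation: names and structures are invented for illustration.
from collections import Counter

def recommend(target_user, reading_history, top_n=3):
    """Rank unseen articles for target_user by how often similar users have read them."""
    target_read = reading_history[target_user]
    scores = Counter()
    for other_user, other_read in reading_history.items():
        if other_user == target_user:
            continue
        similarity = len(target_read & other_read)   # "someone like you" = shared reads
        if similarity == 0:
            continue
        for article in other_read - target_read:     # only articles the target has not read
            scores[article] += similarity
    return [article for article, _ in scores.most_common(top_n)]

# The article's own example: user A has read articles a, b and c; user B has read a and b.
history = {"user_a": {"a", "b", "c"}, "user_b": {"a", "b"}}
print(recommend("user_b", history))  # -> ['c']
```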

Judgements regarding what should become news at MedieHuset were already highly entangled in data systems, as editors and journalists had access to “score sheets” from a locally developed metrics system valuing all the journalistic content, and a live centre was constantly readjusting the position of news on the front page based on live data. However, all these editorial practices still approached audiences as collectives, as data were pooled together and represented large segments of users and their interests, whereas the algorithm could approach the audience as individuals because, unlike the editor, the algorithm can – at least in theory – come to “know” each news user as an individual, as the excerpt illustrates. As data scientist Chris highlights, the user moves into a world of linear algebra in which individual users are constantly (re)assembled through the aggregation and combination of multiple data points relating to each individual news user’s online behaviour and other users’ behaviours. This changes the nature of the user, which is at once made definite and predictable, as a list of specific recommendations produced by the algorithm, and a fluid construct that constantly adapts based on data flows. Both of these characteristics make the algorithmically produced individual user radically different from past evaluative abstract figures of users based on compilations of data (e.g., reader profiles).

The assembling of the individual user has wide-ranging transformative effects because, as the individual news user emerges, the way in which each article gains its status as newsworthy and becomes “news” equally changes, as does the way the online news site as a whole is prioritized. Currently, MedieHuset’s news sites are organized primarily chronologically, and each article is placed on the website based on a value from 1 to 6 assigned by the editor in charge, where a “1” signifies a top placement on the site. The news sites are therefore organized in a predetermined manner, and articles are materially constituted as newsworthy by being assigned a specific “box” on the website. By assembling individual news users as predictable entities, the news and newsworthiness become assembled through an anticipatory logic, so that the demarcation of newsworthiness is made by anticipating which stories the reader might like (the highest-ranked recommendation by the algorithm). The newsworthiness of each article will, therefore, no longer be constituted through its relation to other news or through its placement on the site but will instead be associatively dependent on past user engagement and on the article becoming part of the recommended articles for a specific user. This is quite a significant change, as chief editor John noted, because “now every story has to find its audience” (Interview 5). There is a reversal of logic from audiences finding their way through the content to the content finding its way to them. The consequence of this is that “the news” as constituted by a “finite arrangement of texts” (Carlson 2018, 5) is, in a material sense, disassembled because there no longer exists a common representation of the “news of the day”. During the same interview, the chief editor also commented, “It becomes sort of mind blowing when you think about it. How are we actually going to relate to the current news flow we have right now, if we cannot see what anyone is seeing?” (Interview 5). Such moments, in which questions were raised about how existing editorial practices of making judgements regarding newsworthiness could coexist with this new algorithmic future of news, continued throughout the design process, inducing a need to set in motion a process of reassembling the news.

Reassembling the News

The realization that editors would have no way of monitoring or controlling the algorithm in everyday news work once it started distributing news to individual users induced concerns about whether the news organization’s democratic mission and identity might be endangered, which in turn induced a process of regaining “editorial control” over the algorithm. The road forward at MedieHuset was one of cautious testing, in which the results of the algorithmic system were continuously evaluated by the editor involved to determine whether they were “good enough” because, although there was excitement about the potential for prioritizing individual relevance, there was also a fear that other values relating to what constitutes newsworthiness – in this case timeliness, localness and societal importance – would be lost in the process.

The Issue of Timeliness

At a coordination meeting between the data and marketing department and the primary team running the project, editor Carl raised the question of timeliness: “It is a bit difficult right now because we do not have anything that is called ‘lifetime’, and the question is how do we solve it; Should the journalists assign a certain number of hours of relevance to an article, but then how do you know when a traffic accident is no longer interesting? And how do you then adjust it if the first judgement was wrong?” Data scientist Chris suggested a datafied way of judging it based on user traffic. However, this idea was rejected by Carl, who stated that, for example, in the case of a traffic jam, then that story “will live as long as there is traffic on the roads and not as long as the congestion is actually there”. He advocated having the journalists assign how long an article should “live” using a numerical system like that of editorial priority. This led Chris to question whether journalists, if given this task, would not just attempt to “game” the algorithm and ascribe too long a lifetime to their articles so they could circulate longer to get “as many views as possible”. (Excerpt from meeting).
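
The mechanism under negotiation in this exchange can be rendered as a minimal, purely illustrative sketch: each article carries a journalist-assigned “lifetime” in hours, and expired articles are removed from the pool before the recommender ranks them. The field names and values below are our assumptions, not a system that was actually built; as the excerpt shows, whether and by whom such a value should be assigned was precisely what remained contested.

```python
# Hedged sketch of a journalist-assigned "lifetime" filter applied to the recommendation
# pool. Hypothetical field names; not MedieHuset's actual system.
from datetime import datetime, timedelta

def filter_timely(articles, now=None):
    """Keep only articles whose assigned lifetime has not yet elapsed."""
    now = now or datetime.utcnow()
    return [
        a for a in articles
        if a["published"] + timedelta(hours=a["lifetime_hours"]) > now
    ]

pool = [
    {"id": "traffic-accident", "published": datetime(2020, 9, 1, 7), "lifetime_hours": 6},
    {"id": "council-budget", "published": datetime(2020, 8, 30, 12), "lifetime_hours": 72},
]
# At 16:00 the same day, the traffic story has expired while the council story remains.
print([a["id"] for a in filter_timely(pool, now=datetime(2020, 9, 1, 16))])  # ['council-budget']
```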

The necessity of finding a solution for the issue of timeliness was a direct result of the disassembling of the chronologically ordered news site, where articles would “naturally” move down the site. There was a fear that the algorithm, if not controlled through a filtering mechanism, might recommend “old news”. Showing untimely news was considered a great risk by the editors, as it would endanger their very identity as a news medium. As editor Carl emphasized during a meeting discussing the future filtering mechanism, “We need to have some filters relating to time because we cannot have ancient content there. As a news site, it has to be something relatively timely” (Meeting Transcript). However, what the above excerpt illustrates is that while the team members knew that the question of timeliness would have to be handled, it was not a straightforward process but involved negotiations regarding what timeliness is and who would be the best judge of that. The latter question highlighted the divide between the competing values of journalistic and algorithmic objectivity and authority (Gillespie 2014; Carlson 2015).

From newsroom studies, we know that “deciding what’s news” is both a relational and a situational practice that includes complex negotiations regarding the categorisations of different types of news stories and their relation to other news within and outside the news organization. Making the algorithm account for such complex negotiations is difficult because the algorithm dictates a format in which a binary choice of recommend or not can be made for each article. This binarity means, as the excerpt illustrates, that when newsworthiness becomes configured anticipatorily, timeliness has to be predetermined and exist in a durable format, similar to the way in which MedieHuset already assigned editorial value to articles. This technical solution to the problem of timeliness ultimately ontologically transforms what timeliness is and how it comes into being because in the future, it will be established through its relation to the individual article rather than its relation to the collective news ecology, making it a lasting and definite quality rather than a situational and relational one. Similar issues arose at MedieHuset regarding the idea of serving “local publics”, which suddenly also had to be delineated, as we describe in the following.

Localness and the Publicist’s Editorial Mission at Stake

During an interview Carl sent a link through the chat function in Teams that transferred me to a Google Sheet where he had organized the latest test results from the algorithm in a way that illustrated overlap between previous reading behaviour and the algorithmic recommendations. He was quite happy with the look of the results, emphasizing how “you could see that the articles being recommended were actually articles which were in the same category as the ‘mostly read’. So, it is illustrating that although we do not have a manual filter that can filter on geography, we will hit very locally with our content, with the reservation that a user, as we see here for example, might have read a lot in the sports category and therefore also got a lot in the sports category… . There, we might need the manual filter to ensure that there is also a fair amount of local content and not just – what can you say? – sports news”. (Excerpt from Interview 4)

Ensuring that the presented content on the news site in the future would be more local was considered key to the project. Particularly, this was seen as contributing to the medium’s editorial mission, which was generally described as supporting “local democracy and local societies” by contributing to “local public opinion”. This was something that MedieHuset’s large regional sites with their large amount of shared content currently hindered, as local content quickly disappeared in the flux of content. What the excerpt illustrates is how the “goodness” of the algorithmic recommendations highly depends on delivering on this promise and how another filtering mechanism again becomes a way of coordinating between the different realities at stake.

The tension between realities is exemplified here when the algorithm, in a sense, is “betrayed” by a user whose news consumption is less locally oriented (which is what guides the algorithmic selection) than what Carl finds appropriate. This dilemma can be seen as a new algorithmic version of the classic tension in media organizations between what users want to read and what organizations, based on their editorial missions, determine that the users at least should be confronted with on a front page (Ang 2002). The editor at MedieHuset was concerned that if there was not enough local content on the front page, it might counteract the intended role of the personalization algorithm in supporting their editorial mission, which, particularly in the Nordic countries, is a strong part of news organizations’ identity and self-understanding (Willig 2008). Therefore, it was considered necessary to experiment with different measures in the filtering mechanism and thereby test the boundaries of, in Carl’s words, “how much we can endure presenting to the users that which is not local without – what can you say? – no longer fulfilling our editorial mission to support the local”. Carl suggested, for example, having a filter that would ensure that “50% of the content will be local” (Meeting Transcript). This experimentation mimics the findings of Hartley (2011a), where different logics in the move from print to online were settled by finding the “right mix” of content (Hartley 2011a, 289). However, contrary to the findings in Hartley’s study, the right mix at MedieHuset was decided through a material instantiation of what localness meant for the newspaper, which again was induced by the disassembling and loss of control of the front page. When they could no longer continuously discuss the right mix, there was a need to create a “fixed” value of localness that could be applied across the individualized front pages, again creating a highly specific and standardized version of localness that would relate only to how much rather than what types of local content were presented. This boundary-seeking experimentation reveals not only how editorial values shape how the experimentation unfolds but also how it equally begins to transform and give specific form to what “publicist” (i.e., the very identity and key values of the news organisation) means now and might mean in the future. As we unfold in the following, at MedieHuset, “publicist” also entailed an idea and ideal of serving certain democratic publics, which was challenged by the implementation of the algorithm.
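
Before turning to that discussion, the kind of quota filter Carl proposes can be rendered as a minimal, purely illustrative sketch. The 50% local share comes from his suggestion above; the field names, feed size and replacement logic are our assumptions rather than MedieHuset’s implementation.

```python
# Hedged sketch of a "right mix" quota filter: guarantee that a fixed share of the
# personalized feed is local, replacing the lowest-ranked non-local recommendations
# with articles from a local pool when the quota is not met. Hypothetical names.
def enforce_local_share(recommended, local_pool, feed_size=10, local_share=0.5):
    """Build a feed of up to feed_size articles with at least local_share local content."""
    required_local = int(feed_size * local_share)
    feed = list(recommended[:feed_size])
    extra_local = [a for a in local_pool if a not in feed]  # pool items assumed to be local
    for i in range(len(feed) - 1, -1, -1):                  # walk up from the lowest rank
        n_local = sum(1 for a in feed if a["is_local"])
        if n_local >= required_local or not extra_local:
            break
        if not feed[i]["is_local"]:
            feed[i] = extra_local.pop(0)
    return feed

feed = enforce_local_share(
    recommended=[{"id": "sport-1", "is_local": False}, {"id": "sport-2", "is_local": False}],
    local_pool=[{"id": "city-1", "is_local": True}],
    feed_size=2,
)
print([a["id"] for a in feed])  # -> ['sport-1', 'city-1']
```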

Re-Assembling “the News” without Losing the Collective

During an interview, the chief editor John explained that while there are many positives related to personalization, it remains important to be “something for a collective”. He underlined that that is what separates them from social media, because unlike social media, they do not let individual interest alone control the site; they choose content that everyone must see. He also stated that this would be ensured by having the editors continue to have control of the top fields on the news site, which they will “hand-hold” to ensure that the news that is considered societally important reaches all users. He argued against personalizing everything: “I do not think we should make 1.1 million different editions because we do have an editorial mission.”

The editor’s statements illustrate how serving the public as a collective continues to be the foundational pillar of what it means to be a news medium and what separates them from the equally algorithmically driven social media. There was an emphasis on news as not only catering to the individual, a role the editors ascribed to commercial social media platforms, but also always orienting the news towards a collective. This is a distinction that (re)enacts the divide between market-driven logic and editorial logic, which has also been found in previous studies of the usage of metrics in editorial decision-making, while adding a new twist to it because a decision has to be made on how this must figure in the algorithmic system.

The negotiations over ensuring the incorporation of a collective differed because, in the case of MedieHuset, it was considered pertinent for the editors to actually circumvent the algorithm. What was interesting was that the decision of what counted as collectively relevant was considered to be within the capabilities of the (human) editors alone, contrary to what was the case in the previous sections, where agency delegated to human actors through assigning lifetime, localness or editorial values was considered sufficient. Editor Carl used the example of the coverage of an incident at a school to explain the importance of the editors having this role. Such an incident, he explained, might hold relevance for schools all over the region, not just in the local area, and it would be bad if such stories were not distributed widely. However, he also stated that “the editors should catch and hand-hold such stories to ensure that people see them” (Interview 2). The algorithm was seen as unable to “catch” such societal importance, connecting this particular form of newsworthiness to editorial decision-making alone. In that way, it also separated the social and situated “editorial decision-making” from the technical and designed algorithmic relevance, underlining how the former continues to have a specific value for news organizations in society. As the COVID-19 crisis unfolded in the spring of 2020, the editors involved further emphasized the importance of sharing information with a collective public. In such situations, they stated, the algorithm would simply have to be turned off. This situation illustrates how two distinct versions of journalism were still present during this process: one reminiscent of the classical journalistic task of serving society, and another that allowed news to become individualized – and one had not fully replaced the other. The two understandings remained locked in a battle, which became materially recognizable in how agency was distributed in the algorithmic system. Editors were given both indirect and direct agency over the presentation of news, but new actors also gained increased agency over tasks that were previously predominantly editorial. In the following, we continue this discussion of how agency, and thereby power, is shifting due to this transformation.
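
The division of labour described in this section can be summarized in a schematic, purely illustrative sketch: editor-pinned stories always occupy the top slots, the personalized ranking fills the remainder, and a switch falls back to a shared, editorially ordered page when personalization is turned off (as the editors envisaged for crisis situations). The function and parameter names are our assumptions, not MedieHuset’s design.

```python
# Purely illustrative sketch of the arrangement described above: editors "hand-hold"
# the top fields, personalization fills the rest, and a kill switch falls back to a
# shared, editorially ordered page. Hypothetical names.
def assemble_front_page(pinned, personalized, shared_chronological,
                        slots=20, personalization_on=True):
    """Return the list of articles shown to one user, top slots first."""
    page = list(pinned)                       # societally important stories shown to everyone
    remainder = personalized if personalization_on else shared_chronological
    for article in remainder:
        if len(page) >= slots:
            break
        if article not in page:               # avoid duplicating pinned stories
            page.append(article)
    return page
```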

Discussion: Moving from the Newsroom into the Design Room

The process of building the new algorithmic personalization system at MedieHuset and the resulting processes of disassembly and reassembly of the news illustrate how core values of journalism and the otherwise strong professional ideology of journalism (Deuze and Witschge 2018) are renegotiated and reconfigured when encountering new actors (i.e., the algorithmic system, data scientists, data infrastructures, etc.). However, what also becomes clear is that during this process, much power and agency changes hands, as decisions concerning the future of news are ultimately moved from the newsroom and out of the hands of the editors and journalists into the design room and the hands of data scientists, developers and algorithmic models. This shift was paradoxically clear even in the first workshop conducted at MedieHuset regarding the personalization project, where the participants, who were mainly from the IT-development and the data analysis departments, were asked to draw their visions of the algorithmic system on large pieces of white paper, which were later transferred to a whiteboard. On the finished drawing, neither editors nor journalists were anywhere to be found, leading us to cautiously enquire about their rather curious lack of presence. This question resulted in a yellow Post-it Note with the label “journalist” being placed in the corner of the drawing, standing as an outsider looking into the algorithmic system. Equally, only two editors were involved in the project as representatives of editorial judgement, and while the question of involving local editors and journalists was raised multiple times, doing so was postponed, partly because the team found it difficult to know how and when to include the journalists. Later, the team decided that it was best to wait until they had something finished that they could show the editors and journalists. This exclusion of the editorial staff is particularly interesting because the idea of the personalization project had actually originated from an editorial vision of improving the democratic function of the news organization. The implications of such agential shifts have already been discussed in the journalism literature (Guzman and Lewis 2020; Lewis, Guzman, and Schmidt 2019; Milosavljević and Vobič 2021; Schapals and Porlezza 2020; Shangyuan, Tandoc, and Salmon 2019), illustrating the growing importance of understanding not only how agency is delegated in new ways but also what that means for the future of journalism, for example, in relation to epistemology, as mentioned in the introduction.

Based on this study, we can point to several important implications. First, algorithms cannot be viewed as neutral “machines” entering the journalistic context, nor are the decisions involved mere “technical matters”; rather, they become “ontological engines”, to paraphrase the wording of Yoni Van Den Eede (2015). Thus, algorithms function as a generative force in transforming journalistic values by, for example, demanding specific formats (Van Den Eede 2015, 151). Second, the editorial counterweight to these “technical decisions” (i.e., the editors and journalists) was limited to a select group who had to negotiate the place of existing journalistic values in the algorithmic system. Their role can also be discussed in relation to the opacity of algorithms, as throughout the process, the working of the algorithmic system was constantly “black-boxed” by both data scientists and editors, making it difficult, particularly for the editors, to “act” against the algorithmic system. This lack of involvement makes the transformation a form of “invisible revolution” (Holm 2007), which underlines the need to describe it in detail to see its full effects. Third, it can be argued that the external data providers and internal data departments came to establish themselves as what Latour (1987) calls “centres of calculation”, which describes commercial organizations that routinely gather and distribute inscriptions containing specific knowledge claims about complex phenomena, for example, audiences (Latour 1987, 223). These centres of calculation shape the cumulative character of many other actors, establishing power through materials, as they can dominate actions in other places from a distance (Latour 1987). This power through materials proved to be rather forceful at MedieHuset, where the data and marketing department controlling the algorithm development and handling the data were driving the data selection process, and their choices were rarely challenged. This illustrates how the platformization of news, which Van Dijck, Poell, and de Waal (2018) have illustrated on a macro- and meso-level, has a concrete impact on news organizations, for example, changing how news values such as timeliness and localness are transformed in the process of personalization.

Conclusion

Empirically, we have seen how the assembling of individual news users as data points and the disassembling of the website front page as a product in turn created a sense of loss of control for the editors. As the editors realized that this might put their editorial and democratic mission at stake, they began to reassemble the news by building existing news values, such as timeliness and localness, into the algorithmic system. This manoeuvre proved troublesome, as “quantifying” news work and editorial decision-making involved reconfiguring those values into formats that the algorithmic system could interact with, meaning that they had to be binary, predetermined and applicable across all articles. By tracing these processes of assembling, disassembling and reassembling, we have shown not only how such processes are reconfiguring journalism in quite significant ways, but also how the move from deciding newsworthiness as part of a relational and situated practice to designing newsworthiness into an algorithmic system has implications exceeding the technology itself. Further studies are needed to understand how this, after the implementation of an algorithmic system, might change the relational and situational values and gatekeeping inside the newsroom.

Although this remains a preliminary ethnographic case study, what has become clear is that the movement from deciding to designing what’s news and the increasing datafication of news organizations will not leave journalism as we know it “untouched”. More studies will therefore be needed in what could be called third-wave news ethnographies to fully understand the reach and effects of datafication on news practices and what it means for the journalistic field as a democratic institution in modern societies.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This study was funded by Velux Fonden.

Notes

1 ‘MedieHuset’ is the Danish word for ‘the media organisation’ and is commonly used by both national and regional media organisations when describing themselves.

2 The specific model is not a key focus point in this article, as the initial distributional and transformative effects would be present no matter the model. However, the choice of model does become a determining factor in how much power different actors can have in the design process and potentially also has different democratic effects, as initially illustrated by Helberger (2019).

References

  • Adar, Eytan, Carolyn Gearig, Ayshwarya Balasubramanian, and Jessica Hullman. 2017. “PersaLog: Personalization of News Article Content.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 3188–3200. Denver Colorado USA: ACM.
  • Ali, Waleed, and Mohamed Hassoun. 2019. “Artificial Intelligence and Automated Journalism: Contemporary Challenges and New Opportunities.” International Journal of Media, Journalism and Mass Communications 5 (1): 40–49. https://doi.org/10.20431/2454-9479.0501004.
  • Altheide, David. 1976. Creating Reality: How Television News Distorts Events. Beverly Hills, CA: Sage Publications.
  • Anderson, C. W. 2011. “Between Creative and Quantified Audiences: Web Metrics and Changing Patterns of Newswork in Local US Newsrooms.” Journalism 12 (5): 550–566.
  • Anderson, C. W., and Juliette De Maeyer. 2015. “Objects of Journalism and the News.” Journalism 16 (1): 3–9.
  • Anderson, C. W., and Daniel Kreiss. 2013. “Black Boxes as Capacities for and Constraints on Action: Electoral Politics, Journalism, and Devices of Representation.” Qualitative Sociology 36 (4): 365–382.
  • Ang, Ien. 2002. Desperately Seeking the Audience. Hoboken: Taylor and Francis.
  • Arsenault, Amelia H. 2017. “The Datafication of Media: Big Data and the Media Industries.” International Journal of Media & Cultural Politics 13 (1): 7–24.
  • Bodó, Balázs. 2019. “Selling News to Audiences – a Qualitative Inquiry into the Emerging Logics of Algorithmic News Personalization in European Quality News Media.” Digital Journalism 7 (8): 1054–1075.
  • Brandt, Eva, Jörn Messeter, and Thomas Binder. 2008. “Formatting Design Dialogues – Games and Participation.” CoDesign 4 (1): 51–64.
  • Braun, Joshua A. 2015. This Program is Brought to You by…: Distributing Television News Online. London: Yale University Press.
  • Breed, Warren. 1955. “Social Control in the Newsroom: A Functional Analysis.” Social Forces 33 (4): 326–335.
  • Bruns, Axel. 2019. Are Filter Bubbles Real? Cambridge, UK: Polity Press.
  • Bucher, Taina. 2013. “The Friendship Assemblage: Investigating Programmed Sociality.” Television & New Media 14 (6): 479–493.
  • Bucher, Taina. 2017. “‘Machines Don’t Have Instincts’: Articulating the Computational in Journalism.” New Media & Society 19 (6): 918–933. https://doi.org/10.1177/1461444815624182.
  • Bucher, Taina. 2018. “Neither Black nor Box: (Un)Knowing Algorithms.” In If…Then: Algorithmic Power and Politics. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190493028.001.0001/oso-9780190493028-chapter-3.
  • Callon, Michel. 2007. “What Does It Mean to Say That Economics is Performative?” In Do Economists Make Markets?: On the Performativity of Economics, edited by Donald MacKenzie, Fabian Muniesa, and Lucia Siu, 311–357. Princeton, NJ: Princeton University Press.
  • Carlson, Matt. 2015. “The Robotic Reporter: Automated Journalism and the Redefinition of Labor, Compositional Forms, and Journalistic Authority.” Digital Journalism 3 (3): 416–431.
  • Carlson, Matt. 2018. “Automating Judgment? Algorithmic Judgment, News Knowledge, and Journalistic Professionalism.” New Media & Society 20 (5): 1755–1772. https://doi.org/10.1177/1461444817706684.
  • Christin, Angèle. 2020. Metrics at Work: Journalism and the Contested Meaning of Algorithms. New York: Princeton University Press.
  • Colleoni, Elanor, Alessandro Rozza, and Adam Arvidsson. 2014. “Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data.” Journal of Communication 64 (2): 317–332.
  • Cottle, Simon. 2000. “New(s) Times: Towards a “Second Wave” of News Ethnography.” Communications 25 (1): 19–42.
  • Deuze, Mark, and Tamara Witschge. 2018. “Beyond Journalism: Theorizing the Transformation of Journalism.” Journalism 19 (2): 165–181.
  • Dewalt, K., and B. Dewalt. 2011. Participant Observation: A Guide for Fieldworkers. Lanham, MD: Rowman & Littlefield.
  • Diakopoulos, Nicholas. 2015. “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.” Digital Journalism 3 (3): 398–415.
  • Diakopoulos, Nicholas, and Michael Koliska. 2017. “Algorithmic Transparency in the News Media.” Digital Journalism 5 (7): 809–828.
  • Epstein, Edward Jay. [1973] 2000. News from Nowhere: Television and the News. Chicago: Ivan R. Dee.
  • Fenton, Natalie. 2010. “Drowning or Waving? New Media, Journalism and Democracy.” In New Media, Old News: Journalism & Democracy in the Digital Age, edited by Natalie Fenton, 3–16. London: SAGE Publications Ltd. https://doi.org/10.4135/9781446280010.n1.
  • Fishman, Mark. 1988. Manufacturing the News. Austin, TX: University of Texas Press.
  • Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80 (S1): 298–320.
  • Gans, Herbert J. [1979] 2004. Deciding What’s News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time. Evanston, IL: Northwestern University Press.
  • Geertz, Clifford. 1973. “Thick Description: Towards an Interpretative Theory of Culture.” In The Interpretation of Cultures, edited by Clifford Geertz, 310–323. New York: Basic Books, Inc.
  • Gibbs, G. 2007. Analysing Qualitative Data. Los Angeles: SAGE.
  • Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot, 167–194. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009.
  • Guzman, Andrea L., and Seth C. Lewis. 2020. “Artificial Intelligence and Communication: A Human–Machine Communication Research Agenda.” New Media & Society 22 (1): 70–86.
  • Hartley, Jannie Møller. 2011a. “Radikalisering af Kampzonen: en Analyse af Netjournalistisk Praksis og Selvforståelse i Spaendingsfeltet Mellem Idealer og Publikum.” PhD Thesis., Roskilde, Denmark: Roskilde University Centre. https://forskning.ruc.dk/da/publications/radikalisering-af-kampzonen-en-analyse-af-netjournalistisk-praksi.
  • Hartley, Jannie Møller. 2011b. “Routinizing Breaking News: Categories and Hierarchies in Danish Online Newsrooms.” In Making Online News: Newsroom Ethnographies in the Second Decade of Internet Journalism, edited by David Domingo and Chris Paterson, 73–87. Bern: Peter Lang.
  • Hartley, Jannie Møller. 2013. “The Online Journalist between Ideals and Audiences: Towards a (More) Audience-Driven and Source-Detached Journalism?” Journalism Practice 7 (5): 572–587.
  • Helberger, Natali. 2019. “On the Democratic Role of News Recommenders.” Digital Journalism 7 (8): 993–1012.
  • Hess, David. 2001. “Ethnography and the Development of Science and Technology Studies.” In Handbook of Ethnography, edited by P. Atkinson, A. Coffey, S. Delamont, J. Lofland, and L. Lofland, 234–245. London: SAGE Publications Ltd.
  • Holm, Petter. 2007. “Which Way Is Up on Callon?” In Do Economists Make Markets? On the Performativity of Economics, edited by Donald A. MacKenzie, Fabian Muniesa, and Lucia Siu, 225–243. Princeton, NJ: Princeton University Press.
  • Holtzhausen, Derina. 2016. “Datafication: Threat or Opportunity for Communication in the Public Sphere?” Journal of Communication Management 20 (1): 21–36.
  • Kavanagh, Donncha, Sean McGarraghy, and Séamas Kelly. 2015. “Ethnography in and around an Algorithm.” In 30th EGOS Colloquium: Sub-Theme 15: (SWG) Creativity, Reflexivity and Responsibility in Organizational Ethnography, Athens, Greece, 3–5 July 2015. http://researchrepository.ucd.ie/handle/10197/7348.
  • Kennedy, Helen, Thomas Poell, and José van Dijck. 2015. “Data and Agency.” Big Data & Society 2 (2): 1–7.
  • Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
  • Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. New York: Oxford University Press.
  • Laurent, Brice. 2016. “Political Experiments That Matter: Ordering Democracy from Experimental Sites.” Social Studies of Science 46 (5): 773–794.
  • Law, John. 2009. “Actor Network Theory and Material Semiotics.” In The New Blackwell Companion to Social Theory, edited by Bryan S. Turner, 141–158. Oxford, UK: Wiley-Blackwell.
  • Lewis, Seth C., Andrea L. Guzman, and Thomas R. Schmidt. 2019. “Automation, Journalism, and Human–Machine Communication: Rethinking Roles and Relationships of Humans and Machines in News.” Digital Journalism 7 (4): 409–427.
  • McNair, Brian. 2018. “Journalism as Public Sphere.” In Journalism, edited by Tim P. Vos, 149–168. Berlin, Boston: De Gruyter. https://doi.org/10.1515/9781501500084-008.
  • Milosavljević, Marko, and Igor Vobič. 2021. “‘Our Task is to Demystify Fears’: Analysing Newsroom Management of Automation in Journalism.” Journalism 22 (9): 2203–2221.
  • Mol, Annemarie. 2002. The Body Multiple: Ontology in Medical Practice. Durham, NC: Duke University Press.
  • Mol, Annemarie. 2010. “Actor-Network Theory: Sensitive Terms and Enduring Tensions.” Kölner Zeitschrift für Soziologie und Sozialpsychologie, Sonderheft 50 (1): 253–269. https://dare.uva.nl/search?identifier=75bbc661-0a89-475a-9eef-8c8c5a2e9904.
  • Mol, Annemarie, and John Law. 2004. “Embodied Action, Enacted Bodies: The Example of Hypoglycaemia.” Body & Society 10 (2–3): 43–62.
  • Moser, Ingunn. 2008. “Making Alzheimer’s Disease Matter. Enacting, Interfering and Doing Politics of Nature.” Geoforum 39 (1): 98–110.
  • Muniesa, Fabian, Yuval Millo, and Michel Callon. 2007. “An Introduction to Market Devices.” The Sociological Review 55 (S2): 1–12.
  • Napoli, Philip M. 2014. “Automated Media: An Institutional Theory Perspective on Algorithmic Media Production and Consumption.” Communication Theory 24 (3): 340–360.
  • Newman, Nic. 2018. “Journalism, Media and Technology Trends and Predictions 2018.” Reuters Institute for the Study of Journalism. https://www.digitalnewsreport.org/survey/2018/
  • Newman, Nic. 2020. “Journalism, Media and Technology Trends and Predictions 2020.” Reuters Institute for the Study of Journalism. https://www.digitalnewsreport.org/survey/2020/
  • Neyland, Daniel. 2015. “On Organizing Algorithms.” Theory, Culture & Society 32 (1): 119–132.
  • Neyland, Daniel. 2019. The Everyday Life of an Algorithm. Cham: Springer International Publishing.
  • Neyland, Daniel, and Norma Möllers. 2017. “Algorithmic IF … THEN Rules and the Conditions and Consequences of Power.” Information, Communication & Society 20 (1): 45–62.
  • Pariser, Eli. 2011. The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think. New York: Penguin Press.
  • Schapals, Aljosha Karim, and Colin Porlezza. 2020. “Assistance or Resistance? Evaluating the Intersection of Automated Journalism and Journalistic Role Conceptions.” Media and Communication 8 (3): 16–26.
  • Schlesinger, Philip. 1978. Putting Reality Together. London: Constable.
  • Schwennesen, Nete. 2019. “Algorithmic Assemblages of Care: Imaginaries, Epistemologies and Repair Work.” Sociology of Health & Illness 41 (S1): 176–192.
  • Wu, Shangyuan, Edson C. Tandoc, and Charles T. Salmon. 2019. “A Field Analysis of Journalism in the Automation Age: Understanding Journalistic Transformations and Struggles through Structure and Agency.” Digital Journalism 7 (4): 428–446.
  • Smith, Brent, and Greg Linden. 2017. “Two Decades of Recommender Systems at Amazon.com.” IEEE Internet Computing 21 (3): 12–18.
  • Sørensen, Jannick Kirk. 2019. “Public Service Media, Diversity and Algorithmic Recommendation: Tensions between Editorial Principles and Algorithms in European PSM Organizations.” In CEUR Workshop Proceedings, Vol. 2554, 6–11. https://vbn.aau.dk/en/publications/public-service-media-diversity-and-algorithmic-recommendation-ten.
  • Spöhrer, Markus, and Beate Ochsner, eds. 2016. Applying the Actor-Network Theory in Media Studies. Hershey, PA: IGI Global (Advances in Media, Entertainment, and the Arts book series).
  • Spradley, James. 1979. The Ethnographic Interview. New York: Holt, Rinehart & Winston.
  • Tandoc, Edson C. 2014. “Journalism is Twerking? How Web Analytics is Changing the Process of Gatekeeping.” New Media & Society 16 (4): 559–575.
  • Thurman, Neil. 2011. “Making ‘The Daily Me’: Technology, Economics and Habit in the Mainstream Assimilation of Personalized News.” Journalism 12 (4): 395–415.
  • Thurman, Neil, and Steve Schifferes. 2012. “The Future of Personalization at News Websites: Lessons from a Longitudinal Study.” Journalism Studies 13 (5–6): 775–790.
  • Tuchman, Gaye. 1973. “Making News by Doing Work: Routinizing the Unexpected.” American Journal of Sociology 79 (1): 110–131.
  • Tunstall, Jeremy. 1971. Journalists at Work: Specialist Correspondents: Their News Organizations, News Sources, and Competitor-Colleagues. London: Constable.
  • Van Den Eede, Yoni. 2015. “Tracing the Tracker.” In Postphenomenological Investigations: Essays on Human-Technology Relations, edited by Robert Rosenberger and Peter-Paul Verbeek, 143–158. Lanham: Lexington Books. https://research.utwente.nl/en/publications/postphenomenological-investigations-essays-on-human-technology-re.
  • Van Dijck, José, Thomas Poell, and Martijn de Waal. 2018. The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.
  • Warner, Malcolm. 1971. “Organizational Context and Control of Policy in the Television Newsroom: A Participant Observation Study.” The British Journal of Sociology 22 (3): 283–294.
  • White, David Manning. 1950. “The ‘Gate Keeper’: A Case Study in the Selection of News.” Journalism Quarterly 27 (4): 383–390.
  • Willig, Ida. 2008. “Publicisme 2.0.” Information, May. https://forskning.ruc.dk/da/publications/publicisme-20.
  • Willig, Ida. 2011. “The Journalistic Gut Feeling: Journalistic Doxa, News Habitus and Orthodox News Values.” In Cultural Meanings of News: A Text-Reader, edited by D. A. Berkowitz, 83–98. London: SAGE Publications.
  • Woolgar, Steve, and Javier Lezaun. 2013. “The Wrong Bin Bag: A Turn to Ontology in Science and Technology Studies?” Social Studies of Science 43 (3): 321–340.
  • Yaneva, Albena. 2009. “Making the Social Hold: Towards an Actor-Network Theory of Design.” Design and Culture 1 (3): 273–288.
  • Zuiderveen Borgesius, Frederik J., Damian Trilling, Judith Möller, Balázs Bodó, Claes H. de Vreese, and Natali Helberger. 2016. “Should We Worry about Filter Bubbles?” Internet Policy Review 5 (1): 1–16.