
Interested in Diversity

The role of user attitudes, algorithmic feedback loops, and policy in news personalization

Balázs Bodó, Natali Helberger, Sarah Eskens & Judith Möller

Abstract

Using survey evidence from the Netherlands, we explore the factors that influence news readers’ attitudes toward news personalization. We show that the value of personalization depends on commonly overlooked factors, such as concerns about a shared news sphere and the depth and diversity of recommendations. However, these expectations are not universal. Younger, less educated users have little exposure to non-personalized news, and they also show little concern about diverse news recommendations. We discuss the policy implications of our findings. We show that quality news organizations that pursue reader loyalty and trust have a strong incentive to implement personalization algorithms that help them achieve these goals by taking into account the attitudes of diversity-expecting users and providing high-quality recommendations. Diversity-valuing news readers are thus well placed to be served by diversity-enhancing recommender algorithms. However, some users are in danger of being left out of this positive feedback loop. We make specific policy suggestions on how to address diversity-reducing feedback loops and how to encourage the development of diversity-enhancing ones.

1. Introduction

Algorithmic agents1 that personalize our digital information flows are ubiquitous.2 Personalized recommender systems are often portrayed as necessary to manage the digital information overload and enable user autonomy (Gauch et al. 2007; Oulasvirta and Blom 2007; Friedman and Nissenbaum 1997). Others, however, see personalization as a threat (Borgesius et al. 2016). An influential stream of literature argues that algorithmic agents may manipulate our worldview because they put people and communities in “filter bubbles” and “echo chambers” (Pariser 2011; Sunstein 2009; Yeung 2017). The current media law and policy debate (Borgesius et al. 2016; Yeung 2017) about echo chambers (Sunstein 2002) or “Daily Me’s” (Negroponte 1995) is concerned that personalized recommendations are opaque in how they filter and recommend information to individual users, and that this filtering may lead to a number of undesirable consequences. One such potential consequence is the emergence of filter bubbles, which are described in more detail later. Another expected negative effect is that such algorithmic agents could reinforce processes of self-selection. News users have been shown to prefer sources that reaffirm their ideology and worldview, in order to avoid cognitive stress (Sears and Freedman 1967; Stroud 2011). Recommendation algorithms can potentially catalyze this self-selection process by taking away users’ choice to avoid or confront dissonant content. Such a situation could conflict with the fundamental values of our information society, including access to diverse information3, a shared public sphere (Habermas 1989; Castells 2008), the ability to make free and informed decisions in private4, and the ability to partake in political decision-making.5

Despite the growing literature on the topic, many questions about the processes and dynamics behind personalized recommendations remain unanswered.6 We have little insight into users’ attitudes, concerns, and expectations regarding personalized news selection. Our theories about how algorithms interpret user intentions, attitudes, and interests, and how algorithms respond to user signals, are even patchier. We understand little of the interaction between end-users and algorithms.

Despite the widespread criticism and the general lack of conclusive supporting empirical evidence, Pariser’s filter bubble argument has had a major impact on how we imagine the interaction between users and recommendation algorithms (Borgesius et al. 2016; Dutton et al. 2017; Haim, Graefe, and Brosius 2017; High Level Group on fake news and online disinformation 2018; Koene et al. 2015; Quattrociocchi, Scala, and Sunstein 2016; Vīķe‐Freiberga et al. 2013). Next to influencing the academic discourse around news personalization, the filter bubble argument has also made its way into national and international policy discussions on future media regulation.7

Worries about filter bubbles are typically based on two fundamental assumptions: People are diversity averse, and algorithms reduce diversity. Together, users and algorithms create a spiral, in which users are one-dimensional and prefer their information diet to be filtered so that it reflects their interests, and in which this filtering reinforces the individual’s one-dimensionality. The goal of this article is to pave the way toward a better understanding of the users of personalized news services, and of their concerns and expectations. Perhaps some users are more at risk of ending up in filter bubbles. We show that a better understanding of the user matters if we want to understand how users shape personalization algorithms and how these algorithms shape their users.

To reach our goal, we combine survey-based evidence with theory to provide an alternative view of the algorithm–user interaction process. In Section 2, we review the premises of the filter bubble discussion. Using evidence from the Netherlands, in Section 3 we show that, contrary to the assumptions commonly made by proponents of the filter bubble argument, the diversity of recommendations is an important factor in citizens’ evaluations of news recommenders. In Section 4, we also use our data to demonstrate that not all users are equal, and that the current debate about algorithms and filter bubbles fails to acknowledge that the audience is heterogeneous, with diverse personal preferences and propensities not just in terms of interests and ideologies, but also in terms of attitudes toward news personalization and expectations vis-à-vis news services. We then use theoretical arguments in Section 5 to show that the business entities that operate algorithms have ample incentives to respond to diversity-seeking people. In Section 6, we conclude that, under the right conditions, the individual user’s expectation of diverse recommendations could be the starting point for a diversity-enhancing feedback loop. We also warn that diversity expectation is not a universal user trait, and we provide empirical data on societal groups that might still be in danger of being locked in filter bubbles because this diversity-enhancing feedback loop does not kick in.

2. Premises of the Filter Bubble Discourse

Personalization has been the subject of intense debate since the publication of Eli Pariser’s (2011) influential book on filter bubbles. Pariser argues that personalizing algorithms lock people in interest-based filter bubbles. This argument rests on a few somewhat oversimplified assumptions, such as:

  1. Individual users do not value diversity and are not interested in complex societal issues (Pariser 2011, 51).

  2. Algorithmic agents do not recognize and serve complex user profiles, and they disregard preferences such as users’ desire for diverse or in-depth news (Pariser 2011, 54).

  3. The sole goal of algorithms is to identify narrow personal interests and provide people with relevant information that fits their profile (Pariser 2011, 54–56).8

  4. Personalization is already ubiquitous, and soon there will be only personalized media (Pariser 2011, 33), and personalization will be invisible to people (Pariser 2011, 10).

Based on these assumptions, the filter bubble discourse warns of several effects, such as a reduction in the diversity of information and opinions that people are exposed to; the formation of echo chambers; the subsequent polarization and fragmentation of the public debate; and the disengagement of certain social groups from the political process. Underlying the filter bubble discourse is thus yet another, more implicit assumption, namely that diversity, and exposure to diverse news9 in particular, is inherently a good thing. It is worth noting that this assumption is not self-evident. Diversity can compete with other, no less important public or economic values, such as the need to reduce complexity (Neuberger and Lobigs 2010), the personal autonomy of the audience, and the provision of information of personal importance to the audience. Also, research shows that diversity policies, and exposure to dissimilar perspectives more specifically, can at times backfire and increase polarization rather than reduce it (Dilliplane 2011; Wojcieszak 2011). And yet, evidence also suggests that diversity in the media can create opportunities for users to encounter different opinions and self-reflect on their own viewpoints (Kwon, Moon, and Stefanone 2015), enhance social and cultural inclusion (Huckfeldt, Johnson, and Sprague 2002), foster tolerance (Mutz 2002), increase one’s familiarity with views oppositional to one’s own (Price, Cappella, and Nir 2002), and lead people to perceive public opinion more accurately (Wojcieszak and Rojas 2011). This is why, for the purpose of this article, we follow the European Court of Human Rights in its conclusion that there can be no democracy without diversity.10

The filter bubble theory has been subject to much criticism. Elsewhere (Borgesius et al. 2016), we have argued that empirical evidence adds substantial qualifications to Pariser’s assumptions. Research suggests that many other factors shape the diversity of someone’s information diet. Algorithmic agents are only one, and probably not the most important, of those factors.

Another qualification concerns the type of personalized recommendations. Fears about selective exposure and filter bubbles presume that personalized recommendations give people exactly the kind of information they are interested in. Some algorithmic recommenders indeed strongly focus on short-term goals and simplistic metrics, such as the number of clicks and likes. But this is only part of the story. Algorithmic recommendations could also provide people with a more diverse choice, more depth, or less popular content (Munson and Resnick 2010; Jannach et al. 2010).11 Recommender technology is maturing, as are the goals of news media, social network sites, search engines, and other parties (Newman et al. 2017). News organizations increasingly employ recommenders to offer their users better services and more choice, unlock long-tail content, and increase long-term engagement (Bodó 2018; New York Times 2014; BBC 2017; Newman et al. 2017, 2018). In all probability, these market-driven developments lead to more complex and sophisticated recommender systems, which can better profile audiences and can better respond to and guide their actual preferences. Algorithms and audiences do not develop in isolation. The current state of algorithmic personalization shapes users’ future attitudes and expectations, which, in turn, algorithms are intended to measure and better serve. Thus, it is important to better understand the forces that shape the development of users’ relationships with personalized recommendations.

Much of the academic literature concentrates on external factors that may influence access and exposure to diverse information. Systemic factors, such as the level of polarization of media and politics in a country (Bobok 2016; Boczkowski and Mitchelstein 2013; Garrett 2013; Iyengar and Westwood 2015; Mancini 2013), have been argued to set the baseline for exposure diversity. In addition, studies have shown that the nature of information has a significant effect on information avoidance, particularly if the information is counter-attitudinal (Hart et al. 2009; Knobloch-Westerwick and Kleinman 2012; Lee 2016; Messing and Westwood 2014; O’Hara and Stevens 2015; Sears and Freedman 1967; Valentino et al. 2009; Wojcieszak 2010). Furthermore, the choice of alternative media sources increases the likelihood of exposure to diverse information. Online media offer a seemingly unlimited variety of sources and increase access, if not exposure, to diverse content (Napoli 2011). Online social networks, which have become an important news source, have also been shown to expose people to more diverse information than traditional media or physical networks of friends, family, colleagues, and neighbors, through the diversity of weak social media ties and the accidental exposure they facilitate (An, Quercia, and Crowcroft 2013; Bakshy, Messing, and Adamic 2015; Barberá et al. 2015; Bozdag 2015; Duggan and Smith 2016; Flaxman and Rao 2016; Fletcher and Nielsen 2017; Webster 2010).

Empirical data on users’ attitudes and expectations regarding algorithmic agents are scarce. Most studies on personalization have focused on targeted advertising; only a few have looked into media personalization (e.g., McDonald and Cranor 2010; Turow et al. 2009). These studies concentrated on privacy concerns, and not so much on user expectations regarding the content or quality of algorithmic recommendations. An exception is the study by Sørensen (2013), who investigated user attitudes toward self-selected personalization in Denmark. A few studies have researched how users interact with personalized recommendations, and how users value the output of recommendations. In an experiment on news recommendation diversity, Munson and Resnick (2010) found that 25% of their respondents sought diversity and valued counter-attitudinal recommendations. Duggan and Smith (2016) found that more than a third of their American respondents find political discussions with people they disagree with “interesting and informative,” and that 83% of their respondents ignore content they find disagreeable. Yet, the same study also found that almost 40% of the respondents took active steps to curate their information environment, and had at least once removed counter-attitudinal, offensive, or annoying political content, or unfollowed people who posted it.

In this context, the filter bubble theory is useful not necessarily because it gives an accurate description of the impact of algorithmic personalization on our information diet, but because it has helped to identify the domains where we need better theories and evidence to adequately reconstruct the societal impact of algorithmic personalization. For instance, it is unclear how users interact with personalized news recommendations, especially if the recommendations contain counter-attitudinal items. Also, while much of the research on profiling and personalization focuses on users’ attitudes toward potential conflicts with their privacy (McDonald and Cranor 2010; Turow et al. 2009), research looking into other factors, such as diversity, is scarce. This is why we set out to answer the following questions:

  • Who are the users of personalized recommendations, what is their attitude to diversity, and how great is their propensity to selective exposure?

  • What incentives do news producers have to detect and respond to users’ attitudes?

  • Does policy have a role in the (algorithmically mediated) interaction of users and news producers?

In the following section, we present the findings of a survey in the Netherlands about users’ expectations regarding news personalization. We first analyze factors of user acceptance of news personalization. More specifically, we investigate the relationship between attitudes toward diversity, privacy, efficacy, and a shared public sphere on the one hand, and acceptance of news personalization on the other. Next, we use latent class analysis to investigate whether these relationships hold for all user groups or whether there are specific user groups that are more vulnerable to filter bubbles. In the second step, we make an effort to lay out how organizations that deploy algorithmic agents may respond to these expectations. A better understanding of users’ attitudes toward news personalization and its impact on the diversity of the recommendations they receive has relevance in multiple domains. First, it can serve as a basis for a better-informed and more mature academic debate about the potential negative or positive democratic impact of news personalization. Second, understanding users’ attitudes toward personalization and diversity is also critical for policymakers. For example, if we find evidence that people do not care about the diversity of recommendations, and at the same time are less likely to be exposed to a diverse media offer (e.g., because they rely primarily on personalized information offers such as social media), this could signal to policymakers that there is a target group that is not likely to benefit from diversity-enhancing policies, and that may indeed be a group more likely to be in danger of one-sided information and all the possible consequences thereof (radicalization, polarization, etc.).

3. User expectations regarding personalization

We conducted a cross-sectional survey to explore what factors influence the desirability of personalized news services. We collected data from a representative sample of Dutch adults (n = 1556) through computer-assisted web interviewing (CAWI) in the period October 5–November 14, 2015. The survey was administered by the Dutch polling company CentERdata, and the sample was drawn from the Dutch academic household panel, the LISS panel12.

The Netherlands is a particularly useful case to study for two reasons. First, the technological infrastructure relevant to news personalization is very advanced: There is almost universal access to high-speed internet, and in recent years Dutch media companies have benefited from a steadily growing GDP. Second, because the Dutch journalistic culture is characterized by “freedom of speech, plurality and self-regulation and has a strong tradition concerning ethic codes and codes of conduct” (Paapst and Mulder 2017), it is a good example of the democratic corporatist model (Hallin and Mancini 2004). It can be expected that the Dutch news audience, which is used to independent news from diverse sources, has similar expectations of news personalization. It should be noted that choosing the Netherlands as a case limits the generalizability of the results to countries with similar characteristics. However, we believe that the relationships we identify in this study result from processes of attitude formation toward news personalization that are also likely to occur under different circumstances.

We defined the desirability of personalization as the cumulative score of three survey items.13 We addressed the filtering function of personalization with “I find it useful if a news website leaves out news that is not relevant for me.” We measured the perceived user need for such filtering with “I find it annoying if a news site shows news that is not important to me.” And we surveyed the usefulness of the recommendation function with “I find it useful if a news website highlights news that is especially important to me.”
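Since the analysis code is not published with the article, the following minimal sketch in R illustrates how such a composite scale can be constructed and its internal consistency checked; the data frame and item names are hypothetical stand-ins for the three items quoted above, and note 13 reports a Cronbach’s alpha of .75.

```r
# Sketch only, not the authors' code; `survey` and the column names
# are hypothetical stand-ins for the three desirability items.
library(psych)

items <- survey[, c("filter_useful",       # "useful if a news website leaves out ..."
                    "irrelevant_annoying", # "annoying if a news site shows ..."
                    "highlight_useful")]   # "useful if a news website highlights ..."

psych::alpha(items)                                   # internal consistency (reported: .75)
survey$desirability <- rowMeans(items, na.rm = TRUE)  # composite scale score
```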

We defined four areas in relation to personalization that we wished to further explore. These areas roughly follow the assumptions of the filter bubble theory and the literature on the effects of online communication on diversity exposure:

  • The use of different news channels

  • Expectations regarding personalized news services

  • Attitudes toward a shared public sphere

  • Privacy concerns.

The use of different news channels, including broadcast, print, and personalized and un-personalized online channels, may be important for multiple reasons. First, many news channels are not personalized, so their extensive use may suggest that personalization is far from being a ubiquitous phenomenon. Second, the parallel use of multiple news channels (source diversity) may signal a demand for diversity. Third, previous experience with personalized and un-personalized news channels may influence the desirability of personalization. We measured exposure to news via the self-reported use of a) news websites (M: 3.03, SD: 2.76), b) news apps (M: 1.96, SD: 2.73), c) the main evening news broadcast (M: 4.89, SD: 2.43), d) political information programs (M: 3.27, SD: 2.42), and e) social media (M: 3.32, SD: 3.02). The measure was the number of days the respondent used each channel in a typical week.

In addition to the frequency of social media use, we also surveyed the value of personalized social media as a news source. The item was “Social media are a good way to access mass media news” (M: 4.17, SD: 1.84), measured on a 7-point scale, with higher values indicating stronger agreement.

We directly surveyed users’ expectations regarding news personalization in two dimensions. We measured the expected impact of personalization a) on the diversity of recommended news items and b) on the depth of those news items. The questions were, respectively, “If a news website could account for my interests, the news I would get would have fewer or more topics” (M: 3.67, SD: 1.69) and “If a news website could account for my interests, the news I would get would have less or more depth” (M: 4.20, SD: 1.55), both measured on a 7-point scale, with higher values indicating expectations of more topics and more depth.

We surveyed whether users are concerned about the negative impact of news personalization on the public sphere with two statements: “There are news and current affairs that everybody should know about” (M: 5.52, SD: 1.52) and “Everybody should have access to more or less the same news baseline” (M: 5.44, SD: 1.68). Both were measured on a 7-point scale, with higher values indicating stronger agreement. We used these last two measures to see whether users have expectations regarding the quality (diversity, depth, societal relevance) of recommendations which recommenders could detect and take into account.

Since personalization requires personal data collection, we surveyed concerns about privacy in the context of news consumption, and in the more general context of commercial advertising, with the following questions: “How acceptable is it that websites collect information to personalize content based on a) your clicks on political websites (M: 2.29, SD: 1.78), b) your clicks on ads (M: 2.34, SD: 1.78)?” Each question was measured on a 7-point scale, with higher values indicating fewer concerns.

We were interested in the relationship between news personalization and political efficacy for two reasons. First, people who are not interested in politics may value personalization, because it could help them to filter out political news. Second, the filter bubble theory predicts that people who rely heavily on personalization might have reduced political efficacy because personalization could reduce the amount of societally relevant information in their news feed, even if they do not actively filter out political information. We constructed a scale to study efficacy with three items, all measured on a 7-point scale: “I know more than most people about what is going on in politics in the Netherlands,” “Sometimes politics seems so complicated that people like me cannot understand what is going on,” and “I have a good idea about the most important problems in my country” (Cronbach’s alpha: .67, M: 4.04, SD: 1.33).
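As an illustration, the scale could be built as follows. This is a sketch with hypothetical variable names, and we assume the negatively worded second item is reverse-coded before averaging, which the article does not state explicitly.

```r
# Sketch only; column names are hypothetical. A 7-point item is reversed
# by subtracting it from 8, so that higher values mean higher efficacy.
eff <- data.frame(
  know_politics   = survey$know_politics,
  not_complicated = 8 - survey$politics_complicated,  # assumed reverse-coding
  know_problems   = survey$know_problems
)
psych::alpha(eff)                             # reported: Cronbach's alpha = .67
survey$efficacy <- rowMeans(eff, na.rm = TRUE)
```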

We constructed an OLS linear regression model to test the effects of news exposure, expectations regarding the outcomes of personalization, and fears of fragmentation on the desirability of news personalization, controlling for age, gender, and education.
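The specification described above can be sketched as follows; this is our reconstruction, not the authors’ code, and all variable names are hypothetical stand-ins for the measures introduced in this section.

```r
# OLS model of the desirability of personalization; sketch only.
model <- lm(
  desirability ~
    news_websites + news_apps + evening_news + political_programs +
    social_media_freq +                   # exposure to news channels
    social_media_news +                   # social media as a news source
    expect_diversity + expect_depth +     # expected output of personalization
    shared_knowledge + shared_baseline +  # shared public sphere concerns
    privacy_politics + privacy_ads +      # privacy concerns
    efficacy +                            # political efficacy
    age + gender + education,             # controls
  data = survey
)
summary(model)   # the article reports adjusted R^2 = .152
```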

Table 1 presents the results of the regression analysis. The model fit was adequate (Adjusted R2: 0.152, F: 12.982, p < .0001). The high unexplained variance points to a limitation of our research, namely that the respondents might have been unfamiliar with news personalization and its opportunities and threats. Yet, the model allows us to formulate some observations for theoretical considerations and future empirical testing.

TABLE 1 OLS regression predicting acceptance of news personalization, n = 1556

The model suggests that users’ expectations regarding the output of news recommenders have the strongest positive effect on the desirability of personalization. Respondents value personalization more if they expect that personalizers will deliver them more diverse news (Beta: .107, SE: .028). This means that users who score one point higher on diversity expectations are predicted to score about 0.1 points higher on acceptance of news personalization. The relationship is highly significant (p < .01).

We found a similarly strong effect of the acceptance of social media as a news platform (Beta: 0.121; SE: 0.024). However, the direction of causality here is unclear: People who have already had a positive experience with personalization on social media platforms may consequently evaluate personalization positively, or people who have a positive attitude toward personalization might more easily accept social media as a news source. We believe that social media experience informs attitudes toward personalization, rather than the other way round, since most users will have first experienced news personalization on Facebook and other social media.

The concern about a shared news baseline was statistically significant in the regression model, although the effect is small (Beta: −.057, SE: .027, p < .05). The result is intuitive: The higher the level of appreciation of universal news access, the lower the level of appreciation of news personalization.

Interestingly, privacy concerns were not significant in our model, implying that differences in user attitudes toward news personalization cannot be statistically related to differences in concerns about privacy.

Finally, the regression model confirmed that people with less education or less political efficacy place a higher value on news personalization. These findings seem to be contrary to the other findings, which associate the desirability of personalization with diversity and depth. Would less educated, less politically efficacious people also value personalization for its ability to deliver diverse news? If so, personalization could enable emancipation. To answer this question, we added an interaction effect of expected impact on news diversity and efficacy to the model.
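In R, this amounts to refitting the model above with a product term, along these lines (a sketch under the same hypothetical naming):

```r
# Add the interaction of expected diversity and political efficacy.
model_int <- update(model, . ~ . + expect_diversity:efficacy)
summary(model_int)
```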

The interaction term was marginally significant (Beta: .04, SE: .02, p < .10). According to this finding, for people with high efficacy the expectation of diversity plays a significant role in the desirability of personalization. A person who feels very confident about politics values personalization much more if she expects it to deliver more diverse news. The assessment of personalization by someone with low efficacy, on the other hand, does not depend on diversity.

Taken together, these results suggest that users do not want personalization at all costs. The value of news personalization depends on whether it preserves a minimal shared news sphere while providing people with a diverse and in-depth news diet. The expectation that personalization will prevent the maintenance of a common news baseline has a negative impact on the desirability of news personalization. The expectation that personalization leads to less news diversity or depth also has a negative impact, at least among more politically efficacious people.

These findings paint a more optimistic picture of the user than the filter bubble theory. The desirability of news personalization increases as people expect these services to deliver more diverse and in-depth news. As we discuss later, this is a useful starting point for understanding the relation between expectations and diverse news recommendations. On the other hand, these diversity expectations are not universal. Less efficacious people value personalization regardless of diversity. A lack of diversity expectations is not problematic if people use both un-personalized and personalized news channels (Borgesius et al. 2016). But if personalized news replaces, rather than complements, print and broadcast sources with human editors, the lack of diversity expectations might lead to undesirable effects. In the following section, we further explore whether this is a real threat in the Netherlands.

4. Endangered users in the algorithmic recommendation landscape

In this section, we identify people who might end up in filter bubbles because they do not consume diverse news. In the previous section, we identified two factors that could lead to reduced diversity: over-reliance on personalized news sources at the cost of more traditional, un-personalized news sources, and a lack of expectations regarding the diversity and depth of algorithmically recommended news. We used a latent class analysis (LCA), with the news exposure variables as input for the model,14 to classify the population according to news consumption practices. The news exposure variables measured both use frequency and news source diversity. We treated source diversity as a proxy for actual individual demand for diversity. With the LCA, we explored whether there are groups in the Netherlands that use less diverse information sources, or that demonstrate an over-reliance on personalized news sources.
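Note 14 states that the analysis was run with the poLCA package in R. A minimal sketch of such a procedure, under the assumption that the 0–7 days-per-week exposure counts are recoded to the positive integers poLCA requires, might look as follows (variable and data names are hypothetical):

```r
# Sketch only: LCA over the five news exposure variables.
library(poLCA)

# poLCA needs manifest variables coded as integers starting at 1,
# so the 0-7 day counts are assumed to be shifted by +1 beforehand.
f <- cbind(news_websites, news_apps, evening_news,
           political_programs, social_media_freq) ~ 1

fits <- lapply(1:6, function(k) {
  poLCA(f, data = exposure_recoded, nclass = k, maxiter = 3000, verbose = FALSE)
})
sapply(fits, function(m) m$bic)   # compare fit for 1-6 classes (cf. Table 2)
```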

Table 2 shows the fit values for the different models. We chose a four-class model, which had only a marginally worse fit than the best-fitting model, but suggested four well-defined groups with quite distinct news consumption patterns.15

TABLE 2 Fit values for the LCA procedure for models with 1–6 classes

Table 3 summarizes the statistics for the four classes. Traditional news seekers (36%) use TV, radio, and newspapers extensively, but use very few digital news sources. They are the oldest and the least educated. News omnivores (13%) make extensive use of all available news channels, including mobile apps and social media. They have the highest net income, are the most educated, and have the highest political efficacy. Moderate news users (30%) also use most news channels, though less frequently than news omnivores; they are younger and slightly less educated than the omnivores. And finally, the social media users (22%) use very few traditional news channels, but they make extensive use of social media. They are the youngest group, with the lowest income, the second lowest education, and the lowest political efficacy.

TABLE 3 Summary statistics of the news consumer classes identified by LCA

In addition, we included the cluster means for the variables used in the previous regression analysis and noted where there was a statistically significant difference in the means, using the social media users16 as the reference category.17

The LCA offers a different perspective on the relationship between news consumers, personalized and non-personalized news media, and news diversity. The results reflect three generations: older broadcast media users, middle-aged “digital immigrants” (Prensky 2001), and young “digital natives.” The two older generations have no trouble accessing diverse news: Older people prefer non-personalized broadcast and print media, and digital immigrants use both personalized and non-personalized media. Combining personalized and non-personalized use has two effects: These groups have a window into the world outside of any potential filter bubble, and their use of diverse sources may signal diversity expectations that are also present vis-à-vis personalized environments. The youngest generation, however, over-relies on social media for news. They have the lowest level of exposure to traditional media, and only the traditional news seekers consume fewer online news sources than the social media users. This group relies more heavily on personalized social media for news than any other group, even though they do not use social media more than the news omnivores, and use as much social media as moderate news users. The social media users have the lowest level of expectation regarding diversity and the lowest level of appreciation of a shared public sphere, even though they appreciate personalization and accept social media as a news platform as much as the other groups. In addition, the social media users group is the least politically efficacious.

Taking all these findings together, the social media users match most closely the users envisioned by the filter bubble argument, who have little exposure to non-personalized news and do not expect much diversity. This group has fewer defenses against the possible effects of algorithmic news personalization than any other segment of Dutch society. Whatever the effects of personalization may be, some people, especially the youngest and least educated, are more susceptible to these effects than others.

5. The machine side of the feedback loop

In the previous sections, we provided evidence that news users differ in more than just their topical interests: Some people have very particular expectations of news recommenders, whereas others do not. For some people, personalized news sources complement non-personalized sources, but for others personalized channels are the dominant information source. Yet, current theories do not account well for this non-topical heterogeneity of users, or for how algorithms may respond to it. In this section, we take the first steps toward extending theory to account for this heterogeneity and describe how it shapes the interaction between algorithms and their users.

Feedback loops are part of our current understanding of personalized news media. This understanding, shaped by Pariser, and referred to in this section as the naïve theory of news personalization, assumes that algorithms measure user engagement to provide recommendations that further increase that engagement.18 The same naïve theory also assumes that user engagement depends on how closely the recommended news items align with the user’s interests. Consequently, for the media organization that deploys the recommender, the most relevant differences among users are topical. The task of news personalization is to distinguish, say, the tennis aficionado from the political junkie and provide them with tennis and political news, respectively.

The naïve theory rests on the assumption that algorithmic feedback loops focus on a particular set of user signals. Users curate their information environment all the time. User engagement (e.g., reading, liking, sharing, commenting, or paying for an article) signals topical interest.19 Other acts, such as hiding news items, unfollowing sources, or ignoring recommendations, may signal disinterest. These engagement signals are both abundant and easy to detect, so recommender algorithms, and the naïve theories of algorithmic personalization, tend to focus on them.

Yet, our empirical evidence suggests that users also differ in their long-term personality profiles and fit categories, such as people “who have a wide interest,” “who appreciate diversity,” “who engage with multiple topics,” “who value serendipity,” “who are curious about unknown information,” “who like to be surprised” (Schönbach 2007), “who get easily bored with familiar things,” “who highly engage with societal issues,” etc. (Duggan and Smith 2016; Munson and Resnick 2010; Webster 2010). These profiles reflect long-term attitudes, which are harder to detect and harder to reconstruct from short-term engagement signals. Yet, these profiles directly shape news consumption practices, and therefore they must indirectly shape the algorithms themselves. To account for these long-term forces that shape news personalization dynamics, we propose an extension to the naïve news personalization theory, which we call the producer-focused news personalization theory.

Elsewhere, through interviews we conducted within European quality news organizations, we have found that many commercial and public service news organizations take into account long-term signals, such as their users’ diversity expectations, and long-term journalistic goals, such as promoting quality articles, to shape the behavior of algorithms and the recommendations they produce (Bodó 2018). We call this behavior the producer-focused feedback logic of algorithmic recommendation. The user-focused logic assumes that the goal of recommendation agents is to please users by maximizing engagement. By contrast, the interviews we conducted suggest that the producer-focused logic works in a different manner. The key performance indicators of algorithmic recommenders depend on the business entities that deploy them; maximizing user engagement is not the only, and often not the most preferred, goal; and ultimately these producer-set optimization goals define the development path of recommendation models.

All kinds of news organizations deploy algorithmic news recommenders, each striving to achieve their own goals through personalization (Bodó 2018). Some commercial media organizations aim to sell as much advertising as possible, and for that they need as much user engagement as possible. Media organizations that produce serious journalism may pursue a steady subscriber base, which they hope to achieve by nurturing loyalty to the brand and cultivating trust in the quality of their journalism (Boczkowski and Mitchelstein 2013). Public service media have charters that oblige them to educate, inform, and sustain social cohesion (Splichal 2007), and an ongoing challenge for public service media is interpreting their mission in the light of the contemporary societal and technological context (Jakubowicz 2007; EBU 2016). The performance metrics by which these organizations measure the success of their algorithmic recommendations will reflect these particular goals, namely profitability, loyalty, trust, or social cohesion (Bodó 2018; Hindman 2017; Van den Bulck and Moe 2017).

The key performance indicators are different from mere user engagement with the recommendations, although the indicators also reflect engagement (Ferrer-Conill and Tandoc 2018; Tandoc and Thomas 2015; Powers 2018). Producer-set metrics use the aggregates of user engagement signals over long periods of time and across a large number of users. These aggregates expand the evaluation beyond individual users and short-term goals. In addition, producers often need to balance contradictory short- and long-term goals. For example, in the domain of news personalization, recommending controversial stories might maximize ad revenues, as these create much user engagement. But such controversial stories may harm long-term goals, such as the trustworthiness and brand value of the news organization. The performance of the algorithms is measured by complex metrics that mix these short- and long-term considerations.
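To make such mixed metrics concrete, the following sketch (our illustration, not any organization’s actual system) scores a candidate article by blending predicted short-term engagement with a bonus for topics the user has rarely seen; the weight lambda encodes how much short-term engagement the producer is willing to trade for a long-term diversity goal.

```r
# Illustrative producer-set objective, not a real recommender.
score_item <- function(p_engage, topic, topic_history, lambda = 0.3) {
  seen <- topic_history[topic]
  if (is.na(seen)) seen <- 0
  diversity_bonus <- 1 / (1 + seen)      # rarely seen topics score higher
  (1 - lambda) * p_engage + lambda * diversity_bonus
}

# A user who has read 12 sports items and no politics items:
history <- c(sports = 12, politics = 0)
score_item(0.9, "sports",   history)     # high engagement, small diversity bonus
score_item(0.6, "politics", history)     # lower engagement, full diversity bonus
```

With lambda = 0.3, the politics item outranks the sports item despite its lower predicted engagement, which is exactly the kind of trade-off a producer-set metric can encode.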

In this producer-focused feedback loop, the performance of recommender algorithms is measured by a number of indicators that cover user engagement as well as measures for long-term goals, such as loyalty, brand value, conversion to subscribers, or diverse recommendations (Bodó 2018). In the end, the development of recommendation algorithms will be determined by long-term producer-focused goals, rather than by short-term user-centric goals.

The user has a different role in this feedback loop than that envisioned by the naïve theory. The purpose of the recommender is to maximize producer-set performance metrics, sometimes at the cost of satisfying the short-term desires of users. These metrics determine the development of the recommendation algorithm and not just the next recommendation the user receives. If the organization wants to provide recommendations that lead to loyal subscribers, then the performance of algorithms must be measured by their success in serving users who expect diverse and in-depth recommendations, as well as topically relevant suggestions. In other words, the producer-set algorithmic feedback loop leaves room for indirect user agency, whereby the user’s more complex expectations are represented in, and measured by, the long-term performance metrics.

6. Conclusions and policy considerations

The filter bubble argument is alarming because it envisions diversity-averse users and diversity-blind algorithms. Yet, as we have shown in this article, this is not the only logic that plays a role in the interactions between algorithmic recommenders and users.

We have shown that some users assess the value of news personalization according to the diversity and depth of the recommendations. We also identified why news producers may be interested in catering to these expectations. Confronted with misinformation and attempts by states to influence the political process of foreign countries, many traditional news organizations have reaffirmed their mission to serve the public interest and to maintain trust and customer loyalty (BBC 2015; Viner 2017; Coleman 2017). These news organizations now need to consider how algorithmic agents can help them achieve these goals, instead of merely maximizing user engagement. Were news recommenders to look beyond simple engagement and meet the expectations of diversity-seeking users, this could create a virtuous (in the truest sense of the word) circle of increasing diversity.

Nevertheless, we also found that there are substantial differences in the intensity and diversity of the channels people use to consume news, and these seem to correlate with expectations of content diversity. Many Dutch people rely on social media for their news and show little concern for diversity or the public sphere. These people use news channels that focus mostly on short-term engagement and that mirror their users’ limited concern for diversity or the public sphere. This group constitutes a strong case for policy intervention, because its members risk falling victim to unconstrained market forces, which may or may not develop in a societally desirable manner. What are the policy tools to prevent these people from getting caught in the wrong feedback loop?

The model outlined in this article offers two loci of policy intervention: on the user’s side, creating favorable conditions for exposure to diverse content, and on the side of the producer of the algorithmic agent. Policy measures and regulations that address exposure diversity strike a tenuous balance between the positive obligations of states to ensure the optimal conditions for people to exercise their right to freedom of expression, including the right to diverse content,20 and the obligation to refrain from state interference with the media as well as to respect users’ privacy (for an extensive overview of this debate, see Helberger 2012). It is worth noting, however, that while in earlier policy documents matters of exposure diversity were per se excluded from the regulatory ambit (Council of Europe 1999), more recent policy documents have moved to address very explicitly the need for states to take measures to “enhance users’ effective exposure to the broadest possible diversity of media content” (Council of Europe 2018, para. 2.5). And while commercial and online media are merely encouraged to facilitate and promote exposure to diverse content, it is the public service media that are expected to play an active role in furthering media diversity (Council of Europe 2018, paras. 2.5 and 2.8). Public service media traditionally play a strong role in informing the public and introducing the young to citizenship by, for example, providing a common set of issues and balanced, diverse information. However, in the current fragmented media landscape, these public service media fail to capture the attention of especially the young digital natives.

Accordingly, public service media and quality news media should be stimulated to reach digital native audiences with appropriate, possibly personalized content (Helberger 2015). Many of the leading public service broadcasting organizations are currently investigating ways in which data-driven recommendations can contribute to promoting public values, such as diversity, and more generally to their mission to inform (Sørensen 2013). The first of the 10 recommendations of the European Broadcasting Union (EBU)’s Vision 2020 report is “Better understand your audiences,” so that public service media are able to adjust their services to the different information needs and preferences of a heterogeneous audience (EBU 2016, 15). Personalization is explicitly mentioned as a possible tool to do so, as is what the report calls “innoversity” (EBU 2016, 17), that is, using diversity as a source of content innovation and the development of new formats. In doing so, the public service media are navigating a sensitive minefield: some initiatives still qualify as part of their general public mission, while others come dangerously close to the competitive domain of the commercial media and the press, a no-go area where state aid laws as well as national media laws draw strict lines that publicly-funded organizations may not cross, in order not to distort the overall competition in media markets (Van den Bulck and Moe 2017; Donders and Van Rompuy 2012).

Moreover, media law and policy should acknowledge that audiences are diverse, and that some groups in society are more prone to selective exposure than others. Media law and policy are often built on simplistic assumptions about media users, without differentiating between people and how they consume media content21, or the impact that such differences have on public opinion formation (Craufurd and Tambini 2012). Minors and people with disabilities are an exception: in many jurisdictions, media law protects minors from certain content22 and formulates specific access rights for people with disabilities.23 But even in this area, media law protects people by limiting or enabling access to certain media or content.24 Existing media laws are seldom informed by a deeper understanding of how minors and people with disabilities engage with media, benefit from diverse content, or are hindered by technological developments, for example through a dependence on personalized recommendations. Media law and diversity policies reflect little of the growing body of scholarly literature on youth and media (boyd 2015; Livingstone 2009), or on the different media uses of the young and the elderly. We argue that diversity policies need to move from a one-size-fits-all to a more diversified approach.

On the producer side, we saw that some media organizations are motivated to implement recommendation agents with performance goals that incorporate more than just maximum short-term engagement. Traditional news organizations have professional codes of conduct, sometimes centuries-old reputations, and deep roots in local socioeconomic conditions. In other words, these news organizations have a lot to lose if recommendation performance indicators do not take into account long-term and societal considerations. Having said that, investing in the development of more sophisticated recommendation agents can be both costly and time-consuming, and often requires expertise that these organizations do not have in-house. Media policymakers can play an important role in creating more favorable conditions for the development of such algorithms by, for example, funding innovation projects, academic research, and concrete academia–industry research collaborations.

As a baseline, and as this article has also shown, for the broad majority of users probably the best safeguard against the dangers of selective exposure and filter bubbles is a vibrant media landscape where users can encounter and choose from various personalized and un-personalized news sources. Thus, public policy should continue to stimulate the broad availability of news sources. In addition, more targeted initiatives may be needed to reach “social media only users,” particularly those who are not interested in diverse news.

For the news media, our article has demonstrated that a significant share of the audience does care about diverse news, and that there is a demand for diverse recommendations. Open and audited metrics to measure the societal impact and diversity of algorithmic recommenders could help news organizations to meet that demand and offer personalized news in a societally responsible way (Helberger 2016; Eskens, Helberger, and Moeller 2017). Offering a diverse media diet will matter for some types of media more than for others. Due to the logics of digital advertising, some social media platforms clearly have an incentive to disseminate unchecked and highly engaging information (Tambini 2017) for the sake of clicks and other metrics of short-term engagement. This may also apply to some media organizations that have little concern for a healthy public debate, and instead focus on increasing profitability and shareholder value. Still, public pressure and reputational costs have forced even these organizations to change some of their algorithmic priorities (Ohlheiser 2016; Trefis Team 2016). Seeing the impact that social media have on the media diet of at least certain parts of the population, a new challenge for media policymakers and regulators is to establish systematic monitoring of this field to assess the impact of these new, highly personalized players on social cohesion, a diverse information landscape, and a shared public sphere.
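One simple candidate for such an open metric, offered purely as an illustration, is the normalized Shannon entropy of the topic distribution in the recommendations a user receives: it is 1 when recommendations are spread evenly across all topics and 0 when they collapse onto a single topic.

```r
# Illustrative diversity metric: normalized Shannon entropy of topics.
topic_entropy <- function(topics, n_topics) {
  p <- table(topics) / length(topics)    # observed topic shares
  h <- -sum(p * log(p))                  # Shannon entropy
  h / log(n_topics)                      # 1 = even spread, 0 = single topic
}

# Four recommendations drawn from a 10-topic taxonomy:
topic_entropy(c("politics", "sports", "politics", "culture"), n_topics = 10)
```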

6.1. Limitations and future directions

Studying attitudes in the Netherlands provided us with the unique opportunity to examine attitudes toward a technology that is just emerging. However, as this is a case study, the generalizability of our results is limited. The Netherlands is characterized by a functioning, diverse media system that includes a strong public broadcaster and almost universal access to broadband internet. Future research should further investigate how contextual factors such as the media system shape attitudes toward news dissemination technology (see also Thurman et al. 2018).

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the authors.

FUNDING

The research was conducted as part of the PERSONEWS ERC-2014-STG, European Research Council project, [grant no: 638514]. PI: Prof. Dr. N. Helberger.


Notes on contributors

Balázs Bodó

Balázs Bodó (author to whom correspondence should be addressed), Institute for Information Law (IViR), University of Amsterdam (UvA), The Netherlands. E-mail: [email protected]. ORCID http://orcid.org/0000-0001-5623-5448

Natali Helberger

Natali Helberger, Institute for Information Law, University of Amsterdam, The Netherlands. E-mail: [email protected]

Sarah Eskens

Sarah Eskens, Institute for Information Law, University of Amsterdam, The Netherlands. E-mail: [email protected]

Judith Möller

Judith Möller, Amsterdam School of Communication Research, University of Amsterdam, The Netherlands. E-mail: [email protected]

Notes

1 We use the term “algorithmic agent” to suggest that machine learning methods enable firms to delegate decision making to algorithms in a growing number of activities from credit scoring to selecting relevant information. The use of the term “agent” does not imply that these algorithms operate autonomously, without human oversight. On the contrary, as we explain in the paper, such oversight, either direct (provided by those who develop and oversee the algorithms) or indirect (provided by users who feed data into these systems) is an essential part of the system.

2 Nearly all of the companies leading the list of the most visited internet websites (https://www.alexa.com/topsites, last visited on March 23, 2018) employ algorithmic information personalization one way or another. While almost all major e-commerce websites, search engines, and social media websites utilize personalization techniques, it is easy to encounter personalized information even on non-personalized websites if they carry ads served by third party advertising networks. Advertising networks – such as Google’s AdSense network, which currently has almost a 70% market share – decide which ads to show based on who the user is, and thus would also qualify as an algorithmic information personalization service. See: https://www.datanyze.com/market-share/advertising-networks, last visited on March 23, 2018.

3 Article 19 Universal Declaration of Human Rights: ‘Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.’; Article 19(2) International Covenant on Civil and Political Rights: ‘Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.’

4 In both the US and the EU, the fundamental right to privacy has been interpreted as containing a right to ‘decisional privacy’: the right to autonomously make life-defining choices. See Van der Sloot (2018); Roessler (2005).

5 Article 21(1) Universal Declaration of Human Rights: ‘(1) Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.’; Article 25 International Covenant on Civil and Political Rights: ‘Every citizen shall have the right and the opportunity, without any of the distinctions mentioned in article 2 and without unreasonable restrictions: (a) To take part in the conduct of public affairs, directly or through freely chosen representatives.’

6 While the technical details of information personalization are well known (Jannach et al. 2010), there is very little research on how these technologies are implemented in various settings, such as news (Bodó 2018).

7 In one of its recent recommendations, the Council of Europe warned that “[s]elective exposure to media content and the resulting limitations on its use can generate fragmentation and result in a more polarized society” (Council of Europe 2018), thereby repeating earlier warnings by, for example, the High Level Expert Groups on Media Pluralism (Vīķe‐Freiberga et al. 2013) and Fake News (High Level Group on fake news and online disinformation 2018).

8 More recent discussions suggest that besides topics, higher-level factors, such as ideological preferences, might also be considered as the basis for filter bubbles; however, such factors are notoriously hard to establish and measure, especially outside of relatively simple, binary ideological systems like the US.

9 An extensive discussion and conceptualization of this malleable concept would exceed the scope of this article; see instead Helberger and Wojcieszak (2018).

10 E.g. European Court of Human Rights, Refah Partisi and Others v Turkey, 13 February 2003, paras. 87, 88, 89.

11 A growing number of recommendation algorithms seek to break filter bubbles. Examples include Huffington Post’s Flipside and Wall Street Journal’s Red Feed, Blue Feed; independent initiatives such as Read Across the Aisle; Escape your Bubble (Chrome), and the Swedish Filterbubbland; as well as sophisticated recommender projects by, for example, the New York Times, Blendle, and the Dutch Volkskrant.

12 By relying on standardized survey data, we could compare attitudes towards personalization across a large and representative sample of the population. This approach allowed us to conduct statistical tests to ascertain the relationship between individual characteristics and opinions on the one hand, and attitudes towards news personalization on the other. Yet, standardizing questions means that we cannot capture the full causal mechanism that links these variables together. For this purpose, future research should aim to gain a deeper understanding of what links expectations of diversity and acceptance of news personalization, for example in focus groups or using big data research. By relying on self-reported attitudes and behaviors, we also rely on respondents’ ability to accurately recall and express their attitudes and behavior in a questionnaire. Therefore, our results should be interpreted keeping in mind that respondents’ misunderstanding or misremembering could have affected the findings.

13 Cronbach’s alpha for the three items was .75 (M = 3.42, SD = 1.54). Higher values indicate higher levels of agreement.
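For readers who want to reproduce this kind of reliability check, a minimal sketch in R follows; the data frame and item names are hypothetical placeholders, not the actual survey variables. The psych package’s alpha() function computes Cronbach’s alpha for a set of items.

library(psych)

# Hypothetical three-item scale; 'survey_data' and the item
# names stand in for the actual survey variables.
items <- survey_data[, c("diversity_1", "diversity_2", "diversity_3")]

# Cronbach's alpha for the three-item scale (note 13 reports .75)
psych::alpha(items)$total$raw_alpha

# Mean and standard deviation of the averaged scale score
scale_score <- rowMeans(items, na.rm = TRUE)
mean(scale_score)   # note 13 reports M = 3.42
sd(scale_score)     # note 13 reports SD = 1.54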

14 We used the poLCA package (Linzer and Lewis Citation2016) in R to conduct the analysis.

15 The three-class model had the best fit, but did not distinguish between News omnivores and Moderate news users.
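To illustrate the latent class analysis described in notes 14 and 15, a rough sketch follows (not our actual analysis code; the data frame and manifest variable names are hypothetical placeholders). poLCA() fits a model with a given number of latent classes, and candidate solutions can be compared on BIC.

library(poLCA)

# Hypothetical categorical news-exposure indicators
f <- cbind(tv_news, online_news, print_news, social_news) ~ 1

# Fit two- to five-class solutions on the (placeholder) survey data
fits <- lapply(2:5, function(k) {
  poLCA(f, data = news_data, nclass = k, verbose = FALSE)
})

# Lower BIC indicates better relative fit; as note 15 illustrates, formal
# fit is weighed against the substantive interpretability of the classes.
sapply(fits, function(m) m$bic)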

16 We tested the results with different reference groups to the same effect.

17 The inclusion of cluster membership in the OLS model instead of the news exposure variables did not produce statistically significant results for cluster membership.
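As a sketch of the robustness check in note 17 (again with hypothetical variable names, not our actual model specification), the news exposure variables in the OLS model are replaced by latent class membership:

# OLS model with news-exposure variables (placeholder names)
exposure_model <- lm(personalization_attitude ~ age + education +
                       tv_news + online_news + print_news,
                     data = news_data)

# Same model with cluster (latent class) membership substituted for exposure
cluster_model <- lm(personalization_attitude ~ age + education +
                      factor(cluster_membership),
                    data = news_data)

summary(cluster_model)  # note 17: cluster coefficients were not significant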

18 In this section, we use the term “algorithm” to refer to different approaches to providing personalized content recommendations based on user profiles built on engagement history. For our discussion, the choice of a particular algorithmic approach, and its parametrization, is less important than the fact that user engagement is collected, preserved, and used by a particular subset of algorithmic recommenders that rely on user histories (rather than, for example, the recommended content’s semantic proximity) for making recommendations. For a detailed discussion of the different algorithmic models, including collaborative, content-based, knowledge-based, and hybrid models, see, for example, Jannach et al. (Citation2010).
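To make concrete what it means for a recommender to rely on user histories rather than semantic proximity, the following toy sketch in R implements user-based collaborative filtering on a minimal engagement matrix. It is purely illustrative and does not correspond to any system discussed in this article; note that no article content is consulted at all.

# Toy user-article engagement matrix: 1 = clicked, 0 = no interaction
engagement <- matrix(c(1, 1, 0, 0,
                       1, 0, 1, 0,
                       0, 1, 1, 1),
                     nrow = 3, byrow = TRUE,
                     dimnames = list(paste0("user", 1:3),
                                     paste0("article", 1:4)))

# Cosine similarity between two engagement histories
cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

# Score the articles a user has not yet seen by the similarity-weighted
# engagement of all other users
recommend <- function(user, mat) {
  others <- setdiff(rownames(mat), user)
  sims <- sapply(others, function(u) cosine_sim(mat[user, ], mat[u, ]))
  scores <- colSums(mat[others, , drop = FALSE] * sims)
  sort(scores[mat[user, ] == 0], decreasing = TRUE)
}

recommend("user1", engagement)  # ranks article3 above article4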

19 We arguably follow an individualistic approach when we interpret users’ engagement signals as a representation of their intrinsic values, interests, attitudes, and beliefs. In contrast, Csigó (Citation2016) argues that users’ signals are also shaped by their guesstimates about what would be popular among their peers, and as such, the signals are an inseparable mix of individual preferences and guesses about what choices would be seen favorably by those peers who see and interpret those signals.

20 European Court of Human Rights, Dink v. Turkey App nos 2668/07 and 4 others (ECtHR, 14 September 2010), para 137.

21 See for example the European Audiovisual Media Service Directive (Directive 2010/13/EU): in Recital 81 it lays down the presumption that ‘technological developments give users increased choice and responsibility in their use of audiovisual media services’, without considering whether this is the case for all media users.

22 See among others Articles 12 and 27 Audiovisual Media Service Directive; Schedule 5, clause 60, Australian Broadcasting Services Act 1992.

23 See among others Article 7 Audiovisual Media Service Directive; Article 21 UN Convention on the Rights of Persons with Disabilities; US Twenty-First Century Communications and Video Accessibility Act of 2010.

24 See e.g. Art. 12 of Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive).

REFERENCES

  • Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348 (6239): 1130–1132.
  • Barberá, Pablo, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau. 2015. “Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?” Psychological Science 26 (10): 1531–1542. doi: 10.1177/0956797615594620.
  • BBC. 2015. The Future of News. London: BBC.
  • BBC. 2017. BBC Annual Plan for 2017/18. London: BBC.
  • Bobok, Dalibor. 2016. “Selective Exposure, Filter Bubbles and Echo Chambers on Facebook.” Master’s thesis, Central European University.
  • Boczkowski, Pablo J., and Eugenia Mitchelstein. 2013. The News Gap: When the Information Preferences of the Media and the Public Diverge. Cambridge, MA: MIT Press.
  • Bodó, Balázs. 2018. “Means, Not an End (of the World) – The Customization of News Personalization by European News Media.” Prepared for the Algorithms, Automation, and News Conference, Munich, 22–23 May 2018; Amsterdam Law School Research Paper No. 2018-09; Institute for Information Law Research Paper No. 2018-01. Available at SSRN: https://ssrn.com/abstract=3141810 or http://dx.doi.org/10.2139/ssrn.3141810
  • Borgesius, Frederik, Damian Trilling, Judith Möller, Balázs Bodó, Claes H. de Vreese, and Natali Helberger. 2016. “Should We Worry about Filter Bubbles? An Interdisciplinary Inquiry into Self-Selected and Pre-Selected Personalised Communication.” Internet Policy Review 5 (1).
  • Boyd, Danah. 2014. It’s Complicated: The Social Lives of Networked Teens. New Haven, CT: Yale University Press.
  • Bozdag, Engin V. 2015. “Bursting the Filter Bubble: Democracy, Design, and Ethics.” PhD diss., Delft University of Technology.
  • Castells, Manuel. 2008. “The New Public Sphere: Global Civil Society, Communication Networks, and Global Governance.” The Annals of the American Academy of Political and Social Science 616 (1): 78–93.
  • Coleman, Stephen. 2017. “Journalism and the Public-Service Model: In Search of an Ideal.” In The Oxford Handbook of Political Communication, edited by Kate Kenski and Kathleen Hall Jamieson. Oxford: Oxford University Press.
  • Council of Europe, Recommendation No R(99)1 of the Committee of Ministers to Member States on Measures to Promote Media Pluralism (Strasbourg, 1999).
  • Council of Europe, Recommendation CM/Rec(2018)1 of the Committee of Ministers to member States on media pluralism and transparency of media ownership (Strasbourg, 2018).
  • Craufurd Smith, Rachel, and Damian Tambini. 2012. “Measuring Media Plurality in the United Kingdom: Policy Choices and Regulatory Challenges.” Journal of Media Law 4 (1): 35–63.
  • Csigó, Péter. 2016. The Neopopular Bubble. Budapest: CEU Press.
  • Dilliplane, Susanna. 2011. “All the News You Want to Hear: The Impact of Partisan News Exposure on Political Participation.” Public Opinion Quarterly 75 (2): 287–316.
  • Donders, Karen, and Ben Van Rompuy. 2012. “Competition Law, Sports, and Public Service Broadcasting: The Legal Complexity and Political Sensitivity of Measuring Market Distortion and Public Value.” Journal of Media Law 4 (2): 213–228.
  • Duggan, Maeve, and Aaron Smith. 2016. “The Political Environment on Social Media.” October 25. www.pewinternet.org/2016/10/25/the-political-environment-on-social-media/
  • Dutton, William H., Bianca C. Reisdorf, Elizabeth Dubois, and Grant Blank. 2017. “Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States.” East Lansing, MI: Michigan State University.
  • EBU. 2016. Connecting to a Networked Society. Geneva: European Broadcasting Union.
  • Eskens, Sarah, Natali Helberger, and Judith Moeller. 2017. “Challenged by News Personalisation: Five Perspectives on the Right to Receive Information.” Journal of Media Law 9 (2): 259–284. doi: 10.1080/17577632.2017.1387353
  • Ferrer-Conill, Raul, and Edson C. Tandoc Jr. 2018. “The Audience-Oriented Editor: Making Sense of the Audience in the Newsroom.” Digital Journalism 6 (4): 1–18.
  • Flaxman, Seth R, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80 (S1): 298–320. doi: 10.1093/poq/nfw006.
  • Fletcher, Richard, and Rasmus Kleis Nielsen. 2017. “Are People Incidentally Exposed to News on Social Media? A Comparative Analysis.” New Media & Society 20 (7): 2450–2468.
  • Friedman, Batya, and Helen Nissenbaum. 1997. “Software Agents and User Autonomy.” In Proceedings of the First International Conference on Autonomous Agents - AGENTS ’97, New York, New York, USA: ACM Press, 466–469.
  • Gauch, Susan, Mirco Speretta, Aravind Chandramouli, and Alessandro Micarelli. 2007. “User Profiles for Personalized Information Access.” In The Adaptive Web: Methods and Strategies of Web Personalization, edited by Peter Brusilovsky, Alfred Kobsa, and Wolfgang Nejdl, 54–89. Berlin: Springer.
  • Garrett, R. Kelly. 2013. “Selective Exposure: New Methods and New Directions.” Communication Methods and Measures 7 (3–4): 247–256. doi: 10.1080/19312458.2013.835796.
  • Habermas, Jurgen. 1989. The Structural Transformation of the Public Sphere. Cambridge: Polity.
  • Haim, Mario, Andreas Graefe, and Hans-Bernd Brosius. 2017. “Burst of the Filter Bubble? Effects of Personalization on the Diversity of Google News.” Digital Journalism 6 (3): 1–14.
  • Hallin, Daniel C., and Paolo Mancini. 2004. Comparing Media Systems: Three Models of Media and Politics. Cambridge: Cambridge University Press.
  • Hart, William, Dolores Albarracín, Alice H. Eagly, Inge Brechan, Matthew J. Lindberg, and Lisa Merrill. 2009. “Feeling Validated versus Being Correct: A Meta-Analysis of Selective Exposure to Information.” Psychological Bulletin 135 (4): 555–588. doi: 10.1037/a0015701.
  • Helberger, Natali. 2012. “Exposure Diversity as a Policy Goal.” Journal of Media Law 4: 65–92.
  • Helberger, Natali. 2015. “Public Service Media: Merely Facilitating or Actively Stimulating Diverse Media Choices? Public Service Media at the Crossroad.” International Journal of Communication 9: 17.
  • Helberger, Natali. 2016. “Policy Implications from Algorithmic Profiling and the Changing Relationship between Newsreaders and the Media.” Javnost 23 (2): 188–203. doi: 10.1080/13183222.2016.1162989
  • Helberger, Natali, and Magdalena Wojcieszak. 2018. “Exposure Diversity: Contemporary Challenges and Research Opportunities.” In Mediated Communication, a Volume of the Handbooks of Communication Science, edited by Philip Napoli. Berlin: De Gruyter Mouton.
  • Hindman, Matthew. 2017. “Journalism Ethics and Digital Audience Data.” In Remaking the News: Essays on the Future of Journalism Scholarship in the Digital Age, edited by Pablo J. Boczkowski and C. W. Anderson, 177–195. Cambridge, MA: MIT Press.
  • High Level Group on fake news and online disinformation. 2018. “A Multi-Dimensional Approach to Disinformation – Report of the Independent High Level Group on Fake News and Online Disinformation.” Brussels.
  • Huckfeldt, Robert, Paul E. Johnson, and John Sprague. 2002. “Political Environments, Political Dynamics, and the Survival of Disagreement.” The Journal of Politics 64 (1): 1–21.
  • Iyengar, Shanto, and Sean J. Westwood. 2015. “Fear and Loathing across Party Lines: New Evidence on Group Polarization.” American Journal of Political Science 59 (3): 690–707. doi: 10.1111/ajps.12152.
  • Jakubowicz, Karol. 2007. “Public Service Broadcasting in the 21st Century: What Chance for a New Beginning?” In From Public Broadcasting Service to Public Service Media, edited by Gregory Ferrell Lowe and Jo Bardoel, 19–50. Göteborg: Nordicom.
  • Jannach, Dietmar, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. 2010. Recommender Systems: An Introduction. New York, NY: Cambridge University Press.
  • Knobloch-Westerwick, Silvia, and Steven B. Kleinman. 2012. “Preelection Selective Exposure: Confirmation Bias Versus Informational Utility.” Communication Research 39 (2): 170–193. doi: 10.1177/0093650211400597.
  • Koene, Ansgar, Elvira Perez, Christopher James Carter, Ramona Statache, Svenja Adolphs, Claire O’Malley, Tom Rodden, and Derek McAuley. 2015. “Ethics of Personalized Information Filtering.” In Proceedings of the International Conference on Internet Science, 123–132. Cham: Springer.
  • Kwon, K. Hazel, Shin-Il Moon, and Michael A. Stefanone. 2015. “Unspeaking on Facebook? Testing Network Effects on Self-Censorship of Political Expressions in Social Network Sites.” Quality and Quantity: International Journal of Methodology 49 (4): 1417–1435.
  • Lee, Francis L. F. 2016. “Impact of Social Media on Opinion Polarization in Varying Times.” Communication and the Public 1 (1): 56–71. doi: 10.1177/2057047315617763.
  • Linzer, Drew A., and Jeffrey Lewis. 2016. “poLCA: Polytomous Variable Latent Class Analysis.” R package version 1.4. http://dlinzer.github.com/poLCA
  • Livingstone, Sonia. 2009. Children and the Internet: Great Expectations, Challenging Realities. Oxford: Polity Press.
  • Mancini, Paolo. 2013. “Media Fragmentation, Party System, and Democracy.” The International Journal of Press/Politics 18 (1): 43–60. doi: 10.1177/1940161212458200.
  • McDonald, Aleecia M., and Lorrie Faith Cranor. 2010. “Americans’ Attitudes about Internet Behavioral Advertising Practices.” In Proceedings of the 9th Annual ACM Workshop on Privacy in the Electronic Society, 63–72. ACM.
  • Messing, Solomon, and Sean J. Westwood. 2014. “Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online.” Communication Research 41 (8): 1042–1063. doi: 10.1177/0093650212466406.
  • Munson, Sean A., and Paul Resnick. 2010. “Presenting Diverse Political Opinions: How and How Much.” Proc. CHI 2010, 1457–1466. doi: 10.1145/1753326.1753543.
  • Mutz, Diana C. 2002. “Cross-Cutting Social Networks: Testing Democratic Theory In Practice.” American Political Science Review 96 (1): 111–126.
  • Napoli, Philip M. 2011. “Exposure Diversity Reconsidered.” Journal of Information Policy 1: 246–259. doi: 10.5325/jinfopoli.1.2011.0246.
  • Negroponte, Nicholas. 1995. Being Digital. 1st ed. New York: Knopf.
  • Neuberger, Christoph, and Frank Lobigs. 2010. Die Bedeutung des Internets im Rahmen der Vielfaltssicherung. Berlin: Vistas.
  • New York Times. 2014. New York Times: Innovation. New York: The New York Times.
  • Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen. 2017. “Reuters Institute Digital News Report 2017”. Reuters Institute for the Study of Journalism. Oxford: University of Oxford.
  • Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen. 2018. “Reuters Institute Digital News Report 2018”. Reuters Institute for the Study of Journalism. Oxford: University of Oxford.
  • O’Hara, Kieron, and David Stevens. 2015. “Echo Chambers and Online Radicalism: Assessing the Internet’s Complicity in Violent Extremism.” Policy and Internet 7 (4): 401–422.
  • Ohlheiser, Abby. 2016. “This Is How Facebook’s Fake-News Writers Make Money – The Washington Post.” The Washington Post, November 18. https://www.washingtonpost.com/news/the-intersect/wp/2016/11/18/this-is-how-the-internets-fake-news-writers-make-money/?utm_term=.9870b80b0a8c.
  • Oulasvirta, Antti, and Jan Blom. 2007. “Motivations in Personalisation Behaviour.” Interacting with Computers 20 (1): 1–16.
  • Paapst, Mathieu H, and Trix Mulder. 2017. Media Pluralism Monitor 2016. Monitoring Risks for Media Pluralism in the EU and Beyond - Country Report: Netherlands. Florence: Centre for Media Pluralism and Media Freedom.
  • Pariser, Eli. 2011. The Filter Bubble: What the Internet is Hiding from You. UK: Penguin.
  • Powers, Elia. 2018. “Selecting Metrics, Reflecting Norms: How Journalists in Local Newsrooms Define, Measure, and Discuss Impact.” Digital Journalism 6 (4): 454–471.
  • Prensky, Marc. 2001. “Digital Natives, Digital Immigrants Part 1.” On the Horizon 9 (5): 1–6.
  • Price, Vincent, Joseph N. Cappella, and Lilach Nir. 2002. “Does disagreement contribute to more deliberative opinion?” Political Communication 19 (1): 95–112.
  • Quattrociocchi, Walter, Antonio Scala, and Cass R. Sunstein. 2016. “Echo Chambers on Facebook.” Available at SSRN https://ssrn.com/abstract=2795110
  • Roessler, Beate. 2005. The Value of Privacy. Cambridge: Polity Press.
  • Schönbach, Klaus. 2007. “‘The Own in the Foreign’: Reliable Surprise—An Important Function of the Media?” Media, Culture and Society 29 (2): 344–353.
  • Sears, David O., and Jonathan L. Freedman. 1967. “Selective Exposure to Information: A Critical Review.” Public Opinion Quarterly 31 (2): 194–213.
  • Sørensen, Jannick Kirk. 2013. “PSB Goes Personal: The Failure of Personalised PSB Web Pages.” MedieKultur: Journal of Media and Communication Research 29 (55): 43–71.
  • Splichal, Slavko. 2017. “Does History Matter? Grasping the Idea of Public Service Media at Its Roots”. In: From Public Broadcasting Service to Public Service Media, edited by Gregory Ferrell Lowe and Jo Bardoel, 237–256. Göteborg: Nordicom.
  • Stroud, Natalie Jomini. 2011. Niche News: The Politics of News Choice. Oxford: Oxford University Press.
  • Sunstein, Cass R. 2002. “The Law of Group Polarization.” Journal of Political Philosophy 10 (2): 175–195.
  • Sunstein, Cass R. 2009. Republic.com 2.0. Princeton, NJ: Princeton University Press.
  • Tambini, Damian. 2017. “How Advertising Fuels Fake News.” Media Policy Project, February 24. http://blogs.lse.ac.uk/mediapolicyproject/2017/02/24/how-advertising-fuels-fake-news/
  • Tandoc Jr, Edson C., and Ryan J. Thomas. 2015. “The Ethics of Web Analytics: Implications of Using Audience Metrics in News Construction.” Digital Journalism 3 (2): 243–258.
  • Thurman, Neil, Judith Möller, Damian Trilling, and Natali Helberger. 2018. “My Friends, Editors, Algorithms, and I: A Multi-Level Analysis of Audience Attitudes to News Selection.” Paper presented at the 68th ICA Annual Conference, Prague, Czech Republic, 24–28 May 2018.
  • Trefis Team. 2016. “How Big is The Fake News Problem for Facebook?” Forbes December 21. https://www.forbes.com/sites/greatspeculations/2016/12/21/how-big-is-the-fake-news-problem-for-facebook/#157167c95bd1.
  • Turow, Joseph, Jennifer King, Chris Jay Hoofnagle, Amy Bleakley, and Michael Hennessy. 2009. “Americans Reject Tailored Advertising and Three Activities that Enable it.” Available at SSRN. http://ssrn.com/paper=1478214
  • Valentino, Nicholas A., Antoine J. Banks, Vincent L. Hutchings, and Anne K. Davis. 2009. “Selective Exposure in the Internet Age: The Interaction between Anxiety and Information Utility.” Political Psychology 30 (4): 591–613.
  • Van den Bulck, Hilde, and Hallvard Moe. 2017. “Public Service Media, Universality and Personalisation through Algorithms: Mapping Strategies and Exploring Dilemmas.” Media, Culture and Society 40 (6): 875–892.
  • Van der Sloot, Bart. 2018. “Decisional privacy 2.0: the procedural requirements implicit in Article 8 ECHR and its potential impact on profiling.” International Data Privacy Law 7 (3): 190–201.
  • Vīķe‐Freiberga, Vaira, Herta Däubler‐Gmelin, Ben Hammersley, and Luís Miguel Poiares Pessoa Maduro. 2013. “A Free and Pluralistic Media to Sustain European Democracy”, Report of the High Level Group on Media Freedom and Pluralism. Brussels: European Commission.
  • Viner, Katherine. 2017. “A mission for journalism in a time of crisis.” The Guardian, November 16. https://www.theguardian.com/news/2017/nov/16/a-mission-for-journalism-in-a-time-of-crisis.
  • Webster, James G. 2010. “User Information Regimes: How Social Media Shape Patterns of Consumption.” Northwestern University Law Review 104 (2): 593–612.
  • Wojcieszak, Magdalena. 2010. “‘Don’t Talk to Me’: Effects of Ideologically Homogeneous Online Groups and Politically Dissimilar Offline Ties on Extremism.” New Media and Society 12 (4): 637–655.
  • Wojcieszak, Magdalena. 2011. “Deliberation and Attitude Polarization.” Journal of Communication 61: 596–617.
  • Wojcieszak, Magdalena, and Hernando Rojas. 2011. “Hostile Public Effect: Communication Diversity and the Projection of Personal Opinions onto Others.” Journal of Broadcasting and Electronic Media 55 (4): 543–562.
  • Yeung, Karen. 2017. “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” Information Communication and Society 20 (1): 118–136.