Interested in Diversity: The role of user attitudes, algorithmic feedback loops, and policy in news personalization

Balázs Bodó, Natali Helberger, Sarah Eskens & Judith Möller

Abstract

Using survey evidence from the Netherlands, we explore the factors that influence news readers’ attitudes toward news personalization. We show that the value of personalization depends on commonly overlooked factors, such as concerns about a shared news sphere, and the depth and diversity of recommendations. However, these expectations are not universal. Younger, less educated users have little exposure to non-personalized news, and they also show little concern about diverse news recommendations. We discuss the policy implications of our findings. We show that quality news organizations that pursue reader loyalty and trust have a strong incentive to implement personalization algorithms that help them achieve these goals by taking into account the attitudes of users who expect diversity and by providing high-quality recommendations. Diversity-valuing news readers are thus well placed to be served by diversity-enhancing recommender algorithms. However, some users are in danger of being left out of this positive feedback loop. We make specific policy suggestions on how to address diversity-reducing feedback loops and how to encourage the development of diversity-enhancing ones.

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the authors.

FUNDING

The research was conducted as part of the PERSONEWS project (European Research Council Starting Grant, ERC-2014-STG, grant no. 638514). PI: Prof. Dr. N. Helberger.

Notes

1 We use the term “algorithmic agent” to suggest that machine learning methods enable firms to delegate decision-making to algorithms in a growing number of activities, from credit scoring to selecting relevant information. The use of the term “agent” does not imply that these algorithms operate autonomously, without human oversight. On the contrary, as we explain in the paper, such oversight, whether direct (provided by those who develop and oversee the algorithms) or indirect (provided by users who feed data into these systems), is an essential part of the system.

2 Nearly all of the companies leading the list of the most visited internet websites (https://www.alexa.com/topsites, last visited on March 23, 2018) employ algorithmic information personalization in one way or another. While almost all major e-commerce websites, search engines, and social media websites utilize personalization techniques, it is easy to encounter personalized information even on non-personalized websites if they carry ads served by third-party advertising networks. Advertising networks – such as Google’s AdSense network, which currently has almost a 70% market share – decide which ads to show based on who the user is, and thus also qualify as algorithmic information personalization services. See: https://www.datanyze.com/market-share/advertising-networks, last visited on March 23, 2018.

3 Article 19 Universal Declaration of Human Rights: ‘Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.’; Article 19(2) International Covenant on Civil and Political Rights: ‘Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.’

4 In both the US and the EU, the fundamental right to privacy has been interpreted as containing a right to ‘decisional privacy’: the right to autonomously make life-defining choices. See Van der Sloot (Citation2018); Roessler (Citation2005).

5 Article 21(1) Universal Declaration of Human Rights: ‘(1) Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.’; Article 25 International Covenant on Civil and Political Rights: ‘Every citizen shall have the right and the opportunity, without any of the distinctions mentioned in article 2 and without unreasonable restrictions: (a) To take part in the conduct of public affairs, directly or through freely chosen representatives.’

6 While the technical details of information personalization are well known (Jannach et al. Citation2010), there is very little research on how these technologies are implemented in various settings, such as news (Bodó Citation2018).

7 In one of its recent recommendations, the Council of Europe warned that “[s]elective exposure to media content and the resulting limitations on its use can generate fragmentation and result in a more polarized society” (Council of Europe 2018), thereby repeating earlier warnings by, for example, the High Level Expert Groups on Media Pluralism (Vīķe‐Freiberga et al. Citation2013) and on Fake News (High Level Group on fake news and online disinformation Citation2018).

8 More recent discussions suggest that besides topics, higher-level factors, such as ideological preferences, might also be considered as the basis for filter bubbles; however, such factors are notoriously hard to establish and measure, especially outside of relatively simple, binary ideological systems such as that of the US.

9 An extensive discussion and conceptualization of this malleable concept would exceed the scope of this article; see instead Helberger and Wojcieszak (Citation2018).

10 E.g. European Court of Human Rights, Refah Partisi and Others v Turkey, 13 February 2003, paras. 87, 88, 89.

11 A growing number of recommendation algorithms seek to break filter bubbles. Examples include Huffington Post’s Flipside and Wall Street Journal’s Red Feed, Blue Feed; independent initiatives such as Read Across the Aisle; Escape your Bubble (Chrome), and the Swedish Filterbubbland; as well as sophisticated recommender projects by, for example, the New York Times, Blendle, and the Dutch Volkskrant.

12 By relying on standardized survey data, we could compare attitudes towards personalization across a large and representative sample of the population. This approach allowed us to conduct statistical tests to ascertain the relationship between individual characteristics and opinions, and attitudes towards news personalization. Yet, standardizing questions means that we cannot capture the full causal mechanism that links these variables together. For this purpose, future research should aim to gain a deeper understanding of what links expectations of diversity and acceptance of news personalization, for example in focus groups or using big data research. By relying on self-reported attitudes and behaviors, we also depend on the capability of respondents to accurately recall and express their attitudes and behavior in a questionnaire. Therefore, our results should be interpreted keeping in mind that respondents’ misunderstanding or misremembering could have affected the findings.

13 Cronbach’s alpha across the three items was .75 (M = 3.42, SD = 1.54). Higher values indicate higher levels of agreement.
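For readers who want to reproduce this reliability check, Cronbach’s alpha can be computed directly from the item variances. The R sketch below is illustrative only; the data frame `items` and the column names in the usage example are hypothetical stand-ins for our three survey items.

```r
# Minimal sketch: Cronbach's alpha for a k-item scale (here, k = 3).
# 'items' is a hypothetical data frame with one numeric column per item.
cronbach_alpha <- function(items) {
  k <- ncol(items)
  sum_item_vars <- sum(apply(items, 2, var))  # sum of individual item variances
  total_var <- var(rowSums(items))            # variance of the summed scale
  (k / (k - 1)) * (1 - sum_item_vars / total_var)
}

# Usage (hypothetical column names):
# cronbach_alpha(survey[, c("diversity1", "diversity2", "diversity3")])
```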

14 We used the poLCA package (Linzer and Lewis, Citation2013) in R to conduct the analysis.

15 The three-class model had the best fit, but did not distinguish between News omnivores and Moderate news users.
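A minimal sketch of this model-selection step with poLCA, under hypothetical indicator names and data, might look as follows; candidate solutions with different numbers of classes are compared on BIC.

```r
library(poLCA)

# Hypothetical manifest variables: categorical news-exposure indicators,
# coded as integers starting at 1, as poLCA requires.
f <- cbind(tv_news, online_news, print_news, social_media_news) ~ 1

# Fit latent class models with 2 to 5 classes and compare relative fit.
fits <- lapply(2:5, function(k) {
  poLCA(f, data = news_use, nclass = k, maxiter = 5000, verbose = FALSE)
})
sapply(fits, function(m) m$bic)  # lower BIC = better relative fit
```

Class enumeration typically balances fit statistics such as BIC against the substantive interpretability of the classes, which is why a better-fitting solution can still be set aside if it fails to separate theoretically distinct groups.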

16 We re-estimated the models with different reference groups, to the same effect.

17 The inclusion of cluster membership in the OLS model instead of the news exposure variables did not produce statistically significant results for cluster membership.
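Concretely, this robustness check amounts to swapping regressors in the OLS specification, along the lines of the R sketch below (the variable and data frame names are hypothetical, not our exact model):

```r
# Baseline: attitudes toward personalization regressed on news exposure.
m_exposure <- lm(attitude_personalization ~ exposure_tv + exposure_online +
                   age + education, data = survey)

# Robustness check: replace the exposure variables with latent cluster
# membership (cf. note 16 on varying the reference group).
survey$cluster <- relevel(factor(survey$cluster), ref = "News omnivores")
m_cluster <- lm(attitude_personalization ~ cluster + age + education,
                data = survey)
summary(m_cluster)  # in our data, the cluster terms were not significant
```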

18 In this section, we use the term “algorithm” to refer to different approaches to providing personalized content recommendations based on user profiles built on engagement history. For our discussion, the choice of a particular algorithmic approach, and its parametrization, is less important than the fact that user engagement is collected, preserved, and used by a particular subset of algorithmic recommenders that rely on user histories (rather than, for example, the semantic proximity of the recommended content) for making recommendations. For a detailed discussion of the different algorithmic models, including collaborative, content-based, knowledge-based, and hybrid models, see, for example, Jannach et al. (Citation2010).
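To make the history-based subset concrete, the sketch below implements the core of a user-based collaborative filter in R: recommendations are derived purely from similarities between engagement histories, never from the content of the articles. All data and names are hypothetical.

```r
# Minimal sketch: user-based collaborative filtering on engagement history.
# 'hist' is a hypothetical users x articles matrix of 0/1 engagement signals.
recommend <- function(hist, user, n = 3) {
  # Cosine similarity between the target user's history and every user's.
  sims <- apply(hist, 1, function(other) {
    sum(hist[user, ] * other) /
      (sqrt(sum(hist[user, ]^2)) * sqrt(sum(other^2)) + 1e-9)
  })
  sims[user] <- 0                    # exclude the user's own history
  scores <- sims %*% hist            # similarity-weighted engagement "votes"
  scores[hist[user, ] > 0] <- -Inf   # never re-recommend what was already read
  colnames(hist)[order(scores, decreasing = TRUE)[1:n]]
}
```

Note that nothing in this procedure inspects what an article is about; a content-based recommender would instead compare article features, which is exactly the distinction the note draws.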

19 We arguably follow an individualistic approach when we interpret users’ engagement signals as a representation of their intrinsic values, interests, attitudes, and beliefs. In contrast, Csigó (Citation2016) argues that users’ signals are also shaped by their guesstimates about what would be popular among their peers, and as such, the signals are an inseparable mix of individual preferences and guesses about what choices would be seen favorably by those peers who see and interpret those signals.

20 European Court of Human Rights, Dink v. Turkey App nos 2668/07 and 4 others (ECtHR, 14 September 2010), para 137.

21 See for example the European Audiovisual Media Service Directive (Directive 2010/13/EU): in Recital 81 it lays down the presumption that ‘technological developments give users increased choice and responsibility in their use of audiovisual media services’, without considering whether this is the case for all media users.

22 See among others Articles 12 and 27 Audiovisual Media Service Directive; Schedule 5, clause 60, Australian Broadcasting Services Act 1992.

23 See among others Article 7 Audiovisual Media Service Directive; Article 21 UN Convention on the Rights of Persons with Disabilities; US Twenty-First Century Communications and Video Accessibility Act of 2010.

24 See e.g. Art. 12 of Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive).


Notes on contributors

Balázs Bodó

Balázs Bodó (author to whom correspondence should be addressed), Institute for Information Law (IViR), University of Amsterdam (UvA), The Netherlands. E-mail: [email protected]. ORCID http://orcid.org/0000-0001-5623-5448

Natali Helberger

Natali Helberger, Institute for Information Law, University of Amsterdam, The Netherlands. E-mail: [email protected]

Sarah Eskens

Sarah Eskens, Institute for Information Law, University of Amsterdam, The Netherlands. E-mail: [email protected]

Judith Möller

Judith Möller, Amsterdam School of Communication Research, University of Amsterdam, The Netherlands. E-mail: [email protected]