Original Teaching Ideas - Single

Bursting the belief in filter bubbles: A single-class activity to enhance critical thinking on algorithmic personalization

Received 18 Dec 2023, Accepted 28 May 2024, Published online: 17 Jul 2024

Abstract

Courses

This activity is suited for courses on online communication, political communication, media sociology, and concepts/theories in communication. It can be applied at different levels (bachelor, master, continuing education) and in classes of different sizes (from about 15 to several hundred students).

Objectives

After having completed this activity, students will be able to explain the concepts “information intermediaries,” “algorithmic personalization,” and “filter bubble”; synthesize the state of research on filter bubbles; and critically and realistically discuss the societal risks and opportunities associated with these phenomena.

Introduction and rationale

In his book The Filter Bubble: What the Internet Is Hiding from You (Pariser, 2011), the Internet activist Eli Pariser paints a horror scenario of the negative effects of algorithmic personalization—that is, the algorithmic tailoring of content on platforms such as search engines and social media to the interests and opinions of individual users (Stegmann et al., 2022). According to Pariser, algorithmic personalization places each user in a personal information universe—the “filter bubble”—where they only encounter content that is selected based on their previous online behavior, matches their interests, and reinforces their opinions. The far-reaching societal consequences might be fragmentation and polarization.

The catchy filter bubble metaphor can create the impression that users are helplessly at the mercy of algorithms and disconnected from society by them. In its simplicity, it sounds very convincing, which might explain why it has become a common buzzword in public and academic discourse about the dangers of the Internet. However, simulation studies (e.g. Haim et al., 2018; Jürgens et al., 2014) have so far not been able to create filter bubbles in the sense of extreme, individualized, isolated information environments as described by Pariser (2011). The fears he raises thus appear exaggerated. The metaphor oversimplifies reality, since it ignores several mechanisms that prevent the development of filter bubbles: First, while the metaphor presupposes that people have narrow interests and only want content matching them, many people are interested in diverse content (Bodó et al., 2019). Second, people can only be isolated from society through algorithms if algorithm-driven platforms are their only gateway to information, but most users have rather broad information repertoires that include non-algorithmic sources (Newman et al., 2023). Third, the content people obtain on search engines is far less personalized and more diverse than often assumed (Steiner et al., 2022). Fourth, on social media, many users have large networks composed of rather casual acquaintances (weak ties), often with different interests and opinions, which increases the diversity of the content they receive (Stegmann et al., 2022). Moreover, the filter bubble metaphor originates from and clearly reflects the US context, with its division into two political camps—an exceptional scenario that is difficult to transfer to other contexts, such as the compromise-oriented multiparty systems with lower levels of societal political polarization found in many European countries.

The filter bubble exemplifies fears related to the Internet that many people accept unquestioningly but that, on closer inspection, turn out to be exaggerated. It is a good starting point for training students’ ability to reflect critically on such phenomena and to develop a realistic picture of the risks (and possibilities) of new technologies. To deal with these risks appropriately, both as individuals and as a society, students must be empowered to analyze them by means of systematic empirical research rather than relying solely on anecdotal evidence. Based on active learning (Bonwell & Eison, 1991), this single-class activity aims to do just that. Its overarching intended learning outcome is to enhance students’ ability to understand, realistically assess, and critically discuss the effects of algorithmic personalization.

The activity

The activity is designed for a teaching unit of approximately 90 minutes. Each student must have an electronic device with Internet access. Since personalized search-engine results pages (SERPs) are a core requirement for filter bubbles to arise, students must use their personal Google accounts, which work best on their personal devices, in order to create personalized SERPs during the activity. The activity will not work with generic Google accounts because SERPs created by such accounts will not be personalized. Slides that teachers can use for the activity are given in the Appendix.

Theoretical foundations (25 minutes)

The unit starts with the theoretical foundations, introducing students to the concepts of information intermediaries and algorithmic personalization. The intended learning outcome is that students can explain these two concepts.

Information intermediaries can be described as “brokers of information that position themselves between producers and consumers while altering the flow of information” (Jürgens & Stark, 2017, p. 398). The most prominent examples are social media (e.g. Facebook, Instagram), search engines (e.g. Google, Bing), and news aggregators (e.g. Google News, Reddit). Intermediaries are based on algorithms and fulfill three functions: they select, filter, and personalize content (Jürgens & Stark, 2017). The latter—algorithmic personalization—means tailoring content to the interests of individual users based on data collected about their previous online behavior, so that each user gets different content. Providing users with personalized content they like is part of the intermediaries’ business model: if users like the content, they stay longer on the platforms, which can then collect even more data on them and capitalize on these data through personalized advertising—their main source of revenue.

Now the students engage with what they have learnt so far, employing the think-pair-share (TPS) method (Lyman, 1981; for guiding questions, see Appendix): First, they think individually about potential positive and negative consequences of algorithmic personalization for users (microlevel) and society (macrolevel). Second, they discuss their thoughts in small groups of three to four students (pair). Third, the groups share the key points they discussed with the entire class. I have used this method in classes of different sizes (15–200 students). In the literature, it has been described as particularly suitable for classes of several hundred students (Cooper & Robinson, 2000). In classes of up to about 30 students, I usually have the students share their key points orally. In larger classes, where this would take too much time and many students feel uncomfortable speaking up publicly, I use digital tools (e.g. Mentimeter, Padlet).

The teacher can guide students toward the microlevel and macrolevel consequences of algorithmic personalization by asking probing questions (e.g. “Now that you have mentioned several negative consequences, can you think of anything positive that might result from algorithmic personalization?” “If individuals end up in their own small world due to algorithmic personalization, as you say, what do you think that means for society?”). Afterwards, the teacher summarizes the students’ answers but does not comment on them (e.g. does not describe a concern as exaggerated).

Trying to create filter bubbles (30 minutes)

The intended learning outcomes of this step are that the students can explain the filter bubble metaphor and recognize that it oversimplifies reality. They watch the beginning of Eli Pariser’s TED talk “Beware online ‘filter bubbles’” (see Note 1), in which he shows the SERPs that two of his friends received when googling the same term on a current controversial political issue. The two SERPs differ strongly, even though Pariser states that his friends were similar in many respects. He interprets this anecdotal evidence as proof that the Google algorithm has placed his friends in filter bubbles, leading him to conclude that algorithmic personalization poses a threat to societal integration. After the video, the students get the opportunity to share their thoughts (orally or by means of a digital tool). Again, the teacher summarizes the thoughts without commenting on them.

Now the students try out what Pariser describes in the video: they google the same search term and compare their SERPs (for an example, see the Appendix). Again using the TPS method, they first individually google a search term on a current controversial political issue and get an overview of their SERP and the kinds of search results it contains. Second, groups of three to four students compare their SERPs. Third, the teacher asks the entire class to raise their hands if the SERPs of their group members varied (1) to a very strong degree, as described by Pariser in the video; (2) to a minor but not fundamental degree (e.g. varying ranks of the same search results); or (3) hardly or not at all. My observation after having applied this activity in various classes is that the vast majority of students chose option 3, a few chose option 2, and none chose the filter bubble scenario of option 1.

Debriefing (35 minutes)

The debriefing contextualizes the anecdotal evidence from the activity with a simulation study that was not able to create filter bubbles (Haim et al., 2018). Its intended learning outcomes are that the students can explain how systematic empirical studies test the existence of filter bubbles, synthesize that these studies have so far found no evidence for the existence of filter bubbles, and critically discuss how the metaphor oversimplifies the consequences of algorithmic personalization.

The teacher explains the methodological design and results of the study. Simulation studies in this field train (news) search engines on the distinct interests of a number of fictional “agents” by simulating human online behavior over some time. Afterwards, the “agents” search for ambivalent search terms that should lead to differing SERPs if the “agents” were in filter bubbles. However, comparing the resulting SERPs regularly shows that all “agents” receive very similar SERPs (as the students did in the activity). The teacher makes clear that no simulation study has so far been able to demonstrate the existence of filter bubbles, which makes Pariser’s concerns seem exaggerated.
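For teachers who want to make the logic of such comparisons tangible, the following minimal sketch in Python illustrates how the overlap between agents’ SERPs could be quantified. It is not the procedure used by Haim et al. (2018); the agent names and URL lists are purely hypothetical, and the Jaccard index is only one of several plausible overlap measures.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two result lists: shared URLs divided by all URLs."""
    set_a, set_b = set(a), set(b)
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical SERPs (top result URLs) of three simulated "agents"
# that were previously trained on distinct interests.
serps = {
    "agent_politics": ["url1", "url2", "url3", "url4", "url5"],
    "agent_sports":   ["url1", "url2", "url3", "url5", "url6"],
    "agent_culture":  ["url1", "url2", "url4", "url5", "url6"],
}

# If the agents were in filter bubbles, pairwise overlap should be low;
# simulation studies typically find high overlap instead.
for (name_a, serp_a), (name_b, serp_b) in combinations(serps.items(), 2):
    print(f"{name_a} vs {name_b}: Jaccard = {jaccard(serp_a, serp_b):.2f}")
```

The same comparison could be run on the students’ own SERPs from the previous step to connect the hands-on experience with the logic of the simulation studies.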

Now the teacher introduces the students to the critique of the filter bubble metaphor (see introduction), which helps them understand why the idea that algorithmic personalization causes filter bubbles is far too simple. Nevertheless, algorithmic personalization entails other risks, which the teacher now makes the students aware of. For example, it can lead to one-sided information (although not in the form of complete isolation) and foster group polarization and the spread of disinformation. This stimulates the students to reflect on the fact that the content users receive always results from an interplay of human behavior and algorithmic decisions. We can, at least to some extent, protect ourselves from receiving only one-sided content by consciously signaling to the algorithms through our behavior that we want diverse content.

Afterwards, the teacher asks the students to write down their personal take-home messages using a digital tool—for example, by asking them, “What is the most important thing you learnt today?”—a method known as the one-minute paper. The teacher should provide direct feedback on these thoughts (formative assessment). The unit closes with take-home messages from the teacher (see Appendix).

Appraisal

Below, I summarize my students’ comments and answers in the various classes in which I have used this activity. Their contributions clearly show that most of them went through a learning process during the activity. In the first TPS, most of my students focused on negative consequences in line with the filter bubble metaphor. They discussed concerns such as that algorithmic personalization can narrow people’s information and perspectives and isolate people from society. Few of my students highlighted positive consequences of algorithmic personalization (e.g. receiving relevant information very quickly). After the video, many of my students felt these concerns were confirmed. After having tried to create filter bubbles and having been introduced to the simulation study, my students said the activity made them realize that they were neither in filter bubbles nor helplessly at the mercy of the algorithms, which they experienced as surprising, eye-opening, and relieving.

A formative assessment already included in the activity is the teacher’s immediate feedback on the one-minute papers at the end. Another formative assessment, which could also be used as an assignment, is a reflection note on which students provide peer feedback to one another (see Appendix for examples of tasks). Similar tasks might be used in a written exam as a summative assessment, but given the intended learning outcomes, formative assessment is preferable to summative assessment.

While this activity can be applied in both physical and synchronous online teaching, I have observed that the former works better, since working in small groups is easier for students in a physical classroom. The activity cannot be used in asynchronous online teaching due to the centrality of the TPS method for achieving the learning outcomes. Another limitation is that I have tried the activity only in a European context, but the current state of research on filter bubbles suggests that it can be implemented relatively independently of context.

Supplemental material


Notes

1 https://www.youtube.com/watch?v=4w48Ip-KPRs&t=282s. Stop at 4:36 when Pariser says, “You don’t decide what gets in [the filter bubble]. And more importantly, you don’t actually see what gets edited out.”

References and suggested readings

  • Bodó, B., Helberger, N., Eskens, S., & Möller, J. (2019). Interested in diversity: The role of user attitudes, algorithmic feedback loops, and policy in news personalization. Digital Journalism, 7(2), 206–229. https://doi.org/10.1080/21670811.2018.1521292
  • Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. School of Education and Human Development, George Washington University.
  • Cooper, J. L., & Robinson, P. (2000). Getting started: Informal small-group strategies in large classes. New Directions for Teaching and Learning, 81, 17–24. https://doi.org/10.1002/tl.8102
  • Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330–343. https://doi.org/10.1080/21670811.2017.1338145
  • Jürgens, P., & Stark, B. (2017). The power of default on Reddit: A general model to measure the influence of information intermediaries. Policy & Internet, 9(4), 395–419. https://doi.org/10.1002/poi3.166
  • Jürgens, P., Stark, B., & Magin, M. (2014). Gefangen in der Filter Bubble? Search Engine Bias und Personalisierungsprozesse bei Suchmaschinen [Caught in the filter bubble? Search engine bias and personalization processes in search engines]. In B. Stark, D. Dörr, & S. Aufenanger (Eds.), Die „Googleisierung“ der Informationssuche. Suchmaschinen zwischen Nutzung und Regulierung [The “Googleization” of information search. Search engines between usage and regulation] (pp. 98–135). de Gruyter.
  • Lyman, F. (1981). The responsive classroom discussion. In A. S. Anderson (Ed.), Mainstreaming digest: A collection of faculty and student papers (pp. 109–113). University of Maryland Press.
  • Newman, N., Fletcher, R., Eddy, K., Robertson, C. T., & Nielsen, R. K. (2023). Reuters Institute Digital News Report 2023. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
  • Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.
  • Stegmann, D., Magin, M., & Stark, B. (2022). Filter bubbles. In A. Ceron (Ed.), Encyclopedia of technology and politics (pp. 210–216). Edward Elgar Publishing.
  • Steiner, M., Magin, M., Stark, B., & Geiß, S. (2022). Seek and you shall find? A content analysis on the diversity of five search engines’ results on political queries. Information, Communication & Society, 25(2), 217–241. https://doi.org/10.1080/1369118X.2020.1776367