
Learning in and about a filtered universe: young people’s awareness and control of algorithms in social media

Pages 701-713 | Received 15 Jun 2022, Accepted 27 Aug 2023, Published online: 06 Sep 2023

ABSTRACT

Whereas ‘Web 2.0 technology’ has pushed the learning agenda towards connectivity and boundary crossing, in the current ‘new new media ontology’ the fear that algorithms might block our avenues to knowledge and connections prevails. In response to this, media scholars have argued that knowledge based on the algorithmic experiences of users is key to reformulating the agenda of critical media education. In this study of the algorithmic experiences of secondary education students in the Netherlands, we want to contribute to building such knowledge, making use of the concepts of algorithmic imagination, power and critical evaluation. Results show students build situational, practical-experiential knowledge of algorithmic workings that is closely in line with the features of the interface of the social media platforms they use. Implications for media literacy include providing students with system-level awareness and agency, including insights into the societal-political consequences of algorithmic workings.

Introduction

Changing technologies, changing learning paradigms

Only two decades ago, new digital technologies were predominantly seen as offering endless possibilities for connectivity and creating new forms of openness and global exchange; such thinking about technology now sounds naïve. Instead of seeing technology as a digital infrastructure that allows us to connect, exchange, transfer and cross boundaries, the fear now exists that algorithmic filtering creates unpredictable and invisible boundaries between people, and might result in unwanted and unperceived forms of selection and segregation. Moreover, such changes in thinking about technology influence how we imagine learning (Loveless and Williamson 2013). Learning paradigms such as ‘connected learning’ (Ito et al. 2013), ‘personalized e-learning’ (O'Donnell et al. 2015) or ‘global learning’ (Gibson, Rimmington, and Landwehr-Brown 2008) can be seen as inspired by a reading of technology as (global) networked connectivity (Haan de et al. 2014; Loveless and Williamson 2013). Such paradigms draw upon the idea of an imagined future in which learners, with the help of new digital infrastructures, are able to gather endless resources, seek out like-minded others, cross boundaries and be challenged to develop global perspectives. As Ünlüsoy, Leander, and de Haan (2022) argue, this connectivity paradigm is reflected in the Web 2.0 ideology of learning, in which crossing boundaries and hypertextual connections are key to how we think about learning.

Currently, new technologies, and how they are imagined and experienced, do not seem to encourage such visions. Instead, research warns us that platform logics based on surveillance and on maximizing the attention of the user intervene in traditional pedagogies (Perrotta et al. 2021), or that critical education is in danger as a result of the logics and practices of datafication and automation (Sefton-Green and Pangrazio 2021). The optimism that technology, with its new potential for connectivity, would provide us with new possibilities for development and learning now seems replaced by a fear that technology will ‘take away’ or ‘take over’ learning and development. For instance, algorithms might prevent us from accessing particular knowledge and insights and lock us into ‘filter bubbles’ (Pariser 2011) or ‘echo chambers’ (Jamieson and Cappella 2008) that limit access to information and opinion formation in ways that are outside our control and consciousness. Beer (2009), referring to the ‘new new media ontology’ that represents a shift away from Web 2.0 technology, makes the point that the human mind increasingly acts in conjunction with technologies. In such a dynamic, humans are not the only ones that ‘know’ or ‘learn’, because software also gathers information about humans and predicts our behaviour, as has also been argued recently in educational research (e.g., Perrotta and Selwyn 2020). Drawing on Graham (2004), Beer also points to the hidden forms of power such a dynamic creates. Whereas under the old conception of technology critical voices centred on the idea of digital divides in terms of lack of access for parts of the population, research on digital divides now increasingly focuses on the role algorithmic awareness can play in creating such divides, rather than on differences in access (Gran, Booth, and Bucher 2020; Hargittai and Micheli 2019). Such notions of technology blocking access, locking us up, and exerting forms of hidden power stand in stark contrast to the earlier discourse on technology as offering new possibilities, connectivity, openness and boundary crossing.

Reformulating critical media literacy

In response to these concerns, media studies and education scholars who critically observe the influence of datafication and new technologies on learning have reflected upon the educational response to these developments or, more specifically, on what critical media literacy should entail in response to them. Learning about such technologies is fundamentally different from learning about the technologies of earlier eras. Given this, one can question whether it is possible to teach about technologies that are mostly hidden from the user and that constantly adapt in interaction with user behaviour (Pangrazio and Sefton-Green 2020; Sefton-Green and Pangrazio 2021).

Despite such doubts, scholars have started to argue how media literacy should be redefined given this ‘new new media ontology’. As Jacques et al. (2020) state, media literacy in the current context moves beyond learning how to use these technologies and turns to the critical analysis of interfaces and data systems themselves. Media education should turn to an analysis of the technological systems in which students are immersed and of their continuous interaction with these systems (Jacques et al. 2020; Perez Vallejos et al. 2017).

The developments described above have prompted an emerging line of interventions that engage with the associated challenges as well as with these emerging reformulations of media literacy. Interventions have been developed to assist people in gaining a better understanding of algorithmic filtering, for example through a game in which teenagers can share their daily experiences of algorithms (Jacques et al. 2020), a game that makes students aware of the hidden workings of algorithms in the production of news (Roozenbeek and Van der Linden 2019), or a visualization of users’ social connections on Twitter that makes participants aware of the political echo chamber they are in and challenges them to follow accounts that oppose their own political views (Gillani et al. 2018).

However, such interventions are still emerging. There is a need for further research and development regarding critical literacy skills (e.g., Sefton-Green and Pangrazio 2021). In order to design such interventions for young people, developers should first understand what young people already know about the working of algorithms and what literacy processes they develop in interaction with platforms and their algorithmic functions (Koenig 2020). In the following section, we review current insights on users’ experiences with algorithms, with a particular focus on what is known about young users.

Understanding and experiencing algorithms: key concepts

First of all, research has focused on the extent to which people are aware that the digital content they are provided with is algorithmically selected (Eslami et al. 2015; Klawitter and Hargittai 2018; Swart 2021). This so-called algorithmic awareness can be defined as ‘knowing that a dynamic system is in place that can personalize and customize the information that a user sees or hears’ (Hargittai et al. 2020, 771). Research suggests that algorithmic awareness varies among users, that it can only partly be explained by background factors such as education or age, and that the strongest predictor of algorithmic awareness is the frequency and diversity of users’ prior experience with algorithms (Hargittai et al. 2020; Swart 2021). In line with this, research shows that algorithmic awareness is often experiential and contextual, as it is built and understood through use in particular media contexts, which makes it hard to transfer to other media contexts (Cotter and Reisdorf 2020; Swart 2021). For instance, Swart (2021) found that users’ awareness of the tailoring of advertisements in their social media apps did not make them more aware of the possible personalization of non-sponsored stories in their timeline. Except for the research by Fletcher and Nielsen (2019) on how users navigate news in social media, which suggests that younger users have both a more developed and explicit algorithmic awareness and more favourable attitudes towards platforms, results so far do not provide a clear picture of what might be specific to the algorithmic awareness of younger users.

Research on algorithmic awareness has developed further, addressing particular aspects of such awareness, such as in Bucher’s (2017) ground-breaking work on users’ (affective) understanding of algorithms in everyday life. Asking under what circumstances users become aware of and experience algorithms, she has focused on how users make sense of algorithms given that these are largely invisible. Bucher uses the term algorithmic imagination, which she defines as ‘the way in which people imagine, perceive and experience algorithms and what these imaginations make possible’ (Bucher 2017, 31). Bucher illustrates the development of algorithmic imagination with the example of a user who, given the advertisements on her feed, is surprised that her Facebook algorithm seems to think she is pregnant and wrongly assumes she is single. Apparently, this user had formed a ‘theory’ of how her Facebook algorithm works, which was violated. Her initial ‘theory’ apparently included the notion that her media behaviour is being tracked and that the algorithm is able to classify her personal and marital status correctly. Algorithmic imaginations are thus (unconscious) theories or hypotheses that users hold about algorithms, which also shape how these users act in response to them. Bucher points out that instead of focusing on the hidden nature of algorithms, we might start to focus on ‘the ways in which they are being articulated, experienced and contested in the public domain’ (p. 40). To the best of our knowledge, there is no research that specifically addresses the algorithmic imagination of young users.

In line with Bucher’s notion that users act upon these algorithmic imaginations, research has focused on how users actively influence the algorithmic workings of platforms and engage in conscious, instrumental interactions with algorithms or, in other words, how they ‘game the system’ (Cotter 2019). In a study of how influencers deal with the power algorithms have over their visibility, Cotter found that these influencers developed tactics to increase their online visibility. In this sense, Cotter claims, users also exert algorithmic power. To the best of our knowledge, little is known about the specific strategies of young users for influencing the algorithmic workings of their social media platforms, except for the Fletcher and Nielsen (2019) study mentioned above, which found that younger users talked more explicitly than older users about how they tried to influence news selection on their social media.

Lastly, an important part of how users understand algorithms is the social and personal significance of algorithmic experiences and users’ critical evaluation thereof. Bucher (2018) claims that what she calls algorithmic literacy should, next to a basic understanding of how algorithms work, entail a critical evaluation of the influence algorithms have over us as individuals, as well as of their impact on society. Such a critical evaluation should entail an understanding of how collective and individual agency is affected through the interaction with algorithmically-driven platforms. This may include the notion that algorithms can impact upon our privacy, or the concern that algorithms might create or strengthen digital divides by providing information and opinions that only confirm what we already know, which could result in ideological polarization (Flaxman, Goel, and Rao 2016). It may also include the concern, argued from a critical race theory perspective, that filtering mechanisms build upon concepts that contain certain forms of bias and might thereby strengthen racism or sexism (Benjamin 2019).

The current study situates itself within this emerging tradition of research on how users experience the algorithmic workings of their social media platforms, focusing specifically on young users in an attempt to create insights and develop guidelines for educational interventions that engage with the changes in our media landscape and the associated challenges described above. As Zarouali, Boerman, and de Vreese (2021) have argued, there is a need for research that investigates how algorithmic awareness relates to digital literacy and competences. This study focuses on the awareness of young users with regard to the algorithmic workings of social media applications. We use algorithmic awareness as an umbrella term, but focus particularly on the concepts of algorithmic imagination, algorithmic power and critical evaluation as described above. We ask:

  • Are secondary school students aware of the algorithmic workings of their social media apps, and how do they imagine such workings? (algorithmic imagination)

  • What do they do to influence, resist or work around such algorithmic workings? (algorithmic power)

  • How do they evaluate and reflect upon the (ethical) effects of such algorithmic workings? (critical evaluation).

Methods and background of the project

This study is part of a design-based research project in which an educational app is designed that promotes algorithmic awareness, knowledge, agency and ethical reflection on the effects of algorithms among students in secondary education in the Netherlands (see https://www.uu.nl/en/organisation/development-education-of-youth-in-diverse-societies/the-filter-bubble-app). The project arose from a need expressed by teachers to teach about the role of social media in producing one-sided or biased information and opinions, associated with the idea of filter bubbles (Pariser 2011). To be able to design such an educative app in line with young people’s direct experiences and existing knowledge of algorithmic filtering and its possible consequences, more insight had to be obtained on these topics. To do so, interviews were conducted with secondary school students. We are particularly interested in how algorithmic awareness functions when students are looking for particular content on their social media platforms. To provoke students’ awareness of algorithmic workings, as well as of the need to critically assess them, we specifically asked respondents to find content on a topic on which a variety of opinions might exist within their peer group.

Sample

In this study, 18 secondary school students (five boys, thirteen girls) from six Dutch pre-vocational secondary schools participated. The respondents were between 12 and 16 years of age (see Table 1), and indicated that they were frequent users of TikTok, WhatsApp, YouTube, Instagram, Discord, Snapchat or Reddit. Recruitment was done by teachers and youth workers involved in the research project. The criteria for selection were that students had some experience with social media and were willing to share their experience in this research. Although this procedure does not allow us to claim representativeness, based on earlier contact between the students from the school and the researchers, we estimate that their experience with social media was average for this group. Based on these inclusion criteria, each teacher or youth worker selected one, two or three pairs of students to participate, after which they requested consent from the students and their parents.

Table 1. Gender, age, school level of the respondents.

Measurement instrument & procedure

The interviews were conducted in pairs of students (duo interviews). This was done to make the respondents feel more at ease, while it also allowed the researchers to include a comparative perspective in the interview. The two assignments detailed below were used as input for the conversation on algorithmic filtering to obtain insight into the three main concepts of our study: (1) algorithmic imagination (how do the respondents explain their experience with and understanding of algorithmic filtering?), (2) algorithmic power (how do the respondents describe their own influence on the filtering of their online content?), and (3) critical evaluation (how do the respondents evaluate and reflect upon the (ethical) effects of such algorithmic workings?).

Assignment one was a walk-through exercise. This approach allows respondents to engage directly with a social media app’s interface, to understand how it guides users and shapes their experiences, while it also facilitates reflection on user experiences (Light, Burgess, and Duguay 2016; see also Swart (2021), who recommended this method for researching the experience of algorithmic filtering). During this activity, the respondents were asked to individually search for content on a controversial topic, that is, a topic chosen in consultation with the students on which they expected people to have differing opinions, such as COVID-19 restrictions. After the respondents had given permission to observe the process, they searched for these topics on their own phone in a social media application they used frequently. In the second assignment of the duo interviews, respondents were presented with two vignettes of people whose opinions grew further apart because of their ‘filter bubbles’, illustrating the presence of online ‘echo chambers’.

The duo interviews took approximately 40 minutes. They were audio-recorded and included detailed conversations about the use of the interface, as well as reflections on it, following the interview guidelines.

Data analysis

The transcribed interviews were coded using NVivo 12. A thematic approach was used to code the data, based on the theoretical framework and research questions of this study. The analysis searched for passages relating to the concepts and themes underlying the research questions, focusing on algorithmic imagination, algorithmic power and critical evaluation as presented in our introduction. In our analyses, we paid attention to how these concepts were ‘consequential’ for students in thinking about, dealing with, and forming opinions about the selection of the content on their social media applications.

Ethical review procedure

Ethical approval was provided by the Ethical Review Board of the Faculty of Social and Behavioural Sciences of Utrecht University (reference number 20-0644, FETC). Information about the research and the voluntary nature of participation was discussed with all respondents. Because all respondents were 16 years of age or younger, parental consent was necessary and was provided. During the research, the respondents were asked to share their personal social media content with the researcher during the walk-through exercise to enable the discussion of algorithmic filtering. To address possible ethical issues arising from this assignment, social media content was not recorded, and data were handled anonymously by giving the respondents fictitious names, of which the respondents were made aware. By asking the respondents to come up with a controversial topic themselves in assignment one, we tried to ensure that they felt comfortable discussing that specific topic.

Results

Algorithmic awareness and imagination

In line with Bucher’s view that awareness must be seen in the light of how users make sense of algorithms while imagining their workings, the first theme we addressed in the data was algorithmic imagination: what do the respondents’ everyday experiences with the algorithm entail, and what is their understanding of its working? (Bucher 2017). We did so by exploring the circumstances under which users are aware of algorithmic filtering, and how they talked about their understanding of the workings of algorithms. During the walk-through exercise, while discussing the content on their social media page, respondents mentioned that they had become aware of the phenomenon of filtering through interacting with the platform. For example, they stated that when they ‘followed’ or ‘liked’ content, they were then presented with more of that particular content. Likewise, when they followed certain people, channels or accounts online, their social media platform often automatically presented them with specific content from these pages. Geneva, for instance, explained that her algorithm adapts itself after she ‘likes’ certain content on TikTok: ‘Ehh, for example, if you like a video about Israel or Palestine, you immediately get a whole “For You Page” with Israel and Palestine.’ Such statements concerning the interactive nature of the platform were often directly related to how they used the features the platforms made available to them, such as ‘like’, ‘save’, ‘follow’ and ‘not interested’ buttons. Using these features was described by the respondents as a way to let the platform know their preferred or non-preferred content. This was the dominant understanding students had of how algorithms work.

Likewise, other algorithmic experiences and related understandings of algorithmic filtering that the respondents shared were expressed in relation to features of the interfaces of their social media platforms, such as searching, sharing, clicking on, or looking for an extended period of time at content. Respondents stated that they had learned from experience that using these platform features was a way to communicate their preferences to the platform. Similarly to the ‘buttons’, these features seemed to be designed in such a way as to make it easy for the user to intuitively find out how they work. Both imaginations of algorithmic filtering can be considered forms of ‘if I do this, then X happens’ experiential-practical knowledge.

Besides these types of experiential knowledge with regard to algorithmic filtering, respondents did occasionally express algorithmic understandings not directly related to their own personal social media use. A few respondents had ideas on the workings of filtering mechanisms that were not related to their own actions on the platform, but to the preferences or actions of others. These respondents indicated, for example, that they often got to see content that a lot of other people seemed to like or follow, or the preferred content of people they knew.
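To make concrete the kind of ‘if I do this, then X happens’ model the respondents describe, the following minimal sketch (in Python, with invented data, weights and function names of our own) combines the two imaginations reported above: content is pushed up the feed when it matches the user’s earlier likes and follows, and when it is popular among others. It is an illustrative toy model only, not the actual ranking algorithm of TikTok, Instagram or any other platform.

```python
# Toy feed ranker: a hypothetical illustration of the feedback loop respondents
# describe, NOT any platform's real algorithm. Scores combine the user's own
# interaction history with a simple popularity signal from other users.
from collections import Counter

def rank_feed(posts, liked_topics, followed_accounts, global_likes):
    """Return posts ordered by a simple preference-plus-popularity score."""
    def score(post):
        personal = liked_topics[post["topic"]]              # 'more of what I liked'
        follow_boost = 2 if post["account"] in followed_accounts else 0
        popularity = post["likes"] / max(global_likes, 1)   # 'what others like'
        return personal + follow_boost + popularity
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "cats", "account": "a", "likes": 5},
    {"id": 2, "topic": "politics", "account": "b", "likes": 50},
    {"id": 3, "topic": "cats", "account": "c", "likes": 1},
]
# The user has liked two 'cats' videos before and follows account 'c'.
feed = rank_feed(posts, Counter({"cats": 2}), {"c"}, global_likes=56)
print([p["id"] for p in feed])  # -> [3, 1, 2]: the liked topic outranks the popular post
```

Even in such a toy model, changing a single ‘like’ reorders the whole feed, which is precisely the experiential ‘training’ of the algorithm that the respondents voiced.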

Next to expressing such well-defined ‘if-then’ imaginations with regard to algorithmic functions, students expressed doubts as to why they were presented with certain content, even though they were aware that some form of content selection occurred. For example, students indicated that they might not see a variety of content and opinions online, but were not entirely sure why this was the case. One example is Osra, who explained that she was aware that she mostly sees people supporting a certain argument, but does not know why or how this content is filtered: ‘Ehh, people who ehh, mostly support (my opinion), that's what I encounter most, but, I think that is because that's more what I want to see, I don't know why I only encounter that.’

Even though all respondents provided examples of experiences with algorithmic filtering as described above, there were many indications that their imagination of the workings of algorithms was situational. Firstly, in the interview in which we discussed the representation of a selected controversial topic on the respondents’ social media platform, a number of respondents did not realize that this content might be filtered. They expressed the view that all users see the same online content, which shows that in some instances they were unaware of filtering mechanisms. Secondly, the awareness of online filtering was often not used to interpret the nature of the online content they were presented with, including certain opinions. We found that students mostly thought that what they see online represents ‘what people think’, without paying much attention to a potential selection effect. What they see online is then interpreted as what is happening in the ‘real world’. For example, Muzna, when discussing with the interviewer what she saw on her TikTok screen about the Israeli-Palestinian conflict, and after stating that she is more pro-Palestine, told the interviewer that she thought not much is happening in Israel, since she does not see any posts from people about this:

Interviewer:

But do you ever get messages from people who are more for Israel, do you sometimes see that too?

Muzna:

No, because not much happens in Israel.

The interviewer tried to confirm whether Muzna thought that what she saw online was a representation of what happens in reality:

Interviewer:

So you think […] that it mainly depends on what happens there, that this causes what you see?

Muzna:

Yes

As in this example, students did not express any awareness of a possible selection effect on the opinions they encountered online, or of the possible impact of their online social networks on such selection. They did not mention the possible influence of algorithmic filtering, nor that of their own actions, on the content presented to them.

Thirdly, our findings showed that respondents did not recognize the effect of a so-called ‘echo chamber’ in the vignette exercise (see the method section). The possibility that by being confronted with a particular selection of opinions one’s own opinion might be impacted by such a selection, was only recognized by one pair of respondents.

In conclusion, even though all respondents were able to demonstrate an awareness of the interactive nature of content filtering, they did not apply this knowledge when they judged the nature of the content they were presented with on their social media platform more generally. The majority of the respondents indicated that they saw a variety of opinions on their social media platforms and they stated that they were presented with multi-sided content.

Critical evaluation

Before we deal with algorithmic power, we first report how respondents evaluated algorithmic filtering and what they thought its possible consequences could be, as we believe this helps to understand how students experience the power they have over algorithms.

The majority of the respondents had a positive view of algorithmic filtering, mainly because the algorithm caused them to be presented with their preferred content and/or prevented them from seeing content they did not prefer. Simply put, algorithms make it easy to be presented with what you like. As Yanna, for example, explains: ‘Yes, I find it (algorithmic filtering by the platform) useful. Now I do not have to put effort in looking things up myself’.

Alongside these positive evaluations, a few respondents also critiqued the influence of algorithms. These critiques were mostly based on feeling uncomfortable with the fact that the internet ‘has so much information on them’ and on feeling spied upon by the platform. Another critique, voiced by one respondent, was that due to filtering one is quickly presented with content from people in one’s social network, which gave her the uncomfortable idea that her social network keeps an eye on her at all times. These evaluations always concerned consequences at the level of the individual, and did not take the influence of algorithmic filtering at a societal level into account. When directly prompted to reflect on the possibility that online content could influence people’s opinions, respondents indicated that this could be the case. Geneva, for instance, said that she found such influencing annoying, as everyone should be presented with the same content: ‘Because I feel that they (social media platforms) should show the same (content) everywhere’. Overall, respondents did not elaborate in their answers when prompted with this possibility, and did not engage in further critical reflection on the matter when invited to do so.

Moreover, the respondents did not seem to be worried about algorithmic filtering because they felt they were able to manipulate their algorithm to select their preferred content and, as such, were of the opinion that they were able to exert power over it, which brings us to our third concept, algorithmic power.

Algorithmic power

We also aimed to obtain insight into how users actively use algorithmic imaginations to influence the algorithmic workings of platforms in such a way as to engage in conscious, instrumental interactions with algorithms, or make use of their ‘algorithmic power’ (Cotter 2019). As explained above, the respondents often expressed enjoying being in charge of their online content and had the feeling they could influence the algorithm. They did so mostly by using the pre-designed features of the platform as described in the section on imagination. Several respondents explicitly expressed this link between experiencing power over the selection of their online content and the use of platform features. Derk, for instance, when asked if he wanted to change the way algorithmic filtering works, responded: ‘I would not like to change it (the algorithm) because I just have a sort of power to change it by ‘liking’ things, so … ’.

Interestingly, respondents would not often exert power over their content by searching for specific content in the application. When we started the information-seeking assignment as part of the walk-through method, the majority of the students commented that, normally, when they use social media platforms to look for content, they would go to their home screen and just look at what is presented to them by the platform. They would scroll through their home screen and click on content which would provide them with more content on a certain topic, or would lead them to a certain account, where again they would scroll through the content. So, somewhat in contrast to the assignment we had given them, they indicated that they did not use social media often to actively search for content. They explained that they mostly enjoyed the content that was automatically presented to them by the social media platform. When we nevertheless asked the students to actively search for a topic of their choice on their social media platform, the majority used the search function of the platform, while a minority tried to search for the topic by scrolling down their home screens.

It thus seems that most respondents exert power over their online content by making use of the filtering mechanisms that are explicitly or intuitively made available to them by the platform. Only one respondent, Derk, mentioned that he knows a way around the algorithm: going incognito.

Derk:

… say, if you go to Youtube and you have a new account, Youtube throws a lot of different things on the table and looks at what you like. And then it uses what you like to decide what you get. Look, if I go on ‘incognito’, then I do not have an account, then I go to Youtube, now I have to type … .

Interviewer:

Has it now forgotten everything about you?

Derk:

Yes, it says so. When you go incognito it does not remember anything, cookies are turned off, cookies are actually the algorithm.

Another way Derk tries to exert power over his algorithmic filtering is by consciously trying not to influence his algorithm too much, for example by not liking certain content because he wants to be presented with a variety of content and opinions. This conscious ‘avoiding’ and working around the algorithm and the features that are designed to feed the algorithm, is precisely the opposite of what we found the majority of our respondents do. Their experience of agency is based precisely on making use of such pre-designed features.

Discussion

In this section, we discuss the meaning of our results in terms of current debates on the algorithmic experience of young users. In line with our research questions, we focus in particular on the notions of awareness, imagination, power and evaluation of algorithms. In addition, we raise the issue of what such algorithmic awareness of young people means for their ability to understand and judge the nature of the content they see on their social media platforms. As a next step, we reflect on the implications of our findings for the reformulation of (critical) media literacy.

Our results show that these young users have a basic awareness of algorithmic functionality as it is defined by Koenig (2020): ‘the understanding that there is some complex, computational procedure behind data selection that “identifies, codes and decodes data collected through user engagement”’ (5). Their experiences with their social media platforms clearly point to an awareness of a user–system dynamic that they can train and that responds to their actions on the platform. In addition, the respondents were able to express how they imagine the working of algorithms. Strikingly, these algorithmic imaginations were expressed predominantly in the form of ‘if I do X, then Y happens’ statements, which in turn were predominantly expressed in terms of their interactions with the features of the interfaces provided by their social media platforms and their corresponding algorithmic functionalities.

In line with most studies on algorithmic awareness and imagination (Cotter and Reisdorf 2020; Swart 2021), this study shows that for this young group of users, too, algorithmic imaginations are expressed as ‘experiential’ rather than theoretical knowledge, closely tied to personal experiences with the algorithmic workings of their own media applications.

Not surprisingly, and in line with the fact that we are dealing with experiential knowledge, our study also confirms another important characteristic of algorithmic awareness and imagination: such awareness and imagination is situational and understood in relation to particular contexts. Consequently, it is hard to transfer and apply such knowledge to other contexts (Swart 2021). A unique characteristic of the set-up of our study, in contrast to most studies on algorithmic awareness and imagination, was that we asked students to apply their knowledge of algorithmic workings to the possibility of receiving one-sided information on controversial issues, and to the risk of being confronted with limited perspectives and opinions on such issues. Surprisingly, as our results show, students rarely used their algorithmic awareness and imaginations when reasoning about such risks. Even though they were aware that their content was being filtered by their social media platforms, they still claimed that they were presented with ‘the whole story’ and with multi-sided content. Although this finding connects to the situated and non-transferable nature of algorithmic awareness reported in other studies (Cotter and Reisdorf 2020; Swart 2021), we believe this result must also be seen in the following light. Apparently, such basic algorithmic awareness can function separately from critical reflection on the societal effects of algorithmic workings, which is in line with Koenig’s (2020) distinction between basic algorithmic literacy and societal and self-reflective forms of algorithmic literacy. One might speculate that such societal and self-reflective forms of algorithmic literacy are more common in adult users, that these findings are thus typical for younger users, and that reflectivity in this domain will increase with age. However, given the situatedness of algorithmic awareness and imaginations reported in the literature for adults as well, we must be cautious in assuming that such self-reflective insights develop ‘naturally’ in adult users.

Our findings with regard to what we have termed critical evaluation can provide further insight into this issue. In contrast to Bucher’s (2017) point that users come to imagine algorithmic workings when algorithms interrupt what users expect them to do, or contradict the unconscious hypotheses users hold about the workings of algorithms, our users mostly seem to form their imaginations of algorithms based on their ‘smooth’ and positive experiences with the workings of algorithms. Our results confirm Fletcher and Nielsen’s (2019) finding that young users hold favourable attitudes towards platforms. As noted in our results section, for the young users in our study such a positive evaluation seemed to be closely related to a sense of control over, and trainability of, the algorithm. Together with the notion that our users have only a functional understanding of the algorithmic workings of their social media platforms, expressed almost exclusively in terms of their use of the features and functions designed by their platforms, this raises the question of whether such positive evaluations are not the result of a limited understanding of the implications of algorithmic workings. Or worse, it must be considered that such positive evaluations might be the result of a deliberate decision on the part of platform developers to design their interfaces in such a way that a sense of agency is produced, while at the same time offering the user only a limited spectrum of options. In other words, our respondents might feel that they exert algorithmic power, whereas they might effectively be lured into platform affordances which, at another level, must be read as forms of surveillance and control serving commodification, as Zuboff (2015) has argued. This is fundamentally different from the kind of control Cotter (2019) describes, in which users try to influence the algorithmic workings of platforms to ‘game’ the system.

Although we have already cautioned that some of our results must be viewed in terms of the particular target population in our study, we believe the predominantly positive experiences of these young users, as well as their sense of agency, might hold for young users who engage in media practices similar to those of our target group. However, again, we must be careful about treating age as an explanatory factor and should instead consider the kind of media experiences these users are familiar with and have grown up with in recent years. One could speculate that the environments younger generations grow up in often offer automatically-selected content based on algorithmic filtering without users specifically asking for content (e.g., TikTok’s ‘For You’ page). In such environments, algorithmically-driven personal selection of content, instant presentation of content, and interacting with the platform to ‘manage’ such selection are common, in contrast to media environments in which users actively search for information. This might contribute to their easy ‘acceptance’ of algorithmic filtering and generate certain assumptions and knowledge with regard to algorithmic awareness, imagination and power, including a positive sense of agency.

Implications for media literacy

What might be the implications of our findings for a reformulation of media literacy, also in the context of the transition to the ‘new new media ontology’ we described in our introduction?

First of all, our study shows that the experiential knowledge young users obtain from their engagement with social media platforms does not guarantee full algorithmic literacy (Powers 2017; Swart 2021). The students in this study did learn, through frequent use, how to effectively ‘manage’ or personalize their algorithmically-driven social media platforms. However, they did not adopt a perspective that considers the interfaces and data systems and their algorithmically-driven workings ‘themselves’ from a meta perspective (Jacques et al. 2020), engage in a critical analysis of their interaction with the system (Jacques et al. 2020; Perez Vallejos et al. 2017), or consider the societal-political consequences of algorithmic workings. Although critical media literacy programmes can build upon the experiential knowledge of students, such programmes should broaden students’ algorithmic awareness towards a more system-based perspective on the workings of platforms and data systems, which would allow them to evaluate their own interaction with platforms in the context of how these platforms and data systems function in our society. Teaching about different types of algorithms (e.g., Karimi, Jannach, and Jugovac 2018) can be helpful in building such insights, as illustrated in the sketch below.
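As one hypothetical illustration of what teaching ‘different types of algorithms’ could look like in practice, the sketch below (Python, with invented article data and scoring of our own) contrasts two recommender families commonly distinguished in surveys such as Karimi, Jannach, and Jugovac (2018): a content-based selection, which narrows towards the individual user’s reading history, and a popularity-based selection, which is identical for every user. It is a simplified classroom example, not a description of any real news platform.

```python
# Classroom sketch: two simple recommendation strategies over the same articles.
# Data and scoring are invented for illustration; real systems are far more complex.

articles = {
    "a1": {"topic": "climate", "reads": 120},
    "a2": {"topic": "sports",  "reads": 900},
    "a3": {"topic": "climate", "reads": 40},
    "a4": {"topic": "economy", "reads": 300},
}
user_history = ["climate", "climate", "economy"]  # topics this user read before

def content_based(articles, history, k=2):
    # Score each article by how often its topic appears in the user's own history.
    scores = {aid: history.count(a["topic"]) for aid, a in articles.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def popularity_based(articles, k=2):
    # Ignore the individual user: recommend whatever is read most overall.
    return sorted(articles, key=lambda aid: articles[aid]["reads"], reverse=True)[:k]

print(content_based(articles, user_history))  # ['a1', 'a3'] -> personalised niche
print(popularity_based(articles))             # ['a2', 'a4'] -> same list for everyone
```

Comparing the two output lists with students makes the system-level point tangible: the same pool of articles yields a personalised niche under one strategy and a shared, but equally curated, selection under the other.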

Furthermore, another important task of such programmes is to widen students’ views towards the collective, societal and political dimensions of the workings of algorithms (see also Sefton-Green and Pangrazio (2021), who have pleaded for such a socio-political analysis in media literacy programmes). Our analyses showed that although students have a basic understanding of the working of algorithms, they did not easily translate such knowledge into how our society might be affected by, for instance, the automatic selection of information. Educating students that platform algorithms also make judgements, and select and sort information in ways that might create unwanted personal or societal effects such as ideological polarization or unconscious bias, then seems a key mission of such media literacy programmes.

Given our finding that these young users experience agency over their platforms without critically assessing the algorithmic workings of those platforms, this study underlines the need to provide students with algorithmic power. Students need to learn to critically assess the power dynamic between the algorithmic workings of platforms and their own agency, and to develop strategies to enlarge their power as users and as citizens. In line with the fact that humans in the current era ‘act together’ with technologies more than before (Beer 2009), and that technology also acts as knowing and learning (for instance, by gathering data about users and forming hypotheses about what content they would like to see), it becomes key, if young learners are to keep control over their own learning, to teach them how and where they can gain more agency to act together with technologies to their own benefit.

Finally, as mentioned in our introduction, the connectivity paradigm brought particular challenges to our attention around coping with diversity, crossing boundaries, and dealing with improvisation and unpredictability. In the current era, such challenges seem absent, as they are mostly automated and already solved for us: algorithms take over part of these coping and boundary problems when they create customized niches for us, providing us with just the right level of sameness and diversity to keep us interested. As our study illustrates, much of this goes unnoticed by users. This means that in the current era, critical media literacy needs to include the ‘unpacking’ of how such niches are constructed and automated, in order to make it possible for users to assess what boundary-crossing and diversity challenges remain. Thus, next to learning how to bridge boundaries, the issue also becomes how to gain awareness of, control, and exercise agency over the automated bridges and boundaries created by data systems and platforms.

Limitations

We should be aware that our research took place with a specific sample: students from the pre-vocational track. Algorithmic awareness and knowledge can be expected to be unequally distributed across different groups in society (Gran, Booth, and Bucher 2020), which is also likely to be the case between students from different schooling tracks. Apart from the limitations that come with the small sample size, our findings might therefore not be generalizable to the larger secondary school population, nor to youth in general. However, even if part of these findings is particular to this group of students, our study fills an important gap in the literature on the kind of knowledge young people generate spontaneously on the basis of their social media use. Such knowledge in turn generates interesting propositions and points of attention that need to be considered in the reformulation of new media literacy programmes that aim to address the technological changes of our current era.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Dynamics of Youth, Utrecht University.

References

  • Beer, D. 2009. “Power Through the Algorithm? Participatory Web Cultures and the Technological Unconscious.” New Media & Society 11 (6): 985–1002. https://doi.org/10.1177/1461444809336551
  • Benjamin, R. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Books.
  • Bucher, T. 2017. “The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms.” Information, Communication & Society 20 (1): 30–44. https://doi.org/10.1080/1369118X.2016.1154086
  • Bucher, T. 2018. If … . Then: Algorithmic Power and Politics. Oxford: Oxford University Press.
  • Cotter, K. 2019. “Playing the Visibility Game: How Digital Influencers and Algorithms Negotiate Influence on Instagram.” New Media & Society 21 (4): 895–913. https://doi.org/10.1177/1461444818815684
  • Cotter, K., and B. C. Reisdorf. 2020. “Algorithmic Knowledge Gaps: A New Dimension of (Digital) Inequality.” International Journal of Communication 14: 745–765.
  • Eslami, M., A. Rickman, K. Vaccaro, A. Aleyasen, A. Vuong, K. Karahalios, K. Hamilton, and C. Sandvig. 2015. “‘I Always Assumed That I Wasn’t Really That Close to [Her]’: Reasoning About Invisible Algorithms in News Feeds.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April, edited by B. Begole, 153–162. New York: Association for Computing Machinery.
  • Flaxman, S., S. Goel, and J. M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80 (S1): 298–320. https://doi.org/10.1093/poq/nfw006
  • Fletcher, R., and R. K. Nielsen. 2019. “Generalised Scepticism: How People Navigate News on Social Media.” Information, Communication & Society 22 (12): 1751–1769. https://doi.org/10.1080/1369118X.2018.1450887
  • Gibson, K., G. Rimmington, and M. Landwehr-Brown. 2008. “Developing Global Awareness and Responsible World Citizenship with Global Learning.” Roeper Review 30 (1): 11–23. https://doi.org/10.1080/02783190701836270
  • Gillani, N., A. Yuan, M. Saveski, S. Vosoughi, and D. Roy. 2018. “Me, My Echo Chamber, and I: Introspection on Social Media Polarization.” Proceedings of the 2018 World Wide Web Conference, 823–831.
  • Graham, S. 2004. “The Software-Sorted City: Rethinking the ‘Digital Divide’.” In The Cybercities Reader, edited by S. Graham, 324–334. London: Routledge.
  • Gran, A.-B., P. Booth, and T. Bucher. 2020. “To be or Not to be Algorithm Aware: A Question of a New Digital Divide?” Information, Communication & Society, 1–18.
  • Haan de, M., K. Leander, A. Ünlüsoy, and F. Prinsen. 2014. “Challenging Ideals of Connected Learning: The Networked Configurations for Learning of Migrant Youth in the Netherlands.” Learning, Media and Technology 39 (4): 507–535. https://doi.org/10.1080/17439884.2014.964256
  • Hargittai, E., J. Gruber, T. Djukaric, J. Fuchs, and L. Brombach. 2020. “Black Box Measures? How to Study People’s Algorithm Skills.” Information, Communication & Society, 1–12.
  • Hargittai, E., and M. Micheli. 2019. “Internet Skills and Why They Matter.” In Society and the Internet. How Networks of Information and Communication are Changing Our Lives, edited by M. Graham, and W. H. Dutton, 109–126. Oxford University Press.
  • Ito, M., K. Gutiérrez, S. Livingstone, B. Penuel, J. Rhodes, K. Salen, J. Schor, J. Sefton-Green, and C. Watkins. 2013. Connected Learning: An Agenda for Research and Design. Irvine, CA: Digital Media and Learning Research Hub.
  • Jacques, J., J. Grosman, A. Collard, Y. Oh, A. Kim, and H. Jeong. 2020. “In the Shoes of an Algorithm: A Media Education Game to Address Issues Related to Recommendation Algorithms.” The Journal of Education 3 (1): 37–62. https://doi.org/10.31058/j.edu.2020.31005
  • Jamieson, K. H., and J. N. Cappella. 2008. Echo Chamber: Rush Limbaugh and the Conservative Media Establishment. Oxford: Oxford University Press.
  • Karimi, M., D. Jannach, and M. Jugovac. 2018. “News Recommender Systems-Survey and Roads Ahead.” Information Processing and Management 54 (6): 1203–1227.
  • Klawitter, E., and E. Hargittai. 2018. ““It’s Like Learning a Whole Other Language”: The Role of Algorithmic Skills in the Curation of Creative Goods.” International Journal of Communication 12: 3490–3510.
  • Koenig, A. 2020. “The Algorithms Know Me and I Know Them: Using Student Journals to Uncover Algorithmic Literacy Awareness.” Computers and Composition 58: 102611. https://doi.org/10.1016/j.compcom.2020.102611
  • Light, B., J. Burgess, and S. Duguay. 2016. “The Walkthrough Method: An Approach to the Study of Apps.” New Media & Society 20 (3): 881–900.
  • Loveless, A., and B. Williamson. 2013. Learning Identities in a Digital Age: Rethinking Creativity, Education and Technology. London: Routledge.
  • O'Donnell, E., S. Lawless, M. Sharp, and V. Wade. 2015. “A Review of Personalised e-Learning: Towards Supporting Learner Diversity.” International Journal of Distance Education Technologies 13 (1): 22–47. https://doi.org/10.4018/ijdet.2015010102
  • Pangrazio, L., and J. Sefton-Green. 2020. “The Social Utility of ‘Data Literacy’.” Learning, Media and Technology 45 (2): 208–220. https://doi.org/10.1080/17439884.2020.1707223
  • Pariser, E. 2011. The Filter Bubble: What the Internet is Hiding from You. London: Penguin.
  • Perez Vallejos, E., A. Koene, V. Portillo, L. Dowthwaite, and M. Cano. 2017. “Young People’s Policy Recommendations on Algorithm Fairness.” Proceedings of the 2017 ACM on Web Science Conference - WebSci ‘17.
  • Perrotta, C., K. N. Gulson, B. Williamson, and K. Witzenberger. 2021. “Automation, APIs and the Distributed Labour of Platform Pedagogies in Google Classroom.” Critical Studies in Education 62 (1): 97–113. https://doi.org/10.1080/17508487.2020.1855597
  • Perrotta, C., and N. Selwyn. 2020. “Deep Learning Goes to School: Toward a Relational Understanding of AI in Education.” Learning, Media and Technology 45 (3): 251–269. https://doi.org/10.1080/17439884.2020.1686017
  • Powers, E. 2017. “My News is Filtered? Awareness of News Personalization Among College Students.” Digital Journalism 5 (10): 1315–1335.
  • Roozenbeek, J., and S. Van der Linden. 2019. “Fake News Game Confers Psychological Resistance Against Online Misinformation.” Palgrave Communications 5 (1): 1–10. https://doi.org/10.1057/s41599-019-0279-9
  • Sefton-Green, J., and L. Pangrazio. 2021. “The Death of the Educative Subject? The Limits of Criticality Under Datafication.” Educational Philosophy and Theory 54 (12): 1–10.
  • Swart, J. 2021. “Experiencing Algorithms: How Young People Understand, Feel About, and Engage with Algorithmic News Selection on Social Media.” Social Media + Society 7 (2): 1779. https://doi.org/10.1177/20563051211008828.
  • Ünlüsoy, A., K. M. Leander, and M. de Haan. 2022. “Rethinking Sociocultural Notions of Learning in the Digital Era: Understanding the Affordances of Networked Platforms.” E-Learning and Digital Media 19 (1): 78–92. https://doi.org/10.1177/20427530211032302
  • Zarouali, B., S. C. Boerman, and C. H. de Vreese. 2021. “Is This Recommended by an Algorithm? The Development and Validation of the Algorithmic Media Content Awareness Scale (AMCA-Scale).” Telematics and Informatics 62: 101607. https://doi.org/10.1016/j.tele.2021.101607
  • Zuboff, S. 2015. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30 (1): 75–89.