
Students Evaluating and Corroborating Digital News

Pages 549-565 | Received 20 Apr 2020, Accepted 27 Jan 2021, Published online: 22 Mar 2021

ABSTRACT

In this study, we investigate how 2,216 Swedish upper secondary school students’ performances of sourcing, evaluating evidence, and corroborating digital news relate to their background, educational orientation, attitudes, and self-rated skills. We used a combined online survey and performance test to investigate students’ abilities to evaluate online news. Findings both confirm and challenge previous research results about civic online reasoning. The most prominent effect on performance is the appreciation of credible news. This attitude is related to students’ abilities to source news, evaluate texts and images, and corroborate a misleading climate change website. We also found a digital civic literacy divide between students in theoretical and vocational programs with different knowledge, skills, and attitudes. Noting the democratic challenge of misinformation, we call for more research on how education can support digital civic literacy in general and specific ways.

Introduction

In a world of information disorder and infodemics, researchers and authorities call for actions against misinformationFootnote1 (European Commission, Citation2018; Wardle & Derakhshan, Citation2017; Zarocostas, Citation2020). In this call, education is described as a key defense against disinformation (European Commission, Citation2018). Since automated fact-checking and legislation have many limitations, researchers note that citizens, and not least young people growing up in a digital world, need updated knowledge, skills, and attitudes to use new media wisely (Carlsson, Citation2019; Mihailidis, Citation2018). The ability to read and evaluate digital information has been described as a “survival skill” for citizens in a digital world (Eshet, Citation2004). Today, international and national guidelines for education emphasize the importance of students’ media and information literacy in the fight against misinformation. Nevertheless, many students struggle to determine the credibility of online information (Breakstone et al., Citation2019; Ku et al., Citation2019; Nygren & Guath, Citation2019).

From a psychological point of view, there are many reasons why it is difficult for people to determine the credibility of online information. People may follow misleading visual cues and use flawed heuristics to assess the content’s trustworthiness (Fogg et al., Citation2003; Metzger & Flanagin, Citation2013; Sundar et al., Citation2007). In addition, assessments may be influenced by confirmation biases (Kahan, Citation2017; Nickerson, Citation1998). However, it has also been noted that young people’s knowledge, skills, and attitudes may support their abilities to determine the credibility of digital news (Nygren & Guath, Citation2019).

Considering that accurate information is a pivotal part of democracy, it is of great importance to acquire a better understanding of adolescents’ abilities to assess the credibility of online news. Not least since it is difficult to correct false and biased perceptions, even when people discover that they have been misinformed (De Keersmaecker & Roets, Citation2017; Lewandowsky et al., Citation2012). Civic online reasoning offers a prescriptive theory of how one ought to fact-check online information (McGrew et al., Citation2017). Civic online reasoning has been noted among professional fact-checkers (Wineburg & McGrew, Citation2019) and is defined as “the ability to effectively search for, evaluate, and verify social and political information online” (McGrew et al., Citation2018, p. 1). Based upon theories of civic online reasoning, students’ abilities to determine the credibility of online information have been assessed (Breakstone et al., Citation2019; McGrew et al., Citation2018). However, previous research has investigated only to a limited extent how civic online reasoning relates to students’ backgrounds, self-rated skills, and attitudes (Nygren & Guath, Citation2019). In the current study, we investigate how 2,216 Swedish upper secondary school students’ education, self-rated skills, and attitudes are related to civic online reasoning to better understand the challenges of misinformation.

Previous Research

News consumption can support democratic engagement and create an understanding of society, but it is also associated with segregation, in terms of access to and interest in news among Swedish adolescents (Kruikemeier & Shehata, Citation2017; Lindell & Hovden, Citation2018). Similarly, research has identified a digital divide in productive use of digital media pertaining to inequalities associated with gender and class rather than access to digital technology (Hargittai, Citation2001; van Dijk, Citation2020). Access to computers and digital media does not automatically bridge this divide, since students with low socio-economic status may spend more time online on entertainment and simple tasks than peers with better support from home (Deursen & Dijk, Citation2014; Hatlevik et al., Citation2015).

Determining whom to trust online is complex and intellectually challenging, since source critical thinking has multiple interlinked dimensions (Nygren et al., Citation2020; Sundar et al., Citation2007). The ability to successfully navigate false, biased, and credible information has been linked to cognitive abilities and flexible thinking, whereas motivated reasoning and confirmation bias are related to less efficient navigation (Flanagin et al., Citation2018; Pennycook & Rand, Citation2018). People also differ in how aware they are of malicious actors online; low awareness can leave them with a one-sided view of information as credible (Metzger et al., Citation2015). Naïve internet users may trust almost anyone and are easily deceived, while more informed users are more skeptical when dealing with news and issues of security and privacy (Ku et al., Citation2019; Vraga & Tully, Citation2019).

The importance of literacy and academic literacy has been underscored in previous research (Cummins & Swain, Citation2014; Moje, Citation2007; Shanahan & Shanahan, Citation2012). Students’ abilities to determine credibility have been linked to their disciplinary literacy, for example, their subject-specific knowledge and habits of mind (Nygren & Guath, Citation2019; Nygren, Haglund, et al., Citation2019). In theory and practice, scholars have outlined different aspects of disciplinary literacy that are necessary for evaluating information in different academic subjects. This research highlights how expertise in a discipline includes content knowledge, skills, and attitudes (Nygren, Haglund, et al., Citation2019; Shanahan et al., Citation2011; Wineburg, Citation1991, Citation1998). Among experts, it has been noted that fact-checkers are particularly skilled at using digital resources to double-check information (i.e., read laterally); moreover, they are able to corroborate information in more skillful ways than trained historians (Wineburg & McGrew, Citation2019). It has also been noted that people who are skilled at manipulating images are more adept at identifying manipulated images (Shen et al., Citation2019). Other research indicates that students with an educational orientation in the arts are better at identifying manipulative information (Nygren & Guath, Citation2019).

Performance studies on teenagers’ abilities to evaluate digital news are rare. However, a growing number of studies highlight how young people struggle to determine credibility in digital environments (Breakstone et al., Citation2019; Ku et al., Citation2019; McGrew et al., Citation2018; Nygren & Guath, Citation2019). It is evident that young people may not be as skilled as they think they are at navigating online information (Enochsson, Citation2018; Nygren & Guath, Citation2019). Even with access to a great amount of information, it may be difficult for young people to determine what sources to trust and how to make use of resources like Wikipedia and navigate socio-scientific controversy in digital environments (Blikstad-Balas, Citation2016; Flanagin et al., Citation2018; Haider & Sundin, Citation2019; Solli, Citation2019). Students may struggle to find and verify relevant information online, since they confuse popularity rankings with quality; for instance, they tend to click on top-listed hits on Google (Pan et al., Citation2007; Sundin & Carlsson, Citation2016). Experts, on the other hand, often practice click restraint and corroborate information in more sophisticated ways (Wineburg & McGrew, Citation2019).

Previous explorative research indicates that the appreciation of access to credible news and a recognition of the difficulty of evaluating sources characterize a constructive mind-set when determining the credibility of digital news. Conversely, over-confidence in one’s ability to search for and review information online has been noted as indicative of the opposite (Nygren & Guath, Citation2019). One possibility is that self-reflection is related to science curiosity (Kahan et al., Citation2017), which has been linked to a better ability to assess the credibility of online information. The diverging ability to assess the credibility of online news has been described as a digital divide, which, in turn, has been linked to the concept of news seekers and news avoiders (Nygren & Guath, Citation2019; Strömbäck et al., Citation2013). Young people with different attitudes toward news may or may not learn to appreciate and navigate credible news media (Lindell & Hovden, Citation2018; Nygren & Guath, Citation2019). Attitudes toward the importance of news differ between students in theoretical and vocational programs in Sweden: following the news is considered a low-prestige activity among students in vocational programs, whereas the opposite may be true for students in theoretical programs (Lindell, Citation2018). Despite the importance of the ability to determine the credibility of news, studies investigating the link between students’ abilities and their backgrounds, education, attitudes, and self-rated skills are lacking.

Theoretical Considerations

Media and information literacy (MIL) can be conceptualized as an umbrella term that includes multiple literacies that are useful and necessary to navigate online information (Carlsson, Citation2019; Koltay, Citation2011a, Citation2011b). MIL can be defined as knowledge needed to access, evaluate, analyze, and create information online and includes a number of digital literacies, such as photo-visual literacy and information literacy (Aufderheide, Citation1993; Eshet, Citation2004; Hobbs, Citation2010; Leaning, Citation2019; Livingstone, Citation2004). Advocates of MIL underscore how all citizens need to be able to access, evaluate, and draw conclusions regarding social and political topics based upon digital news.

Abilities to combine multiple literacies have been described as transliteracy: “the ability to read, write and interact across a range of platforms, tools and media” (Thomas et al., Citation2007). Theories of transliteracy have recently been linked to civic online reasoning, and researchers note how skilled fact-checkers possess transliterate abilities that ordinary people also need to navigate digital news (Frau-Meigs et al., Citation2020). Civic online reasoning is a prescriptive theory, developed by the Stanford History Education Group, stemming from empirical studies identifying how professional fact-checkers are more skilled than historians and students when evaluating online information (Wineburg & McGrew, Citation2019). Specifically, the theory looks at three dimensions that fact-checkers focus on when evaluating the credibility of information, namely how fact-checkers (a) identify the source of information (sourcing), (b) identify if the information is based upon reliable information (evidence), and (c) verify information by using multiple independent sources (corroboration). The three dimensions relate to the questions: (a) Who is behind the information? (b) What is the evidence? and (c) What do other sources say? (McGrew et al., Citation2017, Citation2018). The importance of scrutinizing the source, evaluating the presented evidence, and corroborating claims can be linked to theories on information literacy from the 1980s (Breivik & Gee, Citation1989). In this study, we use the theory of civic online reasoning to better understand some of the complexities of determining the credibility of digital news.

Present Study

In light of previous research, it is imperative to investigate the question: How do students’ abilities to determine the credibility of digital news relate to their backgrounds, education, attitudes, and self-reported abilities? To our knowledge, previous research has not investigated this search and fact-checking ability in relation to background and self-reports.

We partially replicated the relationships that were found in a small-scale exploratory study of 483 Swedish students’ civic online reasoning (Nygren & Guath, Citation2019) in a larger and more representative sample of Swedish adolescents, including students in vocational programs. Departing from the dimensions of civic online reasoning (Wineburg & McGrew, Citation2019), we measured performance as (a) sourcing: awareness of source bias, (b) corroboration: corroborating information by comparing multiple sources, and (c) evaluating evidence: assessment of the use of evidence. In line with previous research, and with the Swedish study by Nygren and Guath (Citation2019) testing students’ skills in relation to background, attitudes, and self-rated abilities, we hypothesized the following:

  • H1: participants speaking Swedish at home will perform better on items measuring skills of sourcing. H1 can be linked to previous research highlighting how bilingual students with immigrant backgrounds may have special challenges in school performance (Cummins & Swain, Citation2014; Schulz et al., Citation2010).

  • H2: the higher the participants rate the importance of access to credible news, the better they will perform on items measuring skills of evaluating evidence. H2 is associated with research highlighting the importance of science curiosity (Kahan et al., Citation2017) and source trust (Haider & Sundin, Citation2019).

  • H3: participants with an educational orientation toward arts will perform better on items measuring skills of corroboration and evaluating evidence. H3 may be associated with research highlighting the importance of domain specific skills when analyzing manipulated images (Shen et al., Citation2019) and disciplinary literacy (Shanahan et al., Citation2011).

  • H4: higher self-rated abilities to search and fact-check information will be associated with poorer performance on items measuring skills of evaluating evidence. H4 links to research noting how adolescents may overestimate their information literacy skills (Enochsson, Citation2018).

  • H5: higher ratings regarding reliability of internet information will be associated with poorer performance on items measuring skills of evaluating evidence. H5 is associated with research identifying how skeptical attitudes toward online information may be productive (Vraga & Tully, Citation2019).

  • H6: participants indicating girl as their gender will be associated with better performance on items measuring skills of corroboration. H6 links to research highlighting how girls often outperform boys in school (Bedard & Cho, Citation2010; Schulz et al., Citation2010).

In the present study, we used test items for all three dimensions of civic online reasoning; however, we did not include all of the test items used in Nygren and Guath (Citation2019) in order to allow for a new test item of corroboration that is more in line with what fact-checkers do when they corroborate information using digital resources (Wineburg & McGrew, Citation2019).Footnote2 The exclusion of items was also based upon considerations about fitting the survey into the context of a large-scale citizen science project.

Method

Participants

We asked students in Sweden to fill out a survey with self-rated questions, combined with a performance test, prior to a mass experiment focused on online fact-checking. In total, 2,356 participants, aged 16–19, from 33 municipalities, recruited via their teachers, filled out the survey and completed a test in the classroom using a link shared by the teacher. However, 140 participants were excluded from the analysis: 50 participants indicated a non-specified program, 54 participants did not wish to state their gender identity, and 36 identified as a gender other than girl or boy. The number of participants in each cohort was 1,003 in the first year of upper secondary school, 761 in the second year, and 452 in the third year. Participation was voluntary, with informed consent obtained from all participants. In line with ethical guidelines, we did not collect any traceable data.Footnote3 All 2,356 participants agreed to participate; after excluding the above-mentioned participants, we analyzed the remaining 2,216 responses (1,229 girls, 987 boys), 94% of the total.

Participants studied a wide range of programs, but primarily theoretical higher-education preparatory programs such as Social Science (32%), Natural Science (18%), Economics (16%), Technology (10%), Arts (3%), and Humanities (1%). In addition, students from vocational programs participated: Child and Recreation (4%), Vehicle and Transport (3%), Building and Construction (3%), Natural Resource Use (2%), Electricity and Energy (2%), Care and Nursing (2%), and Business and Administration (2%). The distribution of responses roughly matches the national statistics on students per program at the time the data were collected: Social Science 19%; Natural Science 15%; Economics 15%; Technology 10%; Arts 7%; Humanities 1%; Child and Recreation 3%; Vehicle and Transport 4%; Building and Construction 4%; Natural Resource Use 3%; Electricity and Energy 5%; Care and Nursing 3%; and Business and Administration 3%.Footnote4 In the analyses, we merged the smaller upper secondary programs with the larger groups, resulting in seven different groups, two vocational: (a) Vocational Construction Trades (n = 202) and (b) Vocational Human (n = 236), and five higher-education preparatory theoretical programs: (c) Arts and Humanities (n = 96); (d) Economics (n = 353); (e) Natural Science (n = 395); (f) Social Science (n = 718); and (g) Technology (n = 216) (see also Appendix A). Different programs in Sweden attract students with different backgrounds, interests, and ambitions. Programs usually host a diverse group of students; however, vocational programs usually attract more students with lower grades and working-class parents, while theoretical programs attract students with higher grades and middle-class parents (Forsberg, Citation2015).

Material and Procedure

Objective Abilities

The survey and performance test consisted of 16 questions, whose contents and measured variables are presented in Table 1 below. We used test items from Nygren and Guath (Citation2019) that were inspired by McGrew et al. (Citation2017, Citation2018) and piloted in collaboration with teachers and researchers in education and psychology. In addition to the items previously used in research to test skills of sourcing and evaluating evidence, we designed a test item to measure corroboration, with inspiration from McGrew et al. (Citation2018), prompting students to corroborate a biased website. The site that we directed students to (www.klimatupplysningen.se) has been described as deceptive pseudoscience by researchers (Häggström, Citation2008) and as climate change denial by Swedish public service television (SVT) and other established news media. In 2010, it received the award of “Misleader of the Year” (Folkbildning, Citation2010).Footnote5 We piloted this item with a total of 173 students to safeguard validity and reliability. In the classrooms, we collected feedback from pilot participants; thereafter, we analyzed their open-ended justifications for their responses to make sure that they made sense of the items. We found that students understood the item and could successfully corroborate it.

Table 1. Overview of questions and measured constructs in the online survey. The measured constructs refer to abilities that have been identified as important for detecting false news online. The questions are referred to in the text by the names in parentheses.

The items measuring sourcing – students’ abilities to separate ads from news – were two screenshots from two popular evening papers (www.expressen.se and www.aftonbladet.se), with a complicated mix of ads and news. The item measuring students’ abilities to evaluate evidence was a manipulated photo (by Kai Bastard) showing a smoking girl with black veins.Footnote6 The item measuring corroboration was a website about climate change, arguing that there is no evidence for climate change, and illustrating this with a histogram with a flat regression line. Instructions prompted students to visit the website and corroborate the information online.

Attitudes and Self-rated Abilities

The self-rated skills and attitudes questions were the same as those used in Nygren and Guath (Citation2019), which were designed to capture basic attitudes that are part of digital literacy and civic online reasoning. The questions measure two attitudes: (a) “internet info reliability” and (b) “credibility importance”. Internet info reliability is intended to measure the participants’ general attitude regarding the reliability of online information. The attitude is related to the link between productive critical thinking, news habits, and skepticism about online information (Ku et al., Citation2019; Vraga & Tully, Citation2019). Additionally, the attitude may be connected to cognitive abilities and flexible thinking related to people’s abilities to navigate online information (Flanagin et al., Citation2018; Pennycook & Rand, Citation2018). Credibility importance is intended to measure the importance the participant attaches to trusting credible sources and scientific evidence. Knowing what sources to trust has been noted as central in online navigation and a key aspect of infrastructural meaning-making (Haider & Sundin, Citation2019). This question also taps into the concept of science curiosity (Kahan et al., Citation2017) and news seekers (Strömbäck et al., Citation2013), characterizing people who seek information with a curious mindset directed toward credible information and news. People with science curiosity have been described as less vulnerable to partisan polarization. Both attitudinal questions were measured on a 5-point scale. The question about internet info reliability was phrased as: “How much of the information on the internet that you consume do you regard as reliable?” and the question about credibility importance was phrased as: “How important is it for you to consume news that is credible?”

Finally, also in line with Nygren and Guath (Citation2019), we measured two self-rated skills: (a) fact-checking ability and (b) search ability. Fact-checking ability is intended to measure people’s self-rated ability to determine credibility, while search ability is intended to measure their self-rated ability to find credible information online and corroborate information. Previous results (Nygren & Guath, Citation2019) indicate that these may also measure overconfidence (Kruger & Dunning, Citation1999), in the sense that people with poor skills in an area often tend to overestimate their actual skill. Both self-rated skills were measured on a 5-point scale. The question about fact-checking ability was phrased as: “How good do you think you are at being source critical about the information you find on the internet?” The question about search ability was phrased as: “How good are you at finding the information you are looking for on the internet?”

The survey also included a number of background variables: (a) gender, (b) program of study, and (c) language spoken at home. An overview of the questions and measured constructs is provided in Table 1.

Results

Analysis of Results

In order to investigate what predicted performance on the civic online reasoning items, we fitted logistic regressions with fact-checking ability, search ability, gender, upper secondary program, language at home, internet info reliability, and credibility importance as independent variables.

The dependent variable in a logistic regression is categorical (here, correct/incorrect), and the coefficients describe the log odds for a correct answer, with a positive coefficient denoting increased log odds for a correct answer and vice versa. The items measuring sourcing skills (Aftonbladet and Expressen) were multiple-choice questions, and an answer was coded as correct only if every selected choice was correct. This coding was in line with a recent study by Nygren et al. (Citation2020) measuring the same abilities. We used a function that selected the model with the lowest Akaike information criterion (AIC), adding each predictor sequentially (R Core Team, Citation2019). Here, we only report the significant results in the text; for a full specification of the models (including model fit), we refer to the tables in Appendix B.
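As an illustration, the following minimal sketch shows how such a selection procedure can be set up in R, the software used for the analysis. The data frame dat and all column names are hypothetical placeholders, not the authors’ actual variables, and the published analysis may differ in its details.

# Outcome: 1 if the Aftonbladet sourcing item was answered correctly
# (i.e., every selected alternative was correct), otherwise 0.
null_model <- glm(aftonbladet_correct ~ 1, family = binomial, data = dat)

# Forward selection with base R's step(): predictors are added one at a
# time, and the model with the lowest AIC is retained at each step.
selected <- step(
  null_model,
  scope = list(
    lower = ~ 1,
    upper = ~ fact_checking + search_ability + gender + program +
      language_at_home + internet_reliability + credibility_importance
  ),
  direction = "forward"
)

summary(selected)  # coefficients are log odds for a correct answer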

Overview

All self-rated abilities were measured on a 5-point scale. Beginning with self-rated abilities, the participants considered themselves quite apt at source criticism on the internet (fact-checking: M = 3.7, SD = 0.8) and even better at finding information on the internet (search ability: M = 4.1, SD = 0.7). Participants considered it important to consume credible news (M = 4.1, SD = 1.0), and they rated the reliability of information on the internet just below the scale midpoint (M = 2.9, SD = 0.7).

The self-rated abilities were somewhat misaligned with the objective abilities: only 25% answered the Aftonbladet sourcing item correctly, 11% the Expressen item, and a mere 3% answered both sourcing items correctly. In addition, 66% answered correctly on smoking (measuring evidence), and 49% answered correctly on climate change (measuring corroboration).

Sourcing: Detecting Sponsored Material

For Expressen, there was a larger log odds for a correct answer [b = 0.68, z = 2.30, p = .022] when attending Arts and Humanities programs compared with baseline (Social Science).Footnote7

For Aftonbladet, there were four significant effects: for a unit increase on the rating of fact-checking ability, there was a larger log odds for a correct answer [b = 0.22, z = 2.94, p = .003]; for a unit increase on the rating of credibility importance, there was a larger log odds for a correct answer [b = 0.28, z = 4.47, p < .001]; for upper secondary school program, there was a larger log odds for a correct answer when attending the Natural Science [b = 0.41, z = 2.87, p = .004] or Technology program [b = 0.58, z = 3.20, p = .001], and a smaller log odds for a correct answer when attending the vocational Construction Trades program [b = −0.61, z = −2.58, p = .01], compared with baseline Social Science; and when speaking Swedish at home, there was a larger log odds for a correct answer [b = 0.43, z = 3.19, p = .001].
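To make these coefficients concrete, the following worked example (our illustration, not part of the reported analysis) converts the estimate for speaking Swedish at home on the Aftonbladet item into an odds ratio:

e^{b} = e^{0.43} \approx 1.54,

that is, speaking Swedish at home multiplies the odds of a correct answer by roughly 1.54, other predictors held constant. Taking the overall 25% correct rate on Aftonbladet as a rough (not covariate-adjusted) baseline,

\frac{0.25}{1 - 0.25} \times 1.54 \approx 0.51, \qquad p = \frac{0.51}{1 + 0.51} \approx 0.34,

corresponding to a predicted probability of about 34%.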

Evidence: Scrutinizing Comments and Images

For correctly evaluating evidence, there were six significant predictors: (a) for a unit increase on the rating of search ability, there was a larger log odds for a correct answer [b = 0.22, z = 3.08, p = .002]; (b) for a unit increase on the rating of credibility importance, there was a larger log odds for a correct answer [b = 0.17, z = 3.20, p = .001]; (c) when indicating gender “boy,” there was a larger log odds for a correct answer [b = 0.50, z = 4.73, p < .001] compared with baseline “girl”; (d) for a unit increase on the rating of internet info reliability, there was a smaller log odds for a correct answer [b = −0.20, z = −3.04, p = .002]; (e) for upper secondary school program, there was a smaller log odds for a correct answer when attending the vocational Human program [b = −0.42, z = −2.25, p = .024] or the theoretical Economics program [b = −0.36, z = −2.33, p = .020] compared with baseline Social Science, and a larger log odds for a correct answer when attending the Natural Science [b = 0.87, z = 6.45, p < .001] or Technology program [b = 0.49, z = 2.84, p = .004] compared with Social Science; and (f) when speaking Swedish at home, there was a larger log odds for a correct answer [b = 0.39, z = 3.21, p = .001].

Corroboration: Verifying Information Laterally

For corroborating the text on climate change correctly, there were three significant predictors: (a) for a unit increase on credibility importance, there was a larger log odds for a correct answer [b = 0.14, z = 2.78, p = .005]; (b) for a unit increase on internet info reliability, there was a smaller log odds for a correct answer [b = −0.22, z = −3.62, p < .001]; and (c) for attending the Natural Science program (compared with baseline Social Science), there was a larger log odds for a correct answer [b = 0.44, z = 3.35, p < .001].

Discussion

Results from this study measuring adolescents’ civic online reasoning skills in relation to background variables, attitudes, and self-rated abilities showed that valuing credible news, studying in the Natural Science program, and speaking Swedish at home were often related to better performance. Rating information on the internet as reliable or studying in a vocational program were related to poorer performance. In addition, the current study provides new insights on how background, education, and self-rated abilities affect students’ performance when corroborating misleading online information about climate change. We find that determining the credibility of this website was a challenge for a majority of the students; however, students with subject-specific knowledge in the Natural Science program seemed better at navigating this online information. In addition, students who appreciated reliable news were better at debunking the climate change denial. The results partly corroborate previous results in the smaller explorative study by Nygren and Guath (Citation2019). We discuss the results in relation to our hypotheses and conclude with recommendations for future research.

Language at Home

Speaking Swedish at home had a positive effect on performance on sourcing (Aftonbladet), in line with H1 and the previous study by Nygren and Guath (Citation2019). In addition, speaking Swedish at home was also related to better performance on the item testing skills to evaluate evidence (smoking). We speculate that the relationship is linked to basic literacy (which, in turn, is connected to information literacy), literacy skills among bilingual students (Cummins & Swain, Citation2014), and news habits among different groups of teenagers (Lindell, Citation2018; Lindell & Hovden, Citation2018). We do not find this effect on corroboration (climate change denial), which highlights that other aspects are important when navigating online – like subject-specific knowledge and appreciating reliable news. Further investigation is needed in order to provide concrete instructions on how this can be addressed in multicultural classrooms.

Credibility Importance

Results underscore the importance of the appreciation of credible news (credibility importance). In line with H2, we find that the importance of credibility relates to better performance on items testing skills of evaluating evidence. This result echoes findings in previous research and expands the explanatory value of credibility importance to also include skills of sourcing and corroboration. We find that students with higher ratings on the importance of access to credible news performed better on all three dimensions of civic online reasoning – sourcing, evidence, and corroboration. A possible explanation for the association between credibility importance and performance may be that valuing credible news is linked to science curiosity (Kahan et al., Citation2017), helping students to evaluate information with a better focus on facts beyond motivated reasoning.

In line with previous research, we also speculate that credibility importance may be linked to students’ trust in sources (Haider & Sundin, Citation2019) and habits of following public service TV and listening to the radio (Nygren & Guath, Citation2019). It is also possible that regarding access to credible news as important is linked to media habits. This is supported by research investigating the link between social media habits and source critical ability (Ku et al., Citation2019). We further speculate that this group may contain news seekers (Strömbäck et al., Citation2013), who find credible news in public service. Those students may be part of a culture where news is perceived as high status (e.g., Lindell, Citation2018; Lindell & Hovden, Citation2018). Our findings emphasize the importance of further investigation of what constitutes credibility importance and how such attitudes can be stimulated in education.

Educational Orientation

Results also point to the importance of educational orientation for performance. Partly in line with H3, we find that students in the Arts and Humanities and Technology programs performed well on items testing skills of sourcing and evaluating evidence, but not on the item testing the ability to corroborate online information.

In contrast to H3, we find that students in the Natural Science program performed significantly better than other students on the item measuring corroboration (climate change denial). They may have more knowledge about this issue than other students, helping them when reading laterally. Theories of disciplinary literacy predict that students in the Natural Sciences have subject-specific knowledge that makes it easier for them to search for credible information and determine the credibility of information relating to the natural sciences (Nygren, Haglund, et al., Citation2019; Shanahan & Shanahan, Citation2012). Students with an orientation toward the Natural Sciences also performed well on other items measuring sourcing and evidence, possibly highlighting a more general ability.

The fact that Technology students were able to debunk a manipulated image may be linked to their knowledge about digital image manipulation. This interpretation is supported by previous research, highlighting how debunking manipulated images may relate to technological expertise (Shen et al., Citation2019). We also speculate that the abilities of students in the Arts and Humanities programs to debunk misleading texts and images may be related to disciplinary literacy.

We also find that students in vocational programs and students in the theoretical Economics program performed significantly worse on items testing skills to identify the source and evaluate evidence. This may indicate a digital divide (van Dijk, Citation2020), further discussed below.

Search Ability and Fact-checking Ability

Previous findings regarding over-confidence problems among teenagers, noted by Nygren and Guath (Citation2019), were not supported in this replication. Contrary to our hypothesis (H4), we found that students claiming to be good at fact-checking performed well on items testing their abilities to identify the source, and those claiming to be good at searching for information performed better when evaluating evidence. The reason for these results may be that we used a larger and more representative sample of participating students; notably, we included students with vocational education. The updated curriculum in Sweden, emphasizing the importance of digital literacy, may also have influenced the results (Regeringskansliet, Citation2017). At the time of the survey, there was also a lively discussion about problems with “fake news” and people’s problems with determining credibility.

Our findings also relate to previous research highlighting how self-reflection and skills have complex links. Interestingly, different self-rated abilities were related to different abilities; fact-checking ability had a positive effect on sourcing (Aftonbladet), whereas search ability had a positive effect on evidence (smoking). Fact-checking ability measures the self-rated ability for source criticism, which is a central skill when sourcing, whereas search ability measures the self-rated ability to find and consume credible information. Nygren and Guath (Citation2019) did not identify this relation in their study. The finding is positive in two respects: (a) it gives an indication that the items measure what they intend to measure, and (b) it shows that the students possess a degree of self-knowledge. In related research, it was found that students who are good at debunking the smoking image often do this by comparing the content of the image with what they know from other sources of information (Nygren et al., Citation2020), which may be related to search ability.

It is also noteworthy that we did not find any effects of the self-rated skills of fact-checking or search ability on corroboration, which may indicate that students who are able to corroborate information like fact-checkers are not aware of their skills to search and evaluate online information. Another interpretation is that the effects of credibility importance, internet info reliability, and education on this item are far greater, hence overshadowing any effects of self-rated abilities.

Internet Info Reliability

The present results are in line with H5, showing that higher ratings of the reliability of internet information are associated with poorer performance on items measuring skills of evaluating evidence. Findings echo those in the previous explorative study regarding the negative relationship between performance and ratings of internet information as reliable (Nygren & Guath, Citation2019). Beyond previous research, we also find a negative relationship between trusting online information and abilities to corroborate misleading information about climate change. We speculate that the negative relationship between trusting internet information and skills of determining credibility may be explained by naïve internet users being easier to manipulate than those who are vigilant about misinformation. Previous research has noted how naïve attitudes may be problematic when reading news online (Vraga & Tully, Citation2019), which is supported by research indicating a relationship between critical ability and skepticism about news algorithms (Ku et al., Citation2019).

Another possible explanation comes from research emphasizing the challenge of combatting confirmation bias when evaluating credibility online (Metzger & Flanagin, Citation2015). Participants may perceive one-sided biased arguments as more credible, since they confirm their current views, thus avoiding cognitive dissonance, and consequently falling victim to confirmation bias (Metzger et al., Citation2015). This, in turn, may be attributed to a lack of flexible thinking among participants rating internet information as credible; they are less able to see beyond face value and their own biases (Flanagin et al., Citation2018).

However, previous research also shows that high credibility ratings of internet information can be linked to self-reported verification behavior: people who report that they often corroborate information also report that they trust online information (Flanagin & Metzger, Citation2000), which contrasts with what we found in this study. We believe that this difference may be explained by the fact that we tested abilities and related them to self-reported measures, while previous research only used self-reported measurements. Evidently, more research is needed on why people in general trust and distrust online information, and how this relates to their actual abilities to navigate credible, biased, and false information.

Gender

In contrast to our hypothesis (H6), we did not find that girls were better at corroborating information. The only effect of gender was on evaluating evidence (smoking), where girls were more often fooled than boys by this manipulative image and text. Even though girls often outperform boys in school (Bedard & Cho, Citation2010; Schulz et al., Citation2010), we found the opposite here. We speculate that the result may be related to the fact that the smoker in the image is a girl, evoking compassion rather than critical distance among female participants. This is consistent with research showing that images evoke more compassion in women than in men (Groen et al., Citation2013; Mercadillo et al., Citation2011).

We also speculate that this can be explained by habits of navigating manipulated body images. The newsfeeds of Swedish teenage girls are filled with celebrity gossip and misleading images of beauty (Nygren, Brounéus, et al., Citation2019). In addition, girls in Sweden spend more time on social media than boys, who spend more time on gaming (Statens medieråd, Citation2019). Images in social media may impact young women, giving them a biased view of the body (Holland & Tiggemann, Citation2016; Marengo et al., Citation2018; Walker et al., Citation2019). We speculate that the results can be related to previous research highlighting how teenage girls may have poor mental health literacy in a digital world of selfies, plastic surgery, and manipulated images (Lonergan et al., Citation2019).

Researchers suggest that media literacy training may be useful for navigating social media with images of manipulated bodies (Khazir et al., Citation2016; McLean et al., Citation2017). Yet another interpretation is that the results are related to participants’ smoking habits, but we did not ask about this in the survey. Further research is needed to better understand this gender difference and how it relates to media literacy.

Limitations

Our findings highlight the complexity of determining credibility. However, the study has important limitations, since we only tested a few aspects of this complexity with a limited number of test items. In the performance test, we used four items that were intended to measure the three facets of civic online reasoning. Ideally, we would have used multiple items for measuring the different skills; however, since this was part of a mass experiment, the time frame did not allow us to include more items. This has implications for the validity of the performance test: for instance, does it capture the facets of civic online reasoning, and is it predictive of performance on each skill it intends to measure? The validity weaknesses are also reflected in the reliability of the test – do similar items within a facet of civic online reasoning measure the same construct? Our results on sourcing, where we included two items, indicate that they do not, since performance was predicted by different variables. Future research should include more items in each category, and preferably map the attitudes and self-rated measures onto established scales on each topic. We are currently gathering data with the intention of mapping our self-rated variables onto validated questionnaires. This would make it possible to tie them to validated constructs and, consequently, say something more substantial about the predictors.

Another limitation of this study is that interested teachers signed up to participate and students filled in questionnaires and tests in a classroom environment; their vigilance regarding manipulations may therefore have been higher than when encountering online information elsewhere. Performance may also have been affected by the fact that this was a low-stakes test that did not affect their grades. To what extent our findings translate into teenagers’ personal use of online information on a national scale is beyond the scope of this study. What we do see are some important challenges in a school setting that are greater in some groups of students. We find what seems to be a digital civic literacy divide.

Conclusion: A Digital Civic Literacy Divide

In this study, we tap into the complexity of determining credibility in digital environments. Departing from theories of transliteracy and civic online reasoning, we find a digital divide between young people regarding their knowledge, skills, and attitudes. This divide may, in turn, be described as a divide of digital civic literacy. We define digital civic literacy as the knowledge, skills, and attitudes necessary to navigate multimodal online information of importance to citizens in critical and productive ways. Our results suggest that students with a high digital civic literacy have subject-specific knowledge and disciplinary knowledge closely related to issues in the news; furthermore, they know where to find credible information and know that texts and images may be manipulated. Moreover, they are able to determine the credibility of news using reasoning skills to evaluate information and skills to navigate online in productive ways. In addition, they appreciate credible news and hold a vigilant attitude toward internet information in general. Finally, they display self-reflection in their attitudes regarding their abilities to fact-check and search for information.

In contrast, students with low digital civic literacy do not appreciate credible news to the same extent, which may be attributed to a naïve trust in online information. The fact that students with lower digital civic literacy often attend vocational programs and do not speak Swedish at home highlights how the digital divide is a social and educational divide that needs to be addressed in future research and education. We also need a better understanding of how education may support students to appreciate credible news, learn to be vigilant toward online information, and be self-reflective about fact-checking.

Acknowledgements

We are truly grateful to all the students who agreed to participate in this study, as well as to their teachers, who helped by distributing the survey. We would also like to direct a special thanks to Jenny Wiksten Folkeryd, Ebba Elwin, Fredrik Brounéus, Kerstin Ekholm, Maria Lindberg, and members of the Stanford History Education Group for their valuable input and support in the process.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by VINNOVA: [Grant Number 2018-01279].

Notes

1 In this article, we define misinformation as inaccurate, manipulative, or false information, including disinformation, which is deliberately designed to mislead people.

2 See more details about this test item below in the Methods section.

4 SiRiS, National Statistics from the Swedish National Agency for Education, https://www.skolverket.se/skolutveckling/statistik/sok-statistik-om-forskola-skola-och-vuxenutbildning [accessed 2019-01-12].

5 Thus, a lateral reading of this site should make students skeptical. Previous items testing skills of corroboration have prompted students to compare two competing sources of information, but did not prompt them to use digital resources to double-check information (Nygren & Guath, Citation2019). Noting the importance of using digital resources in lateral reading (Wineburg & McGrew, Citation2019), we chose to replace the test items used in Nygren and Guath (Citation2019) with this item, emphasizing the combination of searching and evaluating information to corroborate claims.

7 The results are not affected by the category that is chosen as baseline. In our case, it was automatically coded as baseline in the software (R) which we used to analyze the data.

References

  • Aufderheide, P. (1993). Media literacy. A report of the national leadership conference on media literacy. ERIC.
  • Bedard, K., & Cho, I. (2010). Early gender test score gaps across OECD countries. Economics of Education Review, 29(3), 348–363. https://doi.org/10.1016/j.econedurev.2009.10.015
  • Blikstad-Balas, M. (2016). “You get what you need”: A study of students’ attitudes towards using Wikipedia when doing school assignments. Scandinavian Journal of Educational Research, 60(6), 594–608. https://doi.org/10.1080/00313831.2015.1066428
  • Breakstone, J., Smith, M., Wineburg, S., Rapaport, A., Carle, J., Garland, M., & Saavedra, A. (2019). Students’ civic online reasoning: A national portrait. https://purl.stanford.edu/gf151tb4868
  • Breivik, P. S., & Gee, E. G. (1989). Information literacies for the twenty first century. MacMillan.
  • Carlsson, U. (Ed.). (2019). Understanding media and information literacy (MIL) in the digital age. A question of democracy. Department of Journalism, Media and Communication (JMG).
  • Cummins, J., & Swain, M. (2014). Bilingualism in education: Aspects of theory, research and practice. Routledge.
  • De Keersmaecker, J., & Roets, A. (2017). ‘Fake news’: Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions. Intelligence, 65, 107–110. https://doi.org/10.1016/j.intell.2017.10.005
  • Deursen, A. J. V., & Dijk, J. A. V. (2014). The digital divide shifts to differences in usage. New Media & Society, 16(3), 507–526. https://doi.org/10.1177/1461444813487959
  • Enochsson, A.-B. (2018). Teenage pupils’ searching for information on the Internet. Paper presented at the ISIC: The Information Behaviour Conference, Krakow, Poland.
  • Eshet, Y. (2004). Digital literacy: A conceptual framework for survival skills in the digital era. Journal of Educational Multimedia and Hypermedia, 13(1), 93–106.
  • European Commission. (2018). Action plan against disinformation. Joint communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. Brussels. https://ec.europa.eu/information_society/newsroom/image/document/2018-49/action_plan_against_disinformation_26A2EA85-DE63-03C0-25A096932DAB1F95_55952.pdf
  • Flanagin, A. J., & Metzger, M. J. (2000). Perceptions of internet information credibility. Journalism & Mass Communication Quarterly, 77(3), 515–540. https://doi.org/10.1177/107769900007700304
  • Flanagin, A. J., Winter, S., & Metzger, M. J. (2018). Making sense of credibility in complex information environments: The role of message sidedness, information source, and thinking styles in credibility evaluation online. Information, Communication & Society, 1–19. https://doi.org/10.1080/1369118X.2018.1547411.
  • Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R. (2003). How do users evaluate the credibility of Web sites? A study with over 2,500 participants. Proceedings of the 2003 conference on Designing for user experiences.
  • Folkbildning, V. O. (2010). Årets förvillare 2010. https://www.vof.se/utmarkelser/tidigare-utmarkelser/arets-forvillare-2010/
  • Forsberg, H. (2015). Kampen om eleverna: Gymnasiefältet och skolmarknadens framväxt i Stockholm, 1987–2011. PhD. Diss. Acta Universitatis Upsaliensis.
  • Frau-Meigs, D., Nygren, T., Corbu, N., & Santoveña Casal, S. (2020). Combatting online disinformation by improving digital visual literacy. Proceedings of the 2020 conference of the International Association for Media and Communication Research. Unpublished paper.
  • Groen, Y., Wijers, A. A., Tucha, O., & Althaus, M. (2013). Are there sex differences in ERPs related to processing empathy-evoking pictures? Neuropsychologia, 51(1), 142–155. https://doi.org/10.1016/j.neuropsychologia.2012.11.012
  • Häggström, O. (2008). Vetenskap och pseudovetenskap: exemplet Stockholmsinitiativet. Folkvett, (4). https://www.vof.se/folkvett/ar-2008/nr-4/vetenskap-och-pseudovetenskap-exemplet-stockholmsinitiativet/
  • Haider, J., & Sundin, O. (2019). Invisible search and online search engines: The ubiquity of search in everyday life. Routledge.
  • Hargittai, E. (2001). Second-level digital divide: Mapping differences in people's online skills. arXiv preprint cs/0109068.
  • Hatlevik, O. E., Guðmundsdóttir, G. B., & Loi, M. (2015). Digital diversity among upper secondary students: A multilevel analysis of the relationship between cultural capital, self-efficacy, strategic use of information and digital competence. Computers & Education, 81, 345–353. https://doi.org/10.1016/j.compedu.2014.10.019
  • Hobbs, R. (2010). Digital and media literacy: A plan of action. A white paper on the digital and media literacy recommendations of the knight commission on the information needs of communities in a democracy. ERIC.
  • Holland, G., & Tiggemann, M. (2016). A systematic review of the impact of the use of social networking sites on body image and disordered eating outcomes. Body Image, 17, 100–110. https://doi.org/10.1016/j.bodyim.2016.02.008
  • Kahan, D. (2017). Misconceptions, misinformation, and the logic of identity-protective cognition.
  • Kahan, D., Landrum, A., Carpenter, K., Helft, L., & Hall Jamieson, K. (2017). Science curiosity and political information processing. Political Psychology, 38, 179–199. https://doi.org/10.1111/pops.12396
  • Khazir, Z., Dehdari, T., Majdabad, M. M., & Tehrani, S. P. (2016). Psychological aspects of cosmetic surgery among females: A media literacy training intervention. Global Journal of Health Science, 8(2), 35–45.
  • Koltay, T. (2011a). The media and the literacies: Media literacy, information literacy, digital literacy. Media, Culture & Society, 33(2), 211–221. https://doi.org/10.1177/0163443710393382
  • Koltay, T. (2011b). New media and literacies: Amateurs vs. Professionals. First Monday, 16(1).
  • Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
  • Kruikemeier, S., & Shehata, A. (2017). News media use and political engagement among adolescents: An analysis of virtuous circles using panel data. Political Communication, 34(2), 221–242. https://doi.org/10.1080/10584609.2016.1174760
  • Ku, K. Y., Kong, Q., Song, Y., Deng, L., Kang, Y., & Hu, A. (2019). What predicts adolescents’ critical thinking about real-life news? The roles of social media news consumption and news media literacy. Thinking Skills and Creativity, 33, 100570. https://doi.org/10.1016/j.tsc.2019.05.004
  • Leaning, M. (2019). An approach to digital literacy through the integration of media and information literacy. Media and Communication, 7(2), 4–13. https://doi.org/10.17645/mac.v7i2.1931
  • Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
  • Lindell, J. (2018). Smaken för nyheter: klasskillnader i det digitala medielandskapet. Nordicom.
  • Lindell, J., & Hovden, J. F. (2018). Distinctions in the media welfare state: Audience fragmentation in post-egalitarian Sweden. Media, Culture & Society, 40(5), 639–655. https://doi.org/10.1177/0163443717746230
  • Livingstone, S. (2004). Media literacy and the challenge of New information and Communication technologies. The Communication Review, 7(1), 3–14. https://doi.org/10.1080/10714420490280152
  • Lonergan, A. R., Bussey, K., Mond, J., Brown, O., Griffiths, S., Murray, S. B., & Mitchison, D. (2019). Me, my selfie, and I: The relationship between editing and posting selfies and body dissatisfaction in men and women. Body Image, 28, 39–43. https://doi.org/10.1016/j.bodyim.2018.12.001
  • Marengo, D., Longobardi, C., Fabris, M. A., & Settanni, M. (2018). Highly-visual social media and internalizing symptoms in adolescence: The mediating role of body image concerns. Computers in Human Behavior, 82, 63–69. https://doi.org/10.1016/j.chb.2018.01.003
  • McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 1–29. https://doi.org/10.1080/00933104.2017.1416320
  • McGrew, S., Ortega, T., Breakstone, J., & Wineburg, S. (2017). The challenge that’s bigger than fake news: Civic reasoning in a Social media environment. American Educator, 41(3), 4–9.
  • McLean, S. A., Wertheim, E. H., Masters, J., & Paxton, S. J. (2017). A pilot evaluation of a social media literacy intervention to reduce risk factors for eating disorders. International Journal of Eating Disorders, 50(7), 847–851. https://doi.org/10.1002/eat.22708
  • Mercadillo, R. E., Díaz, J. L., Pasaye, E. H., & Barrios, F. A. (2011). Perception of suffering and compassion experience: Brain gender disparities. Brain and Cognition, 76(1), 5–14. https://doi.org/10.1016/j.bandc.2011.03.019
  • Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210–220. https://doi.org/10.1016/j.pragma.2013.07.012
  • Metzger, M. J., & Flanagin, A. J. (2015). Psychological approaches to credibility assessment online. In S. S. Sundar (Ed.), The handbook of the psychology of communication technology (pp. 445–466). Wiley Blackwell. https://doi.org/10.1002/9781118426456.ch20
  • Metzger, M. J., Hartsell, E. H., & Flanagin, A. J. (2015). Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Communication Research. https://doi.org/10.1177/0093650215613136
  • Mihailidis, P. (2018). Civic media literacies: Re-imagining engagement for civic intentionality. Learning, Media and Technology, 43(2), 152–164. https://doi.org/10.1080/17439884.2018.1428623
  • Moje, E. B. (2007). Developing socially just subject-matter instruction: A review of the literature on disciplinary literacy teaching. Review of Research in Education, 31(1), 1–44. https://doi.org/10.3102/0091732X07300046001
  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
  • Nygren, T., Brounéus, F., & Svensson, G. (2019). Diversity and credibility in young people’s news feeds: A foundation for teaching and learning citizenship in a digital era. Journal of Social Science Education, 18(2), 87–109. https://doi.org/10.4119/jsse-917
  • Nygren, T., Folkeryd, J., Liberg, C., & Guath, M. (2020). Hur motiverar gymnasieelever sina bedömningar av trovärdiga och vilseledande digitala nyheter? [How do upper secondary students justify their assessments of credible and misleading digital news?]. Nordidactica: Journal of Humanities and Social Science Education, 2020(2), 153–178.
  • Nygren, T., & Guath, M. (2019). Swedish teenagers’ difficulties and abilities to determine digital news credibility. Nordicom Review, 40(1), 23–42. https://doi.org/10.2478/nor-2019-0002
  • Nygren, T., Haglund, J., Samuelsson, C. R., Af Geijerstam, Å., & Prytz, J. (2019). Critical thinking in national tests across four subjects in Swedish compulsory school. Education Inquiry, 10(1), 56–75. https://doi.org/10.1080/20004508.2018.1475200
  • Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users’ decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. https://doi.org/10.1111/j.1083-6101.2007.00351.x
  • Pennycook, G., & Rand, D. G. (2018). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. https://doi.org/10.1016/j.cognition.2018.06.011
  • R Core Team. (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
  • Regeringskansliet. (2017). Stärkt digital kompetens i läroplaner och kursplaner [Strengthened digital competence in curricula and syllabi]. https://www.regeringen.se/pressmeddelanden/2017/03/starkt-digital-kompetens-i-laroplaner-och-kursplaner/
  • Schulz, W., Ainley, J., Fraillon, J., Kerr, D., & Losito, B. (2010). ICCS 2009 international report: Civic knowledge, attitudes, and engagement among lower-secondary school students in 38 countries. International Association for the Evaluation of Educational Achievement (IEA).
  • Shanahan, C., Shanahan, T., & Misischia, C. (2011). Analysis of expert readers in three disciplines. Journal of Literacy Research, 43(4), 393–429. https://doi.org/10.1177/1086296X11424071
  • Shanahan, T., & Shanahan, C. (2012). What is disciplinary literacy and why does it matter? Topics in Language Disorders, 32(1), 7–18. https://doi.org/10.1097/TLD.0b013e318244557a
  • Shen, C., Kasra, M., Pan, W., Bassett, G. A., Malloch, Y., & O’Brien, J. F. (2019). Fake images: The effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media & Society, 21(2), 438–463. https://doi.org/10.1177/1461444818799526
  • Solli, A. (2019). Handling socio-scientific controversy: Students’ reasoning through digital inquiry [Doctoral dissertation, University of Gothenburg].
  • Statens medieråd. (2019). Ungar & medier 2019 [Young people & media 2019]. https://statensmedierad.se/publikationer/ungarochmedier/ungarochmedier2019.3347.html
  • Strömbäck, J., Djerf-Pierre, M., & Shehata, A. (2013). The dynamics of political interest and news media consumption: A longitudinal perspective. International Journal of Public Opinion Research, 25(4), 414–435. https://doi.org/10.1093/ijpor/eds018
  • Sundar, S. S., Knobloch-Westerwick, S., & Hastall, M. R. (2007). News cues: Information scent and cognitive heuristics. Journal of the American Society for Information Science and Technology, 58(3), 366–378. https://doi.org/10.1002/asi.20511
  • Sundin, O., & Carlsson, H. (2016). Outsourcing trust to the information infrastructure in schools: How search engines order knowledge in education practices. Journal of Documentation, 72(6), 990–1007. https://doi.org/10.1108/JD-12-2015-0148
  • Thomas, S., Joseph, C., Laccetti, J., Mason, B., Mills, S., Perril, S., & Pullinger, K. (2007). Transliteracy: Crossing divides. First Monday, 12(12).
  • van Dijk, J. (2020). The digital divide. John Wiley & Sons.
  • Vraga, E. K., & Tully, M. (2019). News literacy, social media behaviors, and skepticism toward information on social media. Information, Communication & Society, 1–17. https://doi.org/10.1080/1369118X.2019.1637445.
  • Walker, C. E., Krumhuber, E. G., Dayan, S., & Furnham, A. (2019). Effects of social media use on desire for cosmetic surgery among young women. Current Psychology. https://doi.org/10.1007/s12144-019-00282-1.
  • Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking. Council of Europe.
  • Wineburg, S. (1991). On the reading of historical texts: Notes on the breach between school and academy. American Educational Research Journal, 28(3), 495–519. https://doi.org/10.3102/00028312028003495
  • Wineburg, S. (1998). Reading Abraham Lincoln: An expert/expert study in the interpretation of historical texts. Cognitive Science, 22(3), 319–346. https://doi.org/10.1207/s15516709cog2203_3
  • Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11), 1–40. https://www.tcrecord.org/Content.asp?ContentId=22806
  • Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395(10225), 676. https://doi.org/10.1016/S0140-6736(20)30461-X

Appendices

Appendix A

Table A1. List of categories based upon programs of participants (n = 2,216).

Appendix B

Table B1. Estimates of Best Fitting Logistic Regression Model for Correct/Incorrect Coded Multiple-Choice Answers on Expressen, with Coefficients Denoting the Log Odds of Answering Correctly for a Unit Increase of the Ordinal Predictors and the Expected Log Odds for Each Category Compared with the Baseline Category for the Categorical Predictors.

Table B2. Estimates of Best Fitting Logistic Regression Model for Correct/Incorrect Coded Multiple-Choice Answers on Aftonbladet, with Coefficients Denoting the Log Odds of Answering Correctly for a Unit Increase of the Ordinal Predictors and the Expected Log Odds for Each Category Compared with the Baseline Category for the Categorical Predictors.

Table B3. Estimates of Best Fitting Logistic Regression Model for Correct/Incorrect on Smoking, with Coefficients Denoting the Log Odds of Answering Correctly for a Unit Increase of the Ordinal Predictors and the Expected Log Odds for Each Category Compared with the Baseline Category for the Categorical Predictors.

Table B4. Estimates of Best Fitting Logistic Regression Model for Correct/Incorrect on Climate Change, with Coefficients Denoting the Log Odds of Answering Correctly for a Unit Increase of the Ordinal Predictors and the Expected Log Odds for Each Category Compared with the Baseline Category for the Categorical Predictors.
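
For readers who want to unpack the captions above, the following is a minimal sketch in R (the article cites R Core Team, 2019, for its analyses) of how a logistic regression with ordinal and categorical predictors is fitted and how its coefficients map onto the quantities described in Tables B1–B4. It is illustrative only, not the authors’ analysis script: the data frame students and the variables correct, news_attitude, and program are hypothetical placeholders.

    # Illustrative sketch only; data frame and variable names are hypothetical.
    # correct:       binary outcome (1 = correct answer, 0 = incorrect)
    # news_attitude: ordinal predictor treated as numeric, so its coefficient
    #                is the change in log odds per unit increase
    # program:       categorical predictor; the first factor level is the
    #                baseline category
    students$program <- factor(students$program)
    fit <- glm(correct ~ news_attitude + program,
               family = binomial(link = "logit"),
               data = students)
    summary(fit)    # coefficients on the log-odds scale, as in Tables B1-B4
    exp(coef(fit))  # exponentiated coefficients are odds ratios

On this reading, the coefficient for news_attitude is the change in the log odds of a correct answer per unit increase of the ordinal predictor, while each program coefficient gives the expected log odds for that category relative to the baseline category.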