
Reading Comprehension Skills and Prior Topic Knowledge Serve as Resources When Adolescents Justify the Credibility of Multiple Online Texts

Carita Kiili, Helge I. Strømsø, Ivar Bråten, Jenni Ruotsalainen & Eija Räikkönen
Received 20 Dec 2023, Accepted 01 May 2024, Published online: 15 May 2024

Abstract

This study sought to understand how well students (n = 274; Mage = 12.45) were able to identify the author, the main claim, and the supporting evidence (identification performance) and to justify the author’s expertise, the author’s benevolence, and the quality of the evidence (justification performance) while reading multiple online texts. The study also examined the contribution of prior topic knowledge and basic reading skills (word recognition and reading comprehension) to students’ identification and justification performance. Students read two more and two less credible online texts about the effects of sugar on health. After reading each text, they responded to multiple-choice items measuring identification and justification performance. Justifying credibility seemed more challenging for students than identifying the claim, evidence, and author. Word recognition and reading comprehension were statistically significant predictors of identification performance, whereas prior knowledge and reading comprehension were statistically significant predictors of justification performance. The findings offer new insights into the relationship between basic reading skills and credibility evaluation that can inform both theory and instruction.

Introduction

Navigating the current online textual landscape, characterized by the rapid spread of misinformation, algorithmic bias, and persuasive argumentation (Kozyreva, Lewandowsky, & Hertwig, 2020), requires critical reading. To read online texts critically, readers need to identify elements crucial to credibility (viz., the main argument and the source of the text) and evaluate the quality of the argument and the trustworthiness of the source (e.g., Barzilai, Thomm, & Shlomi-Elooz, 2020). To make accurate evaluative judgments of text credibility, readers need to understand why specific types of evidence can or cannot support authors’ claims well and why the source can or cannot be regarded as trustworthy. In other words, critical readers can justify their evaluative judgments.

While several previous studies have examined how adolescents justify their overall text evaluations (e.g., Forzani, Corrigan, & Kiili, 2022; Potocki et al., 2020), we aimed to gain a more nuanced understanding of adolescents’ justifications by examining how they justify the source’s expertise and benevolence as well as the quality of evidence supporting the text’s main claim. Further, we examined two essential cognitive resources (prior topic knowledge and basic reading skills) that may contribute to adolescents’ identification (i.e., identification of source, claim, and evidence) and justification performance (e.g., Forzani, 2018; Kanniainen, Kiili, Tolvanen, Aro, & Leppänen, 2019). Regarding reading comprehension, we also sought a nuanced understanding by examining what kinds of reading comprehension processes might contribute to adolescents’ identification and justification performance. These examinations of adolescents’ justification skills and underlying cognitive resources can contribute to building a much-needed theoretical understanding of credibility evaluation, which, in turn, may help educators design instruction to support students’ critical reading skills in online contexts.

Content-Based and Source-Based Evaluation of Information

Several models or frameworks suggest that readers can engage in content- and source-based evaluation to determine whether they can trust offline or online information (Barzilai et al., 2020; Forzani et al., 2022; Stadtler & Bromme, 2014). For example, the content–source integration model by Stadtler and Bromme (2014) suggests that readers can make first- or second-hand credibility judgments when encountering conflicting information in multiple texts. When making first-hand judgments, readers compare text content to their prior knowledge to determine whether the claims stated in the text are valid, whereas second-hand judgments are based on source information (e.g., the source’s expertise and intentions) that readers evaluate to determine whether they can trust the source of the text. Readers tend to rely on their prior knowledge or beliefs when the text content is more familiar to them, whereas in evaluating more unfamiliar topics, readers put more emphasis on source information (e.g., Bråten, McCrudden, Stang Lund, Brante, & Strømsø, 2018; McCrudden, Stenseth, Bråten, & Strømsø, 2016).

Barzilai et al. (2020) have elaborated on the notion of first- and second-hand evaluation. Their bi-directional model of first- and second-hand evaluation strategies specifies three types of first-hand evaluation strategies: knowledge-based validation, discourse-based evaluation, and corroboration. Consistent with the content–source integration model (Stadtler & Bromme, 2014), knowledge-based validation refers to comparing text content to one’s prior knowledge and beliefs, which is usually a routine part of comprehension (Richter, Münchow, & Abendroth, 2020). In discourse-based evaluation, readers draw on discourse features to determine how well knowledge is justified or communicated. For example, readers can evaluate the strength of the argument (McCrudden & Barnes, 2016; Münchow, Tiffin-Richards, Fleischmann, Pieschl, & Richter, 2023) or the quality of evidence (List, 2023; List, Du, & Lyu, 2021). Finally, readers can engage in corroboration (see Wineburg, 1991) to examine whether other texts verify or contradict the content of the currently processed text.

Drawing on Stadtler and Bromme (2014), Barzilai et al. (2020) also specified different kinds of second-hand evaluation strategies targeting source trustworthiness. By using sourcing strategies, readers can evaluate, for example, the expertise or benevolence of the source of the text (e.g., Thomm & Bromme, 2016). Importantly, Barzilai et al. (2020) highlighted that first- and second-hand evaluation strategies are not used in isolation; instead, readers can employ them reciprocally, meaning that evaluation of information can affect evaluation of sources, and vice versa.

Students’ Content-Based and Source-Based Justifications

Identifying components of arguments (i.e., claim, reasons, and evidence) and sources of information is a prerequisite for content- and source-based evaluation. Readers may struggle to identify the main claim in naturalistic texts (Diakidoy, Ioannou, & Christodoulou, 2017; Larson, Britt, & Larson, 2004), although a simple and explicit argument structure can facilitate identification of the argument (Christodoulou & Diakidoy, 2020). While identifying source features in short, linear texts may be straightforward, it seems to depend on students’ decoding skills (Potocki et al., 2020). In online contexts, however, identifying the author might be more challenging, as the location of author information may vary, or students may confuse the author with other source information, such as affiliation (Coiro et al., 2015). For example, 17% of middle school students (n = 773) have been shown to have difficulties identifying the author of a web page (Coiro et al., 2015). Failing to identify the main claim, the supporting reasons, and the author may impede students in using sources to qualify the validity of the claims (Perfetti, Rouet, & Britt, 1999) and in evaluating the quality of presented arguments (cf. Christodoulou & Diakidoy, 2020).

Students’ skills in using content- and source-based evaluation strategies have been studied by employing think-aloud methodology (Barzilai, Tzadok, & Eshet-Alkalai, 2015; Mason, Boldrin, & Ariasi, 2010; McGrew, 2021) or by asking students to justify their credibility evaluations or credibility rankings in writing (Braasch, Bråten, Strømsø, & Anmarkrud, 2014; Coiro et al., 2015; Potocki et al., 2020). Think-aloud studies have suggested that students’ spontaneous credibility evaluation during online inquiry is rather limited (Barzilai & Zohar, 2012; Kammerer, Gottschling, & Bråten, 2021) or superficial, even when students have been prompted to evaluate online information (McGrew, 2021). For example, Barzilai and Zohar (2012) examined sixth graders’ (n = 38) evaluation strategies by means of think-alouds and retrospective interviews. In that study, students completed two tasks: a search task on the open internet and a reading task involving three pre-selected, conflicting websites. When reading the websites, students typically considered the content or the form of the website. However, source trustworthiness was evaluated for only 39% of the websites, and attention to scientific evidence was even scarcer.

Similar results have been found among high school students whose evaluation of online information about social and political topics was examined (McGrew, 2021). McGrew (2021) found that students (n = 18) often based their credibility judgments on topical relevance or superficial features (e.g., date, absence of author, or hyperlinks) but hardly considered the trustworthiness of the sources. In addition, when students checked whether the author provided some evidence, they did not evaluate its quality.

Although think-aloud methodology can reveal students’ epistemic thinking in action (Barzilai & Zohar, 2012; Ferguson, Bråten, & Strømsø, 2012), students may vary in how comfortable and capable they are at thinking aloud (Pressley & Afflerbach, 1995). Because of the resource-intensive nature of this methodology, the sample sizes are usually relatively small compared to many studies analyzing students’ written justifications. Moreover, similar patterns have been found across studies on written justifications and think-aloud studies: many students lack efficient evaluation strategies (e.g., Breakstone et al., 2021; Coiro et al., 2015; Potocki et al., 2020). For example, Potocki et al. (2020), who examined evaluation skills among 5th-, 7th-, and 9th-graders and undergraduates (n = 245), found that especially in the lower grades, students emphasized content more than source information when justifying their credibility evaluations. Similarly, Coiro et al. (2015) found that many middle school students’ (n = 773) justifications for the author’s expertise, the author’s point of view, and the overall credibility of online texts were quite often unacceptable or superficial.

It is worth noting that previous studies have also found considerable inter-individual differences among students, indicating that some students are able to justify credibility from various perspectives and also display thorough reasoning while evaluating the texts (e.g., Barzilai & Zohar, 2012; Forzani et al., 2022; Hämäläinen, Kiili, Räikkönen, & Marttunen, 2021). Forzani et al. (2022) highlighted qualitative differences between better and poorer evaluators (n = 410) by examining 7th graders’ responses when asked to justify the credibility of one web page during the completion of an online inquiry task. Interestingly, better evaluators were more likely to refer to the source (89% of the better evaluators) than poorer evaluators were (14% of the poorer evaluators). A similar proportion of better (13%) and poorer evaluators (16%) evaluated the content beyond just referring to it. Notably, better evaluators seemed to display more sophisticated reasoning in their responses.

Although students’ written justifications seem to be a fruitful method for capturing variations in students’ justification skills, several factors can affect the results. First, insufficient writing skills may hinder some students from expressing their thinking (cf. McCarthy et al., 2022). Accordingly, tasks requiring written explanations seem to be more challenging than tasks in which students do not need to explain their reasoning (Sparks, van Rijn, & Deane, 2021). Second, explaining one’s reasoning is a challenging literacy task that requires behavioral and cognitive engagement, and not all students are willing to invest their full effort (Goldhammer et al., 2014; List & Alexander, 2018). To overcome these potential issues, this study assessed students’ justification ability with multiple-choice items. We focused on understanding how students justify both the credibility of the source (expertise and benevolence) and the content (i.e., the quality of evidence).

Prior Knowledge and Basic Reading Skills in Justifying Credibility

Previous research suggests that several cognitive skills or resources contribute to students’ credibility evaluation (see Anmarkrud, Bråten, Florit, & Mason, 2022, for a systematic review of individual differences in sourcing), including prior topic knowledge and reading skills. Prior topic knowledge and reading skills can be considered closely related, as the construction of a coherent text representation presumably relies heavily on readers’ prior knowledge (Kintsch, 1988). Kintsch’s (1988, 1998; see also McNamara & Magliano, 2009) construction–integration model of text comprehension describes how comprehension occurs through an interaction between textual content and readers’ prior knowledge. The construction and integration processes result in representations called a textbase and a situation model, respectively. The textbase refers to the underlying meaning of explicit information in the text, whereas the situation model refers to the meaning of the text that readers construct by drawing inferences that go beyond the concepts explicitly stated in the text and integrating the textbase with their prior knowledge.

Consequently, prior knowledge has also been considered an important resource for credibility evaluation (Brand-Gruwel, Kammerer, van Meeuwen, & van Gog, 2017; Lucassen, Muilwijk, Noordzij, & Schraagen, 2013). Previous studies have found that students’ prior topic or domain knowledge is positively related to their credibility evaluations (e.g., Braasch et al., 2014; Forzani, 2018; Kammerer et al., 2021). However, not all studies have observed statistically significant associations between prior knowledge and credibility evaluation (Hämäläinen et al., 2021; Mason et al., 2018).

Further, the role of basic reading skills in credibility evaluation has drawn researchers’ attention. Previous studies have examined associations between credibility evaluation and one or two aspects of basic reading skills. For example, Dyoniziak, Potocki, and Rouet (2023) did not find any association between word reading fluency and eighth graders’ (n = 90) credibility evaluation performance, measured with items that required discriminating between the most and least credible websites and justifying source credibility (see also Braasch et al., 2014). However, a positive association between word reading fluency and students’ ability to infer sources’ intentions was observed. Hämäläinen et al. (2021) found a positive association between reading fluency and the ability to justify credibility among upper secondary school students. Also, learners with reading difficulties seem to struggle more with credibility evaluation than do other learners (Kanniainen et al., 2022).

Whereas the results are somewhat mixed with respect to lower-level reading skills, prior research suggests that reading comprehension is essential to students’ credibility evaluation, at least for younger students. A large-scale study (n = 1431 seventh graders) by Forzani (2018) showed that reading comprehension contributed to students’ credibility evaluation after controlling for prior knowledge and gender. Notably, when Kanniainen et al. (2019) examined the role of both reading fluency (measured with three tests) and reading comprehension in sixth graders’ (n = 426) evaluation of online texts, reading comprehension was the only statistically significant predictor after controlling for students’ prior knowledge, gender, spelling, and nonverbal reasoning. These findings highlight that although lower-level reading skills may be a prerequisite for skillful credibility evaluation of texts, comprehension skills probably matter more.

However, reading comprehension is a complex skill that includes different types of comprehension processes, such as understanding words, locating information, identifying main ideas and authors’ purposes, making inferences at various levels, integration, interpretation, and evaluation, not all of which are equally demanding for the reader (Afflerbach, Cho, & Kim, 2015). For example, different comprehension processes are required depending on the difficulty of the text and the task (Afflerbach et al., 2015) or on the text genre (Gersten, Fuchs, Williams, & Baker, 2001). Reading comprehension is typically assessed with batteries that include a combination of test items aimed at covering many different comprehension processes (Afflerbach, 2017), with higher performance also reflecting students’ ability to use higher-level comprehension processes in reading (cf. OECD, 2019).

While much is known about the different comprehension processes that underlie reading comprehension as a composite (e.g., Ahmed et al., 2016; LARRC & Logan, 2017), less is known about how the different processes relate to credibility evaluation. To the best of our knowledge, no study has examined associations between the different comprehension processes required in multiple-choice reading comprehension tests and the justification skills involved in credibility evaluation. Therefore, in addition to examining how word recognition and reading comprehension were associated with adolescents’ identification (i.e., identification of the author, claim, and evidence) and justification performance, we also explored how different reading comprehension items might contribute to their identification and justification performance.

Research Questions

The research questions of this study were as follows:

  1. How well can students identify the author, main claim, and supporting evidence in online texts?

  2. How well can students justify the author’s expertise, the author’s benevolence, and the quality of evidence?

  3. To what extent do students’ prior topic knowledge and basic reading skills predict their identification and justification performance?

  4. Are different types of reading comprehension items differentially associated with students’ identification and justification performance after controlling for prior knowledge and word recognition?

Methods

Participants

Participants were 274 Finnish sixth graders (Mage = 12.45, SD = 0.32); 53.3% were girls, 44.9% were boys, and 2.6% responded “other” or did not respond. Most students (91.2%) spoke Finnish at home. Altogether, 498 guardians (272 as Guardian 1 and 226 as Guardian 2) reported their educational background when completing the consent form, with 59% having a higher education degree (i.e., from a university or a university of applied sciences). In the Finnish population, 44% of citizens have a higher education degree, indicating that guardians with a higher education degree were somewhat over-represented in our sample (Official Statistics of Finland, 2020). The ethical statement was given by the Ethics Committee of the Tampere Region (No. 38/2019).

Critical Online Reading Task

Students completed a critical online reading task created with the Critical Online Reading Research Environment (Kiili, Räikkönen, Bråten, Strømsø, & Hagerman, 2023). In this task, students read four researcher-designed online texts about sugar’s effects on children’s hyperactivity (Texts A and B) and memory (Texts C and D) (see Figure 1). Two of the texts (B and C) were more credible, presenting information in accordance with current scientific knowledge (i.e., main claim and supporting evidence). In contrast, the two less credible texts (A and D) were not in accordance with current scientific knowledge. In addition, the texts were manipulated regarding the author’s expertise, the author’s benevolence, and the publication venue.

Figure 1. Online texts.

When completing the task, students received guidance from fact-checker Max, who gave them the task assignment and provided some feedback. Max asked students to read one text at a time. After reading each text, students were asked to respond to identification, evaluation, and justification items that appeared one by one on the right side of the text. Identification items asked students to identify the author, the main claim, and the supporting evidence among three alternatives for each. After students had locked in their responses, Max provided feedback by revealing the correct answer. These identification items and the related feedback ensured that all students would evaluate the correct author and evidence. We formed a sum score for identification performance (maximum score: 12 points). The reliability estimate (McDonald’s omega ω) for students’ identification performance score was .55.

Evaluation items asked students to evaluate the author’s expertise, the author’s benevolence, and the quality of evidence, respectively, on a 6-point scale. After each evaluation, students were asked to justify their evaluation (justification items) by selecting one response among four alternatives (see Tables 2–4 for the justification alternatives). In formulating the justification alternatives, we utilized open-ended responses from upper secondary school students (Kiili et al., 2022) and sixth graders (a pilot study). We created a sum score based on 12 justification items (three for each text) to form a justification performance variable. The maximum score was 12 points, and the reliability estimate (McDonald’s omega ω) for students’ justification performance score was .66.
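
To illustrate the scoring, the sketch below (not the authors’ analysis code) shows how a 12-item sum score and McDonald’s omega could be computed in Python. The data file and column layout are hypothetical (one 0/1-scored column per justification item); with binary items, an omega estimate based on polychoric correlations would be more precise, so this only illustrates the omega formula.

```python
# Minimal sketch under the assumptions stated above (hypothetical data file).
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("justification_items.csv")  # 12 columns scored 0/1 (hypothetical)

# Sum score: number of correct justifications per student (0-12).
justification_score = items.sum(axis=1)

# McDonald's omega from a one-factor model:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
loadings = fa.loadings_[:, 0]
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + fa.get_uniquenesses().sum())
print(f"McDonald's omega = {omega:.2f}")
```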

The order of the items is shown in Figure 2, using the newspaper article (Text A) as an example. Students were randomly assigned to read the texts in two different orders. Group 1 read the texts in the order Text A, Text B, Text C, and Text D (coded as 0), whereas Group 2 read the texts in the order Text B, Text A, Text D, and Text C (coded as 1).

Figure 2. Flow of the task with example items from the newspaper article.

Other Measures

Word recognition

Word recognition was measured with a time-limited word chain test (Holopainen, Kairaluoma, Nevala, Ahonen, & Aro, 2004). The test consisted of 25 word chains, each of which contained four words written without any spaces between them. Students had 90 s to identify as many words as possible by drawing a vertical line between the words. A student’s score was the number of correctly identified words (maximum score: 100). According to the test manual, test–retest reliability varies between .70 and .84.

Reading comprehension

Reading comprehension was measured with a test included in a Finnish standardized reading test battery (Lerkkanen, Eklund, Löytynoja, Aro, & Poikkeus, 2018). Students read a 1.5-page informational text about cave paintings and responded to 12 multiple-choice items. All but one of the items had four alternatives with one correct response; the remaining item asked students to order eight statements according to the order in which they appeared in the text. Students had the text available when responding to the items. They had 30 min to complete the task and could use the recess if needed. The items required students to either locate information in the text, make inferences, or evaluate text content. The maximum score was 12 points, and the reliability estimate (McDonald’s omega ω) for the reading comprehension score was .65.

Prior topic knowledge

The prior knowledge test included 12 true–false items about sugar (sample item: There are different types of sugar) and its effects (sample item: A specific sugar is essential for the functioning of the brain). The items were reviewed by a former health science teacher and a medical expert. The reviews resulted in small modifications of some expressions and the replacement of two slightly ambiguous items, which were further checked by the medical expert. The revised items were used to test upper secondary school students’ prior knowledge (Kiili et al., 2022), and the test was also piloted with sixth graders. Based on the piloting, three items were modified for the sixth graders. Test–retest reliability, examined with 58 readers, was .64.

Procedure

We accessed the school classes through Microsoft Teams. In the first lesson, students completed the basic reading tasks, and in the second lesson, the credibility evaluation task. The researcher briefly introduced the tasks and showed a video with more detailed instructions for each task. Using the videos enabled us to keep the instructions constant across the classrooms. Teachers handled classroom management and communicated with the researcher via chat or microphone. Students completed the credibility evaluation task on a computer. They accessed the task with a code and completed it at their own pace (mean time on task: 20 min 11 s, SD = 5 min 31 s).

Statistical Analyses

The statistical analyses were conducted using SPSS 27. We conducted four sequential regression analyses (Tabachnick & Fidell, 2013). Regression Analyses 1 and 2 examined how prior topic knowledge, word recognition, and reading comprehension were associated with students’ identification and justification performance, respectively. As we were interested in the contribution of each variable, we conducted each analysis in three steps. In Step 1, we entered prior topic knowledge because it can be considered a fundamental component of reading comprehension (Kintsch, 1998). We entered word recognition (Step 2) before reading comprehension (Step 3) because the former can be considered a lower-level reading skill relative to the latter. This order allowed us to examine the predictive value of reading comprehension for students’ performance over and above their prior topic knowledge and word recognition.

Regression Analyses 3 and 4 were conducted to clarify which of the reading comprehension test items were associated with students’ identification and justification performance. The first two steps in these regression analyses were the same as in Regression Analyses 1 and 2. In Step 3, we entered all 12 reading comprehension items separately.
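
Although the analyses were run in SPSS, the same sequential logic can be sketched in Python with statsmodels. The snippet below is a minimal sketch, not the authors’ syntax; the data file and column names (prior_knowledge, word_recognition, reading_comprehension, justification_score) are hypothetical. It reports R², the R² change between steps, and the overall model F at each step.

```python
# Minimal sketch of a three-step sequential (hierarchical) regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical data file

steps = [
    "justification_score ~ prior_knowledge",                                             # Step 1
    "justification_score ~ prior_knowledge + word_recognition",                          # Step 2
    "justification_score ~ prior_knowledge + word_recognition + reading_comprehension",  # Step 3
]

prev_r2 = 0.0
for i, formula in enumerate(steps, start=1):
    model = smf.ols(formula, data=df).fit()
    print(f"Step {i}: R2 = {model.rsquared:.2f}, "
          f"delta R2 = {model.rsquared - prev_r2:.2f}, "
          f"F({int(model.df_model)}, {int(model.df_resid)}) = {model.fvalue:.2f}, "
          f"p = {model.f_pvalue:.3f}")
    prev_r2 = model.rsquared
```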

There were some missing data for all variables. The missing data were due to absence from school (16 students were absent from one of the two lessons), an incomplete reading comprehension task (10 students), a misunderstanding of the word recognition task instruction (1 student), and a technical issue (1 student). Missing data were excluded pairwise. Little’s (1988) test indicated that the data were missing completely at random (χ2(13) = 6.27, p = .936).
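
Pairwise exclusion means that each statistic uses all cases that are complete for the variables involved, so the effective n can vary across statistics. The brief sketch below (hypothetical file and column names again) shows this with pandas, which computes correlations over pairwise-complete observations by default.

```python
# Minimal sketch of pairwise deletion (hypothetical data file and columns).
import pandas as pd

df = pd.read_csv("students.csv")
cols = ["prior_knowledge", "word_recognition", "reading_comprehension"]

# Correlations over pairwise-complete observations (pandas' default).
print(df[cols].corr())

# Effective n per variable pair under pairwise deletion.
mask = df[cols].notna().astype(int)
print(mask.T @ mask)
```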

Results

Identification of the Author, Main Claim, and Evidence

As shown in Table 1, most students were able to identify the author. However, some students confused the author with the publisher or platform. Notably, 17.7% mixed up the author (a journalist) with a person interviewed in the text. Further, students performed slightly better in identifying the main claim (correct responses varied from 71% to 86% across texts) than the evidence (53% to 92%). Students struggled to understand that the journalist relied on an expert statement in the newspaper article (53%) and that the CEO relied on a customer survey (56%).

Table 1. Responses to the items requiring identification of the author from three options.

Justifications for the Author’s Expertise

Justifying the author’s expertise required students to pay attention to source information and consider its relevance to the topic of the online text (see Table 2). Students struggled most in justifying the journalist’s expertise. Only about one third (30.6%) of the students justified the journalist’s expertise with his profession, whereas one third paid attention to the amount of information in the text and one fourth mixed up the author with an embedded source (i.e., a doctor interviewed in the article). In contrast, more than half of the students (57.0%) justified the researchers’ expertise with relevant source information. When justifying the mother’s and the CEO’s expertise, 54.3% of the students recognized that the mother was not a health expert, whereas the corresponding percentage for the CEO was only 33.6%. Many students justified the CEO’s expertise with his work related to sugar products.

Table 2. Responses to the justification items concerning the author’s expertise.

Justifications for the Author’s Benevolence

Justifying the author’s benevolence required students to consider the author’s intentions. As shown in Table 3, students performed best in justifying the mother’s persuasive intention. Although the commercial online text included marketing statements, only 63.0% of the students justified their benevolence judgments with commercial intentions. Notably, more than half of the students struggled to understand that the researcher and the journalist were in professions that ideally require the sharing of unbiased information.

Table 3. Responses to the justification items concerning the author’s benevolence.

Justifications for the Quality of Evidence

Justifying the quality of evidence required students to consider how well the author supported the main claim with evidence (see Table 4). It was challenging for students to understand that the mother could not verify the causal claim with her own observation: only one fourth of the students questioned the validity of her observation as evidence. In contrast, almost half of the students (47.9%) questioned the validity of the company’s customer survey as evidence. Further, half of the students understood the value of research studies as evidence. It is worth noting that some students seemed to think that evidence should be produced by the author. That is, 22.7% selected the option that questioned the quality of the evidence because the interviewed expert did not rely on her own experiences. Additionally, 17.4% chose the option that questioned the research evidence because the author did not conduct the reported research.

Table 4. Responses to the justification items concerning the quality of evidence.

Contributions of Prior Knowledge and Basic Reading Skills to Students’ Identification and Justification Performance

Table 5 presents descriptive statistics for the variables included in Regression Analyses 1 and 2. Table 6 presents the results of the regression analysis predicting students’ identification performance (identification of the author, claim, and evidence). In Step 1, prior topic knowledge explained 5% of the variance in identification performance, F(1, 245) = 13.69, p < .001. In Step 2, the model including prior knowledge and word recognition explained 16% of the variance, F(2, 244) = 23.82, p < .001. After including reading comprehension in the final model in Step 3, 34% of the variance in identification performance was explained, F(3, 243) = 41.10, p < .001. Only word recognition and reading comprehension were statistically significant predictors in the final model. Thus, the better students’ basic reading skills were, the better their identification performance was.
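
The F values reported here test each full model; the significance of a step’s increment can also be checked with an F-change test. As a worked illustration for Step 3 of this analysis, using the rounded R² values reported above and the residual df of 243 (so the figure is approximate):

$$
F_{\text{change}} = \frac{(R^2_{\text{Step 3}} - R^2_{\text{Step 2}})/1}{(1 - R^2_{\text{Step 3}})/243} = \frac{(.34 - .16)/1}{(1 - .34)/243} \approx 66.3
$$

with df = (1, 243), indicating that adding reading comprehension reliably improved the model.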

Table 5. Descriptive statistics and inter-correlations for variables included in the regression analyses.

Table 6. Results of sequential regression analysis for variables predicting students’ identification (author, claim, and evidence) performance (N = 246).

Table 7 presents the results of the regression analysis predicting students’ justification performance. In Step 1, prior topic knowledge explained 9% of the variance in students’ justification performance, F(1, 245) = 25.13, p < .001. After adding word recognition to the model (Step 2), the model explained 16% of the variance, F(2, 244) = 22.62, p < .001, with both prior topic knowledge and word recognition being statistically significant predictors. When reading comprehension was included in the equation in Step 3, the final model explained 35% of the variance in justification performance, F(3, 243) = 43.75, p < .001. Only prior knowledge and reading comprehension were statistically significant predictors in the final model. This means that the more prior knowledge students had about the text topic and the better reading comprehenders they were, the better they were at justifying credibility.

Table 7. Results of sequential regression analysis for variables predicting students’ justification performance (N = 246).

Associations between Specific Reading Comprehension Items and Identification and Justification Performance

All reading comprehension items were positively correlated with identification and justification performance (see the Appendix for Spearman correlations). We ran two additional regression analyses to investigate which kinds of comprehension items predicted students’ identification and justification performance. In these analyses, Steps 1 and 2 were the same as in the regression analyses presented above (see Tables 6 and 7). However, instead of using the comprehension sum score in Step 3, we entered all twelve reading comprehension items into the model.

Regarding identification performance, the final model was statistically significant, F(14, 233) = 10.16, p < .001, explaining 38% of the variance. Four reading comprehension items (Item 1: β = .20, p < .001; Item 4: β = .13, p = .024; Item 9: β = .16, p = .004; Item 11: β = .15, p = .014) were statistically significant predictors. These items asked students to identify the text genre (Item 1) and the purpose of the text (Item 9), to locate a detail presented in one particular sentence (Item 11), and to number eight sentences in the order in which they appeared in the text (Item 4). Items 1, 9, and 11 were relatively easy: 91%, 84%, and 78% of all students, respectively, provided a correct response. Item 4 was the most challenging in the test: only 32% of the students responded correctly.

Regarding justification performance, the final model was statistically significant, explaining 37% of the variance, F(14, 233) = 11.28, p < .001. In this model, prior knowledge (β = .13, p = .018) and five reading comprehension items (Item 3: β = .17, p = .005; Item 4: β = .18, p = .002; Item 5: β = .16, p = .003; Item 7: β = .24, p < .001; Item 8: β = .12, p = .026) were statistically significant predictors. Of these items, only Item 4 predicted both identification and justification performance. Two other items predicting justification performance required students to infer a word’s meaning using morphological knowledge (Item 7) or the text context (Item 8). One item required students to make a bridging inference across two consecutive sentences (Item 3). Finally, one item (Item 5) asked students to indicate which of four statements was not true. Students needed to locate the corresponding ideas across the text (all in different paragraphs) and compare the statements to the text content. The statements and the corresponding ideas in the text used different expressions, and successful completion of the item required inferencing. The items predicting justification performance were substantially more difficult for students than the items predicting identification performance (correct responses: Item 3: 56%, Item 5: 42%, Item 7: 42%, and Item 8: 54%).

Discussion

This study examined adolescents’ credibility evaluation skills. In particular, we focused on their justification performance, specifically on how well sixth-grade students can justify the author’s expertise, the author’s benevolence, and the quality of evidence. We also sought to understand how prior knowledge and basic reading skills (word recognition and reading comprehension) may contribute to students’ credibility evaluation, that is, to their identification of the components (author, claim, and evidence) subject to credibility evaluation and to their justification of credibility. Consequently, this study shed further light on the relationship between credibility evaluation and lower- and higher-level reading processes. As such, it may contribute to the building of a theoretical understanding of credibility evaluation and the design of instruction that can promote students’ critical reading development.

We found that identifying the argument (i.e., claim and evidence) and the author was easy for most students, perhaps because the argument structure was relatively simple (cf. Christodoulou & Diakidoy, 2020) and the author information was explicitly available. However, some students struggled to identify the correct author of the online texts (see also Coiro et al., 2015). These students confused the author with the publication venue or, for example, a person interviewed in the text.

Further, justifying credibility seemed to be more challenging for students than identifying the argument and the author. Depending on the target of the justification and the text, the proportion of students who chose the correct justification option varied from 26% to 63%. Students performed best in justifying the author’s benevolence, with the proportion of students who chose the correct justification option varying from 58% to 81%. It should be noted that the multiple-choice options likely scaffolded students’ responses. For example, in this study, 63% of the sixth graders identified the author’s commercial intentions. In contrast, in a study that measured sixth graders’ justification performance with open-ended responses, only 19% of the sixth graders recognized or were able to express the author’s commercial intentions (Kiili, Leu, Marttunen, Hautala, & Leppänen, 2018). These results suggest that expressive tasks are more challenging than multiple-choice tasks (Sparks et al., 2021), which should be considered when interpreting results across studies.

In line with previous results (e.g., Barzilai & Zohar, 2012; McGrew, 2021; Potocki et al., 2020), quite a few students did not attend to source features, or they did not consider whether the author’s expertise matched the topic of the text. Further, our results showed that a causal claim based on the author’s observations was particularly difficult to question, as only 25% of the students could do so. This is not surprising, given that understanding causality may be challenging even for undergraduates (List, 2023). Even though our results suggest that some credibility aspects are more difficult to justify than others, it should also be noted that the alternative justification options may have been more difficult on some items than on others.

We found that all examined independent variables (prior knowledge, word recognition, and reading comprehension) correlated positively with identification and justification performance. However, word recognition and reading comprehension were statistically significant predictors of identification performance, whereas prior knowledge and reading comprehension were statistically significant predictors of justification performance. Thus, successful justification performance seemed to require more demanding reading processes than did identification performance. Framing these findings with Kintsch’s (1998) construction–integration model, establishing a textbase was probably sufficient for good performance in identifying the author and the argument, whereas justifying credibility likely required the building of a situation model. Presumably, students needed to make more inferences beyond the text to justify the credibility, with prior topic knowledge supporting inferencing in this process. Notably, our study was limited to considering only prior topic knowledge. It remains for future research to examine whether the role of prior knowledge would be even more substantial if other types of knowledge, such as source knowledge, were included in the examination.

Our findings are in accordance with previous studies (Forzani, 2018; Kanniainen et al., 2019) suggesting that basic reading skills, especially reading comprehension, are foundational to the credibility evaluation of online texts. However, the contribution of reading comprehension to credibility evaluation may vary depending on how the former is measured (cf. Andreassen & Bråten, 2010; Keenan, Betjemann, & Olson, 2008). In addition, reading comprehension tests may include items that measure different types of reading processes, which also seemed to be the case in our study. We found that the more difficult reading comprehension test items, such as those requiring inferencing beyond the text, predicted justification performance. In contrast, reading comprehension items that were relatively easy for students (with one exception) predicted identification performance. The results of this initial examination suggest that it would be valuable to examine further how different types of reading comprehension processes may contribute to credibility evaluation.

Limitations

We acknowledge that the present study has several limitations. First, the low reliability of the identification performance score may be considered a limitation. This low reliability may be due to the fact that identification performance included different types of identification (i.e., identification of the author, claim, and evidence) across four texts that represented different genres.

Second, using multiple-choice items to capture students’ justification performance also comes with some limitations. Students were forced to choose one of the four options to justify their credibility evaluation, and the provided options might not have represented their actual thinking. To mitigate this limitation, we used students’ authentic written justifications to inform the formulation of the justification options. This procedure ensured that the options would reflect, at least to some extent, students’ ways of thinking. Further, multiple-choice items may overestimate students’ justification ability because the options prompt students’ thinking. However, an advantage of multiple-choice questions in measuring credibility evaluation is that performance does not depend on students’ writing skills (see McCarthy et al., 2022; Sparks et al., 2021).

Third, the reading comprehension test used in this study seemingly measured various types of comprehension processes, with only one or a few items measuring any specific comprehension process. Therefore, we could not draw firm conclusions about the specific types of comprehension processes that could facilitate credibility justification. Future research could profitably use reading comprehension measures focusing on specific comprehension processes, such as global coherence inferences (Jensen & Elbro, 2022).

Instructional Implications

Teachers can support the development of their students’ credibility justification skills in various ways. As reading comprehension skills seem to build an important foundation for justifying credibility, at least in online text contexts, teachers need to ensure that students have adequate opportunities to continuously practice reading comprehension, especially with tasks that require inferencing at the paragraph and text levels or even across texts. As reading longer texts, either offline or online, has been shown to support comprehension skills (Torppa et al., 2020) and online reading comprehension (Kanniainen et al., 2022), fostering students’ reading engagement at school and at home also seems important when educating critical online readers.

Because students’ prior knowledge also may facilitate credibility justification, teachers could choose text topics students already know about when practicing reasoning about credibility. Prior topic knowledge could be activated before, during, and after reading, for example, by means of open-ended prompts, visualizations, or analogical reasoning (Hattan, Alexander, & Lupo, 2024). This may especially assist students who do not activate and utilize their prior knowledge spontaneously. Activation could go beyond topic knowledge, however, with teachers also activating students’ knowledge about sources.

As shown in this study, some students may need support in identifying the author, claim, and evidence so that they can successfully proceed to thinking about the credibility of the author and the content of the text. In classrooms, teachers could discuss with their students the roles and responsibilities of the author, the publisher, and the interviewee, that is, how each can be considered to contribute to the online text. When reading in an authentic online environment, students may also need help locating author information. Argument knowledge, such as knowledge of the form and function of arguments, has also been shown to facilitate the evaluation of arguments (Christodoulou & Diakidoy, 2020).

Finally, students would benefit from explicit instruction about how author information can be used to justify the author’s expertise and benevolence, and why one type of evidence is stronger than another. Teachers could, for example, model thinking (see Coiro, 2011) about whether the author’s expertise matches the topic of the text, and what ethical and/or journalistic principles (ideally) direct researchers and journalists in their work. It could also be useful to demonstrate why causal claims cannot be proved by a single observation or by personal experiences. For example, students could brainstorm potential competing explanations for what might have caused the phenomenon in question. Having students discuss and reason about credibility in small groups and share their reasoning with the class could provide teachers with opportunities to nudge shallow reasoning toward deeper reasoning. As the accuracy of information cannot always be judged by examining one single text, teachers should also encourage students to confirm the accuracy of information by checking other resources (Wineburg, Breakstone, McGrew, Smith, & Ortega, 2022).

Ethical Approval

The ethical statement has been given by the Ethics Committee of the Tampere Region.

Author Contributions

CRediT: Carita Kiili: Conceptualization, Methodology, Formal analysis, Resources, Writing - original draft, Project administration, Funding acquisition. Helge I. Strømsø: Conceptualization, Methodology, Resources, Writing - review & editing. Ivar Bråten: Conceptualization, Methodology, Resources, Writing - review & editing. Jenni Ruotsalainen: Writing - review & editing. Eija Räikkönen: Methodology, Writing - review & editing.

Supplemental Material

Supplemental material for this article can be accessed online.

Acknowledgement

The authors would like to thank Jari Hämäläinen (Symcode Oy) for the software development of CORRE and Riikka Anttonen for helping with the data collection.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Data Availability Statement

Data are available on request from the authors.

Additional information

Funding

The study was funded by the Research Council of Finland (No. 324524). The work of the third author was supported by the Strategic Research Council (No. 335625, No. 335727).

References

  • Afflerbach, P. (2017). Understanding and using reading assessment, K–12 (3rd ed.). Alexandria, VA: ASCD/International Literacy Association.
  • Afflerbach, P., Cho, B.-Y., & Kim, J.-Y. (2015). Conceptualizing and assessing higher-order thinking in reading. Theory into Practice, 54(3), 203–212. doi:10.1080/00405841.2015.1044367
  • Ahmed, Y., Francis, D. J., York, M., Fletcher, J. M., Barnes, M., & Kulesz, P. (2016). Validation of the direct and inferential mediation (DIME) model of reading comprehension in grades 7 through 12. Contemporary Educational Psychology, 44-45, 68–82. doi:10.1016/j.cedpsych.2016.02.002
  • Andreassen, R., & Bråten, I. (2010). Examining the prediction of reading comprehension on different multiple-choice tests. Journal of Research in Reading, 33(3), 263–283. doi:10.1111/j.1467-9817.2009.01413.x
  • Anmarkrud, Ø., Bråten, I., Florit, E., & Mason, L. (2022). The role of individual differences in sourcing: A systematic review. Educational Psychology Review, 34(2), 749–792. doi:10.1007/s10648-021-09640-7
  • Barzilai, S., Thomm, E., & Shlomi-Elooz, T. (2020). Dealing with disagreement: The roles of topic familiarity and disagreement explanation in evaluation of conflicting expert claims and sources. Learning and Instruction, 69, Article 101367. doi:10.1016/j.learninstruc.2020.101367
  • Barzilai, S., Tzadok, E., & Eshet-Alkalai, Y. (2015). Sourcing while reading divergent Expert accounts: Pathways from views of knowing to written argumentation. Instructional Science, 43(6), 737–766. doi:10.1007/s11251-015-9359-4
  • Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30(1), 39–85. doi:10.1080/07370008.2011.636495
  • Braasch, J. L. G., Bråten, I., Strømsø, H. I., & Anmarkrud, Ø. (2014). Incremental theories of intelligence predict multiple document comprehension. Learning and Individual Differences, 31, 11–20. doi:10.1016/j.lindif.2013.12.012
  • Brand-Gruwel, S., Kammerer, Y., van Meeuwen, L., & van Gog, T. (2017). Source evaluation of domain experts and novices during Web search. Journal of Computer Assisted Learning, 33(3), 234–251. doi:10.1111/jcal.12162
  • Bråten, I., McCrudden, M. T., Stang Lund, E., Brante, E. W., & Strømsø, H. I. (2018). Task-oriented learning with multiple documents: Effects of topic familiarity, author expertise, and content relevance on document selection, processing, and use. Reading Research Quarterly, 53(3), 345–365. doi:10.1002/rrq.197
  • Breakstone, J., Smith, M., Wineburg, S., Rapaport, A., Carle, J., Garland, M., & Saavedra, A. (2021). Students’ civic online reasoning: A national portrait. Educational Researcher, 50(8), 505–515. doi:10.3102/0013189X211017495
  • Cervetti, G. N., & Wright, T. S. (2020). The role of knowledge in understanding and learning from text. In E. B. Moje, P. Afflerbach, P. Enciso, & N. K. Leseaux (Eds.), Handbook of reading research (Vol. 5, pp. 237–260). New York, NY: Routledge.
  • Christodoulou, S. A., & Diakidoy, I. A. N. (2020). The contribution of argument knowledge to the comprehension and critical evaluation of argumentative text. Contemporary Educational Psychology, 63, Article 101903. doi:10.1016/j.cedpsych.2020.101903
  • Coiro, J. (2011). Talking about reading as thinking: Modeling the hidden complexities of online reading comprehension. Theory into Practice, 50(2), 107–115. doi:10.1080/00405841.2011.558435
  • Coiro, J., Coscarelli, C., Maykel, C., & Forzani, E. (2015). Investigating criteria that seventh graders use to evaluate the quality of online information. Journal of Adolescent & Adult Literacy, 59(3), 287–297. doi:10.1002/jaal.448
  • Diakidoy, I. A. N., Ioannou, M. C., & Christodoulou, S. A. (2017). Reading argumentative texts: Comprehension and evaluation goals and outcomes. Reading and Writing, 30(9), 1869–1890. doi:10.1007/s11145-017-9757-x
  • Dyoniziak, Y., Potocki, A., & Rouet, J.- F. (2023). Role of advanced theory of mind in teenagers’ evaluation of source information. Discourse Processes, 60(4–5), 363–377. doi:10.1080/0163853X.2023.2197691
  • Ferguson, L. E., Bråten, I., & Strømsø, H. I. (2012). Epistemic cognition and change when students read multiple documents containing conflicting scientific evidence: A think-aloud study. Learning and Instruction, 22(2), 103–120. doi:10.1016/j.learninstruc.2011.08.002
  • Forzani, E. (2018). How well can students evaluate online science information? Contributions of prior knowledge, gender, socioeconomic status, and offline reading ability. Reading Research Quarterly, 53(4), 385–390. doi:10.1002/rrq.218
  • Forzani, E., Corrigan, J., & Kiili, C. (2022). What does more and less effective internet evaluation entail?: Investigating readers’ credibility judgments across content, source, and context. Computers in Human Behavior, 135, Article 107359. doi:10.1016/j.chb.2022.107359
  • Gersten, R., Fuchs, L. S., Williams, J. P., & Baker, S. (2001). Teaching reading comprehension strategies to students with learning disabilities: A review of research. Review of Educational Research, 71(2), 279–320. doi:10.3102/00346543071002279
  • Goldhammer, F., Naumann, J., Stelter, A., Tóth, K., Rölke, H., & Klieme, E. (2014). The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology, 106(3), 608–626. doi:10.1037/a0034716
  • Hämäläinen, E. K., Kiili, C., Räikkönen, E., & Marttunen, M. (2021). Students’ abilities to evaluate the credibility of online texts: The role of Internet-specific epistemic justifications. Journal of Computer Assisted Learning, 37(5), 1409–1422. doi:10.1111/jcal.12580
  • Hattan, C., Alexander, P. A., & Lupo, S. M. (2024). Leveraging what students know to make sense of texts: What the research says about prior knowledge activation. Review of Educational Research, 94(1), 73–111. doi:10.3102/00346543221148478
  • Holopainen, L., Kairaluoma, L., Nevala, J., Ahonen, T., & Aro, M. (2004). Lukivaikeuksien seulontatesti nuorille ja aikuisille [Dyslexia screening test for youth and adults]. Jyväskylä, Finland: Niilo Mäki Institute.
  • Jensen, K. L., & Elbro, C. (2022). Clozing in on reading comprehension: A deep cloze test of global inference making. Reading and Writing, 35(5), 1221–1237. doi:10.1007/s11145-021-10230-w
  • Kammerer, Y., Gottschling, S., & Bråten, I. (2021). The role of internet-specific justification beliefs in source evaluation and corroboration during web search on an unsettled socio-scientific issue. Journal of Educational Computing Research, 59(2), 342–378. doi:10.1177/0735633120952731
  • Kanniainen, L., Kiili, C., Tolvanen, A., Aro, M., & Leppänen, P. H. (2019). Literacy skills and online research and comprehension: Struggling readers face difficulties online. Reading and Writing, 32(9), 2201–2222. doi:10.1007/s11145-019-09944-9
  • Kanniainen, L., Kiili, C., Tolvanen, A., Utriainen, J., Aro, M., Leu, D. J., & Leppänen, P. H. (2022). Online research and comprehension performance profiles among sixth-grade students, including those with reading difficulties and/or attention and executive function difficulties. Reading Research Quarterly, 57(4), 1213–1235. doi:10.1002/rrq.463
  • Keenan, J. M., Betjemann, R. S., & Olson, R. K. (2008). Reading comprehension tests vary in the skills they assess: Differential dependence on decoding and oral comprehension. Scientific Studies of Reading, 12(3), 281–300. doi:10.1080/10888430802132279
  • Kiili, C., Bråten, I., Strømsø, H., Hagerman, M. S., Räikkönen, E., & Jyrkiäinen, A. (2022). Adolescents’ credibility justifications when evaluating online texts. Education and Information Technologies, 27(6), 7421–7450. doi:10.1007/s10639-022-10907-x
  • Kiili, C., Leu, D. J., Marttunen, M., Hautala, J., & Leppänen, P. H. T. (2018). Exploring early adolescents’ evaluation of academic and commercial online resources related to health. Reading and Writing, 31(3), 533–557. doi:10.1007/s11145-017-9797-2
  • Kiili, C., Räikkönen, E., Bråten, I., Strømsø, H. I., & Hagerman, M. S. (2023). Examining the structure of credibility evaluation when sixth graders read online texts. Journal of Computer Assisted Learning, 39(3), 954–969. doi:10.1111/jcal.12779
  • Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2), 163–182. doi:10.1037/0033-295x.95.2.163
  • Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, UK: Cambridge University Press.
  • Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156. doi:10.1177/1529100620946707
  • Larson, M., Britt, M. A., & Larson, A. A. (2004). Disfluencies in comprehending argumentative texts. Reading Psychology, 25(3), 205–224. doi:10.1080/02702710490489908
  • Lerkkanen, M.-K., Eklund, K., Löytynoja, H., Aro, M., & Poikkeus, A.-M. (2018). YKÄ – Luku- ja kirjoitustaidon arviointimenetelmä yläkouluun [YKÄ - Reading and writing assessments for secondary school]. Jyväskylä, Finland: Niilo Mäki Instituutti.
  • List, A. (2023). The limits of reasoning: Students’ evaluations of anecdotal, descriptive, correlational, and causal evidence. The Journal of Experimental Education, 92(1), 1–31. doi:10.1080/00220973.2023.2174487
  • List, A., & Alexander, P. A. (2018). Cold and warm perspectives on the cognitive affective engagement model of multiple source use. In J. L. G. Braasch, I. Bråten, & M. T. McCrudden (Eds.), Handbook of multiple source use (pp. 34–54). New York, NY: Routledge.
  • List, A., Du, H., & Lyu, B. (2021). Examining undergraduates’ text-based evidence identification, evaluation, and use. Reading and Writing, 35(5), 1059–1089. doi:10.1007/s11145-021-10219-5
  • Little, R. J. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404), 1198–1202. doi:10.2307/2290157
  • Language and Reading Research Consortium (LARRC), & Logan, J. (2017). Pressure points in reading comprehension: A quantile multiple regression analysis. Journal of Educational Psychology, 109(4), 451–464. doi:10.1037/edu0000150
  • Lucassen, T., Muilwijk, R., Noordzij, M. L., & Schraagen, J. M. (2013). Topic familiarity and information skills in online credibility evaluation. Journal of the American Society for Information Science and Technology, 64(2), 254–264. doi:10.1002/asi.22743
  • Mason, L., Boldrin, A., & Ariasi, N. (2010). Searching the Web to learn about a controversial topic: Are students epistemically active? Instructional Science, 38(6), 607–633. doi:10.1007/s11251-008-9089-y
  • Mason, L., Scrimin, S., Tornatora, M. C., Suitner, C., & Moè, A. (2018). Internet source evaluation: The role of implicit associations and psychophysiological self-regulation. Computers & Education, 119, 59–75. doi:10.1016/j.compedu.2017.12.009
  • McCarthy, K. S., Yan, E. F., Allen, L. K., Sonia, A. N., Magliano, J. P., & McNamara, D. S. (2022). On the basis of source: Impacts of individual differences on multiple-document integrated reading and writing tasks. Learning and Instruction, 79, Article 101599. doi:10.1016/j.learninstruc.2022.101599
  • McCrudden, M. T., & Barnes, A. (2016). Differences in student reasoning about belief-relevant arguments: A mixed methods study. Metacognition and Learning, 11, 275–303. doi:10.1007/s11409-015-9148-0
  • McCrudden, M. T., Stenseth, T., Bråten, I., & Strømsø, H. I. (2016). The effects of author expertise and content relevance on document selection: A mixed methods study. Journal of Educational Psychology, 108(2), 147–162. doi:10.1037/edu0000057
  • McGrew, S. (2021). Skipping the source and checking the contents: An in-depth look at students’ approaches to web evaluation. Computers in the Schools, 38(2), 75–97. doi:10.1080/07380569.2021.1912541
  • McNamara, D. S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. Psychology of Learning and Motivation, 51, 297–384. doi:10.1016/S0079-7421(09)51009-2
  • Münchow, H., Tiffin-Richards, S. P., Fleischmann, L., Pieschl, S., & Richter, T. (2023). Promoting students’ argument comprehension and evaluation skills: Implementation of two training interventions in higher education. Zeitschrift für Erziehungswissenschaft, 26(3), 703–725. doi:10.1007/s11618-023-01147-x
  • Official Statistics of Finland. (2020). Educational structure of population. Population with educational qualification by level of education, field of education and gender. Helsinki, Finland: Statistics Finland.
  • Organisation for Economic Co-operation and Development. (2019). PISA 2018 Results (Volume I): What students know and can do. Paris, France: OECD Publishing. doi:10.1787/5f07c754-en
  • Perfetti, C. A., Rouet, J.-F., & Britt, M. A. (1999). Towards a theory of documents representation. In H. van Oostendorp & S. Goldman (Eds.), The construction of mental representations during reading (pp. 99–122). Mahwah, NJ: Erlbaum.
  • Potocki, A., de Pereyra, G., Ros, C., Macedo-Rouet, M., Stadtler, M., Salmerón, L., & Rouet, J.-F. (2020). The development of source evaluation skills during adolescence: Exploring different levels of source processing and their relationships. Journal for the Study of Education and Development, 43(1), 19–59. doi:10.1080/02103702.2019.1690848
  • Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Erlbaum.
  • Richter, T., Münchow, H., & Abendroth, J. (2020). The role of validation in integrating multiple perspectives. In P. Van Meter, A. List, D. Lombardi, & P. Kendeou (Eds.), Handbook of learning from multiple representations and perspectives (pp. 259–275). New York, NY: Routledge.
  • Sparks, J. R., van Rijn, P. W., & Deane, P. (2021). Assessing source evaluation skills of middle school students using learning progressions. Educational Assessment, 26(4), 213–240. doi:10.1080/10627197.2021.1966299
  • Stadtler, M., & Bromme, R. (2014). The content–source integration model: A taxonomic description of how readers comprehend conflicting scientific information. In D. N. Rapp & J. L. G. Braasch (Eds.), Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences (pp. 379–402). Cambridge, MA: MIT Press.
  • Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Boston, MA: Pearson Education.
  • Thomm, E., & Bromme, R. (2016). How source information shapes lay interpretations of science conflicts: Interplay between sourcing, conflict explanation, source evaluation, and claim evaluation. Reading and Writing, 29(8), 1629–1652. doi:10.1007/s11145-016-9638-8
  • Torppa, M., Niemi, P., Vasalampi, K., Lerkkanen, M.-K., Tolvanen, A., & Poikkeus, A.-M. (2020). Leisure reading (but not any kind) and reading comprehension support each other—A longitudinal study across grades 1 and 9. Child Development, 91(3), 876–900. doi:10.1111/cdev.13241
  • Wineburg, S. S. (1991). On the reading of historical texts: Notes on the breach between school and academy. American Educational Research Journal, 28(3), 495–519. doi:10.3102/00028312028003495
  • Wineburg, S., Breakstone, J., McGrew, S., Smith, M. D., & Ortega, T. (2022). Lateral reading on the open Internet: A district-wide field study in high school government classes. Journal of Educational Psychology, 114(5), 893–909. doi:10.1037/edu0000740