Research Article

Students’ argumentation in the contexts of science, religious education, and interdisciplinary science-religious education scenarios


ABSTRACT

Background

Argumentation, that is, the coordination of evidence and reasons to support claims, is an important skill for democratic society and for developing subject-specific literacies, and it can be embedded in multiple school subjects. While argumentation has been extensively researched in science education, interdisciplinary argumentation is less explored, particularly between subjects where collaboration is not the norm, such as science and religious education (RE). Yet everyday issues often involve considering information from multiple sources, such as scientific information or ethical, moral, or religious perspectives.

Purpose

The purpose of this study was to better understand students’ abilities in argumentation within and across the school subjects of science and RE to inform research and practice of interdisciplinary argumentation.

Sample

The participants of this study were 457 students, aged between 11 and 14 years, from 10 secondary schools in England. Following data cleaning, 394 student responses were analysed.

Design and Methods

Students completed written assessments for argumentation comprising three tasks, administered together, each situated within a different subject context: (1) science, (2) RE, and (3) an interdisciplinary context which involved argumentation from science and RE.

Results

In each of the three contexts, high proportions of students achieved all available marks for identifying claims and evidence. These proportions dropped when constructing the link between claim and evidence (warrant) and when constructing an evaluative argument. Higher performances were generally noted in the context of science, and students experienced particular challenges in argumentation in the RE scenario.

Conclusions

This study contributes to our understanding of the challenges and successes of students’ argumentation within and across the subjects of science and RE. Implications for both research and practice are discussed.

Introduction

Argumentation is often defined as the justification of claims with evidence and reasons (Toulmin 1958). It is widely recognised as an important skill to learn in school, both for the development of critically literate citizens and for a deep understanding of the disciplines being studied (e.g. European Union 2006; Monte-Sano 2016). Crucially, argumentation is not some marginal nicety for a select group of students: being able to select knowledge and reason with it is a foundation of learning across the curriculum (Kuhn and Moore 2015; Wolfe 2011). Beyond this, many of the issues that we face in everyday life are not confined to the boundaries of disciplines or subjects, but are complex and multi- or interdisciplinary in nature, drawing on information from a range of sources (Crujeiras-Pérez and Jiménez-Aleixandre 2019). In such contexts, attention needs to be focused simultaneously on developing students’ argumentation within and across subjects/disciplines, including those that consider morals and ethics. Clearly, students need to be taught the skills of argumentation, but consideration also needs to be given to how these skills are recognised and assessed by both teachers and researchers (Duschl and Osborne 2002). While argumentation has been extensively researched in many contexts (Rapanta, Garcia-Mila, and Gilabert 2013), there has been limited research on how to assess school students’ competence in argumentation in science (Osborne et al. 2016), and fewer studies still simultaneously consider science and other subjects, such as Religious Education, or interdisciplinary argumentation.

This paper investigates students’ skills in argumentation in three tasks, each embedded in a different context: science, Religious Education, and an interdisciplinary science and Religious Education context. We use the term ‘context’ to include the two curriculum subjects and the interdisciplinary space. Performance in each of these individual contexts is of inherent interest; however, our structured and theoretically informed approach to task construction also allows some tentative comparisons to be made between performances on tasks embedded as typical arguments in these three contexts. The findings reveal some interesting challenges and opportunities for supporting students’ argumentation in and beyond the subject of science, with useful implications for researchers and practitioners.

Argumentation

Toulmin’s (1958) model sets out the structure of arguments, including components such as claims, data/evidence, warrants, qualifiers and rebuttals. Toulmin argued that argumentation could have both generic and discipline-specific features, varying in the form of argumentation and in the nature of the evidence being utilised (Wolfe 2011). Though argumentation is thought of as the justification of claims with evidence and reasons, it is recognised that the acts of constructing and critiquing arguments within different disciplines require slightly differing, but complementary, skillsets of argumentation (Osborne et al. 2016). These skills can be fostered in school through multiple school subjects where argumentation is an important epistemic practice of the discipline (Wolfe 2011). Argumentation can reflect the epistemology of a discipline or subject as the epistemic criteria of the subject are enacted: what counts as knowledge, or as evidence that can legitimately support claims within the subject. In this sense, it is an important component of the procedures and practices of a subject, in creating and deploying substantive content knowledge. The practice of argumentation in science and in religious education will now be considered in turn, before turning to the interdisciplinary context.

Argumentation in the disciplines

Argumentation has been a highly prominent area of research in science education for many years and has been extensively investigated under the broad goals of scientific literacy (Erduran, Ozdem, and Park 2015): for example, developing students’ reasoning and ability to draw justified conclusions (Sadler 2004; Sadler and Zeidler 2005), engagement in discourse in educational contexts (Osborne, Erduran, and Simon 2004; Sadler 2006), and acquisition of scientific knowledge (Schwarz et al. 2003; Venville and Dawson 2010; Pabuccu and Erduran 2017). Furthermore, it enables students to understand how science works and how scientific knowledge is justified or evaluated, and as such it represents an important epistemic practice of the discipline (Erduran and Dagher 2014). Research on argumentation in science education has been broad and far-reaching, including work on the understanding of arguments from both students’ (Berland and Reiser 2009) and teachers’ perspectives (Sadler 2006), the quality of arguments produced by students (Erduran, Simon, and Osborne 2004), teachers’ role in classroom argumentation (McNeill 2009; Simon, Erduran, and Osborne 2006), and the influence of argumentation on learning scientific skills (Aydeniz et al. 2012; Duschl and Osborne 2002). Within this body of research there has been a focus on socio-scientific issues (SSI), or socio-scientific argumentation (Sadler 2004; Erduran, Ozdem, and Park 2015). These are issues with a scientific basis but of societal concern, and they may be debated beyond the scientific context to include, for example, ethical and moral dimensions (Sadler 2011). However, such debates have not always explicitly drawn on concepts from religious education as part of the argumentation.

Religious Education in England, as in much of Western Europe, is often pluralistic in nature and concerned with the impartial study of different religions and worldviews, often through dialogical learning, rather than with induction into a particular faith (Jackson 2015; Jawoniyi 2015). Thus, as a school subject, it is often positioned as a multidisciplinary field that draws on many cognate disciplines, such as philosophy, theology, sociology and psychology, among others (Freathy et al. 2017). Unlike in science education, argumentation has been less extensively researched in the context of religious education. However, argumentation is a strong feature of many Religious Education curriculum documents in England, as students are asked to analyse and evaluate various truth claims about faith and various moral positions, and to generate well-informed and reasoned responses of their own that draw on a range of sources (Chan, Fancourt, and Guilfoyle 2020). This form of religious education is often intended to contribute to pupils’ understanding of, and ability to contribute to, issues of societal concern, thereby overlapping with SSI.

Interdisciplinary argumentation and transfer

Many issues we face in daily life require interdisciplinary thinking and complex reasoning that draw on multiple disciplinary knowledge bases (Crujeiras-Pérez and Jimenez-Aleixandre 2019) or the integration of moral and ethical values (Joshi 2016). However, school subjects are often presented in fragmented and siloed ways that limit integration (Billingsley et al. 2018), and it is unusual for science and RE teachers to collaborate (Hall et al. 2014). Focusing on argumentation can help generate coherence across the curriculum by highlighting the similarities and differences between subjects. In this sense, argumentation can be a boundary-crossing mechanism. The learning of argumentation can clarify the distinctiveness of a particular subject in terms of how knowledge is justified and what counts as evidence, and can also show that there are general skills of argumentation across subjects, which can be applied beyond the boundaries of the classroom. This is useful for coherence in learning and for gaining transferable skills, but also for moving to the more complex understanding of arguments needed in everyday life, where issues often require the consideration of information from multiple sources, including scientific information and ethical considerations (Levinson 2010). A recent example is the evolving debate about the wearing of face masks during the Covid-19 pandemic, which balanced the scientific efficacy of wearing face masks with the ethics of equitable allocation (Horwell and McDonald 2020).

While the research literature on argumentation in science education is extensive, interdisciplinary argumentation between science and other disciplines is less explored (Erduran et al. 2019). Although there has been research on argumentation in science and religion debates (e.g. Basel et al. 2014; Weiß 2016), these studies have often focused on aetiological issues and have been conducted in German-speaking contexts where RE is more confessional in nature.

A number of studies have considered the extent to which argumentation skills might transfer between contexts, though these have often focused on transfer between familiar and unfamiliar contexts within a particular subject area. For example, Zohar (1996) demonstrated the successful transfer of reasoning skills from one biological topic area (seed germination) to another (rodent population size), and Khishfe (2014) demonstrated the transfer of argumentation skills from a familiar scientific context (water usage) to another familiar scientific context (water fluoridation) and to an unfamiliar scientific context (genetic modification). Other researchers have examined students’ argumentation ability in interdisciplinary contexts, such as socio-scientific issues (e.g. Dawson and Carson 2020) or science-religion debates (note the distinction between religion and religious education) (e.g. Basel et al. 2014). Each of these has focused on the explicit teaching of argumentation skills and on giving students time in the classroom to develop them. However, few have focused on the comparison, relationship, or transfer between science and other subject domains (Osborne et al. 2016), with a dearth of research simultaneously comparing student argumentation in science and RE as individual subject areas alongside argumentation in the interdisciplinary science-RE context.

Nussbaum and Asterhan (2016) note some difficulties with the transfer of argumentation from one domain or context to another. They point out that even if students develop a nuanced understanding of the need for evidence within arguments, they may not be able to identify what counts as evidence, or may misidentify evidence, in unfamiliar contexts. Additionally, a lack of expertise or knowledge in the unfamiliar context may limit their ability to utilise or adjudicate evidence. Nevertheless, developing students’ knowledge and understanding in domains should facilitate the transfer of knowledge between them.

Measures of argumentation

Assessing the quality of student argumentation is persistently challenging for the field, with researchers proposing a range of ways to do so (Erduran 2008; Sampson and Clarke 2008). The lack of high-quality assessment measures of students’ skills and proficiency in argumentation has been recognised in previous reviews (Lee et al. 2014). Scenarios have been used as a route to assessing argumentation, in SSI contexts in particular (e.g. Dawson and Carson 2017), with responses being judged at different levels of quality depending on the components of argumentation present in the student response. However, there is also a particular need for measures which recognise the subject-specific nature of argumentation (Wolfe 2011) and which differentiate performance between components of the argumentation skill (Osborne et al. 2016). Osborne et al. (2016) produced and validated a learning progression for argumentation in science and developed structured argumentation tasks based on scenarios to assess student argumentation. Five levels of their learning progression pertinent to this study, with illustrative examples in science and RE contexts, are displayed in Table 1.

Table 1. Examples of different levels of argumentation in science and religious education (after Osborne et al. 2016).

Assessing student argumentation using this heavily structured approach may afford greater opportunity to compare argumentation performance between tasks in ways that are less feasible in more open tasks. Furthermore, the clear stratification of the sub-components of argumentation (or ‘levels’ of the learning progression) affords the opportunity to focus on student success and challenges in particular components of argumentation.

It can be seen that argumentation is an important skill for students to develop within and across subject disciplines, and that little work has been done to investigate the measurement of students’ skills in argumentation. Hence, this study sought to address the following research question: How does student performance vary between argumentation tasks in science, RE and Science-RE cross-curricular contexts?

Materials and methods

Instrument

Even with decades of research on argumentation, in science education and more broadly, the assessment of students’ argumentation remains a challenge (Henderson et al. 2018). This is partly due to the diversity of theoretical approaches to argumentation, but it also reflects the difficulty of capturing such a complex competency (Rapanta, Garcia-Mila, and Gilabert 2013). Osborne et al. (2016) sought to address the limited research on how to assess students’ ability in argumentation through the construction and validation of assessments for argumentation in scientific and general contexts. Both the learning progression that underpins those assessments and their structural approach to the tasks informed the assessment used in this research study. The instrument (Appendix 1) used to assess students’ argumentation skills comprised three tasks: ‘Christmas for non-Christians’ (CfNC; Religious Education), addressing arguments over whether religious festivals can be celebrated by non-adherents; ‘What’s growing?’ (WG; Science), addressing the biological distinction between plants and fungi; and ‘A zoo near you’ (ZNY; Science and Religious Education), addressing an SSI concerning the ethics and value of zoos, previously addressed in SSI research (Osborne, Erduran, and Simon 2004) but here explicitly including an element from religious education in the shared religious notion of stewardship (Hitzhusen and Tucker 2013). Each task presented two characters with different views on the topic, so that students had to identify their individual lines of argumentation.

The tasks had a similar question structure, which is detailed in Table 2. Each task consisted of seven items (A-F), though the exact presentation differed slightly to avoid repetition, fatigue and learning from the test. The test was also administered in two different sequences to ascertain whether any sequence effects would emerge. Each task contains two items related to identifying the claim (graded 0 or 1), two items related to identifying the evidence (graded 0 or 1), two items that ask students to explain the reasoning/warrant that links the evidence and claim (graded 0, 1, or 2), and one item that expects the student to construct an argument by asking them to decide between the two competing arguments provided (graded 0, 1, 2, or 3). Toulmin’s elements of rebuttal and qualification were not assessed. In total, then, the highest possible score is 11 for each task, or 33 for the whole assessment, as set out in Table 2.

Table 2. Structure of student argumentation tasks.
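To make this grading scheme concrete, the following minimal Python sketch encodes the item counts and per-item maxima described above. It is illustrative only: the level names are our hypothetical labels, not those used on the instrument itself.

```python
# Hypothetical encoding of the task structure described in the text;
# level names are illustrative labels, not the instrument's own wording.
TASK_STRUCTURE = {
    "identify_claim":     {"items": 2, "max_mark": 1},  # each graded 0 or 1
    "identify_evidence":  {"items": 2, "max_mark": 1},  # each graded 0 or 1
    "warrant":            {"items": 2, "max_mark": 2},  # each graded 0, 1, or 2
    "construct_argument": {"items": 1, "max_mark": 3},  # graded 0, 1, 2, or 3
}

def max_task_score(structure: dict) -> int:
    """Maximum marks available on one task."""
    return sum(lvl["items"] * lvl["max_mark"] for lvl in structure.values())

assert max_task_score(TASK_STRUCTURE) == 11      # 11 marks per task
assert 3 * max_task_score(TASK_STRUCTURE) == 33  # 33 across the three tasks
```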

The project was designed with Key Stage 3 (11–14 yrs) students in mind, and teachers were recruited to the project on the basis that they fitted this criterion. The assessment was constructed so that the subject-matter content of the test would be largely inconsequential to students’ ability to engage with the tasks across the three year groups of Key Stage 3 (Years 7–9). While the content may be related to the subject area, students would not require deep prior knowledge, as the necessary content was provided, and the topics were selected so that students would be broadly familiar with the ideas presented.

When tests were administered, guidance was provided explaining that students could ask questions about the meaning of any of the words on the assessment. The purpose was not to test content knowledge or reading ability, but the ability to identify components of arguments and construct an argument based on the information provided.

Piloting was carried out to get feedback on key elements of the design and construction. Six tasks (including initial versions of the three ultimately used) were provided to 3 teachers and 66 students to trial. Teachers provided their professional perspectives on the tasks, including on content area, language used, question structure, length, and even minor details such as the names and images used. Teachers were also able to report specific difficulties their students experienced when attempting the tasks. Students helped to inform the expected responses and the construction of the grading rubrics for the tasks. Through these processes, the instruments were refined to balance each of the considerations, and reduced to the three most suitable tasks.

There are some points to be noted about the design of these instruments and the implications for the interpretation of the findings. First, the tasks were intentionally designed using similar structures, informed by Toulminian argumentation as well as by validated task structures and a learning progression for assessing argumentation (Osborne et al. 2016). This, along with the piloting for feedback on difficulty, should enhance the comparability of the assessments. However, given the different subject content and topics, it cannot be guaranteed that the difficulty level is identical, so comparisons need to be interpreted cautiously and conclusions drawn tentatively. Furthermore, we note that while these tasks were designed to be rather ‘typical’ of the subject context they represent, the nature of argumentation in any discipline or subject is more complex than can be represented by one task. Students’ performance on a single task for a context will not necessarily represent their performance for all argumentation in that context, but only for a ‘typical’ example. Therefore, claims about students’ argumentation in each of these disciplines or contexts also need to be considered cautiously.

Participants

The participants in this study were the students in the classes of teachers who were involved in a professional development programme for the teaching and learning of argumentation in science and religious education in England, the Oxford Argumentation in Religion and Science (OARS) project (Erduran 2020). As part of this professional development, teachers had selected these particular classes to trial some new teaching approaches. This assessment was given before they trialled any new teaching approaches beyond their normal practice.

Four hundred and fifty-seven students from ten schools completed the assessment. The vast majority of these students were in Year 9 (n = 404). The remaining were in Year 7 (n = 28) and Year 8 (n = 25). Just over two-thirds of the respondents were male (67%). The mean age was 13 yrs (SD 0.78 yrs).

For each item, across all three tasks, approximately 6% of values were missing (M = 5.96%, SD = 0.91%). However, given the comparative analysis across question types and domains (RE, science, interdisciplinary), missing values anywhere within the assessment would be impactful. Missing values were therefore dealt with in two ways, as listed below (a minimal code sketch follows the list):

  1. If a response box was left empty but other boxes surrounding it on that same task were complete, the empty box was graded as 0.

  2. If a response box was left empty and the rest of the task was empty, or a large portion of that task was unanswered, then this assessment was removed for the purposes of analysis.
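A minimal sketch of these two rules in Python/pandas is given below, assuming one row per student, one column per response box, and NaN marking an empty box; the 0.5 ‘large portion’ threshold is our assumption for illustration, not a value taken from the study.

```python
import pandas as pd

# Sketch of the two cleaning rules; the column layout and the 'large
# portion' threshold are assumptions for illustration only.
EMPTY_TASK_THRESHOLD = 0.5  # hypothetical cutoff for 'large portion unanswered'

def clean_task(responses: pd.DataFrame, task_cols: list) -> pd.DataFrame:
    """Apply rules 1 and 2 to the response boxes of a single task."""
    frac_empty = responses[task_cols].isna().mean(axis=1)
    # Rule 2: drop assessments where most of the task is unanswered.
    kept = responses.loc[frac_empty < EMPTY_TASK_THRESHOLD].copy()
    # Rule 1: isolated empty boxes on an otherwise-attempted task score 0.
    kept[task_cols] = kept[task_cols].fillna(0)
    return kept
```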

Following this data cleaning process, 394 student assessments remained for analysis (Table 3). The students in this data set were still majority Year 9 (90.1%) and male (69%), with a mean age of 13 (SD = 1.1).

Table 3. Demographics of students in cleaned data set.

Analysis

The grading of the assessment was conducted by the three authors. We sought to ensure robust reliability between the grading of multiple raters, using Krippendorff’s alpha (Hayes and Krippendorff 2007) to estimate the interrater reliability between the three raters. Rubrics were initially created on the basis of expected responses, the literature, and feedback from teachers and expert academics. We refined these rubrics through an iterative process of establishing intercoder reliability.

Initially, intercoder reliability across the whole assessment was low (α = 0.6870). Following a discussion of disagreements and refinement of the rubrics in use, we evaluated another sample of student assessments and interrater reliability improved (α = 0.7258). However, in order to scrutinise and improve the reliability further, we examined each of the different question types. We had complete agreement for 1-mark questions (α = 1.0000) and acceptable agreement for 2-mark questions (α = 0.7632). However, 3-mark questions were more problematic (α = 0.4851). We discussed the disagreements for these questions in depth and revised the rubrics accordingly. Taking a new sample of questions, we achieved an acceptable reliability (α = 0.8674). With this established, each rater continued to grade an allocation of between 100 and 250 student assessments.
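For readers who wish to run a similar reliability check, one possible implementation uses the third-party Python package krippendorff; the ratings below are invented for illustration and are not the study’s data.

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Each row holds one rater's marks for the same sample of items;
# np.nan marks an item that rater did not grade. Marks are ordinal (0-3).
ratings = np.array([
    [1, 0, 2, 3, 1, np.nan],  # rater A (illustrative values)
    [1, 0, 2, 2, 1, 0],       # rater B
    [1, 1, 2, 3, 1, 0],       # rater C
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.4f}")
```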

Means, standard deviations, and patterns of distribution were considered for each task’s total score and for the sub-components assessing different levels of argumentation. Performance on each of these levels of argumentation is also considered across the three tasks.
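As a sketch, assuming the graded scores are held in a long-format table (the column names here are ours, not the project’s), this descriptive analysis amounts to grouped means and standard deviations:

```python
import pandas as pd

# Hypothetical long-format table of graded scores; in the study each
# student has scores for three tasks and four levels of argumentation.
scores = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2],
    "task":    ["CfNC", "WG", "ZNY", "CfNC", "WG", "ZNY"],
    "level":   ["claim", "claim", "claim", "warrant", "warrant", "warrant"],
    "score":   [2, 2, 1, 1, 3, 2],
})

# Means and standard deviations per task and per level of argumentation.
print(scores.groupby("task")["score"].agg(["mean", "std"]))
print(scores.groupby("level")["score"].agg(["mean", "std"]))
```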

Results

The results will be presented from three perspectives. First, in terms of the total scores achieved for each task: ‘Christmas for Non-Christians’ (CfNC) in the Religious Education (RE) context, ‘What’s Growing?’ (WG) in the science context, and ‘A Zoo Near You’ (ZNY) in the science-RE cross-curricular context. Second, each individual task’s scores will be presented with respect to the levels of argumentation assessed (Claim, Evidence, Warrant, and Constructing an Evaluative Argument). Third, we consider the total performances for each level of argumentation. The means and standard deviations for each of these are presented in Table 4 and the distributions are represented in Figures 1–5.

Table 4. Means and standard deviations of performance for each task and level of argumentation.

Figure 1. Distribution of task scores.

Figure 2. Distribution of scores for identifying claims.

Figure 3. Distribution of scores for identifying evidence.

Figure 4. Distribution of scores for warrant.

Figure 5. Distribution of scores for constructing evaluative argument.

Total task scores

In the case of the CfNC task, the distribution appears skewed somewhat more towards the lower scores (M = 5.28, SD = 2.28) than the WG (M = 6.76, SD = 2.28) and ZNY tasks (M = 6.19, SD = 2.02). Each context has a similar spread of scores, with standard deviations ranging from 2.02 to 2.28. Figure 1 shows these distributions.

Next, we consider how the cohort performs at each level of argumentation. Figures 2–5 show the performance for Identifying Claim, Identifying Evidence, Constructing a Link between Claim and Evidence (Warrant), and Constructing a Two-sided Comparative Argument respectively in each of the three contexts. These findings are then unpacked for each context in turn in the text below.

RE: Christmas for non-Christians

When identifying claims in the CfNC task, 76% of students were able to identify the claims being asserted by both characters in the argumentation task, 22% could identify one of the two claims, and only 2% were unable to identify either claim correctly (M = 1.74, SD = 0.49). Identifying the evidence in the arguments was a similarly graded task, allowing for some comparison (M = 1.22, SD = 0.77). In this case, more students were unsuccessful in identifying the evidence for either argument (21%) and fewer were successful at identifying the evidence that both characters used to support their claims (43%). The task components that focused on the warrant and on constructing an evaluative argument were graded differently, but both distributions skew towards the lower end of the available marks. Looking across Figures 2–5, the performance for the CfNC task continues to shift more to the left.

SCIRE: a zoo near you

In the ZNY task, the pattern of performance across the various elements followed a similar trajectory, with the proportion obtaining the highest mark decreasing in subsequent components: 96% identified both claims (M = 1.95, SD = 0.27), 66% identified evidence for both arguments (M = 1.61, SD = 0.27), and scores continued towards the lower end of the available marks for the warrant and constructing an evaluative argument elements, where the majority achieved less than half the available marks. More students also achieved zero marks for the warrant element (24%) than for the constructing an evaluative argument element (19%).

SCI: what’s growing?

When identifying claims in the WG task, 84% of students were able to identify the claims being asserted by both characters in the argumentation task, 9% identified one of two claims, while 7% identified neither claim (M = 1.77, SD = 0.57). There was again a drop in performance when progressing to the next task element, where 66% of the students could identify the evidence for both arguments in this task (M = 1.58, SD = 0.66). For the two later elements, the students’ performance appeared to peak in the middle with a slight skew towards the higher end for constructing an evaluative argument, where 67% achieved 2 or 3 marks. Again, more students achieved zero in the warrant element (19%) than in the constructing an evaluative argument element (14%).

Total scores for each level of argumentation

The vast majority of students were generally successful on most items identifying claims (M = 5.47, SD = 0.89). The distribution of scores for identifying the evidence of the argument was more spread out, with fewer students achieving six marks (M = 4.40, SD = 1.26); this is highlighted in Figure 3 (fewer than 67% identified the evidence in both items in any of the contexts). The distribution of total scores for the warrant element was quite spread out and tended towards the lower end: while the maximum score was 12, the mean score was 4.53 (SD = 2.20). Figure 4 shows how score distributions differ between the domains, with higher proportions of students scoring in the middle for WG, higher proportions scoring lower in CfNC, and proportions more evenly spread for ZNY. There was a wider spread of scores for constructing evaluative argument items, with a tendency towards the lower values; the mean score for these items overall was 3.84 (SD = 1.96). Figure 5 appears to show that higher scores were achieved by higher proportions of students in the domain of science. When seen in the context of the other tasks in Figures 2–5, it can be visually observed across the figures that scores shift more to the left for the CfNC task. With the exception of identifying evidence, performances were higher for the WG than the ZNY task at each level of argumentation.

Discussion

The purpose of this study was to examine how students’ argumentation ability varies between science, RE, and science-RE interdisciplinary contexts. Overall performance was generally higher in the science context than in the cross-curricular context, and higher in the cross-curricular context than in the RE context alone. This appears to confirm and extend the findings of Basel et al. (2014), who state that students found it easier to generate arguments from a scientific perspective in the context of a science-religion debate. This might perhaps be explained by the distinctions between arguments as rationalistic, emotive, or intuitive (Sadler and Zeidler 2005). That is to say, the arguments in the science context were perhaps more rationalistic, invoking less emotive or intuitive reasoning, and therefore more straightforward to grapple with. Conversely, Osborne et al. (2016) reported that students appear to find argumentation more difficult in science than in more general contexts, because of the need for specific content knowledge. However, there are two reasons to expect different patterns within the present study. Firstly, while the interdisciplinary context may appear similar to ‘general’ argumentation contexts, it is differentiated by the presence of information from both scientific and RE perspectives. Secondly, the study attempted to negate the influence of content knowledge by providing the necessary information and by using concepts to which students would have been exposed previously.

When examining the progression of performance within each context for the sub-components of argumentation, it can be seen that performance drops in subsequent questions. Between 76% and 96% of students identified both claims in each context, but success in identifying both items of evidence dropped to 46–67%. The identification of the warrant and the construction of the evaluative argument were scored differently, but performance generally skewed towards the lower grades. This pattern of performance is in line with the expectations of the theoretical model, in which subsequent questions addressed levels of argumentation considered to be more difficult (Osborne et al. 2016).

The overall performance at identifying claims across all three contexts was generally quite high, but performance was higher in the science-RE context than in the other two contexts, and the RE context had observably lower performance in successfully identifying both claims.

In the case of identifying evidence, while all scores were lower than for the identification of claims in their respective contexts, performance on the CfNC task was lower than in the other two contexts. Recent research has shown that students often find the identification of evidence particularly difficult in scientific argumentation (Rodríguez-Mora, Cebrián-Robles, and Blanco-López 2021), but the lower performances in CfNC may be attributed to difficulties surrounding the nature of evidence in this task, which may be typical of the RE subject context. Prior research on science and RE teachers’ views about the nature of argumentation in these two subjects highlighted differences in the nature and range of acceptable evidence (Erduran, Guilfoyle, and Park 2020, 2021). As RE is itself a multidisciplinary subject, there are arguably no clear standards about what counts as evidence for an argument in the subject, or about how different evidence is warranted (Chan, Fancourt, and Guilfoyle 2020). This possibility of operating with ill-defined argumentation standards in RE is also indicated in students’ performances on warrants, where RE performance was again lower than in the other two contexts. In terms of constructing evaluative arguments, too, performances were generally higher in the science context than in either of the other two contexts. These findings seem to lend further support to the interpretation that science could perhaps be considered more ‘straightforward’, with more defined argumentation and reasoning on more rationalistic terms alone, while the other contexts are made more challenging by a lack of clarity about what counts as evidence or warrant and by the likelihood of invoking emotive or intuitive reasoning.

It is perhaps also worth noting that the level of argumentation concerned with warranting seemed to elicit lower levels of success, where across all three tasks there were higher numbers of zero-scores, and distributions skewed towards lower marks, at the level of warrant than at the level of constructing an evaluative argument. This may highlight a particular challenge in engaging in this component of argumentation across all contexts, which may be worthy of further attention in research and teaching.

The concurrent assessment of argumentation in RE, science and the interdisciplinary context is novel in the research literature. While this offers new approaches for research on interdisciplinary argumentation, there are further refinements which could be made to explore these findings in greater detail. For example, if pluralistic RE is discursive and dialogical, then the elements of argumentation that we did not assess (i.e. rebuttal and qualification) might be more developed there than in the other two contexts; students might be developing all these skills concurrently in RE, whereas in science these features are not addressed until later in the curriculum.

The findings of this study make a number of important contributions to research and practice. While there has been interest in science-religion or science-RE debates in the past, these have rarely focused on argumentation, and where they have, it has been within settings with theological or confessional approaches to RE (e.g. Basel et al. 2014) rather than pluralistic RE as in England. Furthermore, while argumentation in science education has been extensively researched (Lin and Chan 2018) and interest in interdisciplinarity is growing, much work remains to be done in relating it to other traditionally disparate school subjects such as RE (Erduran et al. 2019). Gaining a deeper understanding of the challenges and successes students experience when engaging in argumentation across these contexts is important because it allows us, in research and practice, to home in on the challenges and capitalise on the successes. For example, the stronger performances noted in the science context raise questions about the comparative challenges of identifying evidence and warrants within the RE or interdisciplinary contexts, and about the extent to which standards about what counts as evidence are clear. But there are many indicators for optimism too. The differences between the contexts do not suggest a huge deficit; the spread of overall performance appeared to show a realistic distribution in which a sizeable proportion of students were capable of engaging in argumentation successfully, and many did so across multiple contexts. Evidently, it is not beyond the capabilities of lower secondary school students to engage in argumentation tasks within and between their school subjects.

Given the benefits of argumentation for developing subject literacies, as well as its importance in growing individuals’ capacities to engage critically as active citizens (Monte-Sano 2016), and given that most real-life issues require the integration of information from multiple sources (Crujeiras-Pérez and Jimenez-Aleixandre 2019), there is a pressing need to better understand how argumentation can be integrated across the school curriculum. This study advances our understanding of the challenges and opportunities in advancing the agenda for interdisciplinary argumentation.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed at https://doi.org/10.1080/02635143.2021.1947223.

Additional information

Funding

This work was supported by the Templeton World Charity Foundation [TWCF0238].

References

  • Aydeniz, M., A. Pabuccu, P. S. Cetin, and E. Kaya. 2012. “Impact of Argumentation on College Students’ Conceptual Understanding of Properties and Behaviors of Gases.” International Journal of Science and Mathematics Education 10 (6): 1303–1324. doi:10.1007/s10763-012-9336-1.
  • Basel, N., U. Harms, H. Prechtl, T. Weiß, and M. Rothgangel. 2014. “Students’ Arguments on the Science and Religion Issue: The Example of Evolutionary Theory and Genesis.” Journal of Biological Education 48 (4): 179–187. doi:10.1080/00219266.2013.849286.
  • Berland, L. K., and B. J. Reiser. 2009. “Making Sense of Argumentation and Explanation.” Science Education 93 (1): 26–55. doi:10.1002/sce.20286.
  • Billingsley, B., M. Nassaji, S. Fraser, and F. Lawson. 2018. “A Framework for Teaching Epistemic Insight in Schools.” Research in Science Education 48: 1115–1131. doi:10.1007/s11165-018-9788-6.
  • Chan, J., N. Fancourt, and L. Guilfoyle. 2020. “Argumentation in Religious Education in England: An Analysis of Locally Agreed Syllabuses.” British Journal of Religious Education: 1–14. doi:10.1080/01416200.2020.1734916.
  • Crujeiras-Pérez, B., and M. P. Jimenez-Aleixandre. 2019. “Interdisciplinarity and Argumentation in Chemistry Education.” In Argumentation in Chemistry Education: Research, Policy and Practice, edited by S. Erduran, 32–61. London: Royal Society of Chemistry.
  • Dawson, V., and K. Carson. 2017. “Using Climate Change Scenarios to Assess High School Students’ Argumentation Skills.” Research in Science & Technological Education 35 (1): 1–16. doi:10.1080/02635143.2016.1174932.
  • Dawson, V., and K. Carson. 2020. “Introducing Argumentation about Climate Change Socioscientific Issues in a Disadvantaged School.” Research in Science Education 50 (3): 863–883. doi:10.1007/s11165-018-9715-x.
  • Duschl, R. A., and J. Osborne. 2002. “Supporting and Promoting Argumentation Discourse in Science Education.” Studies in Science Education 38 (1): 39–72. doi:10.1080/03057260208560187.
  • Erduran, S., S. Simon, and J. Osborne. 2004. “TAPping Into Argumentation: Developments in the Application of Toulmin's Argument Pattern for Studying Science Discourse.” Science Education 88 (6): 915–933.
  • Erduran, S. 2008. “Methodological Foundations in the Study of Argumentation in Science Classrooms.” In Argumentation in Science Education, edited by S. Erduran and M. P. Jimenez-Aleixandre, 47–69. Netherlands: Springer.
  • Erduran, S., L. Guilfoyle, W. Park, J. Chan, and N. Fancourt. 2019. “Argumentation and Interdisciplinarity: Reflections from the Oxford Argumentation in Religion and Science Project.” Disciplinary and Interdisciplinary Science Education Research 1 (8). doi:10.1186/s43031-019-0006-9.
  • Erduran, S. 2020. “Argumentation in Science and Religion: Match And/or Mismatch When Applied in Teaching and Learning?” Journal of Education for Teaching 46 (1): 129–131. doi:10.1080/02607476.2019.1708624.
  • Erduran, S., L. Guilfoyle, and W. Park. 2020. “Science and Religious Education Teachers’ Views of Argumentation and Its Teaching.” Research in Science Education. doi:10.1007/s11165-020-09966-2.
  • Erduran, S., L. Guilfoyle, and W. Park. 2021. “An Investigation into Secondary Teachers’ Views of Argumentation in Science and Religious Education.” Journal of Beliefs & Values 42 (2): 190–204. doi:10.1080/13617672.2020.1805925.
  • Erduran, S., Y. Ozdem, and J-Y. Park. 2015. “Research Trends on Argumentation in Science Education: A Journal Content Analysis from 1998-2014.” International Journal of STEM Education 2 (5). doi:10.1186/s40594-015-0020-1.
  • Erduran, S., and Z.R. Dagher. 2014. “Regaining Focus in Irish Junior Cycle Science: Potential New Directions for Curriculum and Assessment on Nature of Science.” Irish Educational Studies 33 (4): 335–350. doi:10.1080/03323315.2014.984386.
  • European Union. 2006. “Recommendation of the European Parliament on Key Competences for Lifelong Learning.” Official Journal of the European Union, 3012-2006, L 394/10-L 394/18. Accessed 10 August 2020. https://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:394:0010:0018:en:PDF
  • Freathy, R., J. Doney, G. Freathy, K. Walshe, and G. Teece. 2017. “Pedagogical Bricoleurs and Bricolage Researchers: The Case of Religious Education.” British Journal of Educational Studies 65 (4): 425–443. doi:10.1080/00071005.2017.1343454.
  • Hall, S., S. McKinney, K. Lowden, M. Smith, and P. Beaumont. 2014. “Collaboration between Science and Religious Education Teachers in Scottish Secondary Schools.” Journal of Beliefs and Values 35 (1): 90–107. doi:10.1080/13617672.2014.884846.
  • Hayes, A. F., and K. Krippendorff. 2007. “Answering the Call for a Standard Reliability Measure for Coding Data.” Communication Methods and Measures 1 (1): 77–89. doi:10.1080/19312450709336664.
  • Henderson, J. B., K. L. McNeill, M. González‐Howard, K. Close, and M. Evans. 2018. “Key Challenges and Future Directions for Educational Research on Scientific Argumentation.” Journal of Research in Science Teaching 55 (1): 5–18. doi:10.1002/tea.21412.
  • Hitzhusen, G., and M. Tucker. 2013. “The Potential of Religion for Earth Stewardship.” Frontiers in Ecology and the Environment 11 (7): 368–376. doi:10.1890/120322.
  • Horwell, C., and F. McDonald. 2020. “Coronavirus: Why You Need to Wear a Face Mask in France, but Not in the UK.” Accessed 25 February 2020. https://theconversation.com/coronavirus-why-you-need-to-wear-a-face-mask-in-france-but-not-in-the-uk-137856
  • Jackson, R. 2015. Signposts - Policy and Practice for Teaching about Religions and Non-religious World Views in Intercultural Education. Strasbourg: Council of Europe.
  • Jawoniyi, O. 2015. “Religious Education, Critical Thinking, Rational Autonomy, and the Child’s Right to an Open Future.” Religion and Education 42 (1): 34–53. doi:10.1080/15507394.2013.859960.
  • Joshi, P. 2016. “Argumentation in Democratic Education: The Crucial Role of Values.” Theory into Practice 55 (4): 279–286. doi:10.1080/00405841.2016.1208066.
  • Khishfe, R. 2014. “Explicit Nature of Science and Argumentation Instruction in the Context of Socioscientific Issues: An Effect on Student Learning and Transfer.” International Journal of Science Education 36 (6): 974–1016. doi:10.1080/09500693.2013.832004.
  • Kuhn, D., and W. Moore. 2015. “Argumentation as Core Curriculum.” Learning: Research and Practice 1 (1): 66–78. doi:10.1080/23735082.2015.994254.
  • Lee, H.-S., O.L. Liu, A. Pallant, K.C. Roohr, S. Pryputniewicz, and Z.E. Buck. 2014. “Assessment of Uncertainty-Infused Scientific Argumentation.” Journal of Research in Science Teaching 51 (5): 581–605. doi:10.1002/tea.21147.
  • Levinson, R. 2010. “Science Education and Democratic Participation: An Uneasy Congruence?” Studies in Science Education 46 (1): 69–119. doi:10.1080/03057260903562433.
  • Lin, F., and C. K. K. Chan. 2018. “Promoting Elementary Students’ Epistemology of Science Through Computer-Supported Knowledge-Building Discourse and Epistemic Reflection.” International Journal of Science Education 40 (6): 668–687. doi:10.1080/09500693.2018.1435923.
  • McNeill, K. L. 2009. “Teachers’ Use of Curriculum to Support Students in Writing Scientific Arguments to Explain Phenomena.” Science Education 93 (2): 233–268. doi:10.1002/sce.20294.
  • Monte-Sano, C. 2016. “Argumentation in History Classrooms: A Key Path to Understanding the Discipline and Preparing Citizens.” Theory Into Practice 55 (4): 311–319. doi:10.1080/00405841.2016.1208068.
  • Nussbaum, E. M., and C. S. C. Asterhan. 2016. “The Psychology of Far Transfer from Classroom Argumentation.” In The Psychology of Argument: Cognitive Approaches to Argumentation and Persuasion, edited by Laura Bonelli, Fabio Paglieri and Silvia Felletti, 407–423. London: College Publications.
  • Osborne, J. F., J. B. Henderson, A. MacPherson, E. Szu, A. Wild, and S.-Y. Yao. 2016. “The Development and Validation of a Learning Progression for Argumentation in Science.” Journal of Research in Science Teaching 53 (6): 821–846. doi:10.1002/tea.21316.
  • Osborne, J.F., S. Erduran, and S. Simon. 2004. “Enhancing the Quality of Argumentation in School Science.” Journal of Research in Science Teaching 41 (10): 994–1020. doi:10.1002/tea.20035.
  • Pabuccu, A., and S. Erduran. 2017. “Beyond Rote Learning in Organic Chemistry: The Infusion and Impact of Argumentation in Tertiary Education.” International Journal of Science Education 39 (9): 1154–1172. doi:10.1080/09500693.2017.1319988.
  • Rapanta, C., M. Garcia-Mila, and S. Gilabert. 2013. “What Is Meant by Argumentative Competence? an Integrative Review of Methods of Analysis and Assessment in Education.” Review of Educational Research 83 (4): 483–520. doi:10.3102/0034654313487606.
  • Rodríguez-Mora, F., D. Cebrián-Robles, and Á. Blanco-López. 2021. “An Assessment Using Rubrics and the Rasch Model of 14/15-Year-Old Students’ Difficulties in Arguing about Bottled Water Consumption.” Research in Science Education. doi:10.1007/s11165-020-09985-z.
  • Sadler, T. D. 2004. “Informal Reasoning regarding Socio-Scientific Issues: A Critical Review of the Literature.” Journal of Research in Science Teaching 41 (4): 513–536. doi:10.1002/tea.20009.
  • Sadler, T. D. 2006. “Promoting Discourse and Argumentation in Science Teacher Education.” Journal of Science Teacher Education 17: 323–346. doi:10.1007/s10972-006-9025-4.
  • Sadler, T. D. 2011. Socio-Scientific Issues in the Classroom: Teaching, Learning and Research. Dordrecht: Springer.
  • Sadler, T. D., and D. L. Zeidler. 2005. “Patterns of Informal Reasoning in the Context of Socioscientific Decision Making.” Journal of Research in Science Teaching 42 (1): 112–138. doi:10.1002/tea.20042.
  • Sampson, V., and D. Clarke. 2008. “Assessment of the Ways Students Generate Arguments in Science Education: Current Perspectives and Recommendations for Future Directions.” Science Education 92 (3): 447–472.
  • Schwarz, B. B., Y. Neuman, J. Gil, and M. Ilya. 2003. “Construction of Collective and Individual Knowledge in Argumentation Activity.” Journal of the Learning Sciences 12 (2): 219–256. doi:10.1207/S15327809JLS1202_3.
  • Simon, S., S. Erduran, and J. Osborne. 2006. “Learning to Teach Argumentation: Research and Development in the Science Classroom.” International Journal of Science Education 28 (2–3): 235–260. doi:10.1080/09500690500336957.
  • Toulmin, S. 1958. The Uses of Argument. Cambridge: Cambridge University Press.
  • Venville, G. J., and V. M. Dawson. 2010. “The Impact of a Classroom Intervention on Grade 10 Students’ Argumentation Skills, Informal Reasoning, and Conceptual Understanding of Science.” Journal of Research in Science Teaching 47 (8): 952–977.
  • Weiß, T. 2016. Fachspezifische und fachübergreifende Argumentationen am Beispiel von Schöpfung und Evolution [Subject-Specific and Interdisciplinary Argumentation Using the Example of Creation and Evolution]. Göttingen: VandA.
  • Wolfe, C.R. 2011. “Argumentation across the Curriculum.” Written Communication 28 (2): 193–219. doi:10.1177/0741088311399236.
  • Zohar, A. 1996. “Transfer and Retention of Reasoning Skills Taught in Biological Contexts.” Research in Science and Technological Education 14: 205–209. doi:10.1080/0263514960140207.