Authentic student inquiry: the mismatch between the intended curriculum and the student‐experienced curriculum

Pages 43-62 | Published online: 10 Feb 2010

Abstract

As a means of achieving scientific literacy goals in society, international science curricula have, over the last two decades, increasingly been redeveloped to advocate a ‘new look’ inquiry‐based approach to learning. This paper reports on the nature of the student‐experienced curriculum where secondary school students are learning under a national curriculum that is intent on promoting students' knowledge and capabilities in authentic scientific inquiry, that is, inquiry that properly reflects that practised by members of scientific communities. Using a multiple case study approach, this study found that layers of curriculum interpretation from several ‘sites of influence’ both outside and inside the schools have a strong bearing on the curriculum enacted by teachers and actually experienced by the students, and run counter to the aims of the national curriculum policy. Over‐emphasis on fair testing limits students' exposure to the full range of methods that scientists use in practice, and standards‐based assessment using planning templates, exemplar assessment schedules and restricted opportunities for full investigations in different contexts tends to reduce student learning about experimental design to an exercise in ‘following the rules’. These classroom realities have implications for students' understanding of the nature of authentic scientific inquiry and support claims that school science is still far removed from real science.

Introduction

To achieve curriculum goals linked to scientific literacy, classroom‐based scientific inquiry has re‐emerged as an emphasis in science curricula over the last decade or two (Crawford 2007; Ford and Wargo 2007; Toplis and Cleaves 2006). This new‐look inquiry approach, termed ‘authentic scientific inquiry’, encourages the ‘thinking and doing of science’ by students, giving them the opportunity to experience the procedural and conceptual knowledge required to carry out investigation in a manner that more realistically mirrors the practice of scientific communities (Atkin and Black 2003). The justification is that through this authentic inquiry ‘learners can investigate the natural world, propose ideas, and explain and justify assertions based upon evidence and, in the process, sense the spirit of science’ (Hofstein and Lunetta 2003, 30). Students may then become enculturated into science in a manner that ultimately helps them towards an understanding and appreciation of the true nature of science (Collins 2004; Powell and Anderson 2002; Weinburgh 2003). Such investigations can also serve to motivate students' interest and desire to learn science (Hughes 2004), engender attitudes and dispositions associated with those of autonomous and self‐motivated learners (Deboer 2002; Reid and Yang 2002), improve students' thinking and learning capabilities (Duggan and Gott 2002; Lunetta, Hofstein, and Clough 2007), and facilitate cooperative learning (Hofstein and Lunetta 2003).

What is authentic scientific inquiry?

The precise nature of scientific inquiry is difficult to define because scientists investigate the natural world in diverse ways (McComas 1998; Watson, Goldsworthy, and Wood‐Robinson 1999; Wellington 1998), and scientists themselves have varying perspectives on how they work (Wong and Hodson 2009). However, a common feature of all authentic inquiry is that scientists routinely take ownership of open‐ended problems, which Reid and Yang (2002) define as problems where there are no data, known methods or established goals. In such situations all these components have to be developed by scientists in order to address the problem. ‘Real science is an untidy, unpredictable activity that requires each scientist to devise their own course of action’ (Hodson 1998, 96). Successful open‐ended problem‐solving then depends on the knowledge and experience held by the people involved, and their ability to draw on appropriate and relevant information. Hodson (1992) contends that scientists do this intuitively, using their own personal theoretical constructs and tacit knowledge of how to do science. He describes how this ‘art and craft’, or ‘connoisseurship’, gives scientists the ‘capacity to use theoretical and procedural knowledge in a purposeful way to achieve certain goals’ (133), and believes this only comes with experience of ‘doing science’ in holistic investigations in many different contexts. Thus, to mirror and come to appreciate authentic scientific inquiry, it would appear students need to have ownership over and commitment to open‐ended problem‐solving opportunities in a variety of contexts where they have to: draw on their existing science ideas to analyse the problem; plan a course of action; carry out the plan to obtain information that they can analyse; interpret the analysed information to reach a conclusion and evaluate it; and finally communicate their findings in some form (Duggan and Gott 1995; Garnett and Garnett 1995; Woolnough 1998). In such learning opportunities students should be helped to ‘articulate their ideas and explanations, reason from data and improve the quality of their argumentation…’ (Lunetta, Hofstein, and Clough 2007, 403).

From intended curriculum to operational curriculum

While national curricula may now give a clear lead to schools and teachers regarding authentic scientific inquiry for their students, moving from policy document to the operational curriculum (i.e., that actually experienced by students in the reality of the classroom) is not necessarily a rational and linear process (Atkin and Black 2003; McGee and Penlington 2001b; Penuel et al. 2008). According to Carr et al. (2001) it is more realistic to view national curriculum policy as ‘the start of a cascade of interpreted curricula’ (18) that influences what students experience and learn in the classroom. These influences emanate from sources, or ‘sites of influence’ (English 1997; Kahle 2007; Toplis and Cleaves 2006), within the educational context that support and promote particular interpretations of ‘worthwhile’ learning. These sites of influence are contexts or arenas of action where participants have shared understandings of concepts and ideas due to shared social contexts. Examples of such sites of influence include national curriculum policy statements; curriculum support materials such as commercial publications; political pressure groups; government educational support services, including provision of professional development for teachers; national qualification authorities and their qualifications; school and community aspirations for the education of students; educational evaluators and researchers; and finally teachers' beliefs and values about teaching and learning (Hargreaves and Fullan 1992). These sites exert varying degrees of influence on the operational curriculum, and research indicates some influences, like national qualifications, may play a more significant role in shaping the student‐experienced curriculum than others (see Atkin and Black 2003; Carr et al. 2001; Harlen and James 1997; Harlen and Crick 2003). The process of implementing curriculum policy is one of interplay between what a curriculum statement says and the various interpretations and emphases afforded it by supporting materials, agencies, schools and teachers (Knapp 2002; Spillane 2004). Added to this mix are the cognitive, social and language processes occurring within the classroom environment, which also impact on the student‐experienced curriculum, along with decisions that students themselves consciously make about learning (Nuthall 1997).

Significance and background to the study

Research findings that can shed light on the match between curriculum intent and classroom reality in dynamic and complex educational environments such as those described above are important when evaluating how effectively curriculum goals are actually being met, and help inform decisions about what steps may be needed to improve outcomes for students. The two case studies that comprise this study took place in the context of a national science curriculum (Ministry of Education [MoE] 1993) that sought to promote students' engagement in authentic inquiry. The curriculum, Science in the New Zealand Curriculum (SiNZC), was introduced in the 1990s as part of sweeping educational and curricular reforms (Lange 1988), in line with international trends towards detailed and mandated national curricula accompanied by some form of national standards describing concepts for students to learn (Carr et al. 2001). The SiNZC (MoE 1993) comprises eight progressive levels of broad learning outcomes known as ‘achievement objectives’, which are grouped under achievement aims in six learning strands, one of which, Developing Scientific Skills and Attitudes, focuses on the theme of scientific inquiry. At Levels 5/6 of the strand (i.e., Years 9–11, ages 13–16) this theme has been translated into achievement objectives that focus on a particular type of investigation, by specifying that students ‘design fair tests, simple experiments, trials and surveys, with clear specification and control of likely variables’ (MoE 1993, 44). It is at Level 6 of the SiNZC (Year 11) that achievement objectives are first assessed for national qualifications – the National Certificate of Educational Achievement (NCEA), which was introduced in 2002. The essential building block of the NCEA is the achievement standard.

The standard of particular relevance to this inquiry is Science Achievement Standard 1.1 Carrying out a practical investigation with direction (SAS 1.1). This standard defines an investigation as an activity covering the complete process from planning to reporting, and was to involve the student in gathering primary data (i.e., generating and recording their own data). Under direction from the teacher, students were expected to:

Produce a workable plan, containing a purpose, provision and evidence of trialling, key variables and how they will be controlled, a method for data collection and consideration of factors such as sampling, bias, sources of error and sufficiency of data.

Execute the plan, collect appropriate data and record it in a table or other systematic way, and process it to establish a relevant pattern or trend. Data processing is usually expected to involve calculations such as averaging.

Interpret the processed data in relation to the purpose of the investigation.

Write a report following written guidelines from the assessor. Sections of this report usually include the purpose and final method used, recorded and processed data showing links, interpretations, a conclusion linking findings to the purpose, and an evaluation or discussion.

Each case study involved a Year 11 science class in which students (15‐ to 16‐year‐olds) were learning how to perform investigations for SAS 1.1 towards their NCEA qualification. The studies were set in two large New Zealand secondary schools, River Valley Boys' High School and Mountain View High School (pseudonyms). Both school populations were similar in that students were predominantly of New Zealand European ethnicity (77% and 68% respectively) and each had 12% Māori students (i.e., people who self‐identify as indigenous New Zealanders). Mountain View also had a significant proportion of Asian students (15%). Each case study involved a female teacher and four to five students in mixed‐ability classes drawn from a band of classes that represented approximately 80% of the whole Year 11 cohort in the respective schools. The remaining 20% comprised students at either end of the ability spectrum, i.e., students with learning difficulties at one end and those with special abilities at the other. Students were placed in these classes on the basis of achievement information from common internal assessments, such as school exams and tests, in the previous two years of their compulsory schooling in science, i.e., Years 9 and 10.

At River Valley, the teacher Jenny (pseudonym) held a master of science degree in genetics and was in her eighth year of teaching. The all‐male student group included Martyn, Peter, Mitchell, Eddie and Sam (pseudonyms), who were all New Zealand Europeans. In contrast, at Mountain View the teacher Kathy (a pseudonym) had begun her teaching career three years earlier at Mountain View after completing a conjoint bachelor's degree in science and teaching. The mixed‐gender student group, Anne, Carol, Alex, Mark and Steve (pseudonyms), came from different ethnic backgrounds. Anne, Mark and Steve were New Zealand Europeans, Alex was a New Zealand‐born Asian student with English as his first language, and Carol was a recently emigrated Asian student for whom English was a second language.

The three research questions for this inquiry were:

What science are New Zealand science students learning in NCEA classroom programmes for SAS 1.1?

Why and how are New Zealand science students learning the science they learn in NCEA classroom programmes for SAS 1.1?

What match is there between the intended curricula (i.e., those of the SiNZC and the teacher) and the operational science curricula (i.e., those experienced by New Zealand science students)?

Theoretical underpinnings, methodology and methods

This study sought to investigate the reality of classroom science learning and to gain an inside understanding of that reality from the perspectives of the participants themselves, that is, their interpretations of the what, how and why of learning. The inquiry was therefore conducted within an interpretivist paradigm (Cohen, Manion, and Morrison 2007), drawing upon constructivist, sociocultural and linguistic perspectives as a framework for analysis of the data (Nuthall 1997). The interpretivist‐based methodology (Bryman 2008; Cohen, Manion, and Morrison 2007) comprised a multiple case study approach utilising the qualitative research methods of unobtrusive observation, semi‐structured interviews and document analysis. In the first of the two case studies a total of 12 one‐hour lessons were observed and six interviews conducted (three with the teacher and three with the students as a group); in the second case study fewer lessons were observed (seven in total) and five interviews with participants were conducted (three with the teacher and two with the students as a group). Classroom observations in both case studies were recorded via field notes and audio‐taping, and the interviews were audio‐taped and transcribed verbatim. The interview transcripts were sent to interviewees for verification and alteration (if desired by the interviewee). Audio‐tapes of the classroom lessons proved difficult to transcribe accurately because of background noise and the students' tendency to mutter at times, but they did add to the general pool of data collected in each case study, often corroborating other data. A variety of documentary material related to the teaching and learning occurring in each case study was examined, including departmental guidelines for teachers, textbooks and student workbooks, notes, and assessment items such as exemplars and student scripts.

The constructivist, sociocultural and linguistic teaching and learning theories underpinning the study were initially surveyed to find suitable parameters on which to base data collection decisions. Working definitions for the what, why and how of student learning were devised, and guided the analysis, interpretation and discussion of the findings. The what of student learning, for example, was defined as those scientific concepts, skills and procedural knowledge students were acquiring and demonstrating through their words and actions during teaching and learning episodes (Bell 2005; Duggan and Gott 2002; Skamp 2007). Instances of why students were learning were characterised by the circumstances that led to students achieving that learning; what learners did in order to learn was the key parameter chosen to define the how set of data. This approach to the how of learning focused on the thought processes and actions attributed to students as they learn.

Steps taken to enhance the trustworthiness of the inquiry process (Bryman 2008; Guba and Lincoln 1989) included: a two‐case study approach; triangulation of both data collection methods and sources of data; prolonged and extensive observation, along with detailed auditing of the inquiry process and respondent validation of the raw data; rich descriptions of the case studies, using a narrative style; and the cross‐case analytic categories of what, why and how the learning was occurring.

Observations from classroom sessions: an overview

At both schools many decisions to do with classroom practice were not made by the individual teachers, but were made collectively at departmental level in the form of departmental guidelines. These guidelines were based on recommendations, including exemplary materials, from the New Zealand Qualifications Authority (NZQA) – the agency which administers the NCEA qualification.

Despite the variation in the overall timing and duration of the teaching and learning sessions at the two schools, the sequence of lessons in both schools showed strong parallels. Each sequence could be divided into three distinct phases: the preparatory phase (instructional sessions); the practice phase (the formative assessment); and the formal assessment phase (the summative assessment).

The preparatory phase

In this first phase students in both classrooms were introduced to the requirements of SAS 1.1 and to key concepts and skills associated with investigating relationships between two variables. Lesson content in these largely instructional sessions focused on: terms, definitions and procedures to do with fair testing; specific skills such as making observations and measuring, tabulating and averaging data, plotting graphs, and planning and reporting fair tests using templates; and how to meet the assessment requirements of SAS 1.1 as depicted in assessment schedule exemplars (field notes and audio‐tape transcripts from both case studies). Less time was devoted to the first phase at River Valley (three lessons compared to five at Mountain View), and Jenny also revised specific science concepts that featured in the investigation her students were to perform in the second phase (rates of chemical reaction and preparation of solutions of given concentration by dilution).
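As a brief aside not drawn from the article, the ‘preparation of solutions of given concentration by dilution’ that Jenny revised rests on the standard dilution relation; the quantities in the worked instance below are purely illustrative:

\[ C_1 V_1 = C_2 V_2 \]

where \(C_1\) and \(V_1\) are the concentration and volume of stock solution used and \(C_2\) and \(V_2\) those of the diluted solution. For example, 100 mL of 0.5 mol L\(^{-1}\) hydrochloric acid can be prepared from a 2.0 mol L\(^{-1}\) stock by taking \(V_1 = (0.5 \times 100)/2.0 = 25\) mL of stock and making it up to 100 mL with water.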

The practice phase

In the second phase students at both schools participated in a mock assessment known as the ‘formative assessment’, designed to give students practice at performing a whole investigation under test‐like conditions. In the following excerpt Kathy at Mountain View had first asked students to individually design a plan to investigate the rate at which magnesium metal reacts with hydrochloric acid. She had then taken in the individual plans, read them and, on the basis of this assessment, placed the students into mixed‐ability groups. Thus the research group students were dispersed through the various other groups:

She instructed each group to talk amongst themselves, share plans and then write a shared or common method that they all agreed on. If there was no change to a particular plan, students could write on their script ‘see original plan’. Kathy gave some last minute advice: ‘Be careful you have only 10 cm of magnesium and a set volume of hydrochloric acid. Think carefully about your quantities. It needs to be written in your method’. She ran through the report template, reiterating key points: ‘Every person has to write their results in… write the group method. Everyone has to have a copy of it… do the experiment as a group… record data in a table’. The students could attempt to write the report up in this second practical session, and had a third class period in which to finish it, but Kathy insisted: ‘Don't rush it’. (Field notes and audio‐tape transcripts, Mountain View, Lesson 6)

Again there were many commonalities between the two case studies:

The mock assessment took place over four lessons, with each lesson covering in turn, the planning, data collecting, reporting and feedback stages of the investigation.

The science context for the investigations was the same (both teachers used the same exemplar materials for investigating the effect of factors such as temperature or concentration on the rate of reaction between magnesium metal and hydrochloric acid); students worked in teams of four for planning and data gathering but as individuals for the reporting.

The format, timing and reporting requirements of the mock assessment activity closely matched those of the summative assessment in Phase 3.

Teacher direction was highly evident, including extensive and targeted feedback for students related to the assessment schedules for the task.

In addition, at River Valley students initially peer assessed each other's reports using a common assessment schedule and provided verbal feedback to one another before the teacher provided global feedback to the class.

The formal assessment phase

In the third phase, for their formal assessment (known as the ‘summative assessment’), students again performed fair test investigations in groups along similar lines to the practice investigation in the second phase. They initially planned as individuals, then collaborated as a group to produce a single plan and obtain data, and finally wrote up the reports individually. The planning and reporting templates were virtually identical in the two schools; however, the science contexts for the investigations were different. Students at Mountain View performed their investigation in the context of reaction rates again, this time the relationship between surface area and the rate of reaction, while students at River Valley performed their investigation in the context of pendulums, which they had no prior experience of in the course. Students at Mountain View planned and executed their investigation with relative ease, whereas the study group at River Valley experienced difficulties carrying out their plan investigating the relationship between the length of a pendulum and its period, that is, the time taken to complete a full swing. They were unable to operate the pendulum successfully and consequently could not record sufficient data. However, they were very conversant with assessment techniques and showed adeptness at ‘playing the system’, as the following excerpt shows:

In the closing stage of the practical session the group scrambled to complete and record sufficient runs for their data processing and interpreting phase. The four group members frequently interchanged roles as they each took it in turn to record their own copy of the results (which they needed for the write‐up in the following session). All other groups had finished their data collection and were listening as Jenny covered points for the write‐up. Martyn, Peter, Mitchell and Eddie continued operating their pendulum and consequently missed hearing what Jenny was saying during her briefing. In their rush to finish, confusion set in: ‘Is this the third or fourth one?’ asked Mathew, who was recording and calculating. When the pendulum continued to collide with the support arm Peter commented, ‘You'll have to estimate’, while Eddie was convinced they should ‘make up the rest’. Mitchell agreed: ‘Let's make up the rest, and take 16 seconds as the average’ and Martyn confirmed, ‘It will still give us our results’. Each group member had a complete set of written data by the end of the practical. (Field notes and audio‐tape transcripts, River Valley, Lesson 10)

At the last minute the students resorted to recording their remaining ‘results’ without any actual data, and then used these fabricated results to complete the reporting section of the assessment.

One other significant difference between the case studies in this formal assessment phase is that, unlike the students at River Valley, the five students in the Mountain View study group did not work together for the summative investigation. Kathy purposefully decided the groupings for the summative assessment at Mountain View on the basis of results from the formative assessment, so that each group intentionally had at least one student who had demonstrated advanced investigative capabilities.

What were students learning about scientific inquiry?

Findings from both case studies indicated that the learning many students were achieving about scientific inquiry closely matched that which their teachers intended them to learn. The content of the teachers' intended curricula is summarised in Table 1 below and represents a synthesis drawn from data collected during teacher interviews, observation of classroom lessons, departmental guidelines and notes, and the student workbook and text (Cooper, Hume, and Abbott 2002; Hannay, Howison, and Sayes 2002). The summary in Table 1 serves to indicate the key concepts and skills that the majority of students did demonstrate familiarity with during their investigations, and the following sections elaborate further on the extent and nature of this learning.

Table 1. Summary of the teachers' intended learning at River Valley and Mountain View.

Data gathered from classroom observations of student and teacher actions, interviews and assessment information showed that what the science students were acquiring in the teaching and learning programmes of both case studies was very similar in most respects. The findings indicate that the concepts and skills students learned about scientific inquiry were linked to one particular form of investigation, fair testing, and matched those described in NCEA SAS 1.1 Carrying out a practical investigation with direction and those required to complete the NZQA generic planning and reporting template for the assessment tasks provided by the Ministry of Education (MoE), as specified in the accompanying assessment schedules. These same concepts and skills were reiterated in the published texts (Cooper, Hume, and Abbott 2002; Hannay, Howison, and Sayes 2002) used by students. It appears that the assessment schedules of these provided tasks effectively prescribed the concepts, vocabulary, skills and procedural knowledge students were gaining during the investigation exercises.

It is important to note that there was little evidence of open‐ended planning and investigation when students were required to do full investigations. All plans closely adhered to the experimental design inherent in the planning template used for assessment tasks and teachers had given considerable direction and support to the students about the content of these plans prior to planning sessions. This teacher direction provided the particular scientific relationships to be investigated and relevant experimental skills and techniques, even to the point of identifying the independent and dependent variables to be investigated in one of the case studies. The students thus went into planning sessions, even for the summative task, well informed about the procedural knowledge needed for that investigation. However, there were differences between the two case studies in the depth of understanding and level of experience with the background science concepts that students brought to the summative planning sessions, which impacted on their abilities to link their findings with science concepts.

As in the formative investigations, the students in both case studies worked in groups for the practical summative sessions, with the result that all students had the opportunity to access workable plans. Both case study groups produced data from their experimental work, which allowed them to continue with the processing, interpreting and reporting aspects of the investigation. However, since the researcher was unable to observe the summative practical session in one case study, the students' performance in the two case studies could not be compared for this aspect of the investigations. In the case study where the researcher was present during the practical work, the students' ability to collaborate successfully in refining and carrying out their plan in the summative investigation faltered at times, resulting in the data fabrication incident described earlier. While these students had gained from their formative experiences an appreciation of the need for trialling to gauge the workability of their plan and experimental methods, the group's focus on decisions to do with technical details occurred at the expense of decisions to do with method. Hasty last‐minute decisions about procedure ultimately proved costly. In addition, some critical logistical points, such as task delegation, were overlooked, and as a consequence planned decisions were not always adhered to in the summative assessment. These student actions suggest that their understanding of some of the finer points of experimental design, such as repetition and the depth of forward planning needed, was superficial, and that their experience and expertise with the technical components of this experimental context (the pendulum) were limited. Despite these ‘procedural hiccups’ the students knew what data would be sufficient to allow them to accomplish the rest of the task, and they made a pragmatic decision to ‘cook their results’. In fabricating results they did not achieve their teacher's intended curriculum of ‘good science’, but they did demonstrate an understanding of how to meet assessment specifications effectively. This action gave each group member access to a set of seemingly valid and reliable data, which they subsequently recorded and processed in their individual written reports. All members of the group achieved the standard but, as this excerpt indicates, the investigation was far from authentic. Interestingly, some of the findings from research carried out by Keiler and Woolnough (2002) into student learning of practical work under a high stakes qualification point to students being able to demonstrate competency in skills they do not actually possess. They report similar instances where students successfully demonstrated required learning by ‘playing by the “rule of the game”, including doing extraneous, possibly erroneous or fallacious work’ (87).

Peter was awarded achievement with merit for his summative grade, and he showed evidence of learning in all three sections of his report. His very detailed step‐by‐step method, which now included a list of equipment and the averaging of results, earned him excellence for the first section… Peter also evaluated their experimental work, and identified appropriate amendments to the method, like changing to a ‘spherical bob’, but like Martyn he did not divulge the group's fabrication of results. He was awarded excellence in this section. (Student assessment documents, River Valley)

In their written reports all students in the case studies individually recorded and processed their data accurately enough to identify a relationship between the independent and dependent variables. Most students in Case Study A made correct data processing decisions in choosing to use line graphs, but several had trouble correctly drawing a line of best fit. Key table and graphing features that were missing from most of their formative scripts were addressed in their summative scripts. In contrast, in Case Study B some of the finer details of data recording protocols for tables were missing, and all students graphed their data using bar graphs instead of the line graphs that are more appropriate for identifying cause–effect relationships between two variables. By the close of the summative assessment students in both case studies were able to draw conclusions based on their findings and related to the purpose of the investigation. Generally speaking, however, few of the students demonstrated the capacity to fully interpret and explain their results by linking their findings to existing science concepts, and while some students attempted to evaluate the robustness of their findings, even a very able student, Alex at Mountain View, managed only a superficial critique of his methodology. Alex had linked his findings to the relevant science background but:

… his evaluative comment was restricted to one sentence: ‘If I were to do my experiment again I would make sure that I dried the beakers out after I washed them each time, which would make my experiment more accurate’. He did not explain why this action would make the experiment more accurate. (Student assessment documents, Mountain High)

In the River Valley case study, lack of familiarity with the background science in the summative task hampered students' ability to link their findings with theory, to the extent that even Martyn, an able student who had had considerable success with this aspect in a formative task, could not succeed to the same degree in the science context of the summative task:

In the last section he produced acceptable interpretations and conclusions about his data… however in attempting to link the observed trends in the data to the science ideas behind the pendulum, he was unable to use the science concepts to explain why the pendulum length affects the pendulum period. He instead tried unsuccessfully to explain why the pendulum swings in an arc. For this final part of the report he received a merit grade, and achievement with merit for the overall standard. (Student assessment documents, River Valley)
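For context, and not stated in the article itself, the science relationship Martyn could not bring to bear is the standard small‐angle pendulum result:

\[ T \approx 2\pi\sqrt{\frac{L}{g}} \]

so the period \(T\) grows with the square root of the pendulum length \(L\), is set by the gravitational acceleration \(g\), and to a first approximation does not depend on the mass of the bob or the size of the swing; doubling the length lengthens the period by a factor of \(\sqrt{2} \approx 1.4\).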

Why and how were students learning?

Why and how students learned about fair testing and the assessment requirements of SAS 1.1 can be seen as the initial consequences of three direct influences: the content of their teachers' intended curricula, the pedagogical approaches and techniques that their teachers used, and the learning strategies that students employed. The key findings that support these interpretations are summarised in Table 2 below and are sourced from data obtained in classroom observations, interviews and documentation.

Table 2. Summary of the key influences on why and how students learned.

The close match between the teachers' intended curricula and the curricula experienced by students in both case studies indicates that the most obvious reason why these students learned about fair testing in science and assessment procedures for NCEA is that the teachers made a deliberate decision to deliver this particular content in their teaching and learning programmes. These decisions meant the teachers' instructional intentions focused on concepts, skills and procedural knowledge to do with investigating cause–effect relationships between variables and meeting the assessment requirements of the achievement standard. Clearly, if certain content was not included by teachers in their programmes, then the likelihood of this ‘extra’ knowledge being accessed by students via classroom teaching was limited.

The teachers' decisions about lesson content were influenced most directly by their respective school departmental guidelines for delivering SAS 1.1. These guidelines were, in turn, based on materials (planning templates, and exemplar assessment tasks and schedules) provided by the MoE and NZQA. Teachers' classroom curriculum planning decisions were not directly influenced by the specific requirements of the SiNZC, but rather by interpretations of that national policy by NZQA and school science departments. Interpretations of the SiNZC requirements similar to NZQA's were also promoted by other sites of influence, including teacher professional development providers and support agencies, and publishers of textbooks. The many similarities between the learning measured by the standard and the content of the students' experienced curricula in the case studies, the teachers' intended curricula, and the curricula promoted by support materials and providers provide a strong indication of why the learning these Year 11 students achieved focused on fair test investigations and assessment procedures. It was the science assessed by SAS 1.1 that the teachers, curriculum support agencies and textbook publishers chose as the basis for the content of the curricula they delivered.

Discussion

The evidence emerging from this interpretive study into a student‐experienced curriculum demonstrates strong parallels between what students were learning about scientific investigations in these New Zealand classrooms and the learning trends identified in the international literature. In the case studies, what students came to perceive and experience as scientific investigation was the single, linear and unproblematic methodology of fair testing. Nevertheless, within this narrow context of fair testing, the students did manifest many of the physical and mental skills used in authentic scientific inquiry and generally agreed upon in the literature as the ‘scientific process skills’ (Harlen 1999). For example, they were able to produce appropriate scientific investigations, obtain relevant information, interpret evidence in terms of the question addressed in the inquiry and communicate the investigation process. However, the standard did not require students to come up with original questions and solutions to experimental design. Instead, the planning templates that students were required to use in these case studies served as blueprints, restricting what students were learning about planning to following a formula, just as Roberts and Gott (2004) observed happening for students performing investigations under similar assessment conditions for Science Attainment Target 1 of the National Curriculum in Britain. Consequently, in these case studies the students' ability to identify investigatable questions was not evident, because the purpose of the investigation was a ‘given’ and the students' only task was to craft a question specifying the cause and effect relationship they were investigating. In one of the case studies the teacher's direction even extended to identifying the independent and dependent variables for students in her briefing comments for the assessment. Thus, students were not participating in authentic open‐ended investigations, where they had ownership of and responsibility for determining the purpose of the investigation and the question to be investigated, as ‘real’ scientists would (Reid and Yang 2002).

Some authors comment that, in terms of what students learn about carrying out authentic scientific activity, students tend not to learn to take account of scientific theory in planning their investigations and interpreting their results (see Atkin and Black 2003; Hodson 1992, 1998). That claim is not supported by the findings of this study, because the students for the most part did take account of some scientific theory in the performance of their investigations. For example, they called on their prior learning of scientific concepts, skills and procedural knowledge related to fair testing and the scientific context to complete their planning template and conduct their investigations. The more able students also made some valid links between their findings and their existing scientific understanding, but in problem‐solving situations that Reid and Yang (2002) would define as more closed than open in nature, since teachers gave the students substantial guidance with the goals of the investigation, the scientific background and the procedures to be used. There was little evidence that these student investigations were generating conceptual change for the learners or improving their skills of argumentation (Lunetta, Hofstein, and Clough 2007). The practical work appeared to be serving more illustrative purposes, reinforcing rather than expanding students' existing scientific understanding. In this sense, while students were practising skills and gaining experience with procedural aspects of fair testing, they were not engaging in activity that reflected authentic science investigation.

So for most students in these case studies, what they learned about scientific investigation was confined to applying a ‘set of rules’ about fair testing, as Keiler and Woolnough (2002) similarly report, to illustrate and confirm scientific concepts covered in the instructional part of their classroom programme and to meet assessment requirements. The use of templates and exemplars in the teaching and learning programme produced the ‘seen exam’ phenomenon described by Roberts and Gott (2004), providing the required protocols for assessment success and not requiring students to demonstrate the sort of tacit, intuitive knowledge in their science investigative abilities that comes with wide experience and understanding (Hodson 1992), such as creative thinking in experimental design. Students' learning was characterised by lower‐ to middle‐order thinking (Bloom et al. 1956; Anderson and Krathwohl 2001), with only a few able to display some higher‐order critical thinking skills. The nature of the learning for most students tended to be focused, routine, rote and superficial, rather than divergent, varied, inventive and deep‐seated.

In considering why and how the Year 11 students in these two case studies learned about fair testing and assessment requirements, the close match between the curricula they experienced in class and their teachers' intended curricula is significant. This is in agreement with findings reported in the literature which maintain that teachers' instructional intentions have a direct bearing on why and how students learn (e.g., Lederman 1999; McGee and Penlington 2001b; Tytler 2003). However, given the obvious similarities between the operational curriculum occurring in classrooms and the SiNZC interpretation promoted by SAS 1.1, it appears the NCEA qualification was a strong influence on the curriculum experienced by Year 11 students in these case studies. This observation is supported by similar overseas experience, where high stakes testing and qualifications are also reported to drive classroom practice (see Kahle 2007; Keiler and Woolnough 2002; McDonald and Boud 2003; Orpwood 2001; Roberts and Gott 2004). Qualifications are examples of external determinants of classroom curricula (McGee and Penlington 2001a) that emanate from ‘sites of influence’ outside classrooms (English 1997) and influence the final decisions teachers make about classroom curriculum delivery to their students. The close similarity in each case study between the teachers' intended curricula, the departmental guidelines, and the interpretation of the curriculum promoted by the NZQA in its NCEA qualification illustrates the pervasive influence of the NZQA site of influence, and of the assessment regime underpinning the NCEA qualification, on the teachers' planning intentions. This trend is also signalled in recent research by Hipkins (2004) in New Zealand senior science classrooms, and suggests that the teachers and their respective science departments were for the most part acting as conduits for the achievement of that government agency's goals. Many decisions to do with classroom practice were effectively taken out of the individual teachers' hands – instead, judgements were made collectively at departmental level. The departmental layer of interpretation was based on guidelines and recommendations from NZQA, which departments and classroom teachers were obligated to follow under school accreditation requirements for NZQA. This collective interpretation at departmental level took into account some of the key external determinants of these Year 11 classroom curricula (McGee and Penlington 2001a), and effectively made decisions that the classroom teachers as individuals were obliged to implement in their classroom curricula. These decisions included the:

Content of the teaching and learning programme.

Manner in which the teaching and learning programmes were to be delivered and assessed, with the emphasis on classroom procedures and arrangements for the practice (formative) and summative assessments and methods of moderation.

Timing of the programme delivery.

Adoption of the planning template as recommended by NZQA, and the use of exemplar assessment tasks and schedules supplied by the MoE for NCEA as the basis for teaching and assessment materials for use across all classes in the department.

However, as active members of their respective science departments, the teachers in these case studies would have had at least some role in creating this layer of curriculum interpretation and the pedagogical and assessment approaches presented in the departmental guidelines. Their participation in meetings concerned with marking and moderation likely provided forums for their contributions to be heard and incorporated into the departmental guidelines. The strong similarities between the respective departmental guidelines and the intended curricula of the teachers in both case studies also lend strong support to the contention that the NCEA qualification had an over‐riding influence on the teachers' instructional intentions.

The actions of individuals from other sites of influence (English 1997), namely those writers who created the national assessment guidelines and exemplar materials for NCEA and the published texts used in the classroom programmes in these case studies, also had a direct effect on students' learning in classrooms, because students interacted frequently with these materials in their daily classroom activities and home study.

Conclusions

This study supports the observations of Black (2001, 2003) that qualifications are considered high stakes by schools and teachers and that assessment for qualifications is driving senior school and classroom science programmes in New Zealand. Decisions about learning programmes in this study were made at school and departmental levels, reflecting the importance the two school communities and professional staff placed on their students achieving success in the NCEA qualification, and these decisions directly impacted on the content of classroom curricula and the methods teachers used to deliver that content. As a consequence, their students were missing out on authentic scientific activity and undoubtedly gaining misleading impressions about the work that scientists do. It would appear that for these New Zealand students, curriculum goals related to scientific literacy are not being met.

However, in the period since the collection of data for this study, NZQA has made some modifications to SAS 1.1 Carrying out a practical investigation with direction and introduced more flexibility into the standard and support materials. In October 2005 the standard was re‐registered with a number of changes, which seem to introduce more recognition of the complexity of scientific investigation into the standard and give teachers more latitude to offer students some variety in their approaches to scientific investigation. The revised standard also provides more specific detail about what constitutes ‘quality’ in a scientific investigation. The achievement criteria are more generic than those in the previous form of the standard, and some former aspects of the accompanying explanatory notes have been given increased emphasis, while others have been dropped and new features introduced. For example:

Greater specificity is provided about what constitutes a directed investigation.

The terms practical investigation and quality practical investigation are introduced and defined in detail, reflecting the content of the modified achievement criteria. The terms workable and feasible to describe plans are dropped.

The terms sample and collection of data are introduced alongside the terms independent and dependent variable respectively in the definition of a practical investigation, and sampling and bias as possible factors to consider in data gathering in the description of a quality practical investigation. The inclusion of these terms potentially enables students to use approaches to investigation other than fair testing, but because sampling and bias can have close connotations with fair testing it is possible that fair testing may still prevail in classroom practice unless appropriate exemplary support materials and texts are accessible to professional development providers, teachers and students.

Validity of method, reliability of data and science ideas are specified as requirements to consider where relevant when evaluating the investigation.

These changes signal more acknowledgement of scientific inquiry as practised by scientists in the NCEA assessment procedures for SAS 1.1, and possibly greater opportunity for students to experience authentic scientific investigations and develop higher‐order thinking skills. This increased alignment between curriculum intent and assessment (Penuel et al. 2008) should give teachers greater autonomy in designing teaching and learning programmes to meet students' learning needs and interests. While the specified nature of SAS 1.1 and its requirements did contribute to the narrowness of the student‐experienced curriculum, decisions dictating the timing of, and time allocated to, the teaching of this investigative unit were made at school and departmental levels, and were not set requirements of the NCEA qualification or the NZQA. The decision to time the unit early in the year, it seems, was one of expediency, leaving more time for teachers and students to concentrate on the externally assessed standards later in the year. Choosing a context unfamiliar to students in one case study was another departmental decision impacting on student learning that was not required by NCEA. This decision led to an able student struggling to explain his findings, when in an earlier investigation, in which he was conversant with the background science, he had succeeded in doing so. This student's inability to explain results in an unfamiliar science context supports the argument that students need a strong theoretical or conceptual background in the science context of the investigation in order to use scientific theory to make sense of their findings (Leach and Scott 2003).

Implications

The influence that the qualification's interpretation of scientific investigation had on curriculum design and delivery decisions made by schools, departments and classroom teachers lends support to the view that moving from policy document to the operational curriculum in classrooms is not a straightforward process (Atkin and Black 2003; Carr et al. 2001; McGee and Penlington 2001b; Penuel et al. 2008). Members of the qualification ‘site of influence’, for example, in translating the national science curriculum into a form able to be assessed for the NCEA qualification, chose to place emphasis on particular portions of the curriculum that featured fair testing. This action resulted in a translation contrary to the wider aims of the science curriculum, and should alert policy‐makers to the importance of conveying a consistent message about student learning outcomes throughout a national curriculum statement. However, introducing more flexibility into SAS 1.1 and its support materials should facilitate improved student learning outcomes in terms of authentic scientific inquiry, and give teachers greater autonomy in designing teaching and learning programmes to meet students' learning needs and interests. Awareness that school‐based decisions focusing too much on meeting the administrative, logistical and moderation requirements of high stakes qualifications can have detrimental effects on pedagogy and student learning may prompt schools to re‐evaluate the wisdom of such decisions. Finally, the views and insights that students have given in this study, about the teaching and learning they experienced and the role they play in these processes, should provide useful information for teachers to reflect on as they evaluate the effectiveness of their teaching and assessment strategies in helping students to achieve quality learning in scientific inquiry.

References

  • Anderson, L.W., and Krathwohl, D. 2001. A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.
  • Atkin, J.M., and Black, P. 2003. Inside science education reform: A history of curricular and policy change. New York: Teachers College Press.
  • Bell, B. 2005. Pedagogies developed in the learning in science projects and related theses. International Journal of Science Education 27 (2): 159–82.
  • Black, P. 2001. Dreams, strategies and systems: Portraits of assessment past, present and future. Assessment in Education 8 (1): 65–85.
  • Black, P. 2003. Report to the Qualifications Development Group, Ministry of Education, New Zealand, on the proposals for development of the National Certificate of Educational Achievement. http://www.minedu.govt.nz
  • Bloom, B., Englehart, M., Furst, E., Hill, W., and Krathwohl, D. 1956. Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: Longmans Green.
  • Bryman, A. 2008. Social research methods. New York: Oxford University Press.
  • Carr, M., McGee, C., Jones, A., McKinley, E., Bell, B., Barr, H., and Simpson, T. 2001. The effects of curricula and assessment on pedagogical approaches and on educational outcomes. http://www.minedu.govt.nz
  • Cohen, L., Manion, L., and Morrison, K. 2007. Research methods in education. New York: Routledge.
  • Collins, N. 2004. Scientific research and school students. School Science Review 85 (312): 77–86.
  • Cooper, B., Hume, A., and Abbott, G. 2002. Year 11 science: NCEA level 1 workbook. Hamilton: ABA Books.
  • Crawford, B.A. 2007. Learning to teach science as inquiry in the rough and tumble of practice. Journal of Research in Science Teaching 44 (4): 613–42.
  • Deboer, G. 2002. Student‐centred teaching in a standards‐based world: Finding a sensible balance. Science and Education 11: 405–17.
  • Duggan, S., and Gott, R. 1995. The place of investigations in practical work in the UK National Curriculum for Science. International Journal of Science Education 17 (2): 137–47.
  • Duggan, S., and Gott, R. 2002. What sort of science education do we really need? International Journal of Science Education 24 (7): 661–80.
  • English, E. 1997. The interpretation of the curriculum: Is it all a matter of serendipity? New Zealand Science Teacher 85: 27–33.
  • Ford, M.J., and Wargo, B.M. 2007. Routines, roles, and responsibilities for aligning scientific and classroom practices. Science Education 91: 133–57.
  • Garnett, P.J., and Garnett, P.J. 1995. Refocussing the chemistry lab: A case for laboratory‐based investigations. Australian Science Teachers' Journal 41 (2): 26–34.
  • Guba, E., and Lincoln, Y. 1989. Fourth generation evaluation. Newbury Park, CA: Sage.
  • Hannay, B., Howison, P., and Sayes, M. 2002. Year 11 science study guide: NCEA Level 1 edition. Auckland: ESA Publications.
  • Hargreaves, A., and Fullan, M.G. 1992. Understanding teacher development. London: Cassell.
  • Harlen, W. 1999. Purposes and procedures for assessing science process skills. Assessment in Education 6 (1): 129–44.
  • Harlen, W., and Crick, R.D. 2003. Testing and motivation for learning. Assessment in Education 10 (2): 169–207.
  • Harlen, W., and James, M. 1997. Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education 4 (3): 365–79.
  • Hipkins, R. 2004. Shifting balances: A study of the impact of assessment for qualifications on classroom teaching. Paper presented at the annual conference of the Australian Science Education Association, July, in Armidale, Australia.
  • Hodson, D. 1992. Assessment of practical work: Some considerations in philosophy of science. Science and Education 1: 115–44.
  • Hodson, D. 1998. Is this really what scientists do? In Practical work in school science: Which way now?, ed. J. Wellington, 93–108. London: Routledge.
  • Hofstein, A., and Lunetta, V. 2003. The laboratory in science education: Foundations for the twenty‐first century. Science Education 88 (1): 28–54.
  • Hughes, P. 2004. Mainstream chemical research in school science. School Science Review 85 (312): 71–6.
  • Kahle, J.B. 2007. Systemic reform: Research, vision and politics. In The handbook of research on science education, ed. S.K. Abell and N.G. Lederman, 911–92. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Keiler, L.S., and Woolnough, B.E. 2002. Practical work in school science: The dominance of assessment. School Science Review 83 (304): 83–8.
  • Knapp, M.S. 2002. Understanding how policy meets practice: Two takes on local response to a state reform initiative. Washington, DC: Center for the Study of Teaching and Policy, University of Washington.
  • Lange, D. 1988. Tomorrow's schools: The reform of education administration in New Zealand. Wellington: Government Printer.
  • Leach, J., and Scott, P. 2003. Individual and sociocultural views of learning in science education. Science and Education 12: 91–113.
  • Lederman, N.G. 1999. Teachers' understanding of the nature of science and classroom practice: Factors that facilitate or impede the relationship. Journal of Research in Science Teaching 36 (8): 916–29.
  • Lunetta, V.N., Hofstein, A., and Clough, M.P. 2007. Learning and teaching in the school science laboratory: An analysis of research, theory and practice. In The handbook of research on science education, ed. S.K. Abell and N.G. Lederman, 393–442. Mahwah, NJ: Lawrence Erlbaum Associates.
  • McComas, W.F. 1998. The nature of science in science education. Dordrecht: Kluwer.
  • McDonald, B., and Boud, D. 2003. The impact of self‐assessment on achievement: The effects of self‐assessment training on performance in external examinations. Assessment in Education 10 (2): 209–20.
  • McGee, C., and Penlington, C. 2001a. Research on learning, curriculum and teachers' roles. Report 2: Teacher–student interaction. Hamilton: Waikato Institute for Research in Learning and Curriculum, University of Waikato.
  • McGee, C., and Penlington, C. 2001b. Research on learning, curriculum and teachers' roles. Report 3: The classroom curriculum. Hamilton: Waikato Institute for Research in Learning and Curriculum, University of Waikato.
  • Ministry of Education. 1993. Science in the New Zealand curriculum. Wellington: Learning Media.
  • Nuthall, G. 1997. Understanding student thinking and learning in the classroom. In The international handbook of teachers and teaching, ed. B. Biddle, T. Good, and I. Goodson, 1–90. Dordrecht: Kluwer.
  • Orpwood, G. 2001. The role of assessment in science curriculum reform. Assessment in Education 8 (2): 135–51.
  • Penuel, W., Fishman, B.J., Gallagher, L.P., Korbak, C., and Lopez‐Prado, B. 2008. Is alignment enough? Investigating the effects of policies and professional development on science curriculum implementation. Science Education: 1–22. DOI 10.1002/sce.20321.
  • Powell, C.P., and Anderson, R.D. 2002. Changing teachers' practice: Curriculum materials and science education reform in the USA. Studies in Science Education 37: 107–36.
  • Reid, N., and Yang, M‐J. 2002. Open‐ended problem solving in school chemistry: A preliminary investigation. International Journal of Science Education 24 (12): 1313–32.
  • Roberts, R., and Gott, R. 2004. Using different types of practical within a problem‐solving model of science. School Science Review 85 (312): 113–19.
  • Skamp, K. 2007. Teaching primary science constructively. Victoria, Australia: Thomson.
  • Spillane, J.P. 2004. Standards deviation: How schools misunderstand education policy. Cambridge, MA: Harvard University Press.
  • Toplis, R., and Cleaves, A. 2006. Science investigations: The views of 14 to 16 year old pupils. Research in Science & Technological Education 24 (1): 69–84.
  • Tytler, R. 2003. A window for a purpose: Developing a framework for describing effective science teaching and learning. Research in Science Education 33: 273–98.
  • Watson, R., Goldsworthy, A., and Wood‐Robinson, V. 1999. What is not fair with investigations? School Science Review 80 (292): 101–6.
  • Weinburgh, M. 2003. Confronting and changing middle school teachers' perceptions of scientific methodology. School Science and Mathematics 103 (5): 222–32.
  • Wellington, J. 1998. Practical work in science: Time for a re‐appraisal. In Practical work in school science: Which way now?, ed. J. Wellington, 3–15. London: Routledge.
  • Woolnough, B. 1998. Authentic science in schools. In Practical work in school science: Which way now?, ed. J. Wellington, 93–125. London: Routledge.
  • Wong, S.L., and Hodson, D. 2009. From the horse's mouth: What scientists say about scientific investigation and scientific knowledge. Science Education 93 (1): 109–30.
