
Teachers’ assessments of students’ learning of mathematics

Pages 169-182 | Published online: 17 Jul 2008

Abstract

This paper reports a study of the classroom assessment practices of 12 sixth form college mathematics teachers in Malta. It explores the extent to which these teachers are knowledgeable about their students’ learning of mathematics and the implications that this has for their classroom practices. It reveals that these teachers’ knowledge of their students’ understanding of certain mathematical concepts is fairly limited. It then goes on to discuss this phenomenon in terms of a process that can hinder rather than promote learning. The research illuminates the position of teachers who appear to lack certain detailed information about their students which, it can be argued, could inform more effective teaching strategies. The paper concludes by exploring possible implications for similar situations where learning could be enhanced by more effective classroom assessment strategies and their use to inform future teaching and learning activities.

Introduction

In this paper we wish to reflect upon the dynamic relationship between teaching, learning and assessment. This is done through an in‐depth study of 12 mathematics teachers working in a sixth form college in Malta. Our ultimate concern was to understand more about what was going on in their mathematics classes by investigating the insights that these teachers possessed regarding the learning of their students. Our expectation was that the impact of certain teaching strategies would be strongly influenced by the knowledge that these teachers had about the mathematical learning of their students. This led us to investigate both teachers’ classroom assessment practices and the information gained from such practices to inform teachers and teaching.

Learning, assessment and the teacher

There are, of course, many different ways of conceptualising the interactions between teaching and learning. Depending upon the adopted position, different information needs may be deduced as being important to underpin effective practice (Gipps Citation1994). Some teaching approaches focus on discrete skills and concentrate upon drilling students to achieve mastery through set practice routines. Although this may seem to cover ground more quickly, progress is often illusory as imitative methods can develop dependency and a fragile fluency that is lost when practice ceases (Swan Citation2001). Within this traditional approach teaching is accomplished by telling and through promoting learning by repetition, and the learner is viewed as a passive container waiting to be filled with knowledge, but possibly not receiving knowledge because of a ‘block’ (see Denvir Citation1989). The underlying assumption that knowledge can be transferred from teacher to student explains why traditional teachers are mainly concerned with getting knowledge into their students’ heads (von Glasersfeld Citation1989). Instruction is thus seen as the delivery of information and the responsibility for decoding that information lies with the students.

It follows that the purpose of assessment within this transmission framework is to determine the effectiveness with which the teacher communicates a body of knowledge to students (Brown Citation1989). Similarly, the feedback assessment provides emphasises what has not been learned (Harlen and James Citation1997) and just tells students – often too little and too late – how they have done on a test or in a course, not how they are doing as learners (Cross Citation1998). In contrast, alternative approaches emphasise deep learning strategies and favour assessment systems that place ‘emphasis on understanding, transfer of learning to untaught problems or situations, and other thinking skills, evaluating the development of these skills through tasks that clearly must involve more than recognition or recall’ (Crooks Citation1988, 468). The embedded complementarity in this conception of learning and assessment accommodates the view that different students are inclined to learn, remember and understand in quite different ways (Gardner Citation1991). This is a timely recognition that, as LaCelle‐Peterson (Citation2000) points out, each learner presents a unique profile of abilities, accomplishments, characteristics and needs.

The understanding that learning is a cognitive and constructive process leads to the notion that knowledge results from the organic process of reorganising and restructuring undertaken by the student as he or she learns (Gipps Citation1994). The ultimate aim is to build a conceptual repertoire that permits the student to succeed when faced with novel problems (von Glasersfeld Citation1995). Apart from the individual sense‐making, learning also has a social component. It is now recognised that, whilst the social domain impacts on the developing individual in some formative way, the individual constructs his or her meanings in response to his or her experiences in social contexts (Ernest Citation1994). The vital point of constructivist learning theories is that students are active in their learning. The student is thus seen as an agent, the active constructor of meaning and knowledge who shares responsibility for learning with the teacher (Murphy Citation1996). Consequently, the teacher’s role is no longer to dispense knowledge but rather to help and guide the students in the conceptual organisation of their experiences (von Glasersfeld Citation1989). This involves providing students with authentic activities that are meaningful and purposeful from their perspective, and that allow them to apply and develop their understandings in explicit relation to others (Murphy Citation1996).

This more interactive relationship between teacher and learner was given prominence by Vygotsky (Citation1978) who introduced the concept of the Zone of Proximal Development (ZPD). The ZPD refers to the gap between what the learner can do on his or her own and what he or she can do with the help of others. The process of support and guidance offered by the teacher to help the student perform at a higher level is known as ‘scaffolding’. In this supportive role, the teacher has to discern the potential of the student to advance in learning, so that the activities presented, instead of being either too trivial or too demanding, fall within Vygotsky’s ZPD area of appropriate and productive challenges (Black Citation1999). The use of scaffolding understandably requires the teacher to be aware of individual students’ personal needs (Murphy Citation1996). One function of assessment is then to identify this zone accurately and to explore progress within it. Empowered with this ‘knowledge of students’ the teacher can then look forward to helping all students get the ‘best’ from their education (see Griffiths Citation2003).

This use of assessment for furthering individual growth and accomplishment lies at the heart of the newly emerging assessment culture that is now generally known as ‘assessment for learning’ (see Assessment Reform Group Citation1999, Citation2002; Black et al. Citation2003). The fact that the formal component of assessment becomes more prominent in post‐16 education and more groups start having a vested interest in its outcomes and the ways it is managed, both in and between institutions, does not lessen the need for assessment that motivates learners and encourages them to take more control over their learning (Ecclestone Citation2005). On the contrary, as this sector continues in its efforts to attract and retain new groups of learners, the pressure mounts in favour of assessment that, apart from guaranteeing certification towards qualifications, can also provide authentic and rich accounts of students’ achievements, which are essential elements if assessment is to be for learning (Ecclestone Citation2005).

The Maltese context

School attendance in Malta is compulsory between the ages of 5 and 16 – a total of 11 years of primary (6 years) and secondary (5 years) schooling. At the end of the secondary cycle, students may sit for the 16+ Secondary Education Certificate examinations (still popularly referred to as O Levels) organised by the local Matriculation and Secondary Education Certificate Examinations Board (MATSEC). Depending upon their aspirations and qualifications, students may then enter one of the post‐secondary institutions that offer either academic or vocational programmes. At present, approximately 66% of local 17‐year‐olds attend one of these post‐16 institutions (Chalmers et al. Citation2004). The academic route, which is by far the more popular, is linked to sixth form colleges that prepare students, over a two‐year programme, for MATSEC’s 18+ Matriculation Certificate Examination. This local university entrance examination is based upon a baccalaureate approach.

The official position in Malta is that the education process focuses on the human potential for self‐realisation (see Ministry of Education Citation1999). This policy is being promoted through efforts to establish learning as an active and relevant process that depends upon students having access to the aims of their learning and teachers knowing what students are making of their learning experiences. The current initiatives towards ‘a more formative assessment’ in local schools are part of the strategy to reach these goals (see Ministry of Education Citation1999). In this evolving scenario, eventual success shifts from a mere examination pass to the knowledge that all students, irrespective of their performance, have been helped to develop holistically to the best of their abilities. But given the considerable emphasis that is still placed on examinations in Malta (see Wain et al. Citation1995; Chetcuti and Griffiths Citation2002; Murphy Citation2005), these new education policies, which ultimately aim to provide everyone with quality education, face the daunting task of taking on a deeply embedded, highly traditional educational culture. In the context of the current study there are real questions about the capacity of Maltese teachers to respond to the specific policies which encompass ‘assessment for learning’ philosophies, when they are already struggling to accommodate the demands being made of them through other education policy changes.

The study

The 12 teachers in the study taught mathematics in a Maltese sixth form college that employs mixed teaching modes. Lessons with relatively large groups of students (up to 50) are balanced by small group teaching methods that include seminars, tutorials and personal contact hours (in which students can meet teachers on a one‐to‐one basis). At the end of each term (i.e., on three occasions with first‐year students and on two occasions with second‐year students), teachers forward to the school administration an assessment mark for each student, which is out of 100 and in multiples of 5. Teachers may apply whichever criteria they deem fit to arrive at their assessment marks. Promotion from first to second year depends to some extent upon an element of continuous assessment: for each subject, the accumulated assessment marks account for 30%, whilst the remaining 70% is allotted to the end‐of‐first‐year school‐based examination. Although not all of the participants had teaching qualifications, all were mathematics content specialists with at least a Bachelor’s degree, and they represented a reasonably good mix with regard to gender, age and teaching experience.
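The 30/70 promotion weighting described above can be illustrated with a short sketch. The paper specifies only the split and that termly marks are out of 100 in multiples of 5; the averaging of termly marks and the sample figures are our assumptions:

```python
# Sketch of the first-year promotion mark described above.
# Assumption: the termly assessment marks (each out of 100, in
# multiples of 5) are averaged before the 30% weighting is applied.

def promotion_mark(termly_marks, exam_mark):
    """Combine continuous assessment (30%) with the
    end-of-first-year school-based examination (70%)."""
    continuous = sum(termly_marks) / len(termly_marks)  # mean of termly marks
    return 0.3 * continuous + 0.7 * exam_mark

# Invented example: termly marks of 60, 65 and 70, exam mark of 55
print(promotion_mark([60, 65, 70], 55))  # 0.3 * 65 + 0.7 * 55 = 58.0
```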

A case study methodology was employed as this allowed us to gain an understanding of teachers’ classroom assessment practices by investigating the complex, real‐life situations encompassing them. Within the emergent and integrated research design that drew on ethnographic principles, the data collection, question identification and theory building were not three separate phases, but were interwoven into one another. During the three years in the field, evidence was gathered by using the range of qualitative data collection methods mentioned by Patton (Citation1990). These included: (i) in‐depth, open‐ended interviews (each teacher was interviewed individually on three different occasions, with each interview having a specific focus); (ii) direct observations, both inside and outside the mathematics classrooms of the twelve teachers, which were recorded in a ‘research journal’ with accompanying reflections; and (iii) written documents, both official and personal ones. The transcript data used in this paper comes primarily from an ‘assessment interview’, which was conducted midway through Year 3 of the study. During this interview, the teachers spoke extensively about their classroom assessment beliefs and practices, and related matters. Towards the end of Year 2 of the study the teachers had been asked, depending on the class level they taught, to predict either their students’ performance in the then forthcoming end‐of‐first‐year school‐based or matriculation examination.

As befits this style of research, we were concerned that the meanings and perceptions of each participating teacher should be allowed to emerge. Thus, once the recorded interviews were fully transcribed, the participants were encouraged to go over their transcripts and to make any alterations they wished. Although there was no definite point at which the data collection stopped and the data analysis began (see Patton Citation1990), the analysis proper commenced once we reached a reasonable level of data saturation. The analytical process involved a holistic review of the various data sources collected during the study, followed by a coding process, which produced ‘units of meaning’. Through this process, we were able to attach initial isolated meanings and understandings to the interview transcript phrases and sentences, observational notes and the documentary sources. This led to a process of further review which helped us to identify new connections and meanings within and amongst the different data sources. As a result, localised meanings began evolving into more general themes. These new and deeper understandings eventually led to the account that we are presenting here.

The findings

The findings are presented in three sections. The first section deals with the priorities, beliefs and practices of the participants. This data provides the context for the second section in which we explore what these teachers actually get to know about the mathematical learning of their students – what we are calling ‘teachers’ knowledge of students’. In the third section we build on the second by examining the teachers’ claim that their knowledge of students permits them to make good predictions of students’ examination results. Based on these findings we then analyse teachers’ knowledge of students both against what the teachers themselves are trying to achieve and what is expected from them within an educational system that claims to be ‘for learning’.

Teachers’ priorities, beliefs and practices

The teaching style of the teachers studied, apart from generally adhering to the traditional ‘talk and chalk’ approach, can be subdivided into three easily recognisable segments, one phase following the other. These are exposition (i.e., the teacher presents theory and solution methods, and students take down notes), practice (i.e., students work questions – either on their own or, as more frequently happens, guided by the teacher – to practise the solution methods) and consolidation (i.e., the teacher clarifies students’ learning and difficulties). This invariable cycle of exposition, practice and consolidation – which has much in common with the widely used, traditional three‐segment lesson reported by Romberg and Kaput (Citation1999) – presents ‘teaching as transmission’ and ‘learning as practice’. The participants emerge as being primarily concerned with demonstrating manipulations and defining concepts, relegating students in the process to memorising facts and practising procedures. There was little evidence, if any, to suggest awareness that students themselves construct meanings and that classrooms consequently need to support this kind of learning.

In line with this traditional pedagogy, the participants tended to spend a large amount of time during the lesson next to the whiteboard, usually writing on it whilst turning their backs to the students. Andrew (all are given pseudonyms), one of the teachers, caricatured this phenomenon when he said, ‘When we enter the classroom, we just dash to the whiteboard and start writing on it!’ The interviews and observations suggest that teachers perceive themselves and their students as having clearly defined and separate classroom roles – the teacher to lead from the front and students to follow from their allotted position. The teachers’ readiness to partake in the traditional practice of imparting to students a collection of techniques that simply engage them in coming to terms with what other people have done (see Romberg and Kaput Citation1999) is also evident from the ‘passive’ type of classroom participation they value and consequently seek from their students (e.g., expecting students to follow closely the teacher’s work on the whiteboard with an eye to identifying and pointing out any possible ‘mistakes’, such as a wrong addition). This results in teachers and students operating largely in isolation from each other, hardly ever interacting in a deep or meaningful manner. One fieldwork journal entry referred specifically to this:

Although teachers and students share the same space, it seems like they live in two different worlds. The net impression is that of two sets of people regularly facing each other without actually interacting, if not only fleetingly – say, when the teacher or a student asks a question. (February, Year 3)

In spite of this ‘distancing’, the participants still believe themselves to be in a position to gather unique information about students’ learning (see Calfee and Masuda Citation1997). The ways in which they claimed to generate information about their teaching‐learning situation can be classified under five categories: (1) observing students; (2) checking students’ work; (3) listening to students; (4) asking questions; and (5) class tests. A distinction invariably emerged during interviews between the sources of information that lead directly to student marks (what we termed ‘formal assessment’) and others that do not (what we termed ‘informal assessment’).

This study illustrated how high‐stakes examinations, be these for school promotion or certification purposes, create rather than measure what goes on inside the classroom (see Hanson Citation2000). The extent of this emerged when the participants spoke about their priorities in teaching. Their top three, closely interlinked priorities, in descending order of importance, were:

  1. Helping students to pass examinations. This was mentioned by all the teachers as ‘the’ top priority for reasons that had mainly to do with this being ‘the teacher’s job’, ‘what students are interested in’ (see Broadfoot Citation1996), and ‘what really counts’.

  2. Finishing the syllabus. This appeared to be a must for everyone except Ray (possibly the only teacher in the study who gives more weight to his own beliefs than to school prerogatives), who argued that he spends as much time as necessary on a topic, irrespective of whether or not he covers the entire syllabus.

  3. Focusing on the better students. Most teachers claimed to concentrate on those students who were thought to stand a realistic chance of passing the examination. Some of the teachers added that they nevertheless felt bad about ‘neglecting’ the others, at least those who made some effort. They explained this ‘inattention’ in terms of having to prioritise their teaching time given that they work in a constrained environment (e.g., ‘large student groups’, ‘inadequate O Level preparation’, ‘students being largely incapable of working on their own’ and ‘many unmotivated students’).

The noted common practice of basing teaching decisions on only a small group of students (see Black Citation1998) often translated, in our study, into the outright exclusion of the weaker performers. The embedded understanding that ‘the division between those who can do mathematics and those who can’t is perfectly natural and … legitimate’ (Gates Citation2002, 212) works against the notion of equity, which demands that each student is helped to realise his or her potential (see Gipps and Murphy Citation1994). Unsurprisingly, the participants’ assessment priorities, beliefs and practices were well aligned with their still very traditional classroom approaches. It can in fact be said that, generally speaking, their assessment practices:

  • lacked an effective educational base (i.e., with little or no assessment training, their practices originated mostly from personal and collective experiences);

  • pursued traditional forms and approaches to teaching (i.e., no sign of new, alternative assessments; assessment was used primarily to check if what the teacher was ‘transmitting’ had been ‘captured’ by the students);

  • emphasised non‐professional functions (i.e., assessment was largely used to judge student performances and to prepare students for examinations);

  • were firmly in teachers’ hands (i.e., students were on the receiving end of assessment);

  • were characterised by collegial isolation (i.e., almost complete lack of assessment collaboration amongst teachers);

  • were not well integrated with teaching and learning (i.e., little emphasis, particularly in the immediacy of instruction, on using assessment to help students learn);

  • provided students and outsiders with limited feedback (i.e., teachers kept to themselves most of what they learned about their students’ learning).

These characteristics produce a picture of classroom practice that is alien to the emerging view of assessment as being primarily at the service of learning. Instead, the participants’ strong association of quality classroom assessment with the adequate monitoring of, and timely support for, ‘teaching as transmission’ and ‘learning as practice’ suggests, in turn, the need for a holistic approach to effecting assessment changes in teachers. As presently understood, classroom assessment is good if it produces rich evidence about students’ learning, rendering it possible to make effective instructional decisions (National Council of Teachers of Mathematics [NCTM] Citation1995).

Teachers’ knowledge of students

All the teachers studied claimed to be more convinced of and ready to rely on any evidence that is generated directly from within their own classroom than anything that originates from outside it. They invariably linked their reliance on classroom‐based information to what they view as their own sustained active involvement with students at close quarters. These teachers evidently believe that they employ a teaching style that intrinsically guarantees them knowledge. Nicholas clearly elucidated this point:

I can speak confidently about my classroom situation because I’m constantly overseeing what’s going on … I’m all the time in close contact with students. As I’ve already explained, I teach not lecture … If I were to give a lecture, I wouldn’t even know a small fraction of what I know about my teaching and students!

The teachers were so convinced about this that their most frequently cited benefit of classroom assessment was ‘the opportunity to learn about students’. Most of them maintained, in fact, that classroom assessment renders them very knowledgeable about matters related to students’ learning of mathematics. Their claims to knowledge extended to areas such as students’ understanding of mathematics, their mathematical preparation, their study efforts, as well as their mathematical potential and performance. It also appeared that the more experienced the teacher, the greater the degree of self‐confidence in his or her knowledge of students. Mario’s words portray the extreme confidence displayed by the more experienced participants:

By now [i.e., six months into the scholastic year], I have formed an opinion about each student in class. If you’ll point at any one of them, I’ll be able to tell you about his or her mathematical worth … and my opinions are hardly ever wrong!

When Mario used the word ‘formed’, he was conveying the widely held notion amongst the teachers that they acquire this knowledge over a period of time, which varies from a few weeks (e.g., Andrew insisted that after three weeks he would already know which students would fail the examination) to a few months. Like most of the teachers, Nicholas placed himself towards the less ‘speedy’ end of this spectrum:

In the first month or two there were still some students that I didn’t really know … but not now! I wouldn’t have done my duty as a teacher if I didn’t know all the students by now … after teaching them for five months!

Nicholas’ notion that ‘it is the teacher’s duty to get to know his or her students mathematically’ was a recurring theme in each participant’s discourse. During the assessment interviews, which were conducted between January and March of Year 3 of the study, all the teachers asserted that they knew at least most of their students rather well as far as mathematics went. Some teachers, however, appeared to be less certain about their knowledge of the ‘middle range’ students. Just like Weeden et al.’s (Citation2002) ‘invisible pupils’, these were students with whom the teachers reportedly had few direct interactions, as they neither stood out in class by doing relatively well nor by being particularly weak. It was Rita who best brought out the participants’ at times somewhat hesitant claims to knowledge with regards to these ‘middle’ students:

There are some students … who are neither good nor weak, just somewhere in between … about whom I feel less sure of myself. … These students usually remain in the background … they rarely open their mouths … and I find little reason to approach them! What I know about them is based on what I observe, the homework (… but I’m never sure if they did it by themselves), and the test results … but should they not come for tests, I’ll really have big problems …

To explore what they actually ‘knew’ about students, the teachers were invited during the assessment interview to pick two students from their class, and to describe them. The interview transcript below (in which Andrew talks about Jason, one of his students) illustrates the type of information that was forthcoming:

Michael: … what can you tell me about Jason?

Andrew: He’s quite good in mathematics … he usually does his homework, pays attention in class, and gets good results in tests … let me check [looks at his class records] … just as I said, he got 75% in the first test, 70% in the second, and 85% in the third … wish I had more students like him!

Michael: But … what about his understanding of mathematics?

Andrew: Oh! I see what you mean … I’m sure he’ll pass the exam … at the moment we’re doing Trigonometry, and he seems to be following … at least his work is usually correct, and he hardly has any serious difficulties … Jason would let me know if he’s not following …

Michael: What can you tell me about his difficulties?

Andrew: You know! … The usual things! … A wrong sign here and there, fractions … problems with Algebra … but this concerns most of my students, not just Jason …

Michael: Can you be any more specific than this?

Andrew: If I had some of his work in front of me, I would be able to pinpoint his mistakes … but there’s no way I could be more specific right here … I never write these things down, and not just with Jason …

Michael: What about his mathematical strengths?

Andrew: What can I say? … he doesn’t give up that easily … and he seems to have quite a good mathematical background … [refers to class records] … he’s got grade 2 in O Level … and that’s not bad … But I don’t think I can be more specific than this!

As it turned out, the teachers’ knowledge of students appeared to be mostly related to their level of commitment to work, their behaviour in class, and their chances of examination success. Invariably, whenever ‘pressed’ for information that went beyond this ‘ranking’ to include evaluations against criteria (see Gipps Citation1994), the teachers either ignored such efforts and kept to their chosen path, or else readily admitted that they could not provide such information. Another point that emerged from this exercise was that the teachers almost exclusively chose to speak about their better students (this was ascertained from the student assessment records that the teachers brought with them for the interview). When queried about this tendency (which was reminiscent of their declared preference to focus attention on the better students), most teachers maintained that these were probably the students about whom they were more knowledgeable. A few, however, explained it as being down to pure chance.

Teachers’ predictions of examination results

A claim commonly made by the teachers throughout the study was that, given their good knowledge of students derived from classroom assessment, they could predict the examination results (i.e., the school’s end‐of‐first‐year examination [range of marks: from 0 to 70] and the Matriculation Certificate’s end‐of‐second‐year examination [range of grades: from A to F]). To examine these assertions, the teachers were asked towards the end of Year 2 of the study to predict a mark (first‐year teachers had to give a mark in multiples of 5 between 0 and 70) or a grade (second‐year teachers had to choose a grade from A, B, C, D, E and F) for each student, and to indicate in writing the criteria they used to arrive at these predictions. Table 1 gives for each teacher the computed correlation coefficient between his or her predictions and the students’ subsequent examination results.

Table 1. The correlation coefficients based on teachers’ predictions.

From what the teachers wrote, the predictions were largely based on the results of their formal assessments (particularly the class tests, which reportedly mimicked the very examinations whose results the teachers were asked to predict). The teachers either discarded completely the results of informal assessments or used them only to fine‐tune their predicted marks or grades. Table 1 reveals that whilst all the correlation coefficients were consistently positive and high (which in itself gave credibility to their claim of being able to predict examination results with a fair degree of accuracy), they tended to be generally higher for first‐year than for second‐year teachers. Given that the first‐year teachers generally gave more class tests than second‐year teachers (an average of 5.7 tests per year as compared to 2.8), the correlations reported in Table 1 appear to support the teachers’ notion that ‘the more one tests, the better the predictions’. Although the literature remains undecided on the extent to which teachers can predict the examination performances of their students (see Murphy Citation1979; Nuttall Citation1989; Delap Citation1994, Citation1995; Black and Wiliam Citation1998; Moody Citation2001), the present findings add to the evidence in favour of those who argue that teachers can make fairly good predictions. And they can apparently do so in spite of knowing so little about their students’ learning (see Weeden et al. Citation2002).

To gain a better understanding of teachers’ predictive ability, the predictions of each teacher and the corresponding examination results were regrouped according to three predicted performance levels. These were: (1) low performance (marks from 0 to 20 for first‐year teachers; grades F and E for second‐year teachers); (2) average performance (marks from 25 to 45; grades D and C); and (3) high performance (marks from 50 to 70; grades B and A). In line with the teachers’ declared general preference to focus attention on the better‐performing students and the subsequent claims made by some that these are the students they know best, statistical analysis showed that the correlation between predictions and examination results was highest in 9 cases out of 12 for students perceived by teachers to be high performers. In the three remaining cases (i.e., Andrew, Jackie and Stephen), the correlation was highest for students perceived by teachers to be low performers. The fact that the correlation was never highest for students perceived to be of average performance seems to support the claim made by some of the participants that they feel less sure about their knowledge of ‘middle’ students.
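The regrouping into three predicted performance levels can be sketched as below. The band boundaries follow the paper’s definitions for first‐year marks; the (prediction, result) pairs are invented for illustration:

```python
def performance_band(mark):
    """Map a first-year predicted mark (0-70, in multiples of 5)
    to the paper's three performance levels."""
    if mark <= 20:
        return "low"       # marks 0-20
    elif mark <= 45:
        return "average"   # marks 25-45
    return "high"          # marks 50-70

# Group invented (predicted, actual) pairs by the predicted band, as a
# first step towards the per-band correlations reported in the study.
pairs = [(65, 70), (55, 50), (30, 40), (15, 10)]
by_band = {}
for predicted, actual in pairs:
    by_band.setdefault(performance_band(predicted), []).append((predicted, actual))

print({band: len(v) for band, v in by_band.items()})
# {'high': 2, 'average': 1, 'low': 1}
```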

During the assessment interviews, the teachers were given the opportunity to compare their predictions against the actual examination results. They appeared pleased, but not surprised, by the ‘overall closeness’ between the two sets of data. When they noted some strong ‘discrepancies’ between the predictions and the examination results, more often than not they presented a series of plausible explanations that left their predictions unchallenged (e.g., Matthew ‘accused’ a student who performed much better in the examination than he had expected of copying). In fact, most of the teachers (especially the more experienced ones) asserted the ‘validity’ of their own predictions over the examination results. This is how Mario expressed this idea:

I know what the students are capable of … no examination result would ever convince me otherwise … I’m 100% positive that my predictions are more truthful than the examination results!

By choosing to privilege their own predictions, these teachers confirmed once again the high esteem in which they hold their classroom‐based information. Believing this, they generally spoke of themselves as competent assessors with little reason for doing things differently. Even in a hypothetical constraint‐free educational environment, these teachers emerged as primarily interested in ‘doing more of what they are already doing’. Adopting new strategies for conducting classroom assessments did not therefore appear to be something that they were likely to contemplate.

Implications for learning

The research evidence suggests that the participants’ knowledge of students, even of those they claim to know well, lacks depth. Indeed, it emerged that these teachers tend to generalise their understanding from particular assessments, which often only target a narrow sample of students and a narrow sample of each student’s behaviour, to other forms of assessment and much wider contexts. In this scenario, which reminds us of Black’s (Citation1998) observation that teachers do not usually have sound information about students’ progress, it is not uncommon that teachers develop what Bright and Joyner (Citation1998) call ‘illusions of learning’. These are the over‐generalisations made by teachers of the depth of understanding that is actually present in what students say and do.

Particularly relevant here is the participants’ tendency to present the learning of mathematics effectively as ‘the successful handling of prescribed methods through practice’: students’ dexterity with such methods, with little attention paid to what actually lies beneath it, was often projected by teachers as an indication of learning. This teacher focus on exploring reproduction was to the detriment of learning about the students’ individual and social construction of meanings. Consequently, irrespective of whether the participants convinced themselves that students were learning or not, they were very likely to acquire only surface knowledge of students that would not shed much light on what students actually know and can do, let alone on what they can nearly do (see Gipps Citation1994).

The teachers studied, therefore, did not appear well‐placed to use assessment formatively, even if they so wished. Especially with regard to those students about whom they admitted to knowing very little, they lacked the valid and detailed information needed to determine, upon reflection, whether there is a gap between actual and desired levels of performance and which actions can successfully close it (see Wiliam and Black Citation1996). These teachers are consequently not in a position to offer students appropriate scaffolding activities that provide the support and guidance needed to perform at a higher level (Murphy Citation1996).

To decide successfully what to do next, which students to push, and so on, it is not enough for teachers to try to make sense of evidence derived from assessments that ‘just happen’, that lack the potential to produce rich information, or whose potential goes unexploited. Meagre evidence may well serve a number of non‐professional purposes (amongst which we would include the prediction of examination results), but it is not suitable for instructional purposes (see Brookhart Citation1999). To be ‘for learning’, assessments need to promote valid inferences about learning through adequate and relevant evidence (NCTM Citation1995). Needless to say, no amount of reflective examination by the teacher can ever turn poor quality evidence into information that helps to further learning. Within an educational system that, like the Maltese one, is trying to integrate teaching and assessment better for the benefit of learning, the inadequate level of teachers’ knowledge of students depicted in our study is consequently a cause for serious concern.

Working for change

Although the 12 teachers in this study generally projected their ability to predict examination results as a sign of their deep knowledge of students and, by implication, of the good quality of their teaching, they were not completely unaware of the ‘not for learning’ consequences of at least some of their classroom decisions. As reported elsewhere (see Buhagiar Citation2004), the participants were clearly willing to sacrifice some aspects of learning for the sake of examination success, which was by their own admission the number one priority of their teaching. Displaying a great ‘sense of practicality’, these teachers had argued against using open tasks in class, even though they were largely appreciative of their educational benefits, as these require extensive instructional time and are not directly linked to the traditional closed questions that students still face in examinations. The situatedness of the participants’ decisions in many ways explains why they strongly believe themselves ‘to be doing a good job in the present circumstances’ and feel ‘no real need to change their practices’.

The challenge identified here – namely, to improve learning by seeking to help teachers become more knowledgeable about their students – is thus embedded in what appears to be a distancing of responsibility by the teachers. The participants, in fact, attributed ‘blame’ to students (mostly for lacking motivation and commitment), their school (mostly for not taking adequate measures against students at fault) and the educational system (mostly for an inadequate mathematics preparation at secondary level and for the lowering of standards at sixth form level as a result of the continued increase in student numbers). But regardless of who or what is to blame, the point is that all learners, irrespective of whether their examination results ‘say’ they are succeeding or failing in the system, suffer when their teachers do not possess the rich information about their learning strengths and weaknesses that would permit them to move ahead, each at his or her own pace. Teachers need to be helped to become more aware of this if they are to desire, and eventually embrace, change that would benefit the learning process.

This study also provides some evidence that Maltese policy‐makers, in spite of the passing years, have had little success in reforming local assessment practices through the formulation of new policies (see also Grima and Chetcuti Citation2003). The participating teachers were either unaware of these developments or else ignored or re‐dimensioned those policies of which they either did not approve or which, according to their judgement, would not function in their working context. The realisation that improvements in assessment policies, albeit necessary, are by no means a sufficient condition for change (see Torrance Citation1995) is a clear indication of the complexity surrounding change (see Fullan Citation2001).

In particular, with assessment clearly being a context‐bound activity (see Broadfoot Citation1996; Black Citation1998), the operating context must necessarily be highlighted in debates about assessment change. In fact, the larger study (Buhagiar Citation2005) from which the present data is a sub‐set concluded that improvement in teachers’ classroom assessment practices calls for action within two distinct, but highly interrelated, dimensions – what Day (Citation2000) calls the ‘working context’ and the ‘personal context’ of the teacher. Action in the personal dimension is needed so that teachers may become more sensitised to the principles and practices that underpin the new assessment paradigm, and action in the working dimension is needed so that the contexts in which teachers operate can become a vehicle for change rather than remain an obstacle to it.

One possible goal is to construct an all‐encompassing ambience that helps assessment truly become a central element of the learning process. However, this much‐needed integration between teaching, learning and assessment – a notion probably still largely restricted to policy‐makers and their documents – would stand a better chance of becoming an entrenched classroom reality should the general public, not simply those directly involved in education or its by‐products, come to understand, appreciate and ultimately work towards an educational system that truly values ‘learning’ and ‘everyone’ in all its endeavours. The way forward thus seems to be through ‘constructive dialogue’ campaigns that educate, rather than just inform, all key education players, as well as the general public, about possible ways of improving student learning.

Notes on contributors

Michael A. Buhagiar is lecturer of mathematics at the Junior College of the University of Malta and associate fellow of the Euro‐Mediterranean Centre for Educational Research (EMCER) at the same university. He is particularly interested in exploring assessment issues inside mathematics classrooms.

Roger Murphy is professor of education and director of the Centre for Developing and Evaluating Lifelong Learning (CDELL) in the School of Education at the University of Nottingham. He has specialist interests in relation to the impact of educational assessment systems on student learning.

References

  • Assessment Reform Group . 1999 . Assessment for learning: Beyond the black box , Cambridge : University of Cambridge, School of Education . [Pamphlet]
  • Assessment Reform Group . 2002 . Assessment for learning: 10 principles [Leaflet/poster]
  • Black , P. 1998 . Testing: friend or foe? The theory and practice of assessment and testing , London : Falmer Press .
  • Black , P. 1999 . “ Assessment, learning theories and testing systems ” . In Learners, learning and assessment , Edited by: Murphy , P. 118 – 134 . London : Paul Chapman Publishing in association with The Open University .
  • Black , P. , Harrison , C. , Lee , C. , Marshall , B. and Wiliam , D. 2003 . Assessment for learning: Putting it into practice , Maidenhead : Open University Press .
  • Black , P. and Wiliam , D. 1998 . Assessment and classroom learning . Assessment in Education: Principles, Policy and Practice , 5 ( 1 ) : 7 – 74 .
  • Bright , G.W. and Joyner , J.M. 1998 . “ Understanding and improving classroom assessment: Summary of issues raised ” . In Classroom assessment in mathematics: Views from a National Science Foundation working conference , Edited by: Bright , G.W. and Joyner , J.M. 27 – 57 . Lanham, MD : University Press of America .
  • Broadfoot , P.M. 1996 . Education, assessment and society: A sociological analysis , Buckingham : Open University Press .
  • Brookhart , S.M. 1999 . The art and science of classroom assessment: The missing part of the pedagogy , Washington, DC : George Washington University .
  • Brown , M. 1989 . “ Graded assessment projects: Similarities and differences ” . In Developments in learning and assessment , Edited by: Murphy , P. and Moon , B. 300 – 311 . London : Hodder and Stoughton .
  • Buhagiar , M.A. 2004 . ‘How appropriate is this task for my class?’ Exploring teachers’ classroom decision‐making processes as they waver between ‘practical’ and ‘ideal’ positions . Mediterranean Journal of Educational Studies , 9 ( 2 ) : 83 – 108 .
  • Buhagiar , M.A. 2005 . Mathematics teachers’ classroom assessment practices: A case study in a Maltese sixth form college , University of Nottingham . Ph.D. diss.
  • Calfee , R.C. and Masuda , W.V. 1997 . “ Classroom assessment as inquiry ” . In Handbook of classroom assessment: Learning, adjustment, and achievement , Edited by: Phye , G.D. 69 – 102 . San Diego, CA : Academic Press .
  • Chalmers , R. , Thake , M.A. and Sciberras , J. 2004 . State higher education funding , Floriana, Malta : Ministry of Education, Youth and Employment .
  • Chetcuti , D. and Griffiths , M. 2002 . The implications for student self‐esteem of ordinary differences in schools: The cases of Malta and England . British Educational Research Journal , 28 ( 4 ) : 529 – 549 .
  • Crooks , T.J. 1988 . The impact of classroom evaluation practices on students . Review of Educational Research , 58 ( 4 ) : 438 – 481 .
  • Cross , K.P. 1998 . “ Classroom research: implementing the scholarship of teaching ” . In Classroom assessment and research: An update on uses, approaches, and research findings , Edited by: Angelo , T. 5 – 12 . San Francisco, CA : Jossey‐Bass Publishers .
  • Day , C. 2000 . “ Stories of change and professional development: The cost of commitment ” . In The life and works of teachers: International perspectives in changing times , Edited by: Day , C. , Fernandez , A. , Hauge , T.E. and Møller , J. 109 – 129 . London : Falmer Press .
  • Delap , M.R. 1994 . An investigation into the accuracy of A‐Level predicted grades . Educational Research , 36 ( 2 ) : 135 – 148 .
  • Delap , M.R. 1995 . Teachers’ estimates of candidates’ performance in public examinations . Assessment in Education: Principles, Policy and Practice , 2 ( 1 ) : 75 – 92 .
  • Denvir , B. 1989 . “ Assessment purposes and learning in mathematics education ” . In Developments in learning and assessment , Edited by: Murphy , P. and Moon , B. 277 – 289 . London : Hodder and Stoughton .
  • Ecclestone , K. 2005 . Understanding assessment and qualifications in post‐compulsory education and training: Principles, politics and practices , 2nd ed. , Leicester : National Institute of Adult Continuing Education .
  • Ernest , P. 1994 . “ Social constructivism and the psychology of mathematics education ” . In Constructing mathematical knowledge: Epistemology and mathematical education , Edited by: Ernest , P. 62 – 72 . London : Falmer Press .
  • Fullan , M. 2001 . The new meaning of educational change , 3rd ed. , New York : Teachers College Press .
  • Gardner , H. 1991 . The unschooled mind: How children think and how schools should teach , New York : Basic Books .
  • Gates , P. 2002 . “ Issues of equity in mathematics education: Defining the problem, seeking solutions ” . In Teaching mathematics in secondary schools: A reader , Edited by: Haggarty , L. 211 – 228 . London : RoutledgeFalmer .
  • Gipps , C.V. 1994 . Beyond testing: Towards a theory of educational assessment , London : RoutledgeFalmer .
  • Gipps , C. and Murphy , P. 1994 . A fair test? Assessment, achievement and equity , Buckingham : Open University Press .
  • Griffiths , M. 2003 . Action for social justice in education: Fairly different , Maidenhead : Open University Press .
  • Grima , G. and Chetcuti , D. 2003 . Current assessment practices in schools in Malta and Gozo: A research report . Journal of Maltese Education Research , 1 ( 2 ) : 57 – 94 . University of Malta. http://www.educ.um.edu.mt/jmer
  • Hanson , F.A. 2000 . “ How tests create what they are intended to measure ” . In Assessment: Social practice and social product , Edited by: Filer , A. 67 – 81 . London : RoutledgeFalmer .
  • Harlen , W. and James , M. 1997 . Assessment and learning: Differences and relationships between formative and summative assessment . Assessment in Education: Principles, Policy and Practice , 4 ( 3 ) : 365 – 379 .
  • LaCelle‐Peterson , M. 2000 . “ How assessment policies and practices obscure the education of language minority students ” . In Assessment: social practice and social product , Edited by: Filer , A. 27 – 42 . London : RoutledgeFalmer .
  • Ministry of Education . 1999 . Creating the future together: National minimum curriculum , Floriana, Malta : Ministry of Education .
  • Moody , I. 2001 . A case‐study of the predictive validity and reliability of Key Stage 2 test results, and teacher assessments, as baseline data for target‐setting and value‐added at Key Stage 3 . The Curriculum Journal , 12 ( 1 ) : 81 – 101 .
  • Murphy , P. 1996 . “ Defining pedagogy ” . In Equity in the classroom: Towards effective pedagogy for girls and boys , Edited by: Murphy , P.F. and Gipps , C.V. 9 – 22 . London : Falmer Press .
  • Murphy , R.J.L. 1979 . Teachers’ assessments and GCE results compared . Educational Research , 22 ( 1 ) : 54 – 59 .
  • Murphy , R.J.L. 2005 . “ Some external perspectives on MATSEC ” . In MATSEC: Strengthening a national examination system , Edited by: Grima , G. , Camilleri , R. , Chircop , S. and Ventura , F. Appendix C, 1 – 15 . Floriana, Malta : Ministry of Education, Youth and Employment .
  • National Council of Teachers of Mathematics (NCTM) . 1995 . Assessment standards for school mathematics , Reston, VA : NCTM .
  • Nuttall , D.L. 1989 . “ The validity of assessments ” . In Developments in learning and assessment , Edited by: Murphy , P. and Moon , B. 265 – 276 . London : Hodder and Stoughton .
  • Patton , M.Q. 1990 . Qualitative evaluation and research methods , 2nd ed. , Newbury Park, CA : SAGE Publications .
  • Romberg , T.A. and Kaput , J.J. 1999 . “ Mathematics worth teaching, mathematics worth understanding ” . In Mathematics classrooms that promote understanding , Edited by: Fennema , E. and Romberg , T. A. 3 – 17 . Mahwah, NJ : Lawrence Erlbaum Associates Publishers .
  • Swan , M. 2001 . “ Dealing with misconceptions in mathematics ” . In Issues in mathematics teaching , Edited by: Gates , P. 147 – 165 . London : RoutledgeFalmer .
  • Torrance , H. 1995 . “ The role of assessment in educational reform ” . In Evaluating authentic assessment: Problems and possibilities in new approaches to assessment , Edited by: Torrance , H. 144 – 156 . Buckingham : Open University Press .
  • von Glasersfeld , E. 1989 . “ Learning as a constructive activity ” . In Developments in learning and assessment , Edited by: Murphy , P. and Moon , B. 5 – 18 . London : Hodder and Stoughton .
  • von Glasersfeld , E. 1995 . “ A constructivist approach to teaching ” . In Constructivism in education , Edited by: Steffe , L.P. and Gale , J. 3 – 15 . Hillsdale, NJ : Lawrence Erlbaum Associates Publishers .
  • Vygotsky , L.S. 1978 . Mind in society: The development of higher psychological processes , Cambridge, MA : Harvard University Press .
  • Wain , K. , Attard , P. , Bezzina , C. , Darmanin , M. , Farrugia , C. , Psaila , A. , Sammut , J. , Sultana , R. and Zammit , L. 1995 . Tomorrow’s schools: Developing effective learning cultures , Floriana, Malta : Ministry of Education and Human Resources .
  • Weeden , P. , Winter , J. and Broadfoot , P. 2002 . Assessment: What’s in it for schools? , London : RoutledgeFalmer .
  • Wiliam , D. and Black , P. 1996 . Meanings and consequences: A basis for distinguishing formative from summative functions of assessment? . British Educational Research Journal , 22 ( 5 ) : 537 – 548 .
