Research Article

Filling gaps: assessment software and the production of mathematics and its teaching and learning in primary schools

Pages 501-515 | Received 22 May 2020, Accepted 12 Apr 2021, Published online: 04 Jun 2021

ABSTRACT

Recent policy reforms to the national assessment system in England have altered the way in which teachers undertake assessment. Through an analysis of data from a small-scale interview study with eleven primary teachers in the south and east of England, we examine the ways in which teachers work with assessment software to monitor and track attainment. By employing a theoretical lens focusing on power and implementing an analytics of government, we illuminate the visualising effects of whole-class digital assessment grids. A prominent feature across the data set is the use of the language of ‘gaps’ in relation to the teaching and assessment of mathematics. We argue that digital assessment tools act as technologies of power and of the self which shape teachers’ practices, their identities and how they position children as learners, but also the nature of the subject of mathematics itself. It is important to better understand assessment software in education and the discourses in which it operates in order to recognise what is at stake for teaching, learning and curriculum.

Braun and Maguire (2018, p. 12) have argued, in this journal, that ‘the culture in English schools is such that the forms of accountability are part of the professional repertoire of the teacher’. We agree, and in this paper we aim to show how part of this professional repertoire, the teaching and assessment of primary (5–11 years) mathematics, and indeed school mathematics itself, develops in relationship with accountability as manifested in, and through, the use of commercially produced assessment software.

We refer specifically to computer software into which teachers enter assessment data and which then evaluates it to produce a record of a pupil’s attainment. Such software has been widely promoted to primary teachers in England as an efficient and effective means of assessing and monitoring the curriculum. For example, one such system, School Pupil Tracker, promotes itself as a means ‘to analyse attainment and pupil progress efficiently, both to understand the needs of your school and in reporting to others’, claiming that it ‘relieves your teachers from a large volume of paperwork’ and can ‘identify gaps in learning’ (Educater, n.d.). Such marketing of apparently secure, efficient, consistent and accurate assessment presents arguments that, for schools in a high-stakes accountability regime where ‘evidence’ of pupil progress is a necessity, are hard to disregard. However, there is a growing literature on the effects of ‘datafication’: how data-based governance and accountability regimes interact with policy and practice in different contexts and at different educational levels (see, for example, Hardy, 2019; Ozga, 2020; Roberts-Holmes & Bradbury, 2016; Selwyn & Gašević, 2020). The main thrust of the argument is encapsulated by Selwyn (2015, p. 74, as cited in Williamson, 2016) in the claim that the increasing use of data systems driven by algorithms ‘leads to a recursive state where data analysis begins to produce educational settings, as much as educational settings producing data’. A number of authors, including ourselves, have shown how datafication operates at the classroom level and is intimately bound up with teacher professionalism (Ozga, 2009) and the shaping of pedagogy (Bradbury, 2019; Neumann, 2021; Pratt, 2016; Pratt, 2018; Pratt & Alderton, 2019), as well as with feelings and affect (Sellar, 2015).
The point of departure of this paper, and its original contribution, is its specific focus on the use of assessment software and the ways in which the forms of datafication produced through its use interact not just with teachers’ professional teaching and learning repertoire (Braun & Maguire, 2018), but with the whole nature of a curriculum subject – mathematics – itself.

Mathematics is frequently considered a high-stakes subject: it is required as a qualification in most countries for entry to higher education and many jobs, and is used to make comparisons between national education systems, for instance through the Programme for International Student Assessment (PISA), tests that are much vaunted by ‘successful’ governments and the press. In England, it is one of just two subjects (with English) tested nationally from ages 5 to 16, assessments which form a significant part of teachers’ own performance measures. In this light, the mathematics curriculum has constantly evolved in response to political and professional demands, swinging back and forth between practical utility and problem solving on the one hand and a more conservative, knowledge-based subject on the other (Brown, 2010). The most recent reforms at primary level reflect the latter, with a procedural emphasis exemplified by the introduction of a statutory quick-fire multiplication tables check for all 8/9 year olds (Department for Education, 2018) and the removal of solving problems from the Early Learning Goals for mathematics (Department for Education, 2020), the endpoint measure of pupils’ attainment at age 5. Written policy, then, is far from clear about what should form the focus for mathematics teachers at primary level, and our analysis of the role of assessment software therefore takes place in what is already a contested arena.

An analytics of government

To theorise our work we draw on Dean’s (2010) development of an ‘analytics of government’ for examining ‘the organized practices through which we are governed and through which we govern ourselves’ (p. 28), which he calls ‘regimes of practices’. This work is rooted in Foucault’s (2007, 2014) theory of governmentality, which we assume is familiar to the readership of this journal (but see Ball (2013) alongside the references to Foucault above). Dean distinguishes at least four reciprocal dimensions of regimes of practice, as follows.

Forms of visibility i.e. ways of seeing and perceiving that illuminate certain objects and obscure others. Drawings, charts, maps, graphs and tables are all pictorial ways of visualizing fields to be governed.

Technical aspects of government which describe the procedures, tactics, techniques, technologies and vocabularies deployed in the process of governing. League tables and progress measures are examples of mechanisms of accountability that impel schools to invest in technical resources such as assessment software.

Forms of knowledge arising from and informing the activity of governing. For example, Valverde (2017, p. 24) observes that ‘statistics, probability calculations, performance assessments, and audits of organizations’ are typical forms of governmental knowledge.

Forms of individual and collective identities, for example, teacher identities authorized by commonly accepted discourses of effective practice which shape how teachers judge their own and each other’s professional standing.

These four dimensions of practice – visibility, technicalities, knowledge and identification – are the means by which we analyse assessment practices. Our concern, like Foucault’s, is not with any essentialised, objective notion of ‘what’ is true but with how some things ‘come to count’ as true (Foucault, 1980). As Dean (2010, p. 41) describes, such an analysis ‘places these regimes of practices at the centre of analysis and seeks to discover the logic of such practices’. The word ‘logic’ here needs care; it can be used to imply rational thought based on deduction, or to imply that some directions of thought, perhaps implicit and inductive, seem to flow more easily, affording, rather than constraining, certain actions over others. It is in the latter sense that we interpret it here and, hence, in our analysis priority is given to questions that ask how subjects are formed with particular capacities and possibilities of action (Dean, 2010). It is through this theoretical lens that we position our research question: in making use of assessment software, how are primary teachers, teaching and learning, and school mathematics itself produced and governed within a logic of practice?

Assessment policy: reform and continuity

In England, data is fundamental to the governing work of education. The centrality of performance data and the location of accountability in schools characterise the technical-managerial form of accountability adopted since the 1990s. National assessments currently take place within primary education in England at ages 5, 6, 7 and 11, in which pupils and schools are measured against government expectations for pupil outcomes. The results of the assessments at ages 7 and 11 have been published for individual schools as ‘performance tables’ since 1992. In the same year, a national school inspectorate, the Office for Standards in Education (Ofsted), was created. Ofsted is a key component of the accountability system, and a national pupil database (NPD), an extensive longitudinal data set of pupil and school characteristics, underpins its work. However, in addition to external data, internal school data is imperative for schools undergoing inspection to ensure they meet Ofsted’s thresholds. As Kaliszewski et al. (2017, p. 36) note, ‘such is the pressure of and consequences of the inspection system, that schools engage in extensive data collection and analyses themselves to prepare and defend themselves during the challenging inspection process’. In 2019, Ofsted announced that it would no longer require schools to submit internal assessment data. However, how schools use internal data to monitor progress and inform self-evaluation and improvement plans is still central to inspection processes (see Ozga (2020) for a detailed analysis).

One key aspect of data-driven education is the business opportunity afforded by policy reform and advancing technological capabilities. Authors such as Ball (2004), Bradbury and Roberts-Holmes (2018), and Roberts-Mahoney et al. (2016) draw attention to the consequences of the privatisation of education services, much of which, they argue, goes unnoticed. These consequences include the redefinition, by market imperatives, of what it means to be a teacher and a learner, of who makes educational decisions, of relationships with others and of the values on which these are determined. Educational technology companies are not only able to offer solutions to problems and, through their products, reduce the risk to schools of navigating ‘crises’; they also influence education policy and provide solutions for ‘problems’ which they themselves identify.

Braun and Maguire's (2018) case study used data from 2015, a period involving a change in assessment policy in England which we were also researching (see Pratt & Alderton, 2019) and to which we return here, reanalysing the same data in light of their paper and our ongoing interest in assessment technologies. Reforms at that time made a major shift in what it meant to identify a pupil as working at national expectations: away from a best fit model across targets at different levels, to one in which all attainment targets need to be achieved at each level. This was a significant change, as best fit assessment levels had been in place since 1988. Meanwhile, the criteria by which schools as a whole are judged and compared remained unchanged. The primary mathematics National Curriculum (NC) (Department for Education, 2013) was modified to align with the removal of best fit levelling through the specification of more challenging, year-by-year attainment targets in place of targets only at the end of year 2 (age 7) and year 6 (age 11). The curriculum now states that ‘the expectation is that the majority of pupils will move through the programmes of study at broadly the same pace’ and that ‘pupils who grasp concepts rapidly should be challenged through being offered rich and sophisticated problems before any acceleration through new content’ (Department for Education, 2013, p. 3). This replaced previous guidance that sanctioned accelerating high-achieving pupils by introducing content earlier than set out in the programme of study (Department for Education and Employment, 1999). The reforms were justified in the final report of the Commission for Assessment Without Levels (Department for Education, 2015, p. 16), which states that levels ‘encouraged undue pace and progression onto more difficult work while pupils still had gaps in their knowledge or understanding’.
Progress ‘through’ the curriculum, in a stepwise acquisition of levels, was thereby replaced by progress ‘within’ it, usually referred to as ‘depth’. However, progress against centrally defined levels of performance, the key measure by which schools are judged and held to account by parents and the wider society (Pratt, 2016), remained and, despite the reforms, teachers appear to maintain the fundamental idea that demonstrating progress is the main purpose of assessment (Pratt & Alderton, 2019).

As a result of the reforms, the software that schools use to track and monitor the progress of pupils in the years between national tests was modified by commercial producers, away from tracking progress through levels and sub-levels and towards the yearly attainment targets laid out in the NC. Whilst the compulsion to give detailed, direct information about mathematical objectives for each pupil might be a good thing, our interest is in how this is operationalised and how it plays a part in forming the subject of school mathematics and the subjectivities of teachers. This is captured through detailed snapshots of 11 primary teachers as they implemented the reforms using new software tools. These teachers were recruited through local contacts to generate a sample strategically located and with a range of experiences relating to teaching mathematics, in order to shed light on our research question. Our final sample consisted of eight females and three males, teaching 6 to 11 year olds and working in eight different state schools located in the east, south-east and south-west of England (see Table 1).

Table 1. Participant information.

We carried out in-depth, semi-structured interviews during June and July 2016, the end of the first school year after the change in national assessment policy from levels to no levels. Our work conformed to the ethical procedures of the British Educational Research Association and was approved by both our employing institutions. Interviews lasted between 40 and 70 minutes and were audio-recorded.

Our original concern was with how teachers were reconstructing truths about the assessment systems and practices in their classrooms and schools in line with accountability procedures (Pratt & Alderton, 2019). Our interview schedule, which we used as an aide memoire rather than a script, aimed to ask questions open enough to allow interviewees opportunities to express their perspectives on, and experiences of, the tools, materials, requirements and support that were part of daily assessment processes. Our approach drew on Kvale’s (1996) notion of the research interview as a conversation aimed at leading the researcher to new understandings of other people’s experiences. In line with our theoretical framing, we were not seeking to capture truths about our participants, and we tried to avoid value-laden questions or responding to participants’ accounts in impositional ways. However, as Brinkmann and Kvale (2015) articulate, the qualitative interview is a technology of the self (Foucault, 1988, 1997) in which both interviewers’ and interviewees’ subjectivities are constructed through adherence to the social norms of the research interview; it is not a neutral medium enabling context-free exchanges, a position of which we tried to be aware in our analysis.

Our original analysis (Pratt & Alderton, 2019) was of teachers’ reconstructions of the ‘truth’ of assessment. We noticed the term ‘gap/s’, which stood out for us across the dataset as a way in which teachers frequently talked about pupils’ learning, but at the time this was not relevant to our focus on assessment truths. We return to ‘gaps’ here to argue for the term’s significance in a network of ideas central to the logic of assessment practice, ideas which are part of the way in which mathematics as a subject, and the way it is taught and learned, are constituted.

In our data we identified two main uses of the term ‘gap’ in this network of ideas: the first in relation to gaps in mathematical knowledge and the second in relation to an achievement gap between pupils or groups of pupils, with instances of the former significantly outnumbering the latter. Other, less prominent, vocabulary and ideas identified by both of us included progress, deep/depth and ability, and we therefore re-examined this data in relation to Dean’s (2010) four dimensions. In the following sections we present representative examples of our data and analysis to illuminate the logic of assessment practice, particularly how identifying gaps in knowledge and an uncritical production of assessment grids in the software come to produce, and are reproduced by, mathematics and its teaching and learning.

Regimes of practices in the production of mathematics

Implementing the software

All schools in our sample used assessment software to track pupil progress across the year. These were whole-school approaches, used either in parallel with other software, such as benchmarked online testing systems, or as part of a multi-functional school management solution. There is a vast array of school management solutions produced by educational technology providers and marketed towards schools that include assessment tracking systems. In total, five different tracking systems were used across the eight primary schools in the study, supplied by both large corporations and smaller, teacher-led enterprises. One particular element of the software, curriculum assessment grids for mathematics (and literacy), was a consistent feature discussed in interviews and provided a, perhaps the, major way in which elements of learning were made visible – and hence validated and normalised.

Kristina: So, you’ve got the child’s name down the side, you’ve got the grid [of targets] going across, and you make your judgment. So, say, for example, in year 3 it was to know their 3, 4 and 8 times tables as well as previous 2, 5 and 10. If they could do that they were ticked as being able to do that.

Teachers entered their assessments, populating the grids in ways that varied with the software, most frequently by indicating whether pupils had begun to meet, had partially met or had fully met individual attainment statements. They described predominantly using assessment evidence from low-stakes half-termly testing, and some day-to-day assessment, as the basis for their decisions. Different software used computational features in different ways to average, summarise and present data, though the majority used colour for emphasis. Ollie explained some of the features of Target Tracker, the system that his school used.

Ollie: You can just pull one child up and underneath you can have all the statements that you tick. You mark whatever and it has a little bar that goes across the top and they are colour coded, so secure is blue, working towards is red. As the bar goes across you can see how far they are. So, without scrolling through all of them you can see they’ve got quite a lot of blue which means they are secure on pretty much all of them. So, it helps it make the judgment for each individual child.

The colours, a recurring theme in the data, guide teachers to reflect on their own performance; with too much red prompting action to address pupils not on track. Thus, the tracker is not just a measurement of pupil progress but also of Ollie’s teaching performance, validating his professional conduct. It also offers school leaders a range of functions to manage both the data and their staff. Teachers were very aware that senior staff had constant, remote access to their assessment grids and that this presented a tactical requirement to keep them up-to-date.
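The mechanics that Ollie describes – per-statement judgements aggregated into a colour-coded, at-a-glance summary – can be sketched, purely for illustration, as follows. The judgement codes, threshold and statement names here are our own assumptions for the sake of the sketch, not those of Target Tracker or any other actual product.

```python
# Illustrative sketch only: a minimal model of the kind of summary
# computation an assessment tracker performs. All codes, statements
# and thresholds are hypothetical, not taken from any real product.

# Each pupil's grid row maps an attainment statement to a judgement code:
# 0 = not yet begun, 1 = working towards, 2 = secure
grid = {
    "Pupil A": {"3x table": 2, "4x table": 2, "8x table": 1},
    "Pupil B": {"3x table": 1, "4x table": 0, "8x table": 0},
}

def summarise(row):
    """Average the judgement codes, map the result to a colour band,
    and list the statements not yet secure (the 'gaps')."""
    score = sum(row.values()) / len(row)
    colour = "blue" if score >= 1.5 else "red"   # secure vs working towards
    gaps = [s for s, v in row.items() if v < 2]  # statements not yet secure
    return score, colour, gaps

for pupil, row in grid.items():
    score, colour, gaps = summarise(row)
    print(pupil, round(score, 2), colour, gaps)
```

The point of the sketch is how much the design forecloses: once judgements are coerced to numeric codes and a threshold, the ‘gaps’ a teacher sees are whatever falls out of that arithmetic.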

Kerry: So, they [senior leaders] are going to take these [grids] and look at them numerically. I don’t know what they’ll do – multiply it by A divide it by Z and then they will come up with, hopefully, something that means that they can show governors that we are making progress; the children are making progress.

As Perryman et al. (2017, p. 747) note, ‘submission to the gaze becomes a constituent part of teacher professionalism’, and teachers in this study identified professionally in a range of ways in response to being perpetually under panoptic surveillance (Foucault, 1977). Kerry’s hopeful outlook was not shared by Ann.

Ann: Within what I do with the children, I see progress but I don’t always see it in what I’ve got on School Pupil Tracker. The progress isn’t always reflected there, and certainly you can feel … I’ve been in this job for years, I thought I was a good maths teacher, maybe I’m not, because of what’s coming out.

In contrast to Ann, for Mike the removal of levels resulted in a feeling of freedom, the focus on levels being superseded by a focus on pupils’ ‘gaps’ in mathematics.

Mike: We’ve found in terms of freedom we are not thinking about how am I going to get them from a [level] 3b to a 3a as quick as I can. We are thinking about ‘OK they are working at expected levels or just below but what are their gaps and how am I going to fill their gaps? What is going to make it that much better? Rather than, ‘could I give them a question tomorrow that maybe makes it look like they are working at a 3a rather than a 3b?’ So, I’m showing progress on my progress grid and then I get my performance appraisal and everything else. It’s less about the kind of quickly moving on and rushing children through.

In these quotes we begin to see the logic of practice, understood in terms of Dean’s dimensions of governmentality: the visualisation of gaps offered by the software, alongside the technical aspects of calculation (‘multiply it by A divide it by Z’), forms part of the production of new ways of knowing pupils and new identifications as teacher and professional. We note, importantly, that these dimensions are not linear and causal. Rather, they are reciprocal, constituting each other, and generative of a regime within which certain practices flow more easily and thus come to govern actions. Such tendencies to act constitute technologies of the self and ‘the subjects so created produce the ends of government by fulfilling themselves [whether willingly, like Mike, or reluctantly, like Ann] rather than being merely obedient’ (Rose et al., 2006, p. 89).

Finding the gaps

Having considered the subjectivities of teachers in general terms above, we now turn to the ways in which the notion of gaps in pupils’ mathematical knowledge, to which Mike alludes above, contributes to how the teaching of mathematics, and mathematics itself, are understood. Identifying gaps in pupils’ understanding is strongly promoted in England as good practice. This is spelt out in the final report of the Commission for Assessment Without Levels (Department for Education, 2015, p. 16), whereby ‘Teachers [should] assess pupils’ understanding of a topic and identify where there are gaps’; and in an Education Endowment Foundation (EEF) publication on assessing and monitoring pupil progress which states that ‘knowing what children know and which gaps exist in their learning can be an informative exercise’ (Education Endowment Foundation, n.d., part 4 of 14). There is no sign that this language is disappearing, as can be seen in government guidance re-emphasising the perceived importance of ‘Identifying and addressing gaps in pupils’ understanding’ in response to school closures during the Covid-19 pandemic (Department for Education, 2020, June 12). However, Ofsted provides the most powerful, disciplinary example in its school inspection handbook, stating that inspectors will consider ‘what steps the school has taken to ensure that there is flexibility in curriculum planning so that the school can address identified gaps in pupils’ mathematical knowledge that hinder their capacity to learn and apply new content’ (Office for Standards in Education, 2019, p. 88). These quotes indicate the context of teaching in England and, of course, commercial companies have been quick to capitalise on it, promoting their products as ‘solutions’ to the ‘problem’ of workload and accountability, as we briefly illustrated at the start of this paper.
Both policy and commercial imperatives account, in part, for the frequency with which teachers referred to gaps in the data – nine of the 11 teachers used this term in their interviews, many frequently, with Mike the most prolific, using it 26 times.

To understand how this use of ‘gaps’, as part of a wider discourse of assessment, works in the production of teaching and school mathematics itself, we start with Becky, a deputy head teacher who is concerned with familiarising herself with the new attainment targets in the revised curriculum. There are on average 42 mathematics attainment targets for each year group, ranging from 30 for year 1 to 54 for year 6. Becky comments on how the software helps her:

Becky: to show the coverage of the objectives of the new curriculum, which are plentiful and, I think, because it’s the first year, we don’t all know them like that. And so it’s helpful to see where the gaps are when you enter data into Pupil Tracker … so as I say it’s helpful to help us to learn them because we don’t know them very well, to help us to see the gap.

Becky’s visualisation of gaps on a grid provides a way, literally, of seeing something that is absent and cannot otherwise be seen: a gap. This particular way of visualising the field to be governed brings into focus a specific way of knowing about pupils and a particular problem to be solved (eliminating gaps), with which teachers identify in new ways. Again, these are reciprocal: one can imagine that identifying as the person who can now ‘solve’ this problem by ‘filling the gap’ might reaffirm the usefulness of the technology as a means of visualising this solution – and thereby also clarify one’s responsibility and afford forms of accountability (Pratt, 2018).

With the same visualisation in mind, Kristina talks about ‘trying to think of what gaps there are’, indicating her view that this conduct is part of her professional practice.

Kristina: You could do a half term and get to the end of the half term tests and find, ‘Oh my gosh, they really didn’t understand, say, multiplying’. So, you’d know you’ve got to pick up on it again even if your planning, say, hasn’t actually asked you to teach that again. So, it’s a lot more reflective of trying to think what gaps there are for your class’s needs.

Within a wider discourse of assessment and accountability, this notion of gaps refers, metaphorically, to missing pieces in some kind of developmental tower which will collapse if the pieces are not (re)placed properly. Importantly, in what is a critique of practice, we want to emphasise that we are not criticising this view per se. We would agree strongly that helping pupils to understand mathematical ideas fully is a desirable outcome of teaching. However, our interest is in how this model of learning forms part of the way in which the subject, and its teaching, is produced and the implications of this. For the teachers, the strength of the metaphor seems to lie in the undoubted and undesirable danger of an unstable tower; and also in the clarity it brings to their own roles as teachers. Identifying gaps in knowledge becomes a self-evident part of the regime that arises from, and then recreates, the need to complete the grid; to strengthen the tower and fulfil one’s professional responsibility.

All of the teachers discussed the differing outcomes that their pupils demonstrated and many related them to the existence or absence of gaps in their assessment grids. For example,

Patrick: If that child’s completing lots of objectives at achieved level then you know they are making good strong progress in their learning.

By implication the opposite holds: the presence of gaps constitutes a professional language for teachers and schools, affording certain practices, seen here in the way Jill uses it to position children on either side of a binary judgment.

Jill: These children are where they should be and these children aren’t, so that then continuing on with what is going to happen with these children so that the gaps that they have got in place value are filled rather than ‘oh they didn’t get place value’.

Equating underachievement with gaps in knowledge like this becomes, indeed has become, a broadly accepted way to talk about learning, and it could be argued that a focus on gaps in mathematical knowledge prevents the assignment of within-child deficits which marginalise and label pupils as low-ability. Rather, as might be implied in Jill’s description of the pupils in her class, the responsibility is transferred to the teacher to identify and then ‘fill’ the gaps. However, there is slippage in a number of the teachers’ accounts, where ‘gaps in mathematics’ becomes ‘pupils with gaps’, sliding back into the label. For example:

Kristina: Some of it you have to think what was previously needed because the gaps for some children, so for the low ability children there are gaps.

We see in this quote how the various aspects of assessment – gaps, knowledge, filling, and so on – are mobilised through the material properties of the grid on the computer screen, but within the wider discursive practices of assessment and accountability. The grid, visible in terms of gaps and colour codes, emphasises the technicalities of governance, to fill the gaps and to address pupils’ specific needs; but, vice versa, the requirement to know which gaps need filling reproduces the need for the grids and identifies ‘low ability’ children.

Filling the gaps

In our interviews, teachers describe a number of different practices aimed at filling the gaps in their grids, which we consider indicative of how mathematics as a subject, and how it should be learned, are constituted. Mike, as subject lead in his school, introduced a ‘closing the gap week’ in which:

Mike: We actually did our half termly assessment test the penultimate week of term and then we marked them and then from there we found children that had particular gaps. We did what we called a closing the gap week where we did mix them and we had one class that did shape for a week, one that did fractions for a week and one that did time for a week because that’s where those children had those gaps.

However, for pupils identified as having no gaps,

Mike: who we saw as working at greater depth, they worked with teaching assistants and had some really different Nrich1 kind of problem solving where it exposed them to different ideas of maths and to apply their skills rather than them being in one of the closing the gap groups where they didn’t really need to be.

Dividing practices (Foucault, 1982) like this, based on what pupils ‘need’, determine the kind of mathematics they experience and, potentially, the development of their relationship with mathematics and their subjectivities as mathematicians. Such practices are underpinned by the potentially misguided (Foster, 2013) idea that mathematical fluency, in particular discrete knowledge, is a prerequisite for tackling more complex, richer problems.

Another strategy, described by a number of teachers and made more likely in a regime that encourages the visibility of gaps, focused on breaking down attainment targets into smaller entities, on the questionable assumption that if a pupil can be trained to do the component parts then they will inevitably be successful at the whole (Foster, 2013; Sfard, 1991). For example, Ann complained that,

Ann: My class didn’t have a very good understanding of decimals, so rather than teaching thousandths and all of what was in the year 5 curriculum, I’ve had to go right back to the start and doing tenths, and quite a lot of work on tenths to get that understanding and then linking it into fractions and then going to hundredths. And that is your year 3 and year 4 objectives. It got to that point, which you just know as a teacher, when they’ve had enough. So, I’ve called it a halt at hundredths and said ‘that’s what they understand’ and next year they need to do thousandths.

In making this choice, Ann misses the opportunity to explore the structure of the base ten place value system across tenths, hundredths and thousandths, treating them as unconnected; the emphasis falls instead on ensuring they are 'covered', even if next year. Here, again, we can see an assemblage in terms of Dean's (2010) four dimensions. The construction of the curriculum – here mathematics – as small, detailed pieces of knowledge is made more likely by the technical assessment tools which mobilise (through making them visible and invisible) the focus on gaps. In turn, this construction produces the criteria by which certain ways of working, the need for coverage for example, become validated and normalised and hence taken for granted, reproducing the need for the technology in the first place. Evidently, this can lead to a mechanistic approach to teaching, where knowledge is separated out and learning the subject becomes the business of learning all the parts; indeed, Mike suggests that 'actually, all we are trying to do is fill their gaps and help them learn'. If gaps are prioritised as a way of measuring progress, then mathematics becomes a subject of small pieces of disarticulated knowledge and, as Foster (2013) points out, this may well afford reductive practices in the classroom. He argues that assessment systems 'focus on bite-sized pieces of mathematics, because they are quick and easy to test and score' (Foster, 2013, p. 569) and, as we have seen above, such pieces can readily come to be seen as a prerequisite for creative, problem-based – we would argue 'mathematical' – approaches to learning. Rather than knowledge coming 'from' activity, 'it' is seen as a prerequisite for such activity; and for those who acquire it more slowly this may mean that they rarely get to do any mathematics at all.

Dean (2010, pp. 43–44) observes that regimes of government do not determine forms of subjectivity but ‘elicit, promote, facilitate, foster and attribute various capacities, qualities and statuses to particular agents’ as regimes of practices. The mobilisation of new ways of thinking (gaps) and new forms of visibility (remote access to data by managers) through the materiality of objects (grids, computers and online connectivity) does not make certain actions inevitable; rather, it acts like flowing water, finding the downward path of least resistance. Teachers do not have to simply become ‘gap-fillers’. Neither does the subject have to become a jigsaw puzzle of separate pieces of knowledge. However, the logical fit of these ways of thinking with a system that atomises knowledge, visualises it in specific ways and produces forms of accountability based on them, means that it becomes hard to think of the subject in any other way. Not doing so increasingly becomes like swimming upstream.

What is at stake for school mathematics?

Our purpose in this paper has been to critically analyse the specific conditions under which primary teachers use assessment software to monitor and track attainment, in order to illustrate the intrinsic logic within the intermeshing regimes of practice of assessment and accountability. Dean’s (2010) analytics of government illuminates how, with the introduction of a new assessment system and the digital technologies produced commercially to administer it, teaching mathematics emerges as identifying and filling gaps in knowledge. Though we have focused on mathematics, our experience suggests that this is likely also to be the case in other subjects, though to varying degrees and in slightly different ways given the particular features of each discipline.

An analytics of government can make clear what is at stake and what the consequences are of regimes of practice (Dean, 2010). In this concluding section, we consider how contemporary assessment policy in England and the related design and use of software produces changes that have tangible implications for the production of mathematics and its teaching and learning.

Grids relating to statements of attainment are not new, and indeed they may in fact be helpful to learners and teachers; Brown (2010) describes the use of grids by teachers in the late 1980s and early 1990s. However, we would argue that the digital, online, connected nature of contemporary assessment software – situated, of course, within the wider discursive framework of high-stakes accountability – creates a number of important differences.

Firstly, whereas the paper grids might have been considered by school leaders only after completion, the dimensions of technicality and visibility render grids produced by software continuously accessible to school leaders, meaning that teachers’ performance, represented by the colours on the grid, is under constant surveillance. Moreover, paper grids were hard to disagree with since teachers worked largely in isolation and there was no year-on-year measure of progress. Nowadays, the digitisation of data allows for calculations within the code and a range of instant visualisations that produce forms of knowledge, as colours, blocks, gaps and graphs, and which normalise progress. Teachers and children then face pressure to conform to expectations of these norms (Bradbury, 2012; Pratt, 2018; Pratt & Alderton, 2019).

Secondly, digitisation hides the calculations, which appeared to be little understood by our teachers. Whereas in the past teachers judged the overall attainment of pupils, here they judge the extent to which pupils have acquired particular, small pieces of mathematical knowledge, and these judgements produce – in some apparently opaque way – the attainment on which overall judgements are made. All this, set in the contemporary context of accountability, produces a logic that is hard to challenge.

We are not arguing that teachers should pay no attention to pupil knowledge in their ongoing assessment. As we made clear above, knowledge is vital in all subjects and being left with a partial understanding of key ideas in mathematics, or any other discipline, is not desirable. However, work over the last two decades has made very clear the situated nature of mathematical knowledge (e.g., Boaler, 1997, 2002; Lave, 1988; Nunes et al., 1993) and the implications of this for teaching (e.g., Hodgen, 2011), outdating the kind of atomised view of knowledge associated with the language of gaps that forms part of the logic of practice analysed here. As Biesta (2015, p. 194) remarks, ‘in education the question is never whether something is effective or not, but what something is supposed to be effective for’, and our argument is that assessment software is effective for, indeed plays a role in effecting, not just a form of teaching but a form of the subject discipline of mathematics itself that is very particular at best. It misrepresents mathematics as a subject that is straightforward to assess (e.g., Bew, 2011), and as a common-sense hierarchy of ideas, despite plenty of evidence that this is not the case (Gray & Tall, 1994; Sfard, 1991). This notion of hierarchy also produces a particular version of developmental normality (Coles & Sinclair, 2018). These knowledge forms are used to position children as having gaps in knowledge, pathologising them as ‘low ability’ and ‘abnormal’, and potentially leading to very different versions of the subject being experienced between those whose grid is empty or full, more red than green.

Finally, there is also an important transfer of responsibility at stake for teachers’ professional positionality. We have noted several times above that identifying gaps allows teachers to take responsibility for filling them; but we suggest too that it produces a form of professionalism in which the teacher becomes responsible only for ‘delivering’ the parts supplied by the curriculum, not for assembling the whole, which is assumed to be taken care of in the workings of the system. As Mike notes, ‘actually, all we are trying to do is fill their gaps and help them learn’, but Biesta (2005, p. 59) makes clear that this approach to schooling has wider professional implications,

because it suggests a framework in which the only questions that can meaningfully be asked about education are technical questions, that is questions about the efficiency and the effectiveness of the educational process.

Thus, we see Selwyn’s (2015, as cited in Williamson, 2016) recursive production of data analysis and educational settings altering both the teaching of mathematics and the subject itself, where the data produces the learner and the teacher as much as they both produce the data. It appears likely that teachers will become, perhaps are becoming, unable to ask the kinds of questions which might be needed to critique such practices, leaving them as little more than the ‘fillers of mathematical gaps’.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Julie Alderton

Julie Alderton is Lecturer in Mathematics Education at the University of Cambridge. She is interested in relationships between education policies, teacher and student subjectivities and pedagogical practices, discourses and inequalities in mathematics education.

Nick Pratt

Nick Pratt is Senior Lecturer in Education at the University of Plymouth. His research is focused on pedagogic relationships in all phases of education and especially in mathematics. He has a particular interest in teacher accountability and assessment and their relationship with teachers’ practices.

References

  • Ball, S. J. (2004). Education for sale! The commodification of everything? The annual education lecture 2004. King’s College.
  • Ball, S. J. (2013). Foucault, power, and education. Routledge.
  • Bew, P. (2011). Independent review of key stage 2 testing, assessment and accountability. Crown copyright.
  • Biesta, G. (2005). Against learning. Nordisk Pedagogik, 25(1), 54–66.
  • Biesta, G. (2015). Improving education through research? From effectiveness, causality and technology to purpose, complexity and culture. Policy Futures in Education, 14(2), 194–210. https://doi.org/10.1177/1478210315613900
  • Boaler, J. (1997). Experiencing school mathematics: Teaching styles, sex, and setting. Open University Press.
  • Boaler, J. (2002). The development of disciplinary relationships: Knowledge, practice and identity in mathematics classrooms. For the Learning of Mathematics, 22(1), 42–47. https://www.jstor.org/stable/40248383
  • Bradbury, A. (2012). Education policy and the ‘ideal learner’: Producing recognisable learner-subjects through early years assessment. British Journal of Sociology of Education, 34(1), 1–19. https://doi.org/10.1080/01425692.2012.692049
  • Bradbury, A. (2019). Datafied at four: The role of data in the ‘schoolification’ of early childhood education in England. Learning, Media and Technology, 44(1), 7–21. https://doi.org/10.1080/17439884.2018.1511577
  • Bradbury, A., & Roberts-Holmes, G. (2018). The datafication of primary and early years education: Playing with numbers. Routledge.
  • Braun, A., & Maguire, M. (2018). Doing without believing – Enacting policy in the English primary school. Critical Studies in Education, 61(4), 1–15. https://doi.org/10.1080/17508487.2018.1500384
  • Brinkmann, S., & Kvale, S. (2015). InterViews: Learning the craft of qualitative research interviewing (3rd ed.). Sage.
  • Brown, M. (2010). Swings and Roundabouts. In I. Thompson (Ed.), Issues in teaching numeracy in primary schools (2nd ed., pp. 3–26). Open University Press.
  • Coles, A., & Sinclair, N. (2018). Re-Thinking ‘Normal’ development in the early learning of number. Journal of Numerical Cognition, 4(1), 136–158. https://doi.org/10.5964/jnc.v4i1.101
  • Dean, M. (2010). Governmentality: Power and rule in modern society (2nd ed.). Sage.
  • Department for Education. (2013). The national curriculum in England key stages 1 and 2 framework document.
  • Department for Education. (2015). Final report of the commission on assessment without levels. Crown Copyright.
  • Department for Education. (2018). Multiplication tables check assessment framework. Crown Copyright.
  • Department for Education. (2020). Statutory framework for the early years foundation stage: EYFS reforms early adopter version July 2020. Crown Copyright.
  • Department for Education. (2020, June 12). Guidance: Identifying and addressing gaps in pupils’ understanding. Crown Copyright. Retrieved January 6, 2020, from https://www.gov.uk/guidance/identifying-and-addressing-gaps-in-pupils-understanding#remote-feedback-and-the-first-weeks-back-in-the-classroom
  • Department for Education and Employment. (1999). The national curriculum for England. HMSO.
  • Educater. (n.d.). Assessment – School pupil tracker. https://www.educater.co.uk/software/school-pupil-tracker
  • Education Endowment Foundation. (2018). The attainment gap. Education Endowment Foundation. https://educationendowmentfoundation.org.uk/public/files/Annual_Reports/EEF_Attainment_Gap_Report_2018.pdf
  • Education Endowment Foundation. (n.d.). Assessing and monitoring pupil progress: A guide to help track pupils’ progress and assess their mastery of knowledge and concepts. Education Endowment Foundation. https://educationendowmentfoundation.org.uk/tools/assessing-and-monitoring-pupil-progress
  • Foster, C. (2013). Resisting reductionism in mathematics pedagogy. The Curriculum Journal, 24(4), 563–585. https://doi.org/10.1080/09585176.2013.828630
  • Foucault, M. (1977). Discipline and punish: The birth of the prison. Penguin.
  • Foucault, M. (1980). Power/Knowledge: Selected interviews and other writings 1972–1977 (C. Gordon, Ed.). Harvester.
  • Foucault, M. (1982). The Subject and Power. Critical Inquiry, 8(4), 777–795. https://doi.org/10.1086/448181
  • Foucault, M. (1988). Technologies of the self. In L. H. Martin, H. Gutman, & P. Hulton (Eds.), Technologies of the self: Seminar with Michel Foucault (pp. 16–49). Tavistock.
  • Foucault, M. (1997). Ethics: Subjectivity and truth. Essential works of Foucault 1954–1984, Vol. 1. New Press.
  • Foucault, M. (2007). Security, territory, population: Lectures at the Collège de France 1977–1978. Palgrave.
  • Foucault, M. (2014). On the government of the living: Lectures at the Collège de France 1979–1980. Palgrave.
  • Gray, E., & Tall, D. (1994). Duality, ambiguity and flexibility: A ‘proceptual’ view of simple arithmetic. Journal for Research in Mathematics Education, 25(2), 116–140. https://doi.org/10.2307/749505
  • Hardy, I. (2019). Governing teachers’ work and learning through data: Australian insights. Cambridge Journal of Education, 49(4), 501–517. https://doi.org/10.1080/0305764X.2018.1557594
  • Hodgen, J. (2011). Knowing and identity: A situated theory of mathematics knowledge in teaching. In T. Rowland & K. Ruthven (Eds.), Mathematical knowledge in teaching (pp. 27–42). Springer.
  • Kaliszewski, M., Fieldsend, A., & McAleavy, T. (2017). England’s approach to school performance data – Lessons learned. Education Development Trust. Retrieved March 2021, from https://www.educationdevelopmenttrust.com/our-research-and-insights/research/england-s-approach-to-school-performance-data-less
  • Kvale, S. (1996). InterViews. Sage.
  • Lave, J. (1988). Cognition in practice. Cambridge University Press.
  • Neumann, E. (2021). Setting by numbers: Datafication processes and ability grouping in an English secondary school. Journal of Education Policy, 36(1), 1–23. https://doi.org/10.1080/02680939.2019.1646322
  • Nunes, T., Schliemann, A. D., & Carraher, D. W. (1993). Street mathematics and school mathematics. Cambridge University Press.
  • Office for Standards in Education. (2019). School inspection handbook. Her Majesty’s Stationery Office.
  • Ozga, J. (2009). Governing education through data in England: From regulation to self‐evaluation. Journal of Education Policy, 24(2), 149–162. https://doi.org/10.1080/02680930902733121
  • Ozga, J. (2020). The politics of accountability. Journal of Educational Change, 21(1), 19–35. https://doi.org/10.1007/s10833-019-09354-2
  • Perryman, J., Maguire, M., Braun, A., & Ball, S. J. (2017). Surveillance, governmentality and moving the goalposts: The influence of Ofsted on the work of schools in a post-panoptic era. British Journal of Educational Studies, 66(2), 145–163. https://doi.org/10.1080/00071005.2017.1372560
  • Pratt, N. (2016). Neoliberalism and the (internal) marketisation of primary school assessment in England. British Educational Research Journal, 42(5), 890–905. https://doi.org/10.1002/berj.3233
  • Pratt, N. (2018). Playing the levelling field: Teachers’ management of assessment in English primary schools. Assessment in Education: Principles, Policy & Practice, 25(5), 504–518. https://doi.org/10.1080/0969594X.2016.1264924
  • Pratt, N., & Alderton, J. (2019). Producing assessment truths: a Foucauldian analysis of teachers’ reorganisation of levels in English primary schools. British Journal of Sociology of Education, 40(5), 581–597. https://doi.org/10.1080/01425692.2018.1561245
  • Roberts-Holmes, G., & Bradbury, A. (2016). Governance, accountability and the datafication of early years education in England. British Educational Research Journal, 42(4), 600–613. https://doi.org/10.1002/berj.3221
  • Roberts-Mahoney, H., Means, A. J., & Garrison, M. J. (2016). Netflixing human capital development: Personalized learning technology and the corporatization of K-12 education. Journal of Education Policy, 31(4), 405–420. https://doi.org/10.1080/02680939.2015.1132774
  • Rose, N., O’Malley, P., & Valverde, M. (2006). Governmentality. Annual Review of Law and Social Science, 2(1), 83–104. https://doi.org/10.1146/annurev.lawsocsci.2.081805.105900
  • Sellar, S. (2015). A feel for numbers: Affect, data and education policy. Critical Studies in Education, 56(1), 131–146. https://doi.org/10.1080/17508487.2015.981198
  • Selwyn, N. (2015). Data entry: Towards the critical study of digital data and education. Learning, Media and Technology, 40(1), 64–82. https://doi.org/10.1080/17439884.2014.921628
  • Selwyn, N., & Gašević, D. (2020). The datafication of higher education: Discussing the promises and problems. Teaching in Higher Education, 25(4), 527–540. https://doi.org/10.1080/13562517.2019.1689388
  • Sfard, A. (1991). On the dual nature of mathematical conceptions: Reflections on processes and objects as different sides of the same coin. Educational Studies in Mathematics, 22(1), 1. https://doi.org/10.1007/BF00302715
  • Valverde, M. (2017). Michel Foucault. Routledge.
  • Williamson, B. (2016). Digital methodologies of education governance: Pearson plc and the remediation of methods. European Educational Research Journal, 15(1), 34–53. https://doi.org/10.1177/1474904115612485