Perceptions of performativity in English Further Education


Abstract

The notions of performativity and the use of accountability practices within the UK education sector are contentious. Although some commentators suggest that statistically driven performativity measures do not align with practitioner values, little research has investigated any potential differences in relation to job role and level of management responsibility. This study focused on whether perceptions of performativity change according to job role and whether there is a differential between managers and teachers. An electronic questionnaire was disseminated at a single FE college, with 107 participants surveyed across a wide range of subject areas. Quantitative analysis revealed that perceptions of managers differ from those of teaching staff regarding the effectiveness of statistical performativity targets in driving factors which are integral to an efficacious learning environment. The results are far from unequivocal, though. As practitioners take on more of a managerial emphasis within their role, the perceived benefit of, and their affinity for, target setting and performativity measures increase. However, the magnitude of this more favourable outlook towards performativity is limited, with managers also broadly sceptical concerning any benefit and positive impact that target setting practices can have.

Introduction

The performance of the education sector is a long-debated topic with political agency and reach. The constant drive to improve standards and raise student attainment, by political parties of all colours, often results in the introduction of new systems, processes and policies following parliamentary review. The Further Education (FE) sector is certainly no stranger to this concept of politically driven educational reform (Hall and O’Shea Citation2013; Lucas and Crowther Citation2016).

As a sector, FE is responsible for 2.2 million students and 111,000 staff (full-time equivalents), across 244 colleges and £6.9 billion in funding (Association of Colleges Citation2019), and, through its conduct and interactions, it is inextricably linked to many facets of society, including local communities, employers, schools, and Higher Education Institutions (HEIs), through mutual dependency and reciprocity (Gleeson et al. Citation2015; Boocock Citation2019; Dennis, Springbett, and Walker Citation2020). This sweeping reach and influence – both as an economic presence and in terms of the more socially aligned importance and responsibility of educating and training a large proportion of the future workforce – means that scrutiny and accountability of practices for the sector as a whole, and of individual FE institutions, are extremely politically charged. A consistent phenomenon, seen since the shift in the public service delivery model away from the ‘trust’ approach (Le Grand Citation2010), is a greater emphasis on the accountability of individual FE establishments (Boocock Citation2013, Citation2014; Keep Citation2018; Thompson and Wolstencroft Citation2018), with senior FE college leaders and governors continuously facing direct pressure to improve the quality of their educational provision, and a variety of external parties (Ofsted, the Department for Education, the Institute for Apprenticeships and Technical Education, etc.) regularly monitoring and reviewing an institution’s performance against a variety of metrics (Hill, James, and Forrest Citation2016). This has led to the incorporation of neoliberal working arrangements (Mahony and Weiner Citation2019) and the emergence of performativity practices.

This move to a performativity culture within the FE landscape is a highly emotive topic. Although much research has suggested that performativity practices do not align with educational practitioners’ values (Ball Citation2003; Burnard and White Citation2008; Boocock Citation2013; Perrin Citation2015; Gleeson et al. Citation2015; Clapham and Vickers Citation2017), little research has investigated whether perceptions differ according to job role and level of management responsibility (Mulcahy Citation2004). Although most research concludes that practitioners view current statistically based performativity measures from a largely negative perspective, this has become a blanket assumption for all staff within educational establishments. The notion that staff in different job roles and with differing levels of management responsibility might perceive performativity practices differently has yet to be explored. In particular, to what extent does the level of exposure to performativity rhetoric influence practitioner affinity for these practices? The focus of this study is to investigate whether perceptions of performativity change according to job role, and to what extent there is a differential between FE manager and teacher perceptions of factors integral to statistical target setting.

Current FE oversight

The political importance attached to improving progression and meeting employer demands is set to become more prominent and focused, with the sector due for further reform in spring 2021 (Department for Education Citation2021). This will result in deeper scrutiny and oversight by external regulators, through which a host of current datasets will be used to holistically review a college. This oversight is facilitated by the FE Commissioner and their team, who will scrutinise colleges with the raison d’être of: ‘work(ing) with them (FE institutions) to identify, at an earlier stage, any financial and quality issues that might get in the way of them succeeding’ (Department for Education Citation2020a, 4).

The FE Commissioner will assess colleges that ‘have serious weaknesses or are at risk of developing weaknesses’ (Further Education Commissioner Citation2020) in quality and finances, and any analysis and subsequent intervention will be directed towards the leadership and governance of a college, where the commissioner’s team will conduct diagnostic assessments if a college is deemed to be at risk. If a college’s performance is deemed worthy of FE Commissioner intervention, this is a high-stakes scenario for college leaders and governors, with serious consequences for individuals and teams. Since Ofsted grades and educational performance data are two of the early intervention criteria (Further Education Commissioner Citation2020), the emphasis on an establishment meeting performance measure thresholds cannot be overstated, and this government-level pressure will permeate through the staffing levels of colleges. But how far down does this adherence to the importance of statistical performance measures penetrate? Does it descend to all levels of a college; or, as practitioners are further away from managerial elements and facets, and closer to the educational and pedagogical interactions, does the importance placed on meeting statistical performance measures diminish for them? As the number of reports published by the FE Commissioner increased to twenty in 2020, up from thirteen and eleven in 2019 and 2018 respectively (Department for Education Citation2020b), it can be suggested that strong leadership that delivers against performativity measures has never been more critical to colleges and their staff.

What is performativity?

Much literature has discussed changes to political ideology and managerial practices in the UK post–compulsory education sector since the 1990s (Thompson and Wolstencroft Citation2015, Citation2018) with the emergence of performativity and accountability practices a consequence of these evolving managerial practices (Fuller and Stevenson Citation2019; Verger, Fontdevila, and Parcerisa Citation2019). So what are the cornerstones of these managerial practices? Performativity can be encapsulated in the notion that it is a culture of regulation, one which involves judgements and comparisons to incentivise and control staff, using both rewards and reprimands whereby their performance is measured and critiqued against others, offering a review of their quality and productivity (Ball Citation2003). Fundamentally, it is a process of accountability whereby individual staff and organisations are set statistical targets or measures, on a range of elements, and their performance judged on the basis of meeting these outcomes (or not). This is a component of the neoliberal approach to governance of the public sector, whereby services are improved by the application of competition between providers, and through individual responsibility (Hall and O’Shea Citation2013). The conduit or vehicle by which performativity practices are communicated, implemented, and reviewed is the appraisal (Bratton and Gold Citation2012; Bush and Middlewood Citation2013), and this can be considered a significant element in the neoliberal reforms to improve the quality of education available to society (Flores Citation2012). Within the appraisal of a teacher and a centre, there is a strong argument for the use of independent measures, with regard to outcomes, quality and participation. Johnson et al. (Citation2017) suggest that performance measures are targets that relate to the output(s) of parts of, or the whole organisation. Without these empirical measures, anecdotal and subjective progress could be constructed leading to a misrepresentation of performance and of the educational establishment’s ability to discharge the ethical responsibilities placed upon it (Lewin Citation2011). In the UK, as school and college performance (as dictated by statistical measurement of performativity metrics such as student qualification success, value-added and destinations data) forms the basis of Ofsted judgements, league-table positions (Leckie and Goldstein Citation2017) and FE Commissioner intervention (Further Education Commissioner Citation2020), it can be construed that, within the notion of performativity, these targets and measures are the key representatives of the ability and effectiveness of teaching practitioners, and thus the centre itself (Lewis and Hardy Citation2015).

Benefits of performativity culture

The use of performativity and accountability practices has certainly instigated improvement in some areas and facets of education. Research has shown that student outcomes, in terms of test results and qualifications achieved, have improved since education has moved towards a performativity-based culture (Goldstein Citation2004; Stecher et al. Citation2012; Sun and Van Ryzin Citation2014; Boocock Citation2014), and that some staff respond positively to the challenge of achieving the required benchmark standards (Boocock Citation2014). Within the UK, student qualification success has increased substantially since the early 2000s (Coffield et al. Citation2008; Mansell Citation2011), with improvements in organisational practice in colleges, such as stronger advice and guidance, better on-course support, and timely interventions for learners, implemented and driven by the focus on statistical results (Boocock Citation2013). These positions imply that, with a coherent model of creation, application and monitoring, accountability measures and a culture of performativity can be effective in advancing student qualification outcomes.

Potential contraindications of this shift in culture

However, despite positive qualification outcomes, much research into the use of public sector performativity practices and target setting has questioned the credibility of the strategy as a driver to enact positive, holistic outcomes (Propper and Wilson Citation2003; Goldstein Citation2004; Christophersen, Elstad, and Turmo Citation2010; Sun and Van Ryzin Citation2014; Busetti and Dente Citation2014; Appel Citation2020). Goldstein (Citation2004) suggests that the emphasis on statistical targets can be dysfunctional, resulting in lower levels of pupil motivation and a reduction in teacher creativity; potentially contributing to improved statistical outcomes, but ultimately detrimental to the holistic development of fundamental skills and attributes essential to the long-term career success of students – a stance echoed by other researchers (Ball Citation2003; Burnard and White Citation2008; Mansell Citation2011; Perrin Citation2015). This narrative therefore implies that concentrating on statistical target setting as a key judgment factor within performativity practices is fundamentally flawed.

Impact of performativity on organisational practice and the role of middle managers

Although significant political importance has been placed on organisations meeting the statistical performance measures decided at a national level, is good or positive FE leadership and governance purely related to external inspection grading and data-driven performativity outcomes? The discourse in some research certainly points to a tension between strategic leadership which focuses purely on external performativity measures, and actual positive governance and leadership which enacts change for the benefit of the entire organisation (Clapham and Vickers Citation2017; Yi and Kim Citation2019). As Keep (Citation2018) suggests, accountability is now primarily focused upwards to the Department for Education (DfE) and other regulatory bodies, rather than to the local community; this potentially creates a disconnect between those staff working directly with students from the local community and those staff responsible for reporting performance to regulatory bodies. Staff at various levels within a college may consequently be uncertain about to whom they are responsible (governing organisations or the students they serve), assuming that both parties have different agendas and preferences. This may lead to the internal tension and the ontological uncertainty that Ball (Citation2003) suggests may afflict teaching staff.

The unique position of middle managers

A significant stakeholder in discharging the strategic aims of a college at an operational level is the middle manager. This is far from a straightforward position: the role is multifaceted, fluid and difficult to define robustly. Research struggles to determine the very nature and purpose of the middle manager as an entity – and even the job title (Corbett Citation2017; Wolstencroft and Lloyd Citation2019). Simply put, they are the conduit between the teaching workforce and the senior leaders, translating defined policy into tangible actions and practice (Mulcahy Citation2004; Corbett Citation2017). How much the culture of performativity influences their day-to-day role depends on the individual institution and its values, but for all middle managers in FE colleges, performativity measures and accountability are prominent factors which play an important part in their approach to their job role (Wolstencroft and Lloyd Citation2019; Husband and Lloyd Citation2019).

How might the concept of performativity influence their practices compared with staff who solely have a teaching remit? Avis, Kendal, and Parsons (Citation2003) postulate that at the turn of the twenty-first century there was a paradigm shift in the focus of middle managers, from a more holistic educational standpoint to one which places greater importance on meeting quantitative targets. Moreover, Wolstencroft and Lloyd (Citation2019, 119) suggest that the role of middle managers has potentially been narrowed to one of ‘process and outcome’, owing to the importance and agency given to achieving these performativity measures and metrics. This suggests that their affinity towards performativity measures may be greater than that of teaching staff, due to their exposure to data-driven practice on a day-to-day basis.

There is a suggestion that the motivational tendencies of staff may evolve due to exposure to more external rewards and pressures, particularly in an environment where key statistical measures are championed, valued and perceived to have high importance. Where the achievement of said metrics results in peer and managerial praise and endorsement, this could lead to the phenomenon of modifying behaviour in favour of extrinsic incentives, which are central to the performativity environment and culture (Moynihan Citation2010; Boocock Citation2013). More chillingly, there have been suggestions that noncompliant middle managers, who may critique and oppose the strategic direction of senior managers, can be replaced by those who are more compliant (Boocock Citation2013) and who by proxy have a stronger adherence to extrinsic rewards.

However, a stronger affinity for performativity measures may also be due to greater awareness of the holistic factors which impact education policy. Findings from Mulcahy (Citation2004) suggest that some teaching staff feel that wider issues and factors (such as budget control and student recruitment) are not encompassed by, nor a priority in, their job role. Manager perceptions are to the contrary: that teaching staff do have a significant stake in the entire holistic functionality of an educational institute, and it is their lack of awareness of the importance of these factors to the organisation as a whole, and of the contribution that they need to make, which informs their perceptions. This is a conundrum for middle managers, as they are situated between layers of senior managers and staff, and act as a conduit between the two entities. The key challenge for managers in this situation is to balance the need to be both strategic and operational – where they contribute to the formulation of strategic direction, and then assume responsibility for its successful delivery. Conversely, with this focus on strategy, are middle managers so detached from the classroom, due to the other functions required as part of their role – administration, management reporting, ‘fire fighting’, etc. (Husband and Lloyd Citation2019) – that they are unaware of the impact that performativity measures may have on elements such as creative practice and innovation (Appel Citation2020)?

Is there a difference in what middle managers and teaching staff deem important to successful pedagogical delivery? Research by Anderson (Citation2012) suggests that managers from vocational education settings tend to view teacher and trainer competence in terms of their ability to meet learner achievement targets, whereas teachers measure competence in relation to pedagogical comfort and confidence. This suggests that there may be a differential between the two groups in relation to their perceptions of performativity practices, as the two parties attach competence, and therefore importance, to two fundamental but distinctly different elements – one student focused and one organisationally focused (Page Citation2011). An added dimension in this narrative is that many middle managers hold teaching responsibility as part of their workload. Therefore, is there an internal conflict or struggle for middle managers in terms of their position, identity, and what they hold important, depending on which role they are carrying out, how often, and when? Has this tension between meeting student needs, in terms of both qualification outcomes and more holistic learning experience factors, led to a paradoxical position for teaching staff and/or managers: both working towards and against the neoliberal agency evident within the FE sector (Robinson Citation2010)?

Methodology

Participants

A survey approach was adopted (Wolf et al. Citation2016), with the key aim of establishing staff perceptions of performativity practices driven by statistical target setting, and of discovering whether these varied depending on the participants’ job role. In total, 107 participants from a single general further education college voluntarily engaged in the study. The survey population was 220 staff, of whom 203 were teaching staff and 17 were middle managers (Heads of Department). The response rate was 47% (N = 96) of teaching staff and 65% (N = 11) of managers. Having a single further education institution as the sole place of employment meant that all staff were subject to the same performativity culture, which helped strengthen internal validity, thus enabling a focus on individuals’ subjective understandings of the environment that they were immersed in (Punch Citation2014; Cohen, Manion, and Morrison Citation2018).

Procedure

The survey approach was designed so that individuals could participate and provide responses with minimum encumbrance (Smyth Citation2016). An electronic questionnaire distributed to participants via email was used as the single dissemination method. The questionnaire elicited a Likert-style response, with 1 being ‘positive’ and 7 ‘negative’ (Lietz Citation2010), utilising a semantic differential scale, with consistent statements of the polarity of an answer at the end of each question to aid the participant (Brace Citation2018). A seven-point scale was used to give participants a broad range of options to formulate their responses and to offer a deeper level of precision when differentiating between possible responses on the questionnaire (Cohen, Manion, and Morrison Citation2018), although no significant differences between five- and seven-point Likert scales exist in terms of variation, skewness, and kurtosis (Dawes Citation2008; Korkut Altuna and Arslan Citation2016). The number of questions was kept to a minimum, using plain English, with fewer than 14 words per question (Lietz Citation2010), to limit the primacy effect and respondent fatigue and to improve comprehension (Smyth Citation2016; Brace Citation2018).

An overarching question focused on how positively staff perceive target setting performativity measures. This was explored by requiring respondents to select the most appropriate graded response to a series of factors identified by Fisher et al. (Citation2015), Flores (Citation2012), and Evans (Citation2001) as contributing to an effective learning environment and motivated staff. These factors were:

QUESTION: Are current statistical targets set:

1. Highly inspiring (1) to Demoralising (7)
2. Realistic (1) to Unachievable (7)
3. Developed in conjunction with you (1) to Produced in isolation without your input (7)
4. Targeted at relevant factors (1) to Targeting irrelevant factors (7)
5. Targeting factors within your control (1) to Targeting factors outside of your control (7)
6. Greatly improve student outcomes (1) to Harmful to student outcomes (7)
7. Develop rounded learners (1) to Create one-dimensional learners (7)
8. Effective in improving own performance (1) to Detrimental to own performance (7)
9. Improve your job satisfaction (1) to Reduce your job satisfaction (7)
10. Raise standards of teaching and learning (1) to Impair standards of teaching and learning (7)
11. Facilitate creative practice (1) to Impair creative practice (7)
12. Inspire innovation (1) to Suppress innovation (7)
13. Essential for an effective learning environment (1) to Detrimental to an effective learning environment (7)

Ethical considerations

It is important for validity to establish the lens through which the results are being scrutinised and to be transparent regarding any influence this positionality may have on the study (Jafar Citation2018). At the time of the study, the author was a member of staff holding a job role, at middle-management level, with responsibilities that traversed the college. This position in the institution meant that the author was unable to truly ensure an objective position in the study. However, the quantitative nature of the survey research design and the use of an electronic questionnaire were intended to reduce any power differential between participants and researcher, and to mitigate the influence of the author’s position, and any associated bias, on the results. This method of dissemination allowed participants to make a conscious decision as to whether to participate in their own time and not in situ, allowing a level of detachment from the research team and therefore not placing undue pressure on individuals to be involved (Cohen, Manion, and Morrison Citation2018). However, participation (or not) in the study by individuals may have been influenced by the author, due to their access to the sample population and their working relationships with both teaching and management staff. This power dynamic between the author and the sample population may have influenced willingness to participate and the nature of the responses offered by individuals (Teye Citation2012).

A word of caution is warranted around the method utilised. A factor which may have affected the findings of the current study is that perceptions of satisfaction and motivation are influenced by the time of day, with workday accumulation significantly influencing the relationship. Research by Benedetti et al. (Citation2015) shows that early in the day the relationship between job satisfaction and both intrinsic and extrinsic motivation is strong; however, as the day progresses and workday accumulation occurs, this relationship turns negative. Thus, the time of day at which members of staff completed the questionnaire could significantly influence their mood state and therefore the opinions given. The time of year may also play a crucial role in the mood state of staff. The questionnaires were sent during the third term of the academic year. Anecdotally, staff may have a different mood state in the run-up to examinations and assessments than during the first weeks of a new academic year after the summer break.

Data analysis

The analysis of data was conducted using SPSS 25. Other than job role, no further identifying information, such as age range or gender, was asked for. This was to reinforce the anonymous nature of the survey to participants and was intended to elicit a more honest disclosure from them without the fear of judgment (Wolgemuth et al. Citation2015).

The analysis of ordinal data as if it were metric is a common occurrence in the psychological and social sciences which can lead to reporting errors in nonparametric data (Jamieson Citation2004; Liddell and Kruschke Citation2018; Seufert Citation2019). The choice between using parametric and nonparametric approaches to analysis is a controversial one, and establishing whether there is equity of intervals between data points is difficult (Jamieson Citation2004; Creswell and Guetterman Citation2019). Liddell and Kruschke (Citation2018, 328) argue that ordinal data have to be considered as ‘discrete, ordered scales’ without metric value, thus only indicating an order of response and not equal intervals between levels, and therefore should be explored using nonparametric methods. Conversely, other research has shown that the errors from treating ordinal data as metric are minimal. de Winter and Dodou (Citation2010) argue that the magnitude or level of error from using parametric versus nonparametric approaches for ordinal data is negligible; depending on factors such as sample size and whether the distribution is skewed, peaked, or multimodal, either approach can elicit similar results with low levels of Type-I and Type-II errors. To decide between following a parametric or a nonparametric approach, the raw data in this study were examined for distribution tendencies. Using the Shapiro-Wilk test for normality (Field Citation2018), it was determined that scores in this study were not sampled from a normal distribution and therefore failed the normality assumption for independent t-test analysis. Subsequently, as it is not possible to establish equity between intervals (Creswell and Guetterman Citation2019), a nonparametric approach was taken in the form of the Mann-Whitney U test, to better ‘describe the data’ and improve the accuracy of the results (Liddell and Kruschke Citation2018, 337). To manage objectivity in the data analysis process (Cassell, Cunliffe, and Grandy Citation2018), all information collected was utilised to prevent any bias and misrepresentation of the data (Cohen, Manion, and Morrison Citation2018).
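
Although the analysis for this study was carried out in SPSS 25, the same sequence – a Shapiro-Wilk normality check followed by a Mann-Whitney U comparison of the two groups – can be illustrated with a brief Python sketch. The file name, column names and group labels below are illustrative assumptions rather than details taken from the study.

```python
# Illustrative sketch only: Shapiro-Wilk normality check followed by a
# Mann-Whitney U comparison of teacher and manager scores on one question.
# "responses.csv", the column names and the group labels are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # columns: role, q1 ... q13 (1-7 Likert scores)

teachers = df.loc[df["role"] == "teacher", "q1"]
managers = df.loc[df["role"] == "manager", "q1"]

# Shapiro-Wilk: a small p-value indicates the scores depart from normality,
# which is one reason to prefer a nonparametric comparison here.
print("Teachers:", stats.shapiro(teachers))
print("Managers:", stats.shapiro(managers))

# Mann-Whitney U test comparing the two independent groups on ordinal scores.
u_stat, p_value = stats.mannwhitneyu(teachers, managers, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```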

Results

Descriptive results showing mean and median averages are displayed for the sample in Table 1 below.

Table 1. Descriptive statistics
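
A minimal sketch of how such group-level means and medians per question might be computed, continuing the hypothetical data frame from the sketch above, is:

```python
# Per-question mean and median for each group, mirroring the kind of summary
# reported in Table 1. Continues the hypothetical DataFrame "df" defined above.
questions = [f"q{i}" for i in range(1, 14)]
descriptives = df.groupby("role")[questions].agg(["mean", "median"])
print(descriptives.round(2))
```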

A Mann-Whitney U test was conducted to explore any differences between scores for teaching staff and managers. There were statistically significant differences between group scores for seven questions (four at p < 0.01 and a further three at p < 0.05). Details of the significant differences between groups are illustrated in Table 2.

Table 2. Statistical differences between group answers: Mann-Whitney U test

Perceptions of performativity measures differed significantly between the groups for a number of factors (Table 2). Results for the participants show significant differences between the perceptions of managers and the perceptions of teaching staff with regard to how inspiring (U = 133.5, z = -4.137, p < 0.001, r = -0.40), realistic (U = 288.5, z = -2.502, p = 0.012, r = -0.24), developed in conjunction with stakeholders (U = 271.0, z = -2.677, p = 0.007, r = -0.26), targeted at relevant factors (U = 235.0, z = -3.056, p = 0.002, r = -0.30), and targeted at factors within the individual’s control (U = 291.5, z = -2.469, p = 0.014, r = -0.24) current statistical targets are, and how much they improve student outcomes (U = 191.0, z = -3.541, p < 0.001, r = -0.34) and improve personal performance (U = 308.5, z = -2.296, p = 0.022, r = -0.22). These results indicate that the perceptions of teaching staff are less favourable towards current performativity and target setting processes than those of managers. In fact, the mean and median scores for teaching staff (Table 1) were below the midpoint (3.5) for every question, suggesting that their perception is that the current model of target setting is detrimental to performance against all of the factors. Conversely, managers scored five of the questions above the mean midpoint, and seven with a median score of 2 or 3, suggesting that they perceive that target setting is potentially beneficial to those factors. Where there were significant differences between the groups, only for the factors-within-control and improve-own-performance questions did managers’ mean scores fall below the midpoint (with a median score of 3 for each). These findings indicate a clear level of contention between the groups, as managers indicated that they perceive that target setting positively influences factors which teachers suggest are negatively influenced by target setting.

However, there are elements of synergy in their perceptions. Where there are no significant differences between the perceptions of managers and teachers, both groups’ median scores are 4 or 5, and mean scores fall below the midpoint of 3.5. This suggests that both groups perceive that the current performativity culture and target setting have a detrimental impact on: the development of learners (U = 377.0, z = -1.585, p = 0.113, r = -0.15); job satisfaction (U = 410.5, z = -1.230, p = 0.219, r = -0.12); the raising of standards (U = 378.0, z = -1.567, p = 0.117, r = -0.15); creative practice (U = 360.5, z = -1.748, p = 0.081, r = -0.17); innovation (U = 434.0, z = -0.981, p = 0.326, r = -0.17); and an effective learning environment (U = 397.0, z = -1.370, p = 0.171, r = -0.13).
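
The effect sizes (r) accompanying these results appear consistent with the convention often used alongside Mann-Whitney tests, r = |z| / √N, with N the total sample size; this is an assumption about the reporting rather than a method stated in the text. A quick check with the first reported comparison (‘inspiring’) reproduces the value:

```python
# Hedged check: Mann-Whitney effect size assuming the common convention
# r = |z| / sqrt(N). Using the reported z for the "inspiring" comparison
# (-4.137) and the full sample (N = 107) reproduces the reported r of 0.40.
import math

N = 107        # 96 teachers + 11 managers
z = -4.137     # reported z for the "inspiring" comparison
r = abs(z) / math.sqrt(N)
print(round(r, 2))  # -> 0.4
```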

Discussion

The main finding of this study is that the perceptions of managers differ from those of teaching staff as to the effectiveness of statistical targets, within a culture of performativity, in driving factors which are integral to an efficacious learning environment and motivated staff. Teaching staff perceive that the use of statistically driven performativity metrics is not in line with their values (Flores Citation2012) and motivational drivers (Evans Citation2001; Fisher et al. Citation2015), and is potentially ineffective at positively influencing aspects (such as creative practice and innovation) which impact student outcomes (Christophersen, Elstad, and Turmo Citation2010). This is particularly resonant for teaching staff; however, the results show that as practitioners take on more of a managerial emphasis in their role, the perceived benefit of, and their affinity for, statistical target setting and performativity measures both increase. This could be due to managers having greater exposure to the managerial business practices of target setting and statistical measurement as they interact on a more strategic level (Mulcahy Citation2004; Moynihan Citation2010; Boocock Citation2013), with evaluative pressures from external elements such as governors and Ofsted essentially making them more responsive to extrinsic rewards (Le Grand Citation2010). As middle managers’ core responsibilities often focus on measuring performance against data-driven targets (Husband and Lloyd Citation2019), it can be suggested that consistent exposure to this practice and regime has influenced their ability to identify with the process, and therefore influenced their affinity with the practice and shaped their judgement of how effective it is at improving elements that are instrumental in student educational performance. Recent research has confirmed that external accountability pressures have a direct impact on the leadership style of principals, with intra-establishment practices, such as professional development strategies for teachers and the promotion of certain teaching styles and pedagogies, aligned to ensure (it is hoped) that the centre meets key external metrics (Yi and Kim Citation2019).

The lack of awareness by teaching staff of the wider political and strategic factors could explain the difference in perceptions of importance between teaching staff and managers, and the lack of empathy towards statistical target setting from many teachers (Mulcahy Citation2004). However, as statistical targets set at the institution in question purely relate to core teaching factors, it is beyond the scope of this study to ascertain whether this is the case. Another possible reason for this greater affinity from managers towards performativity measures may be the expectation and embedding of the concept of target setting in modern day educational managers’ job role (Thompson and Wolstencroft Citation2018). The notion that managers are more receptive to statistical target setting is supported by Moynihan (Citation2010), who infers that the reforms towards a more performance driven model of delivery for public services are leading to a ‘crowding out’ of intrinsic motivations, and the displacement of altruistic tendencies for practitioners. It could also be suggested that these traits may be innate in these members of staff, and that they have always allocated importance to these factors throughout their careers. The above statement is purely speculative, as a definitive answer is beyond the scope of this study and further research is needed.

Overall, the results of this study suggest that middle managers are more likely to be receptive to neoliberal performativity measures, which drive the strategic success of a college, than their teaching colleagues. This position supports the previous findings of Mulcahy (Citation2004), who suggests that as teachers and managers ascend the hierarchical structures of a college, they become more likely than frontline staff to identify with strategic concepts.

Performativity and identity

The link between identity and the affinity to target setting and performativity seems to be powerful, multifaceted and multidimensional (Briggs Citation2007). As middle managers often have teaching commitments in addition to their management responsibilities (Page Citation2011; Wolstencroft and Lloyd Citation2019), they may experience a dichotomy between their personal values and ethos exhibited when they are in the classroom and those exhibited when they are managing teams and individuals. It can be inferred from the result of this study that middle managers are not completely detached from the thoughts, beliefs and perceptions of teaching staff – they are still invested in and attached to that identity and mindset. It could be postulated that there is a difficulty in separating teaching and managing into two distinct entities and isolating the agency of each, when individuals are undertaking both roles in unison – especially since defined middle manager job roles and responsibilities are often vague and filled with ambiguity from role to role and institution to institution (Corbett Citation2017; Wolstencroft and Lloyd Citation2019). This may go some way to explain the equivocal nature of this study’s findings, and the fact that middle managers broadly agree and align with their fellow teaching colleagues on many of the factors of performativity investigated. Within the middle manager group, individuals’ affinity for performance measures may be affected by how and to what extent they identify themselves and how much agency and affiliation they have with being a teacher and/or with being a leader and manager of staff, and where their priorities sit at any given time (Lambert Citation2014). As many leaders still retain their identity as teachers (or at least elements of it) as it is a strongly rooted status, this may explain why there are not significantly different perceptions between middle managers and teaching staff for all the factors investigated, especially if they resist or are reticent about the often process-driven nature of middle management expressed by Wolstencroft and Lloyd (Citation2019). Husband and Lloyd (Citation2019) suggest that middle managers are ‘poacher turned gamekeeper’, and how far they have transitioned to gamekeeper and their career trajectories might also influence the extent of their favour for performativity measures. Further qualitative research would be required to establish any within-group differences, and whether a link between middle-manager identity and affinity for performativity measures is evident.

Inextricably linked to exposure to teaching hours and commitments, the typological position that middle managers assume may also influence the perceptions that individuals have towards performativity practices. Page (Citation2011) offers a trialectic model which expresses the primary forces that impact a middle manager’s work – students, team, and the organisation – and argues that managers cannot have an encompassing focus on all three elements, and that they are forced to prioritise between the three and position themselves in favour of one or another. Findings from the current study suggest that the middle managers in question lean towards a student-focused emphasis and are not, as Page (Citation2011) metaphorically labels it, ‘converts’ (or at least not fully) to performativity and neoliberal business practices. How this positioning has come about, the influence that performativity practices have had, and how individuals’ perceptions of performativity have been shaped by this typological positioning would require further exploration.

Previous research has postulated that school teachers who were themselves pupils within a post-performative educational system accept and even welcome performative accountability and external monitoring of performance (Wilkins Citation2011). Holloway and Brass (Citation2018) go further and suggest that modern teachers’ values and ethos are in harmony with neoliberal practices, and that the use of numerical performance measures is a means to drive self-reflection and improvement. Moreover, newer staff and those who have transferred into working in FE as a secondary career (Page Citation2011) may view the meeting of targets as key to professional identity (Gleeson and Knights Citation2006). Age and career history could indeed have influenced the results of the current study; however, as these data were not collected, further investigation would be needed to establish any link between exposure as a pupil to post-performative educational environments, career history, and any subsequent affinity to performativity practices as a teacher and/or manager.

A limited positive outlook

A major finding in this study was that teachers perceive statistical target setting as detrimental to performance for a number of key educational factors. The mean and median scores for teaching staff were below the midpoint (3.5) for every question, suggesting that teaching staff are not aligned with the current management philosophy surrounding the emphasis on meeting, and the benefits of, statistical performativity measures. This is potentially a major issue, as public sector workers are motivated when they feel tasks are important and achievable (Wright Citation2007), and with a lack of importance given to the statistical targets set, operational performance will be negatively affected (Moynihan Citation2010). However, a key finding of this study is that managers have a more positive perception of target setting than teaching staff do. This finding is consistent with other research which argues that there has largely been an acceptance of accountability and performativity in teaching establishments (Lewis and Hardy Citation2015), with management practices evolving and adapting to suit this ideology (Ball Citation2003). Nonetheless, the results do show that the magnitude of this more positive outlook is limited: in only five of the thirteen questions were managers’ mean scores above the midpoint and towards the positive semantics. This indicates that managers at this establishment are also sceptical regarding the positive impact of statistical target setting – just to a significantly lesser extent than teaching staff are. As the managers surveyed in this study had limited experience in management roles (less than two years), they may still be struggling with the transition from teacher to manager, and therefore be in conflict between the need to be an ‘organisational professional’ and the desire to maintain a student-centred, holistic approach (Hobley Citation2019). It could therefore be inferred that the phenomenon of modifying behaviour to adapt to an environment of extrinsic incentives (Moynihan Citation2010; Boocock Citation2013) has not fully occurred for educational practitioners at this establishment.

Impact on creativity and innovation

Both groups are consistent in their judgements as to how little benefit current performativity practices have with reference to improving the learning experience and elements within the learning environment, such as developing learners and championing innovation and creative practices. This stance is supported by Appel (Citation2020), who proposes that performativity cultures stifle creative and innovative practices and notions, resulting in the neglect of these integral aspects of the learning environment. For all of the questions in this study concerning how influential the current statistical targets process is on these aspects, both groups exhibited a negative outlook, i.e., less than the midpoint of 3.5 on the seven-point scale. This implies that all staff perceive a focus on statistical outcomes as potentially detrimental to their performance in producing effective learners, creating an effective learning experience and benefitting students – all aspects that research has shown (Christophersen, Elstad, and Turmo Citation2010) actually influence student qualification outcomes. This finding, that performativity negatively impacts the development of more rounded learners and learners with adept higher-order cognitive skills, is consistent with research by Ab Kadir (Citation2017), who postulates that a culture of performativity marginalises the ideology of developing critical thinking and thinkers.

The impact on motivation

Another major finding of this study is that managers perceive current target setting practices to be inspiring, to target relevant factors and ultimately to improve student outcomes, with scores towards the positive semantics and significantly higher than those given by teaching staff. This standpoint is supported by research which has found that student outcomes have improved since the move towards neoliberal management practices (Coffield et al. Citation2008; Mansell Citation2011). However, an area of concern identified in this study is that the targets chosen by managers are aimed at factors which teaching staff feel they cannot control. Teaching staff deem that student outcomes are beyond their complete control and are influenced by other factors and elements over which they have little or no authority, a stance with which previous research concurs (Stecher et al. Citation2012; Christophersen, Elstad, and Turmo Citation2010; Perrin Citation2015; Leckie and Goldstein Citation2019). If factors are beyond the control of an individual, results may vary over time despite consistent performance on their part (Devisch Citation1998). This inability to control elements for which they are deemed responsible, through the process of target setting, will lead to a reduction in self-efficacy and motivation (Ball Citation2003; Ordóñez et al. Citation2009), and may explain why teaching staff in this study find target setting uninspiring. As individuals have limited influence over student outcomes (Stecher et al. Citation2012; Christophersen, Elstad, and Turmo Citation2010; Perrin Citation2015), the use of this metric to hold staff to account could be fundamentally flawed, and may partially explain why teaching staff do not believe that current performativity measures are targeted at relevant factors. This is a concern, as research by Ordóñez et al. (Citation2009) has shown that the overuse and increased valuation of achieving statistical targets has negative consequences: it undermines engagement in a task for its own sake, essentially driving out more powerful (Deci, Ryan, and Koestner Citation1999) and durable intrinsic motivation (Le Grand Citation2010; Moynihan Citation2010). As intrinsic motivation is a more consistent motivator in terms of task pursuit, satisfaction and vitality (Benedetti et al. Citation2015), this could have a detrimental impact on the operational effectiveness of practitioners.

A lack of communication and collegial approach?

As previously mentioned, the results demonstrate a divide between teachers and middle management in terms of their perceptions of how statistical target setting motivates. Perhaps most noteworthy are the findings that managers believe current targets are realistic and developed in conjunction with all staff, whereas teaching staff have a significantly different perception. These results suggest that performativity targets are being decided upon at a senior level within a college, and then imposed on staff further down the organisational hierarchy. This reflects broader findings that governmental pressure influences FE institutions from the top down, and that this approach is subsequently also manifested at a local level within colleges (Hill, James, and Forrest Citation2016; Keep Citation2018). This implies that management practices in FE, in reality, do not follow the combined top-down and bottom-up approach (Clegg et al. Citation2019) previously suggested as familiar in education (Bush and Glover Citation2014); that they do not reflect an open dialogue between management and staff; and that they are in conflict with the feelings of staff who believe that performativity targets and expectations are unrealistic for them to discharge. This hints at a major issue, as leadership practice has been shown to be the key factor influencing teacher morale (Evans Citation2001). These divergent scores regarding harmony when constructing performativity targets indicate contention between managers and their staff. This potentially highlights a lack of, or breakdown in, effective dialogue, communication and collaboration when formulating targets at appraisals, suggesting that the performativity culture potentially instigates an unconscious breakdown in trust between all parties (Appel Citation2020). This lack of effective consultation could indicate that operational knowledge and awareness is either neglected or not valued, owing to the importance of meeting top-down statistical targets to satisfy political and external stakeholders (Boocock Citation2013; Keep Citation2018).

This contention about whether statistical targets are developed in conjunction with staff could also be attributed to personal management styles and personalities. The managers in this study were predominantly new to management and leadership roles and may not yet be adept at managing people or at utilising an efficacious management style (Bush and Glover Citation2014).

Levels of targets set

The disparity in how performativity practices are valued by teaching staff and managers, and the fact that teaching staff feel that targets are currently uninspiring, unrealistic, conceived without their input, and not conducive to producing effective learners, hint at conflict between the different hierarchies and therefore a situation which is not conducive to high staff morale and motivation. Without acceptance and agreement from staff about policy implementation (Poon Citation2004), the direction in which the establishment is travelling, or the value of what is deemed important or a priority, practitioner morale, motivation and effort will be suboptimal and will therefore compromise operational effectiveness (Deci, Ryan, and Koestner Citation1999). Research has shown that achievable targets inspire higher levels of effort within repeated-interaction settings (Fisher et al. Citation2015). Results in the current study suggest that targets derived by managers from national averages, particularly if they do not take account of local and subject-based nuances, elicit a perception among staff that targets are not realistic, and therefore not inspiring, consequently reducing both staff morale and, potentially, long-term performance (Deci, Ryan, and Koestner Citation1999). This concurs with Yi and Kim’s (Citation2019) findings that, in the lowest-performing schools, leaders have a greater adherence to external accountability measures and their practice is shaped and impacted more substantially by those metrics. This could be a contributory factor in the decline in the overall success of the college in this study over the last three years. However, it is impossible to resist the external political pressures placed upon educational establishments by governments to meet statistical performance measures. As educational managers are held accountable via these means, it is almost impossible to avoid producing an internal performance/appraisal system which mirrors, or at least closely resembles, the factors the institution has to address to be judged ‘competent’ by Ofsted and the DfE (Keep Citation2018).

Conclusions

The purpose of this study was to investigate the perceptions of teaching staff and managers towards performativity practices and statistical target setting, and whether there are differences due to job role and responsibility. The main findings indicate a general division between the groups in terms of their perceptions of target setting. Teaching staff and managers harbour significantly differing perceptions as to the impact of statistical target setting on a number of factors previously identified as important to motivating staff and developing a sound learning environment. Teaching staff unequivocally perceive the practice of statistical target setting as detrimental to a number of performance factors, whereas managers express perceptions of some factors that are significantly more positive. However, managers also offer the view that a reasonable number of factors are potentially harmed by the current practice of statistical target setting and a culture of performativity. This suggests that some perceptions are broadly aligned between the two parties, particularly the view that statistical target setting is detrimental to improving elements within the learning environment, such as developing learners and championing innovation and creative practices. Perceptions are therefore not completely different for all factors between teaching staff and managers, which offers an equivocal position with regard to whether perceptions of performativity change depending on job role and managerial responsibility. Further qualitative research, exploring individual voices and the detail and potential reasons behind these differences, would be beneficial in the pursuit of offering future, practice-based recommendations.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Richard Poole

Richard has been working in the field of education for over 19 years covering various roles. In his final role before joining the University, he was the Teaching, Learning & Quality Manager at an FE college, in addition to being a centre quality reviewer for Edexcel. He has conducted funded research in collaboration with a number of establishments investigating: the use of digital technologies within assessment feedback; an online staff development space; and impact of behavioural science interventions on the progression of learners to higher level academic courses. Richard has presented at local and national conferences, including the Festival of Skills and Digifest, and is keen to continue exploring the impact of differing pedagogical and leadership techniques and how leaders in education implement these effectively.

References

  • Ab Kadir, M. A. 2017. “Engendering a Culture of Thinking in a Culture of Performativity: The Challenge of Mediating Tensions in the Singaporean Educational System.” Cambridge Journal of Education 47 (2): 227–246. doi:10.1080/0305764X.2016.1148115.
  • Anderson, F. 2012. “The Construction of Professionalism in Vocational Education and Training in Ireland: A Mixed Methods Study of Trainers’ Roles and Professional Development in the Workplace.” PhD diss., Dublin City University.
  • Appel, M. 2020. “Performativity and the Demise of the Teaching Profession: The Need for Rebalancing in Australia.” Asia-Pacific Journal of Teacher Education 48 (3): 301–315. doi:10.1080/1359866X.2019.1644611.
  • Association of Colleges. 2019. AoC College Key Facts 2019–20. London.
  • Avis, J., A. Kendal, and J. Parsons. 2003. “Crossing the Boundaries: Expectations and Experience of Newcomers to Higher and Further Education.” Research in Post-Compulsory Education 8 (2): 179–196. doi:10.1080/13596740300200148.
  • Ball, S. J. 2003. “The Teacher’s Soul and the Terrors of Performativity.” Journal of Education Policy 18 (2): 215–228. doi:10.1080/0268093022000043065.
  • Benedetti, A. A., J. M. Diefendorff, A. S. Gabriel, and M. M. Chandler. 2015. “The Effects of Intrinsic and Extrinsic Sources of Motivation on Well-Being Depend on Time of Day: The Moderating Effects of Workday Accumulation.” Journal of Vocational Behavior 88: 38–46. doi:10.1016/j.jvb.2015.02.009.
  • Boocock, A. 2013. “Further Education Performance Indicators: A Motivational or A Performative Tool?” Research in Post-Compulsory Education 18 (3): 309–325. doi:10.1080/13596748.2013.819272.
  • Boocock, A. 2014. “Increased Success Rates in an FE College: The Product of a Rational or a Performative College Culture?” Journal of Education and Work 27 (4): 351–371. doi:10.1080/13639080.2012.758356.
  • Boocock, A. 2019. “Meeting the Needs of Local Communities and Businesses: From Transactional to Eco-Leadership in the English Further Education Sector.” Educational Management Administration and Leadership 47 (3): 349–368. doi:10.1177/1741143217739364.
  • Brace, I. 2018. Questionnaire Design: How to Plan, Structure and Write Survey Material for Effective Market Research. 4th ed. London: Kogan Page Ltd.
  • Bratton, J., and J. Gold. 2012. Human Resource Management: Theory & Practice. 5th ed. Basingstoke: Macmillan.
  • Briggs, A. R. J. 2007. “Exploring Professional Identities: Middle Leadership in Further Education Colleges.” School Leadership and Management 27: 471–485. doi:10.1080/13632430701606152.
  • Burnard, P., and J. White. 2008. “Creativity and Performativity: Counterpoints in British and Australian Education.” British Educational Research Journal 34: 667–682. doi:10.1080/01411920802224238.
  • Busetti, S., and B. Dente. 2014. “Focus on the Finger, Overlook the Moon: The Introduction of Performance Management in the Administration of Italian Universities.” Journal of Higher Education Policy and Management 36 (2): 225–237. doi:10.1080/1360080X.2014.884674.
  • Bush, T., and D. Glover. 2014. “School Leadership Models: What Do We Know?” School Leadership and Management 34: 553–571. doi:10.1080/13632434.2014.928680.
  • Bush, T., and D. Middlewood. 2013. Leading and Managing People in Education. 3rd ed. LA: SAGE Publications Ltd.
  • Cassell, C., A. Cunliffe, and G. Grandy. 2018. The SAGE Handbook of Qualitative Business and Management Research Methods: Methods and Challenges. London: SAGE Publications. doi:10.4135/9781526430236.
  • Christophersen, K. A., E. Elstad, and A. Turmo. 2010. “Is Teacher Accountability Possible? The Case of Norwegian High School Science.” Scandinavian Journal of Educational Research 54 (5): 413–429. doi:10.1080/00313831.2010.508906.
  • Clapham, A., and R. Vickers. 2017. “Policy, Practice and Innovative Governance in the English Further Education and Skills Sector.” Research in Post-Compulsory Education 22 (3): 370–390. doi:10.1080/13596748.2017.1358518.
  • Clegg, S., M. Kornberger, T. Pitsis, and M. Mount. 2019. Managing and Organizations. An Introduction to Theory and Practice. 5th ed. London: Sage.
  • Coffield, F., S. Edward, I. Finlay, A. Hodgson, K. Spours, and R. Steer. 2008. Improving Learning, Skills and Inclusion: The Impact of Policy on Post-Compulsory Education. London: Routledge. doi:10.4324/9780203928998.
  • Cohen, L., L. Manion, and K. Morrison. 2018. Research Methods in Education. 8th ed. New York: Routledge.
  • Corbett, S. 2017. “From Teacher to Manager: Expectations and Challenge in the Further Education Sector. A Relationship Model.” Research in Post-Compulsory Education 22 (2): 208–220. doi:10.1080/13596748.2017.1314680.
  • Creswell, J., and T. Guetterman. 2019. Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. 6th ed. New Jersey: Pearson.
  • Dawes, J. 2008. “Do Data Characteristics Change According to the Number of Scale Points Used?” International Journal of Market Research 50 (1): 61–104. doi:10.1177/147078530805000106.
  • de Winter, J. C. F., and D. Dodou. 2010. “Five-Point Likert Items: T Test versus Mann-Whitney-Wilcoxon.” Practical Assessment, Research and Evaluation 15 (11): 1–16. doi:10.7275/bj1p-ts64.
  • Deci, E. L., R. M. Ryan, and R. Koestner. 1999. “A Meta-Analytic Review of Experiments Examining the Effects of Extrinsic Rewards on Intrinsic Motivation.” Psychological Bulletin 125: 627–668. doi:10.1037/0033-2909.125.6.627.
  • Dennis, C. A., O. Springbett, and L. Walker. 2020. “Further Education College Leaders: Securing the Sector’s Future.” Futures 115: 102478. doi:10.1016/j.futures.2019.102478.
  • Department for Education. 2020a. “College Oversight: Support and Intervention.” London. Accessed 27 May 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/929201/College_Oversight_document_Oct_20_updates_FINAL.pdf
  • Department for Education. 2020b. “Further Education Commissioner Intervention Reports.” December 2020. https://www.gov.uk/government/collections/further-education-commissioner-intervention-reports#intervention-reports-and-ministerial-responses:-2020
  • Department for Education. 2021. “Skills for Jobs: Lifelong Learning for Opportunity and Growth.” London. Accessed 27 May 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/957856/Skills_for_jobs_lifelong_learning_for_opportunity_and_growth__web_version_.pdf
  • Devisch, M. 1998. “The Kioto People Management Model.” Total Quality Management 9 (4–5): 62–65. doi:10.1080/0954412988578.
  • Evans, L. 2001. “Delving Deeper into Morale, Job Satisfaction and Motivation among Education Professionals: Re-Examining the Leadership Dimension.” Educational Management, Administration and Leadership 29 (3). doi:10.1177/0263211X010293004.
  • Field, A. P. 2018. Discovering Statistics Using IBM SPSS Statistics. 5th ed. London: SAGE Publications Ltd.
  • Fisher, J. G., S. A. Peffer, G. B. Sprinkle, and M. G. Williamson. 2015. “Performance Target Levels and Effort: Reciprocity across Single- and Repeated-Interaction Settings.” Journal of Management Accounting Research 27 (2): 145–164. doi:10.2308/jmar-51089.
  • Flores, M. A. 2012. “The Implementation of a New Policy on Teacher Appraisal in Portugal: How Do Teachers Experience It at School?” Educational Assessment, Evaluation and Accountability 24 (4): 351–368. doi:10.1007/s11092-012-9153-7.
  • Fuller, K., and H. Stevenson. 2019. “Global Education Reform: Understanding the Movement.” Educational Review 71: 1–4. doi:10.1080/00131911.2019.1532718.
  • Further Education Commissioner. 2020. Accessed 27 May 2021. https://www.gov.uk/government/organisations/further-education-commissioner/about
  • Gleeson, D., and D. Knights. 2006. “Challenging Dualism: Public Professionalism in ‘Troubled’ Times.” Sociology 40: 277–295. doi:10.1177/0038038506062033.
  • Gleeson, D., J. Hughes, M. O’Leary, and R. Smith. 2015. “The State of Professional Practice and Policy in the English Further Education System: A View from Below.” Research in Post-Compulsory Education 20 (1): 78–95. doi:10.1080/13596748.2015.993877.
  • Goldstein, H. 2004. “Education for All: The Globalization of Learning Targets.” Comparative Education 40 (1): 7–14. doi:10.1080/0305006042000184854.
  • Hall, S., and A. O’Shea. 2013. “Common-Sense Neoliberalism.” Soundings 55: 9–25. doi:10.3898/136266213809450194.
  • Hill, R., C. James, and C. Forrest. 2016. “The Challenges Facing Further Education College Governors in England: A Time for Caution or Creativity?” Management in Education 30 (2): 79–85. doi:10.1177/0892020616637232.
  • Hobley, J. 2019. “Rites of Passage, Moving into Management: Practice Architectures and Professionalism.” Research in Post-Compulsory Education 24 (4): 424–438. doi:10.1080/13596748.2019.1654686.
  • Holloway, J., and J. Brass. 2018. “Making Accountable Teachers: The Terrors and Pleasures of Performativity.” Journal of Education Policy 33 (3): 361–382. doi:10.1080/02680939.2017.1372636.
  • Husband, G., and C. Lloyd. 2019. “Reimagining Middle Leadership in Further Education. Report of the Middle Leaders Working Group.” CSPACE Journal 2 (1): 1–8.
  • Jafar, A. J. N. 2018. “What Is Positionality and Should It Be Expressed in Quantitative Studies?” Emergency Medicine Journal. doi:10.1136/emermed-2017-207158.
  • Jamieson, S. 2004. “Likert Scales: How to (Ab)use Them.” Medical Education 38: 1217–1218. doi:10.1111/j.1365-2929.2004.02012.x.
  • Johnson, G., K. Scholes, R. Whittington, D. Angwin, and P. Regnér. 2017. Exploring Strategy. Harlow: Pearson Education.
  • Keep, E. 2018. “Marketisation in English Further Education - the Challenges for Management and Leadership.” Education Journal Review 25 (2): 84–92.
  • Korkut Altuna, O., and F. M. Arslan. 2016. “Impact of the Number of Scale Points on Data Characteristics and Respondents’ Evaluations: An Experimental Design Approach Using 5-Point and 7-Point Likert-Type Scales.” İstanbul Üniversitesi Siyasal Bilgiler Fakültesi Dergisi (55): 1–20. doi:10.17124/iusiyasal.320009.
  • Lambert, S. 2014. Leading for the Future. Newcastle upon Tyne: Cambridge Scholars Publishing.
  • Le Grand, J. 2010. “Knights and Knaves Return: Public Service Motivation and the Delivery of Public Services.” International Public Management Journal 13 (1): 56–71. doi:10.1080/10967490903547290.
  • Leckie, G., and H. Goldstein. 2017. “The Evolution of School League Tables in England 1992–2016: “Contextual Value-Added”, “Expected Progress” and “Progress 8”.” British Educational Research Journal 43 (2): 193–212. doi:10.1002/berj.3264.
  • Leckie, G., and H. Goldstein. 2019. “The Importance of Adjusting for Pupil Background in School Value-Added Models: A Study of Progress 8 and School Accountability in England.” British Educational Research Journal 45 (3): 518–537. doi:10.1002/berj.3511.
  • Lewin, K. 2011. Taking Targets to Task Revisited: How Indicators of Progress on Access to Education Can Mislead. CREATE Pathways to Access. Research Monograph No. 54.
  • Lewis, S., and I. Hardy. 2015. “Funding, Reputation and Targets: The Discursive Logics of High-Stakes Testing.” Cambridge Journal of Education 45 (2): 245–264. doi:10.1080/0305764X.2014.936826.
  • Liddell, T. M., and J. K. Kruschke. 2018. “Analyzing Ordinal Data with Metric Models: What Could Possibly Go Wrong?” Journal of Experimental Social Psychology 79: 328–348. doi:10.1016/j.jesp.2018.08.009.
  • Lietz, P. 2010. “Research into Questionnaire Design: A Summary of the Literature.” International Journal of Market Research 52 (2): 249–272. doi:10.2501/S147078530920120X.
  • Lucas, N., and N. Crowther. 2016. “The Logic of the Incorporation of Further Education Colleges in England 1993–2015: Towards an Understanding of Marketisation, Change and Instability.” Journal of Education Policy 31 (5): 583–597. doi:10.1080/02680939.2015.1137635.
  • Mahony, P., and G. Weiner. 2019. “Neo-Liberalism and the State of Higher Education in the UK.” Journal of Further and Higher Education 43 (4): 560–572. doi:10.1080/0309877X.2017.1378314.
  • Mansell, W. 2011. “Improving Exam Results, but to What End? The Limitations of New Labour’s Control Mechanism for Schools: Assessment-Based Accountability.” Journal of Educational Administration and History 43 (4): 291–308. doi:10.1080/00220620.2011.606896.
  • Moynihan, D. P. 2010. “A Workforce of Cynics? The Effects of Contemporary Reforms on Public Service Motivation.” International Public Management Journal 13 (1): 24–34. doi:10.1080/10967490903547167.
  • Mulcahy, D. 2004. “Making Managers within Post-Compulsory Education: Policy, Performativity and Practice.” Research in Post-Compulsory Education 9 (2): 183–202. doi:10.1080/13596740400200174.
  • Ordóñez, L. D., M. E. Schweitzer, A. D. Galinsky, and M. H. Bazerman. 2009. “Goals Gone Wild: The Systematic Side Effects of Overprescribing Goal Setting.” Academy of Management Perspectives 23 (1): 6–16. doi:10.5465/AMP.2009.37007999.
  • Page, D. 2011. “Fundamentalists, Priests, Martyrs and Converts: A Typology of First Tier Management in Further Education.” Research in Post-Compulsory Education 16 (1): 101–121. doi:10.1080/13596748.2011.549738.
  • Perrin, B. 2015. “Bringing Accountability up to Date with the Realities of Public Sector Management in the 21st Century.” Canadian Public Administration 58 (1): 183–203. doi:10.1111/capa.12107.
  • Poon, J. M. L. 2004. “Effects of Performance Appraisal Politics on Job Satisfaction and Turnover Intention.” Personnel Review 33 (3): 322–334. doi:10.1108/00483480410528850.
  • Propper, C., and D. Wilson. 2003. “The Use and Usefulness of Performance Measures in the Public Sector.” Oxford Review of Economic Policy 19 (2): 250–267. doi:10.1093/oxrep/19.2.250.
  • Punch, K. 2014. Introduction to Research Methods in Education. 2nd ed. LA: SAGE Publications Ltd.
  • Robinson, D. 2010. “Further and Higher Education Partnerships in England, 1997–2010: A Study of Cultures and Perceptions.” Huddersfield: University of Huddersfield. Accessed 7 May 2021. http://eprints.hud.ac.uk/id/eprint/10182/1/RobinsonDeniseFinalThesis.pdf
  • Seufert, M. 2019. “Fundamental Advantages of Considering Quality of Experience Distributions over Mean Opinion Scores.” In 2019 11th International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany. doi:10.1109/QoMEX.2019.8743296.
  • Smyth, J. D. 2016. “Designing Questions and Questionnaires.” In The SAGE Handbook of Survey Methodology. London: Sage. doi:10.4135/9781473957893.n16.
  • Stecher, B. M., F. Camm, C. L. Damberg, L. S. Hamilton, K. J. Mullen, C. Nelson, P. Sorensen, et al. 2012. “Toward a Culture of Consequences: Performance-Based Accountability Systems for Public Services.” Rand Health Quarterly 2 (1): 5.
  • Sun, R., and G. G. Van Ryzin. 2014. “Are Performance Management Practices Associated With Better Outcomes? Empirical Evidence From New York Public Schools.” The American Review of Public Administration 44 (3): 324–338. doi:10.1177/0275074012468058.
  • Teye, J. K. 2012. “Benefits, Challenges, and Dynamism of Positionalities Associated with Mixed Methods Research in Developing Countries: Evidence from Ghana.” Journal of Mixed Methods Research 6 (4): 379–391. doi:10.1177/1558689812453332.
  • Thompson, C., and P. Wolstencroft. 2015. “Promises and Lies: An Exploration of Curriculum Managers’ Experiences in FE.” Journal of Further and Higher Education 39 (3): 399–416. doi:10.1080/0309877X.2013.858676.
  • Thompson, C., and P. Wolstencroft. 2018. “Trust into Mistrust: The Uncertain Marriage between Public and Private Sector Practice for Middle Managers in Education.” Research in Post-Compulsory Education 23 (2): 213–230. doi:10.1080/13596748.2018.1444372.
  • Verger, A., C. Fontdevila, and L. Parcerisa. 2019. “Reforming Governance through Policy Instruments: How and to What Extent Standards, Tests and Accountability in Education Spread Worldwide.” Discourse: Studies in the Cultural Politics of Education. doi:10.1080/01596306.2019.1569882.
  • Wilkins, C. 2011. “Professionalism and the Post-Performative Teacher: New Teachers Reflect on Autonomy and Accountability in the English School System.” Professional Development in Education 37 (3): 389–409. doi:10.1080/19415257.2010.514204.
  • Wolf, C., D. Joye, T. Smith, and Y.-C. Fu. 2016. “Survey Methodology: Challenges and Principles.” In The SAGE Handbook of Survey Methodology. London: Sage. doi:10.4135/9781473957893.
  • Wolgemuth, J. R., Z. Erdil-Moody, T. Opsal, J. E. Cross, T. Kaanta, E. M. Dickmann, and S. Colomer. 2015. “Participants’ Experiences of the Qualitative Interview: Considering the Importance of Research Paradigms.” Qualitative Research 15 (3): 351–372. doi:10.1177/1468794114524222.
  • Wolstencroft, P., and C. Lloyd. 2019. “Process to Practice: The Evolving Role of the Academic Middle Manager in English Further Education Colleges.” Management in Education 33 (3): 118–125. doi:10.1177/0892020619840074.
  • Wright, B. E. 2007. “Public Service and Motivation: Does Mission Matter?” Public Administration Review 67 (1): 54–64. doi:10.1111/j.1540-6210.2006.00696.x.
  • Yi, P., and H. J. Kim. 2019. “Exploring the Relationship between External and Internal Accountability in Education: A Cross-Country Analysis with Multi-Level Structural Equation Modeling.” International Journal of Educational Development 65: 1–9. doi:10.1016/j.ijedudev.2018.12.007.