Educational Psychology in Practice: theory, research and practice in educational psychology
Volume 38, 2022 - Issue 4

Assessment practices of educational psychologists and other educational professionals


ABSTRACT

Assessment is one of the five functions of the educational psychologist’s (EP’s) role, yet there is a dearth of research exploring its distinctive contribution to school-based practice, and a lack of definition about what it is. In this study, the assessment practices of EPs were compared with those of other educational professionals who had achieved certification for competence in educational testing. Data were analysed using descriptive and inferential statistics, and the patterns which emerged indicated that, while there is some overlap, the assessment practices of EPs and other educational professionals often have different foci. Specifically, the evidence suggests that EPs offer a broader, more holistic perspective with a strong emphasis on the social and emotional wellbeing of the child, while other practitioners typically have a more school-orientated focus, for example testing for exam access arrangements, reflecting their professional role. Implications are discussed, particularly in relation to the distinctive contribution of EP assessment.

Introduction

Assessment and the educational psychologist’s role

Assessment, along with consultation, intervention, research, and training, is one of the core functions of the United Kingdom (UK) educational psychologist’s (EP’s) role (Farrell et al., 2006; Scottish Executive, 2002). Despite this, neither document offers a definition of what EP assessment is, beyond stating that ‘Assessment and intervention involve the use of a wide range of techniques and strategies with systems and individuals’ (Scottish Executive, 2002, p. 20). Historically, assessment has been seen as a significant and important part of EP practice. In a small-scale but widely reported study, Ashton and Roberts (2006) found that, within one local authority, special educational needs coordinators (SENCOs) identified statutory and individual assessment work as two of the three most valued EP activities. Additionally, the majority of SENCOs recognised statutory assessment work as a service unique to EPs. Currently, the requirement for EPs to provide psychological advice and information as part of their statutory duty to contribute to education, health and care (EHC) needs assessments (Association of Educational Psychologists, 2017; Department for Education & Department of Health, 2015) means that the EP’s role is often inextricably linked to assessment; yet research on the topic remains relatively sparse.

Historically, assessment within EP practice has been linked to hypothesis testing and formulation (Frederickson et al., 1991), a notion supported by its positioning within professional practice frameworks (cf. Annan et al., 2013; Division of Educational and Child Psychology, 1999; Monsen et al., 1998; Woolfson et al., 2003). Fallon et al. (2010) advised ‘that all assessment and intervention by professional agencies should take place within a framework which is highly accountable and securely integrated’ (p. 7). Assessment can be multi-faceted, encompassing criterion-referenced, dynamic, and standardised methods (Freeman & Miller, 2001), and can draw on a vast range of techniques to address many different skills. For example, just within the context of reading, Topping (2010) highlighted diverse examples including tests, observations, miscue analyses, computer-aided assessment, and self-assessment, with foci including phonic skills, fluency, recall/summarisation, pre-reading skills, and affective and motivational factors.

Towards the end of the last century, there was a great deal of deliberation and discussion about the use of individual assessment methods, especially standardised assessment, within EP practice (cf. Leyden, 1999; Lyons, 1999; Thomson, 1996), although this may have coincided with a wider debate about the role of the EP (see Wood, 1998). As teachers and schools became more adept at measuring student performance, EPs questioned their distinctive contribution to individual assessment, for example within Freeman and Miller’s (2001) questionnaire survey of 59 special needs coordinators within one local authority. Debate about the role of assessment in school-based EP practice continued, including discussion of the relative value and validity of psychometric testing in a conceptual paper by Boyle and Lauchlan (2009).

Of late, however, there has been a dearth of published research on the assessment role of the EP. The authors speculate that this has happened for a number of reasons:

  1. The positioning of EPs as scientist practitioners (Fallon et al., 2010; Frederickson, 2002) has helped to give EPs a clearer sense of professional identity.

  2. The advent of three-year doctoral training has allowed for greater opportunity for EPs to develop proficient assessment practice, as well as a critical and informed perspective on a wide range of assessment methods.

  3. The aforementioned professional practice frameworks have enabled assessment to be contextualised within practice during EP training (Monsen et al., 1998) and may have allowed greater critical reflection on the rationale, methods, and process of assessment.

  4. There appears to have been increased interest in a wider range of assessment methods, including dynamic assessment (Lauchlan & Carrigan, 2017). There is also evidence of EPs using therapeutic work as an assessment modality (Atkinson et al., 2011; Thomas et al., 2019).

Educational assessment and professional competencies

Probably the most recent large-scale survey of the assessment practices of UK EPs was conducted by Woods and Farrell (2006), who analysed data from 142 practitioner EPs, seeking to understand not only the practice of EPs but also the theoretical and professional frameworks which guided it. Woods and Farrell (2006) compared techniques used by EPs in assessments for both learning and ‘behavioural’ difficulties and found that, while interviews with parents, teachers and children, and observation, were the most commonly used approaches, standardised assessment of reading and number was conducted frequently within both types of assessment, suggesting that, certainly at that time, this practice was commonplace.

Woods and Farrell (2006) also found that standardised educational assessments were used considerably more often than criterion-referenced and dynamic approaches, although multi-faceted and wide-ranging assessment was reported in Farrell et al.’s (2006) review into the role and functions of the EP. While the data from that study pre-date changes to local authority structures, including traded services (Fallon et al., 2010; Lee & Woods, 2017), the prevalence of standardised educational, rather than cognitive, assessments amongst this sample was notable.

In terms of standardised assessment, in 2005 the British Psychological Society (BPS) introduced the Certificate of Competence in Educational Testing (CCET) Level A, which set out standards for the use of educational tests within the UK (Boyle & Fisher, 2008). More recently, the BPS’s Psychological Testing Centre produced an updated set of guidelines for competent practice in test use (BPS, 2017b). Practitioners can be assessed against a series of educational test user (ETU) standards related to psychological knowledge, psychometrics, and practitioner skills; demonstrating these competencies means that individuals can apply for inclusion in the Register of Qualified Test Users (RQTU). EPs are deemed to be qualified test users through their doctoral-level psychological qualification and/or Health and Care Professions Council (HCPC) registration (cf. Hogrefe, 2022; Pearson, 2022). Nevertheless, when this process was introduced as the CCET, some EPs applied to join the RQTU under a grand-parenting clause which lasted for two years; now, competence has to be confirmed by a BPS-verified assessor. Completion of the ETU standards is a feature of some EP training programmes, but is more often undertaken by school-based practitioners, especially SENCOs.

Within schools, expertise in educational testing among non-EPs has grown in recent years (Atkinson et al., 2019). Part of the reason for this is likely to be the increasing onus on schools to organise access arrangements for English General Certificate of Secondary Education (GCSE) examinations, and stricter regulation around these (Woods et al., 2018). Specifically, the Joint Council for Qualifications (2021) requires access arrangements assessments to be carried out by either an HCPC-registered psychologist or a ‘specialist diagnostic assessor’ (p. 29) who has completed at least 100 hours of post-graduate study in ‘individual specialist assessment’ (p. 76). This has resulted in many SENCOs and other school-based professionals following courses which involve achieving ETU competencies, and subsequently joining the RQTU, although it should be noted that neither EPs nor specialist teachers are required to be members of the RQTU.

With this surge in standardised testing expertise amongst non-EP school-based professionals, it is perhaps timely to revisit the assessment role of both EPs and other educational professionals, to explore the extent to which EPs offer assessment practice that is distinct and different. In doing so, this paper uses data from a large-scale BPS survey, devised in conjunction with EPs, to answer the following research questions:

  1. What is the focus of assessment work undertaken by EPs and other educational professionals (hereafter referred to as non-EPs)?

  2. What methods of assessment are used by EPs and non-EPs?

  3. What do EPs and non-EPs identify as the most important features of standardised assessment?

  4. What are the reasons given by EPs and non-EPs for using standardised assessments?

Method

Design

Assessment practices of EPs and school-based professionals were of interest to the BPS Psychological Testing Centre, and the data included in this paper form part of a BPS database used in a wider study (see Atkinson et al., 2019). Specifically, the Psychological Testing Centre distributed a survey developed by the first and third authors, who were at the time members of the BPS Verifiers’ Group, which oversees verification and assessment of competent use of educational tests. This was sent by email link to individuals on the RQTU and Division of Educational and Child Psychology (DECP) mailing lists. While it is acknowledged that some EPs could be members of both groups, in the authors’ experience most EPs are not RQTU members, since registration as HCPC EPs confers the necessary testing-competence status and RQTU membership is therefore generally not required or acknowledged within their professional role. Additionally, being on the RQTU incurs an annual charge.

Recipients were informed that the survey was being conducted to further the Psychological Testing Centre’s understanding of which tests educational professionals used in their practice and why, and to gain insights to enable the BPS and Psychological Testing Centre to assist members with professional development. The survey was expected to take less than 15 minutes to complete, responses were anonymous, and it was not possible to respond to the survey on multiple occasions. Formal ethical approval was not required from the university leading the research; this was established via the university’s Ethics Decision Tool because the study involved staff research and the data were secondary (collected by the BPS), anonymised, and collected at an organisational level.

Funding was provided by the BPS to support analysis of previously unused survey data to address a different study focus. This enabled recruitment of the second author, who undertook independent statistical analysis of the data.

Sample

A self-selected sample of 437 participants (8.4% response rate) completed the survey anonymously. Of these, 89% lived in the UK, although responses were also received from practitioners in Africa, Asia, Australasia and Europe. Most (86%) of the international respondents were members of the RQTU rather than the DECP. Overall survey responses were published by Atkinson et al. (2019).

For the purposes of the current study, a decision was taken not to include international respondents in the comparative analysis because of potential differences in practice and working contexts between countries, which could account for variance and therefore skew the statistical analysis. This left 388 responses in total, although there were some missing items, as can be seen from Table 1. These were limited (at most nine missing responses, for gender), and the overall impact on the results is likely to be minimal. Nearly three-quarters of the sample (73.4%) were non-EPs and 26.6% were EPs. Participant details and available data are presented in Table 1.

Table 1. Sample demographic information.

Data collection and analysis

Data were collected by the Psychological Testing Centre between April and May 2018 using a Questback interface, which allowed responses to be collated within an Excel spreadsheet. The interface allowed only one response per person, informing respondents if they had already completed the survey. Participants were asked to provide anonymised personal information (gender, professional role) and details of the context in which they worked. They were then asked the questions shown in Table 2, with possible response fields developed through discussions between the researchers and colleagues from the Psychological Testing Centre.

Table 2. Assessment questions and possible response fields.

Data analysis was undertaken by the researchers via SPSS, using descriptive and inferential statistics. Specifically, the analysis focused on differences between the practices of EPs and non-EPs.
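To illustrate the inferential approach, the sketch below shows how one of the group comparisons reported in the Findings could, in principle, be reproduced as a chi-squared test of independence on a two-by-two contingency table. This is a minimal illustration under stated assumptions, not the authors’ SPSS analysis: the counts are a hypothetical reconstruction from the reported percentages for ability testing (98.1% of roughly 103 EPs; 75.1% of roughly 285 non-EPs), and Python/scipy stands in for SPSS.

    from scipy.stats import chi2_contingency

    # Approximate 2x2 contingency table reconstructed from the reported
    # percentages (a hypothetical reconstruction, not the raw survey data).
    # Rows: EPs, non-EPs; columns: used ability testing (yes, no).
    table = [
        [101, 2],    # EPs: ~98.1% of 103 respondents
        [214, 71],   # non-EPs: ~75.1% of 285 respondents
    ]

    # correction=False returns the uncorrected Pearson chi-squared statistic,
    # the convention followed in the values reported in the Findings.
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.3f}, p = {p:.6f}, dof = {dof}")

Run on these reconstructed counts, the statistic comes out close to the χ² = 26.135, p < .001 reported for ability testing below, illustrating how each of the reported pairwise comparisons was derived.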

Findings

Assessment focus

The focus of assessment work for EPs and non-EPs is shown in Figure 1. The most common assessment foci for all practitioners were ability testing (used significantly more by EPs than non-EPs: EPs 98.1%, non-EPs 75.1%, χ² = 26.135, p < .001) and attainment testing relating to the core literacy skills (reading, writing and spelling), for which there were no significant differences, although handwriting was assessed more frequently by non-EPs (EPs 53.3%, non-EPs 70.2%, χ² = 9.453, p = .002). Analysis revealed that EPs were significantly more likely than other practitioners to undertake assessment of behaviour, developmental skills, language, play skills, social skills, mental capacity, social communication, and wellbeing and mental health, according to chi-squared tests (behaviour, EPs 80.6%, non-EPs 18.6%, χ² = 127.692, p < .001; developmental skills, EPs 93.2%, non-EPs 27%, χ² = 134.142, p < .001; language, EPs 86.4%, non-EPs 37.1%, χ² = 73.309, p < .001; play skills, EPs 68%, non-EPs 4.2%, χ² = 184.149, p < .001; social skills, EPs 78.6%, non-EPs 11.5%, χ² = 163.987, p < .001; mental capacity, EPs 25.2%, non-EPs 5.6%, χ² = 30.197, p < .001; social communication, EPs 75.7%, non-EPs 11.9%, χ² = 149.970, p < .001; wellbeing/mental health, EPs 73.7%, non-EPs 16.8%, χ² = 112.821, p < .001).

Figure 1. Focus of assessment work (numbers indicate percentage of sample).

Assessment methods

Overall, the main types of assessment methods used by the sample were standardised tests of ability (94.6%) and attainment (91.2%) (see Figure 2). There were again some distinct differences between the methods used by EPs and those used by the rest of the sample. Significant differences were found for parental consultation (EPs 95.1%, non-EPs 54.7%, χ² = 54.642, p < .001); observation (EPs 94.2%, non-EPs 65.6%, χ² = 31.456, p < .001); pupil perspective (EPs 92.2%, non-EPs 60.0%, χ² = 36.465, p < .001); and teacher report (EPs 86.4%, non-EPs 62.8%, χ² = 19.726, p < .001). In addition, EPs reported using more criterion-referenced (χ² = 43.017, p < .001), curriculum-based (χ² = 11.527, p < .001) and dynamic assessment methods (χ² = 107.025, p < .001), while non-EPs relied more heavily on outcomes from exams and formal tests (χ² = 14.776, p < .001). Social, emotional, behavioural and wellbeing inventories (χ² = 106.657, p < .001) and inspection of school work (χ² = 27.770, p < .001) were more commonly used by EPs.

Figure 2. Assessment methods.

Important features of standardised assessment

The most important features of standardised assessments for all respondents (see Figure 3) were: the availability of appropriate norms (83.5%), reliability (83.2%), ease of administration (76.3%), validity (75.3%), ease of scoring (65.7%) and provision of relevant information (65.7%). There were no significant differences between the groups for most of these features. However, chi-squared values indicated some significant differences between EPs and other practitioners in the sample, with EPs placing greater importance on low adverse impact on the child (EPs 72.8%, non-EPs 47.4%, χ² = 19.731, p < .001), engaging materials (EPs 61.2%, non-EPs 32.6%, χ² = 25.622, p < .001), provision of relevant information (EPs 79.6%, non-EPs 60.7%, χ² = 12.009, p = .001), and usefulness for feedback discussion (EPs 62.1%, non-EPs 42.5%, χ² = 11.746, p = .001). Perhaps unsurprisingly, given their peripatetic role, portability was more important to EPs than to non-EPs (EPs 51.4%, non-EPs 23.5%, χ² = 27.662, p < .001).

Figure 3. Most important features of standardised assessments.

Reasons for standardised assessments

Overwhelmingly, the main reason why non-EPs (86.7%) used standardised assessment was for exam concessions (access arrangements) (see Figure 4); this was significantly higher than use by EPs (40.7%) (χ² = 82.199, p < .001). EPs were significantly more likely to use standardised assessments for diagnosis (EPs 60.2%, non-EPs 47.7%, χ² = 5.132, p = .023), norm-referencing (EPs 75.7%, non-EPs 36.5%, χ² = 48.195, p < .001), planning for intervention (EPs 83.5%, non-EPs 68.8%, χ² = 9.177, p = .002), and profiling strengths and difficulties (EPs 90.3%, non-EPs 59.3%, χ² = 34.909, p < .001); while non-EPs used them more often for baseline screening (EPs 40.8%, non-EPs 57.9%, χ² = 8.439, p = .004). Smaller numbers of respondents, EPs in particular, used standardised assessment to access resources and for research.

Discussion

Findings will be discussed in relation to the four research questions concerning the focus of assessment, methods, important features of tests, and reasons for assessment, before a summary of salient observations across the dataset is offered, in relation to the distinct contribution of EP assessment. Limitations of the study will be considered and future research proposed.

Findings in relation to the research questions

Focus

Findings indicate that both EPs and non-EPs are involved in assessment across a wide range of domains, including ability; behaviour; educational, social and developmental skills; mental health; and mental capacity. Across the sample, the most common testing foci were ability and attainment, indicating that at the time of the survey, learning profiling was a priority for both EPs and non-EPs. However, the focus of EP assessment was significantly more likely to relate to mental health and wellbeing, suggesting that schools look to EPs for support in this area. This is consistent with findings from a large-scale survey by Sharpe et al. (2016), which suggested that mental health support in schools was most likely to be provided by EPs. It also reinforces the idea that EPs could be important to the delivery of the government’s mental health agenda for schools and educational settings (Department of Health & Department for Education, 2017), or could have involvement in universal screening (Waite & Atkinson, 2021; Waite et al., 2022).

Significant differences between EPs’ and non-EPs’ focus on social skills and social communication assessment could relate to a substantial EP role in profiling for autistic spectrum disorders (ASD). This resonates with Saddredini et al.’s (2019) survey of EPs in the UK and Republic of Ireland, in which the 161 respondents indicated that work related to autism accounted for about 25% of their caseload, although reported practice encompassed much broader casework, including consultation.

Developmental and play skills assessment was also much more commonplace within the practice of the EP respondents, indicating the importance of EP assessment within Early Years settings and highlighting EPs’ role as early years practitioners (Hussain et al., 2019; Shannon & Posada, 2007). There is evidence too that EPs are more likely to engage in assessments within areas of specialist knowledge such as speech and language (Sedgwick & Stothard, 2019) and mental capacity (Atkinson et al., 2015; The Stationery Office, 2005).

Assessment methods

Standardised tests were the most widely used assessment method among both EPs and non-EPs. Notably, schools need evidence from standardised assessments to apply for exam access arrangements for students (Joint Council for Qualifications, 2021; Woods et al., 2018); indeed, Atkinson et al. (2019) found that being able to carry out exam access arrangements was the most common reason why survey respondents were on the RQTU. However, the findings indicate that, in general, EPs use a wider range of assessment methods than non-EPs. Other methods favoured by EPs (observation, teacher report, pupil perspectives, and parental consultation) echoed findings from Woods and Farrell’s (2006) questionnaire survey, although notably that study was more specific in defining the interactions EPs had with teachers and parents as part of the assessment process, such as facilitating a problem-solving process or jointly reviewing progress. EPs were more likely to use curriculum-based and criterion-referenced methods, while the findings suggest that, amongst EPs, dynamic assessment methods may have increased in popularity since the time of Freeman and Miller’s (2001) study, perhaps because they are now more clearly documented (cf. Lauchlan & Carrigan, 2017). In keeping with the reported role of EPs in supporting wellbeing (Sharpe et al., 2016), EPs were more likely to use inventories which assess social, emotional and mental health needs.

Features

The reliability and validity of tests and the availability of appropriate norms were broadly recognised as important features of standardised assessment across the sample, suggesting a sound understanding of assessment principles. However, it is noteworthy that reliability was valued more highly by non-EPs than by EPs, and that 16.5% of EPs and 28% of non-EPs did not rate validity as an important feature of standardised assessment, despite reliability and validity being fundamental principles of measurement (Corcoran & Fischer, 2006). It is possible that both EPs and non-EPs recognise the limitations of standardised assessment and would acknowledge the importance of measuring a concept (for example, reading proficiency) through multiple sources, although this requires further investigation.

Notably, EPs were particularly invested in the child’s experience of assessment, giving significantly higher ratings to high-interest/engaging materials and low adverse impact on the child. Even so, only 72.8% of EP respondents prioritised low adverse impact in their rated responses about the important features of standardised assessment. Given HCPC (2015) guidelines for practitioner psychologists, which emphasise the need to exercise a duty of care and act in the service user’s best interests, this may suggest that individual practitioner reflection through continuing professional development, and discussions at a service delivery level about child-friendly practices, might be useful for some EPs.

EPs’ indicated preference for assessments which provide relevant information to inform a feedback discussion or interview potentially adds weight to Woods and Farrell’s (2006) finding that a distinctive feature of EP practice, as reported by their EP sample, was holistic and child-centred assessment.

Reasons

Predominantly, EPs saw assessment as enabling the profiling of strengths and difficulties, and planning for intervention. This is consistent both with the conceptualisation of EPs as scientist practitioners (Fallon et al., 2010; Lane & Corrie, 2007) and with assessment being positioned within a broader context of formulation, hypothesis testing and intervention, as described within professional practice frameworks (Annan et al., 2013; Division of Educational and Child Psychology, 1999; Monsen et al., 1998; Woolfson et al., 2003). Access arrangements, for example for exams, were more likely to be a reason for non-EPs to undertake standardised assessment, as indicated within Joint Council for Qualifications (2021) guidance. Although EPs have a critical role within statutory procedures, including writing advice which can contribute to education, health and care plans (Department for Education & Department of Health, 2015), accessing resources was notably a priority for only a quarter of the sample, suggesting that EP assessment (and indeed the EP role) is much more holistic and wide-ranging than simply supporting statutory processes (Association of Educational Psychologists, 2017).

The EP’s distinct role within assessment

Previous research by Farrell et al. (2006) sought to understand the distinct contribution made by EPs in working with children with special educational needs. Their review found that all respondent groups in their survey (schools, local authorities, other professionals, EPs, principal EPs and university programme directors) rated the distinctive role of individual assessment as either ‘high’ or ‘very high’. While the present study, given its origins within the BPS’s Psychological Testing Centre, tends to focus on standardised assessment rather than other assessment approaches, including criterion-referenced, curriculum-based and dynamic assessment (Freeman & Miller, 2001), it potentially offers some insights into exactly why this contribution is distinct.

Specifically, there are many areas in which EPs carry out assessments much more frequently than non-EPs. These include assessments of behaviour, developmental skills, language, mental capacity, play skills, social skills, social communication, and wellbeing and mental health. While the findings show that EPs continue to use standardised assessments, they also indicate that a high proportion of the respondent EP sample uses other methods, including observation, parental consultation and gaining pupil perspectives. This may be consistent with what Annan et al. (2013) described as ‘triangulation of evidence’ (p. 83) – sampling from a number of sources to ensure the accuracy of information and to develop an integrated and rationalised conceptualisation of student need. For EPs as practitioner psychologists, assessment practices should be informed by both theory and evidence, and by an understanding of the practice context (British Psychological Society, 2017a), with formulation based on ‘the summation and integration of the knowledge that is acquired by the assessment process’ (p. 10).

The review report by Farrell et al. (2006) highlighted EPs’ ability to develop a holistic view of the child, based on their psychological knowledge and considered opinion. In this study, EPs used a wider range of assessment methods, across more areas of learning and development. Additionally, their reasons for undertaking standardised testing tended to be more varied, going beyond norm-referencing and accessing resources, with profiling strengths and difficulties and planning for intervention a priority for most EPs. While it is not possible to draw firm conclusions from these findings, it is possible that standardised assessment undertaken by EPs links to finding out what the child does well and what they struggle with, with this information then used to devise interventions which are needs-focused but utilise children’s strengths. It is also possible that triangulated assessment enables hypothesis formulation about a child’s difficulties and strengths, which can be tested through their response to intervention (Woods & Farrell, 2006). Further qualitative research could usefully elucidate whether this is an accurate picture of EPs’ current practice.

Implications for practitioners

While these results are illustrative of distinct and holistic practice, they also reveal possibilities for development. While debates continue about standardised assessment within the profession, particularly in relation to the use of psychometrics (Apter, 2017; Minks et al., 2020), the figures suggest that ability assessments are still widely used by EPs: in this case, by 98% of the respondent EP sample (see Figure 1). It is perhaps concerning, therefore, that reliability and validity were not rated by all ability test users as important features of standardised assessment, especially given that EPs work with children and young people with a range of needs (Minks et al., 2020). While practitioners may be more mindful of other features (for example, impact on, and experience for, the child), it seems surprising that these concepts would be overlooked, and this might indicate that some EPs have lost touch with some of the key principles underpinning standardised assessment. Alternatively, it could suggest that they are familiar enough with assessment methods and materials to take these principles as a ‘given’. Additionally, while some training courses cover the British Psychological Society (2017b) competencies systematically, it is possible that other trainees, perhaps those whose practice placements offer limited opportunities for standardised testing, do not get the opportunity to fully engage with these principles; although this would only be problematic if they were using such assessments in practice (Health and Care Professions Council, 2015). This situation may have been exacerbated by the COVID-19 pandemic, which limited opportunities for face-to-face contact, and therefore direct assessment.

While ability testing is not synonymous with psychometrics, figures from this study and from previous research (Woods & Farrell, 2006), as well as anecdotal evidence available to the researchers as programme tutors through feedback from trainees, do not necessarily indicate a decline in standardised assessment practices. Whilst EPs are not required to be on the RQTU, it is perhaps timely for individual EPs and EP services to consider whether benchmarking testing competencies (British Psychological Society, 2017b) against current practice is desirable.

Because the data from this study are from 2018 (see the Limitations section below), they represent a picture of EP and non-EP assessment pre-pandemic. While distinct features of EP assessment appear to include the range of areas covered and holistic, child-centred approaches, changes to working patterns arising from COVID-19-related restrictions (Association of Educational Psychologists, 2020), coupled with pressures on services due to austerity measures, statutory work and traded services (Hill & Murray, 2019), may pose a threat to these patterns of assessment practice. It would therefore be useful for services to consider how to maintain these distinct features, while reviewing assessment practices affected by remote working and restricted access to school environments.

Limitations and future directions

Unfortunately, changes to working patterns and researcher availability arising from the COVID-19 pandemic, and subsequent local and national lockdowns, delayed the submission of the research. The data from the survey are from 2018 and therefore represent an insight into practice pre-pandemic, which could be useful for future comparison. Assessment practices were significantly affected by lockdown measures, with UK GCSE examinations in 2020 and 2021 being based on teacher assessment rather than examinations (GOV.UK, 2021). Additionally, guidance from the Association of Educational Psychologists (2020), published during the pandemic, advised against EPs going into schools, meaning that assessment work was often conducted remotely, making standardised assessment unfeasible. It is therefore hoped that the findings provide a helpful picture as schools and EP services return to more normal patterns of delivery.

It should be acknowledged that the survey sample is unlikely to be representative and free of bias, given that the response rate was only 8.4% and the survey was completed by a self-selecting sample of respondents from the DECP and RQTU mailing lists. Not all EPs are DECP members, and not all professionals engaging in assessment in schools are on the RQTU, thus limiting the sample. Furthermore, the EP sample, recruited predominantly through the DECP, might be more representative than the sample of other professionals, recruited via the RQTU, given that the RQTU is an affirmation of testing practices and thus likely to attract respondents with a particular interest or specialism in this area. While EP numbers are small, to the best of the authors’ knowledge this is the largest dataset addressing EP assessment practices since Woods and Farrell’s (2006) study, in which the sample came from EPs attending two conferences.

EPs in the DECP and on the RQTU may have received the survey invitation twice, potentially increasing their chances of responding, although the survey was set up so that respondents could complete it only once. EPs were recruited only from these channels, and a wider representation of EPs could have been sought using other distribution lists or social media platforms. Additionally, with the benefit of hindsight, the dataset could have been enriched by further information about participants (for example, length of time in role) and their workplace context. Any conclusions drawn from the data should therefore be tentative, and scrutinised through further research.

The data from this study are quantitative and, while they enable an overall picture of assessment practices amongst EPs and non-EPs, further qualitative research would be helpful to explore reasons for some of the trends highlighted in the findings. These include decision points around assessment, and practices in areas where EP assessment appears to represent more of a distinct contribution.

Finally, while data from this study suggest that EPs are mindful of children’s experiences of assessment, there appears to be a dearth of published research on this topic, either in case study format, or as an evaluation of practice. Given that many services actively seek feedback from the child and/or family, it would be useful to see some of these data analysed and made available in the form of published studies, to allow reflection on how to improve the assessment experience for children and young people.

Figure 4. Reasons for using standardised tests.


Disclosure statement

The BPS Psychological Testing Centre seeks to promote competent test use amongst professionals. Funding was allocated only for analysis of the dataset, conducted by the second author, who was not connected with the BPS in any way.

Additional information

Funding

This work was supported by the British Psychological Society.

References