Editorial

Cultivating realistic self-appraisal: examining student overclaiming and fair assessment through PISA and classroom data

One of the key goals for educators and assessors is to support students in developing the skills they need for their futures and for lifelong learning, and to ensure they have a realistic understanding of their own capabilities. Given the demands of the future job market, it is vital that students understand both their abilities and their limitations when applying for roles that require specific competencies such as problem-solving. Cultivating precise self-insight and realistic self-appraisal ensures that students can present themselves accurately and continue to build skills that will benefit the broader society. The newly released OECD report on the PISA 2022 results addresses these issues directly:

“Some people argue that the PISA tests are unfair, because they may confront students with problems they have not encountered in school. But then life is unfair, because the real test in life is not whether we can remember what we learned at school, but whether we will be able to solve problems that we can’t possibly anticipate today.” (OECD, 2023, p. 4)

While most people will agree on the importance of knowing what you can do, the challenge is that we do not always report accurately what we know and are able to do. Indeed, data from previous PISA cycles suggest that students may overclaim what they are able to do.

Overclaiming is the theme of the first article in this regular issue, authored by Jerrim et al. (2023). Using data from PISA 2012 across nine Anglophone countries (a total of 2,689 schools and 62,969 students, with an 80% response rate), the research team examined students’ familiarity with mathematical terms. More specifically, they investigated responses to the question: thinking about mathematical concepts, how familiar are you with the following terms? Students were given a list of 16 items and asked to indicate on a 5-point scale their familiarity with each concept (the constructs included exponential function, vectors, complex numbers, radicals, probability, etc.). Three of the 16 items were fake, and the research team used students’ claimed familiarity with these three foils to construct a scale of overclaiming.
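As a concrete illustration of how such a measure can be operationalised, the minimal Python sketch below computes a raw overclaiming index as the mean claimed familiarity with the foil items and then z-standardises it across students. The item names and data are invented for illustration, and this is one common scoring approach rather than the exact procedure used by Jerrim et al. (2023).

```python
import statistics

# Hypothetical foil items: invented terms that do not correspond to any
# real mathematical concept, so any claimed familiarity signals
# overclaiming. These names are placeholders, not the actual PISA foils.
FOILS = ["foil_term_a", "foil_term_b", "foil_term_c"]

def raw_overclaiming(responses: dict) -> float:
    """Mean familiarity rating (1-5 scale) across the three foil items."""
    return statistics.mean(responses[item] for item in FOILS)

def z_scores(values: list) -> list:
    """Standardise scores so the sample mean is 0 and the SD is 1."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

# Example: three students' ratings of the foil items only
# (1 = "never heard of it", 5 = "know it well, understand the concept").
students = [
    {"foil_term_a": 1, "foil_term_b": 1, "foil_term_c": 2},
    {"foil_term_a": 3, "foil_term_b": 2, "foil_term_c": 3},
    {"foil_term_a": 5, "foil_term_b": 4, "foil_term_c": 5},
]

raw = [raw_overclaiming(s) for s in students]
print(z_scores(raw))  # higher values indicate stronger overclaiming
```

In the survey literature, foil-based indices of this kind are sometimes refined with signal-detection corrections that also take into account the familiarity claimed on the real items, separating genuine knowledge from a general tendency to claim familiarity.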

Students also reported their perceptions of constructs such as their popularity in school, problem-solving abilities, self-efficacy and perseverance.

Analysis of the sample yielded several key findings that confirm previous studies. Boys are much more likely to overclaim than girls; this finding appeared across all nine countries, with effect sizes between 0.2 and 0.3 standard deviations in each country. Students with higher socio-economic status are more likely to overclaim than students from more disadvantaged backgrounds. Overall, the researchers found that students from the US and Canada had significantly higher overclaiming scores than students in the other countries.
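For readers interpreting these numbers: an effect size in standard-deviation units is a standardised mean difference. One common form, Cohen’s d (used here purely for illustration; the exact estimator is not specified above), is

$$d = \frac{\bar{x}_{\text{boys}} - \bar{x}_{\text{girls}}}{s_{\text{pooled}}}$$

so an effect of 0.25 means that boys’ average overclaiming score sits a quarter of a pooled standard deviation above girls’ average, a small-to-moderate gap by conventional benchmarks.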

With the newly released PISA results (OECD, 2023), researchers should engage critically with the data before offering policy recommendations to governments, keeping in mind the need for further detailed analyses within individual countries so that the data are understood and interpreted correctly in light of biases such as overclaiming.

Issues around self-reported behaviour and perceptions are also touched upon in the second article in this issue. Rasooli et al. (2023) respond to a research field that remains under-examined: students’ perceptions of fairness in assessment. Their study collected data from first-year undergraduate students at a university in Canada using the Classroom Assessment Fairness Inventory (CAFI).

The research team discuss how the use of the CAFI can contribute to developing better policies and standards for classroom assessment, promote teacher capacity in fair assessment within teacher education programmes and possibly improve assessment practices in schools. Continued research in this area is important because, as the authors remind us, students have a right to fair assessments, and the exclusion of their voices violates these rights (Leighton, 2022).

In the third article in this regular issue, Chen et al. (2023) review how technology-based assessment has affected pre-college education systems. Data were collected from 34 countries through 34 full-text English-language sources published in 2018–2022, a period covering the pandemic. The study reports mixed evidence about technology-based assessments and practices in relation to cheating, enhancement of learning, and workload. It did, however, find strong evidence that technology-based assessments improve measurement precision, interpretability, engagement and interactions between teachers and parents. In an era of increasing use of artificial intelligence, language models and related technologies, it is important to further investigate the possibilities, limitations and challenges around these technologies and assessment (Børte et al., 2023; Hopfenbeck et al., 2023), as well as to tackle the threats posed by the digital divide globally (Ercikan et al., 2018).

The fourth article reports on a study in Brazil in which the research team investigated the validity of a rubric. Data were collected from middle-school students in four schools who participated in an intervention programme of the Instituto Ayrton Senna in the Brazilian state of Ceará, with the aim of examining the effects of construct-irrelevant variance in the rubrics. The results demonstrated that their approach is a promising method for improving the validity of rubrics (Pancorbo et al., 2023).

The fifth paper in this issue, authored by Pinedo et al. (2023), addresses students’ perspectives on and experiences with self-assessment in secondary and higher education. The authors suggest that the Dunning-Kruger effect affects students’ ability to self-assess accurately, as high performers might choose better strategies than low performers.

Both the first paper, by Jerrim et al. (2023), and the last, by Pinedo et al. (2023), tackle the challenges surrounding self-assessment and accurate self-perception of one’s abilities. When interpreting findings from international assessments such as PISA, analysing students’ self-reported approaches to learning and their beliefs about their own knowledge could better inform efforts to cultivate precise self-understanding. Both papers tap into the pitfalls described by the Dunning-Kruger effect, whereby those with limited skills or knowledge may overestimate their own competence, while highly competent individuals may underestimate theirs, assuming that tasks easy for them are easy for all.

For these reasons, the current regular issue offers important articles that should stimulate discussion about our future research programme. How can we better educate students so that they not only believe they can solve problems but also have the skills to do so? How can we provide feedback in ways that enhance their understanding of their own capabilities?

People affected by the Dunning-Kruger effect may be resistant to feedback or learning opportunities because they believe they are already highly competent. This resistance hinders personal and collective growth, as individuals may miss out on valuable opportunities for improvement. These challenges need to be tackled not only by our research community but also in collaboration with our colleagues across the education sectors globally.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Therese Hopfenbeck

Therese Hopfenbeck is Professor of Educational Assessment at the Assessment and Evaluation Research Centre, Faculty of Education, University of Melbourne.

References

  • Børte, K., Lillejord, S., Chan, J., Wasson, B., & Greiff, S. (2023). Prerequisites for teachers’ technology use in formative assessment practices: A systematic review. Educational Research Review, 41, 100568. https://doi.org/10.1016/j.edurev.2023.100568
  • Chen, D., Jeng, A., Sun, S., & Kaptur, B. (2023). Use of technology-based assessments: A systematic review covering over 30 countries. Assessment in Education: Principles, Policy & Practice. https://doi.org/10.1080/0969594X.2023.2270181
  • Ercikan, K., Asil, M., & Grover, R. (2018). Digital divide: A critical context for digitally based assessments. Education Policy Analysis Archives, 26(51), 1–18. https://doi.org/10.14507/epaa.26.3817
  • Hopfenbeck, T. N., Zhang, Z., Sun, S. Z., Robertson, P., & McGrane, J. A. (2023). Challenges and opportunities for classroom-based formative assessment and AI: A perspective article. Frontiers in Education, 8. https://doi.org/10.3389/feduc.2023.1270700
  • Jerrim, J., Parker, P. D., & Shure, N. (2023). Overclaiming. An international investigation using PISA data. Assessment in Education: Principles, Policy & Practice, 1–21. https://doi.org/10.1080/0969594X.2023.2238248
  • Leighton, J. P. (2022). Leveraging socio-emotional assessment to foster children’s human rights. Routledge. https://doi.org/10.4324/9781003152781
  • OECD. (2023). PISA 2022 results (Volume I): The state of learning and equity in education. OECD Publishing.
  • Pancorbo, G., Primi, R., John, O. P., Santos, D., & De Fruyt, F. (2023). Decomposing social-emotional skill rubrics: A methodological approach to examining acquiescence in rubrics’ ratings. Assessment in Education: Principles, Policy & Practice, 1–19. https://doi.org/10.1080/0969594X.2023.2271679
  • Pinedo, L., Panadero, E., Fernandez-Ruiz, J., & Rodríguez-Hernández, C. (2023). Students’ experiences in self-assessment: Training, processes and feedback use in secondary and higher education. Assessment in Education: Principles, Policy & Practice.
  • Rasooli, A., DeLuca, C., Cheng, L., & Mousavi, A. (2023). Classroom assessment fairness inventory: A new instrument to support perceived fairness in classroom assessment. Assessment in Education: Principles, Policy & Practice, 1–24. https://doi.org/10.1080/0969594X.2023.2255936
