Research Articles

Decomposing social-emotional skill rubrics: a methodological approach to examining acquiescence in rubrics’ ratings
Pages 429-447 | Received 22 Oct 2021, Accepted 10 Oct 2023, Published online: 15 Nov 2023
