Abstract
Position effects (PEs) denote the decreasing probability of a correct item response towards the end of a test. We analysed PEs in the science, mathematics and reading tests administered in the German extension of the PISA 2006 study with respect to their variability at the student and school levels. PEs were strongest in reading and weakest in mathematics. Variability in PEs was found at both levels of analysis. PEs were stronger for male students, for students with a migration background (science and mathematics), and for students with a less favourable socio-economic background (reading). At the school level, PEs were stronger in lower school tracks and in schools with a high proportion of students with a migration background. The covariates’ relationships with the test scores partly mirrored their relationships with PEs. Our findings suggest that PEs should be taken seriously in large-scale assessments, as they have an undesirable impact on the results.
Notes
1. To study the sample selectivity with respect to PEs, we applied the statistical model presented below to the proficiency scores derived solely from the item responses (weighted likelihood estimates; Warm, 1989). In these models, we regressed the latent PE variable on two student variables indicating (1) whether they answered the test evaluation question and (2) whether they were excluded from the sample. Missing values for the test evaluation question were strongly related to PEs (y-standardised effects of −0.83, −0.85 and −0.79 for science, mathematics and reading, respectively). However, the second indicator had only negligible incremental effects on PEs (y-standardised effects of −0.11, −0.06 and −0.04 for science, mathematics and reading, respectively).
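The y-standardised effects reported above are raw regression coefficients divided by the standard deviation of the outcome, so they are expressed in outcome standard-deviation units. A minimal sketch of that computation, using simulated data and an observed outcome in place of the study's latent PE variable (all variable names and effect sizes here are illustrative, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical binary student indicators:
# miss_eval = 1 if the test evaluation question was left unanswered
# excluded  = 1 if the student was excluded from the sample
miss_eval = rng.binomial(1, 0.15, n)
excluded = rng.binomial(1, 0.05, n)

# Simulated person-level position-effect score (a stand-in for the latent
# PE variable; the actual study used a latent regression model)
pe = -0.8 * miss_eval - 0.1 * excluded + rng.normal(0, 1, n)

# OLS regression of the PE score on both indicators (with intercept)
X = np.column_stack([np.ones(n), miss_eval, excluded])
b, *_ = np.linalg.lstsq(X, pe, rcond=None)

# y-standardisation: divide the raw coefficients by the SD of the outcome
b_ystd = b[1:] / pe.std(ddof=1)
print(b_ystd)  # close to the simulated effects, scaled by the outcome SD
```

Note that because the outcome SD exceeds 1 (the indicators add variance beyond the unit residual), the y-standardised effects are slightly smaller in magnitude than the raw simulated coefficients.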