
Validating performance assessments: measures that may help to evaluate students’ expertise in ‘doing science’

Pages 419-445 | Published online: 28 Dec 2018
 

ABSTRACT

Background: Several measures have been proposed to address persistent validity problems, such as high task-sampling variability, in the assessment of students’ expertise in ‘doing science’. Such measures include working with a priori progression models, using standardised item shells and rating manuals, increasing the number of tasks per student, and comparing different measurement methods.

Purpose: The impact of these measures on instrument validity is examined here with respect to three aspects: structural validity, generalisability and external validity.

Sample: Performance assessments were administered to 418 students (187 girls, ages 12–16) in grades 7, 8 and 9 in the two lowest performance tracks of (lower) secondary school in the Swiss canton of Zurich.

Design and methods: Students worked with printed test sheets on which they were asked to report the outcomes of their investigations. In addition to the written protocols, direct observations and interviews were used as measurement methods. Evidence of the instruments’ validity was reported using different reliability and generalisability coefficients and by comparing our results to those found in the literature.

Results: An a priori progression model was successfully used to improve the instrument’s structural validity. The use of a standardised item shell and rating manual ensured reliable rating of the written protocols (.79 ≤ p₀ ≤ .98; .56 ≤ κ ≤ .97). Increasing the number of tasks per student did not solve the challenge of reducing task-sampling variability. The observed performance differed from the performance assessed via the written protocols.
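The rater-agreement figures above (percentage agreement p₀ and Cohen’s κ) can be computed directly from two raters’ category judgements. A minimal illustrative sketch, using hypothetical ratings rather than the study’s data:

```python
# Illustrative only: percentage agreement (p0) and Cohen's kappa
# for two raters assigning categorical scores. The ratings below
# are hypothetical, not taken from the study.

def agreement_stats(rater_a, rater_b, categories):
    """Return (p0, kappa) for two raters' categorical judgements."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    # Observed agreement: share of items both raters scored identically
    p0 = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
             for c in categories)
    # Kappa corrects observed agreement for chance agreement
    kappa = (p0 - pe) / (1 - pe) if pe < 1 else 1.0
    return p0, kappa

# Example: two raters scoring eight protocols on levels 0-2
a = [0, 1, 2, 2, 1, 0, 1, 2]
b = [0, 1, 2, 1, 1, 0, 1, 2]
p0, kappa = agreement_stats(a, b, categories=[0, 1, 2])
```

Because κ discounts chance agreement, it is always at most p₀, which is why the κ range reported above sits below the p₀ range.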

Conclusions: Students’ performance in doing science can be reliably assessed with instruments that show good generalisability coefficients (ρ² = 0.72 in this case). Even after implementing the different measures, task-sampling variability remains high (σ̂²ₚₜ = 47.2%). More elaborate studies that focus on the substantive aspect of validity must be conducted to understand why students’ expertise as shown in written protocols differs so markedly from their observed performance.
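In generalisability theory, the coefficient ρ² for a persons × tasks (p × t) design relates person variance to person variance plus task-sampling error averaged over the number of tasks. A sketch under assumed, hypothetical variance components (chosen only so the result lands near the reported ρ² = 0.72; these are not the study’s estimates):

```python
# Illustrative sketch (not the authors' analysis): relative
# generalisability coefficient for a persons x tasks (p x t) design.
# The person-by-task interaction variance sigma2_pt is the main
# source of task-sampling variability.

def g_coefficient(sigma2_p, sigma2_pt, n_tasks):
    """Relative G coefficient: person variance divided by person
    variance plus per-task error variance averaged over n_tasks."""
    return sigma2_p / (sigma2_p + sigma2_pt / n_tasks)

# Hypothetical variance components: person variance 0.30,
# person-by-task (incl. residual) variance 0.70, six tasks
rho2 = g_coefficient(sigma2_p=0.30, sigma2_pt=0.70, n_tasks=6)
```

The formula makes the abstract’s tension concrete: even a large σ²ₚₜ share can coexist with an acceptable ρ², because the interaction variance is divided by the number of tasks each student completes.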

Disclosure statement

No potential conflict of interest was reported by the authors.

