An Application of Generalizability Theory to Evaluate the Technical Quality of an Alternate Assessment

Pages 279-297 | Published online: 27 Sep 2013

Abstract

Although federal regulations require the testing of students with severe cognitive disabilities, there is little guidance on how the technical quality of such assessments should be established. Documenting the reliability of scores for alternate assessments is known to be challenging: typical measures of reliability do little to model the multiple sources of error that are characteristic of these assessments. Generalizability theory (G-theory), by contrast, allows researchers to identify sources of error and to analyze the relative contribution of each. This study demonstrates an application of G-theory to examining the reliability of an alternate assessment. A G-study with the facets rater type, assessment attempt, and task was conducted to determine the relative contribution of each facet to observed score variance, and the results were used to estimate the reliability of scores. The assessment design was then modified to examine how changes might affect reliability. As a final step, designs deemed satisfactory were evaluated for the feasibility of adapting them into a statewide standardized assessment and accountability program.
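For context, reliability in a G-study of this kind is typically summarized by a generalizability coefficient. The following is a minimal sketch in standard G-theory notation, assuming a fully crossed students (p) × rater types (r) × tasks (t) × attempts (a) design; the notation is generic and not taken from the article itself:

\[
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta},
\qquad
\sigma^2_\delta = \frac{\sigma^2_{pr}}{n_r} + \frac{\sigma^2_{pt}}{n_t} + \frac{\sigma^2_{pa}}{n_a} + \cdots
\]

Here \(\sigma^2_p\) is the universe-score (student) variance, \(\sigma^2_\delta\) is the relative error variance, and each student-by-facet interaction component is divided by the number of conditions of that facet over which scores are averaged. Modifying the design, for example by increasing the number of tasks \(n_t\) or attempts \(n_a\), shrinks the corresponding error terms, which is how the alternative designs examined in the study can improve reliability.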

Notes

1. There are limitations in treating tasks in this manner. First, the results will not support interpretations of variability due to differences in task difficulty, because task differences cannot be disentangled from task-by-student interactions in this design. In addition, the task variability that is confounded with the interaction might be influenced by the fact that task is not a truly nested facet; it might be slightly underestimated relative to a design in which each student was administered a purely unique pair of tasks.
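In variance-component terms, the confounding described in this note is the standard result for a nested task facet: only a single combined component is estimable. A sketch in generic G-theory notation (not taken from the article):

\[
\sigma^2_{t,\,pt} = \sigma^2_t + \sigma^2_{pt}
\]

so variance due to task difficulty (\(\sigma^2_t\)) cannot be separated from the student-by-task interaction (\(\sigma^2_{pt}\)).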

2. An interaction between rater type and student indicates that students are rank-ordered differently by the two types of raters; in other words, the way in which the two types of raters differ in their ratings depends on the student. For example, if one rater type scores Student A above Student B while the other rater type reverses that order, the interaction is nonzero even when the two rater types are equally severe on average.

3. These percentages are based on the total variance of scores had the scores for students been averaged across a single rater type, a single task, and a single attempt. This should not be confused with the observed total variance of scores, for which students' scores were averaged across two rater types, two tasks, and three attempts.
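The distinction drawn in this note can be made explicit. Assuming the crossed design sketched above (again in generic notation, not the article's), the total variance of a score averaged over \(n_r\) rater types, \(n_t\) tasks, and \(n_a\) attempts is

\[
\sigma^2(\bar X) = \sigma^2_p + \frac{\sigma^2_{pr}}{n_r} + \frac{\sigma^2_{pt}}{n_t} + \frac{\sigma^2_{pa}}{n_a} + \cdots
\]

Setting \(n_r = n_t = n_a = 1\) gives the single-observation total variance on which the quoted percentages are based, whereas \(n_r = 2\), \(n_t = 2\), \(n_a = 3\) gives the observed variance of the averaged scores, which is necessarily smaller.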
