Original Articles

Interrater Agreement and Reliability

Pages 13-34 | Published online: 18 Nov 2009
 

Abstract

The distinction between interrater (or interobserver, interjudge, interscorer) "agreement" and "reliability" is discussed. A total of 3 approaches or techniques for the estimation of interrater agreement and reliability are illustrated and compared, using data from a hypothetical study. The 3 approaches are (a) simple percentage of agreement and kappa, (b) simple correlational techniques, and (c) generalizability (G) theory techniques. In discussing the relative advantages and disadvantages of the various approaches, the G theory techniques are emphasized because they are the most comprehensive and flexible, allowing the researcher to isolate multiple sources of measurement error in one study. Some recommendations regarding the "method of choice" for various situations are offered.
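
To make the indices named in the abstract concrete, the sketch below computes the first two approaches, simple percent agreement with Cohen's kappa and a Pearson correlation between two raters. The ratings, variable names, and values are illustrative assumptions, not the article's data, and the generalizability theory analysis is omitted because it requires a full variance-components design rather than a short example.

```python
# A minimal sketch (not the article's own analysis): two of the three approaches
# named in the abstract, computed for two raters on hypothetical ordinal ratings.
from collections import Counter

# Hypothetical ratings by two raters for ten subjects (not the article's data).
rater_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
rater_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
n = len(rater_a)

# (a) Simple percentage of agreement and Cohen's kappa.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
# Chance agreement estimated from each rater's marginal category proportions.
p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
kappa = (p_o - p_e) / (1 - p_e)

# (b) Simple correlational technique: Pearson correlation between the two raters.
mean_a, mean_b = sum(rater_a) / n, sum(rater_b) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(rater_a, rater_b))
var_a = sum((a - mean_a) ** 2 for a in rater_a)
var_b = sum((b - mean_b) ** 2 for b in rater_b)
pearson_r = cov / (var_a * var_b) ** 0.5

print(f"Percent agreement: {p_o:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
print(f"Pearson r:         {pearson_r:.2f}")
```

The contrast the example illustrates is that percent agreement and kappa reward exact matches, whereas a correlation rewards consistent rank ordering even when raters differ systematically in level.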
