Abstract
Coefficients that assess the reliability of data-making processes (coding text, transcribing interviews, or categorizing observations into analyzable terms) are mostly conceptualized in terms of the agreement exhibited by a set of coders, observers, judges, or measuring instruments. When variation is low, reliability coefficients reveal their dependence on an often neglected phenomenon: the amount of information that reliability data provide about the reliability of the coding process or the data it generates. This paper explores the concept of reliability, simple agreement, three conceptions of chance for correcting that agreement, and sources of information deficiency, and it develops two measures of information about reliability, akin to the power of a statistical test, intended as companions to traditional reliability coefficients, especially Krippendorff's (2004, pp. 221–250; Hayes & Krippendorff, 2007) alpha.
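For orientation, the chance-corrected coefficients the abstract refers to share a common form; the notation below is a standard illustrative rendering, not the paper's own development:

$$\text{coefficient} = \frac{A_o - A_e}{1 - A_e}, \qquad \alpha = 1 - \frac{D_o}{D_e},$$

where $A_o$ is the observed agreement among coders, $A_e$ is the agreement expected under some conception of chance, and $D_o$ and $D_e$ are the corresponding observed and expected disagreements on which Krippendorff's alpha is defined. The choice of $A_e$ (or $D_e$) is where the "three conceptions of chance" mentioned above enter.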
ACKNOWLEDGEMENTS
I am grateful to Ron Artstein for valuable suggestions on an earlier draft of this paper and to Andrew Hayes for encouraging me to simplify access to the information measures.