Abstract
In method comparison and reliability studies, it is often important to assess agreement between multiple measurements made by different methods, devices, laboratories, observers, or instruments. For continuous data, the concordance correlation coefficient (CCC) is a popular index for assessing agreement between multiple methods applied to the same subjects when none of the methods is treated as a reference. Barnhart et al. (2007) proposed the coefficient of individual agreement (CIA) to assess individual agreement between multiple methods, with and without a reference method, extending the concept of individual bioequivalence from the FDA (2001) guidelines. In this paper, we propose a new CCC for assessing agreement between multiple methods when one of the methods is treated as a reference. We compare the properties of the CCC and the CIA and their dependence on the relative magnitudes of between-subject and within-subject variability. The relationship between the CCC and the CIA, as well as the impact of between-subject variability, is presented algebraically and graphically. Several examples illustrate the interpretation of CCC and CIA values.
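The multi-method CCC proposed in the paper is not reproduced in the abstract. As background, the classic two-method CCC of Lin (1989), which the proposed index generalizes, can be sketched as follows; the function name `lin_ccc` is illustrative, not from the paper.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two methods
    measuring the same subjects (sample version with n in denominators)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()               # biased variances, as in Lin (1989)
    sxy = ((x - mx) * (y - my)).mean()        # covariance
    # CCC = 2*cov / (var_x + var_y + squared mean difference)
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

The CCC equals 1 only under perfect agreement (identical measurements); a constant shift between methods lowers it through the squared mean difference in the denominator, which distinguishes it from the Pearson correlation.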
ACKNOWLEDGMENT
This research is supported by the National Institutes of Health Grant R01 MH70028.