Abstract
A ‘cognote’ system has been developed for coding electronic discussion groups and promoting critical thinking. Previous literature has described the strategy as applied in several academic settings. This article reports the research conducted to establish the inter-rater reliability of the cognote system. The findings suggest three indicators of reliability: (1) raters assign similar grades to students' discussion group contributions; (2) raters predominantly assign the same cognotes to those contributions; and (3) raters select more than 50% of the same text when assigning the same cognotes.