Abstract
We question widely accepted practices of publishing articles that present quantified analyses of qualitative data. First, articles are often published that provide only very brief excerpts of the qualitative data themselves to illustrate the coding scheme, tacitly or explicitly treating the coding results as data. Second, articles are often published that treat interrater reliability solely as a matter of justifying the coding scheme, without further attention to the variance that the coding process makes evident. We argue that authors should not treat coding results as data but rather as tabulations of claims about data, and that it is important to discuss the rates and substance of disagreements among coders. We propose publication guidelines for authors and reviewers of this form of research.
ACKNOWLEDGMENTS
This essay draws significantly on Hammer and Louca (2008). An earlier version benefited from feedback by Mike Stieff and Bob Mislevy and a more recent version from feedback by Michelle Wilkerson-Jerde.
Notes
1. We believe that this will be familiar enough to readers that we do not need to call out particular examples.
2. Often in the natural sciences it is possible and important to reproduce phenomena, but for any particular experiment all that remains are the records.
3. Our emphasis here is on the publication of this form of research; we recommend that readers interested in methodology consult Chi's (1997) article and these other accounts.
4. In other respects, the article followed the same widely accepted practices we are calling into question.
5. Nor, it may be important to note, are we arguing that qualitative research must eventually lead to quantification. To be sure, our own work (e.g., Berland & Hammer, 2012) is often purely qualitative.