
To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales

Virginia Braun & Victoria Clarke
Pages 201-216 | Received 27 Sep 2019, Accepted 10 Dec 2019, Published online: 26 Dec 2019
 

ABSTRACT

The concept of data saturation, defined as ‘information redundancy’ or the point at which no new themes or codes ‘emerge’ from data, is widely referenced in thematic analysis (TA) research in sport and exercise, and beyond. Several researchers have sought to ‘operationalise’ data saturation and provide concrete guidance on how many interviews, or focus groups, are enough to achieve some degree of data saturation in TA research. Our disagreement with such attempts to ‘capture’ data saturation for TA led us to this commentary. Here, we contribute to critical discussions of the saturation concept in qualitative research by interrogating the assumptions around the practice and procedures of TA that inform these data saturation ‘experiments’, and the conceptualisation of saturation as information redundancy. We argue that although the concepts of data-, thematic- or code-saturation, and even meaning-saturation, are coherent with the neo-positivist, discovery-oriented, meaning excavation project of coding reliability types of TA, they are not consistent with the values and assumptions of reflexive TA. We encourage sport and exercise and other researchers using reflexive TA to dwell with uncertainty and recognise that meaning is generated through interpretation of, not excavated from, data, and therefore judgements about ‘how many’ data items, and when to stop data collection, are inescapably situated and subjective, and cannot be determined (wholly) in advance of analysis.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. In a parallel focus group study, Hennink, Kaiser, and Weber (2019) reported that four focus groups were sufficient for code saturation (94% of all codes and 96% of high prevalence codes were identified). However, meaning saturation (fully understanding the issues identified through code saturation) required five or more groups. Again, this is not dissimilar to the average number of groups across focus group research (e.g. a mean of 8.4 and a median of 5 groups identified by Carlsen and Glenton 2011). Previously, Guest, Namey, and McKenna (2017) had reported that 80% of themes were discoverable in very few (2–3) focus groups, and 90% in 3–6, and claimed three focus groups were enough to identify all of the most prevalent themes. Some have compared (data) saturation in TA from interview and focus group data collection. Namey et al. (2016) reported that eight interviews or three focus groups were necessary to achieve 80% thematic saturation (i.e. 80% of the total number of codes identified), and 16 interviews or five focus groups to achieve 90%. To adequately address a research question focused on evaluation, they recommend a sample size of between 8 and 16 interviews or three and five focus groups. An earlier study had identified five focus groups and nine interviews as the point at which (data) saturation was reached (Coenen et al. 2012).

2. An important wider implication – raised by an anonymous reviewer – is how the inclusion of saturation in these guidelines, and the positioning of saturation as a (required) measure of quality, might have implications that do not just affect the judged quality and publishability of an individual study. In a context where systematic reviews and methods like qualitative synthesis deploy ‘quality controls’ for inclusion, the ramifications are far broader than the individual study, with impacts on what qualitative ‘evidence’ gets seen and heard through such (highly regarded) mechanisms for assessing evidence for developing, for instance, policy, evidence-based practice, and so forth. We do not have scope to do this point justice here, but raise it as a wider quality consideration to be addressed.

3. Ando, Cousins, and Young (2014) are an exception; they describe their method as a modified version of our approach (Braun and Clarke 2006), involving the addition of a second stage of coding clarifying the initial coding, and the review of codes rather than themes for the purpose of creating a codebook. Even so, in claiming that 12 interviews ‘should be a sufficient sample size for thematic analysis’ (p. 7), they nonetheless evoke a singular method of ‘thematic analysis’.

4. The understanding of a ‘deductive’ approach in coding reliability and codebook TA is often rather different from our conceptualisation – of using existing theory as a lens through which to code and interpret the data. In reflexive TA, using interview questions as themes does not represent a deductive approach, just an under-developed analysis (Braun and Clarke 2006).

5. Theoretical saturation – whether interpreted as implying a fixed point or not – requires a concurrent process of data collection and analysis and, crucially, theoretical sampling, practices fairly particular to grounded theory and not typically elements of TA.

Additional information

Notes on contributors

Virginia Braun

Virginia Braun is a Professor in the School of Psychology at The University of Auckland. She is a feminist and (critical) health psychologist and teaches and researches in these areas. She has an ongoing interest in qualitative research and wrote (with Victoria Clarke) the award-winning textbook Successful Qualitative Research (Sage). She has written extensively on thematic analysis (with Victoria and others) and co-edited Collecting Qualitative Data (Cambridge University Press) with Victoria and Debra Gray. She also has a particular interest in the story completion method and recently co-edited (with Victoria, Hannah Frith and Naomi Moller) a Special Issue of Qualitative Research in Psychology dedicated to this method.

Victoria Clarke

Victoria Clarke is an Associate Professor of Qualitative and Critical Psychology at the University of the West of England, Bristol, where she teaches about qualitative methods and sexuality and gender to undergraduate and postgraduate students. She has published an award-winning textbook, Successful Qualitative Research (Sage), and numerous publications on thematic analysis with Virginia Braun, and the edited text Collecting Qualitative Data (Cambridge University Press) with Virginia and Debra Gray. Most recently, Victoria and Virginia, along with Hannah Frith and Naomi Moller, co-edited a Special Issue of Qualitative Research in Psychology on the story completion method.
