Part 2: Knowledge Mobilisation and Engagement

Rethinking policy-related research: charting a path using qualitative comparative analysis and complexity theory

Pages 333-345 | Received 03 Nov 2012, Accepted 16 Nov 2012, Published online: 10 Dec 2012

Abstract

This article argues that conventional quantitative and qualitative research methods have largely failed to provide policy practitioners with the knowledge they need for decision making. These methods often have difficulty handling real-world complexity, especially complex causality, which arises when the mechanism of change is a combination of conditions occurring in a system such as an organisation or locality. A better approach is to use qualitative comparative analysis (QCA), a hybrid qualitative/quantitative method that enables logical reasoning about actual cases, their conditions and how outcomes emerge from combinations of these conditions. Taken together, these comprise a system, and the method works well with a whole-system view, avoiding reductionism to individual behaviours by accounting for determinants that operate at levels beyond individuals. Using logical reduction, QCA identifies causal mechanisms in sub-types of cases differentiated by what matters to whether the outcome happens or not. In contrast to common variable-based methods such as multiple regression, which are divorced from actual case realities, QCA is case-based and rooted in these realities. The use of qualitative descriptors of conditions such as ways of working engages practitioners, while their standardisation enables systematic comparison and a degree of generalisation about ‘why’ questions that qualitative techniques typically do not achieve. The type of QCA described in the article requires conditions and outcomes to be dichotomised as present or absent, which is helpful to practitioners facing binary decisions about whether to do (a) or (b), or whether or not an outcome has been achieved.

Introduction

Policy practitioners do not need to be reminded that the world is complex, but our research methods have been slow to respond to this reality (Head, 2008). Policy action is by definition about causing an outcome that is different from what would otherwise have happened. It requires intervention in complexity. This article argues that the challenge for policy-related research is therefore one of understanding complex causality, where causes are to be found in the conditions that combine in ‘cases’, including the emergence of ‘outcomes’ as a result of these combinations (Ragin, 2000). Complex causality has been demonstrated for many social phenomena and is the context in which much policy intervention occurs (Blackman, 2006; Cooper & Glaesser, 2010; Head, 2008). But we need to go beyond the term as a general description of ‘wicked issues’ and understand what it really means for how our research methods can help inform what policy practitioners should do.

Policy action intervenes in cases such as households with children living below a certain income threshold or organisations failing on a performance measure. These cases are systems, and policy-related research aims ‘to find out how systems work’, the task of all sciences according to Bunge (2004, p. 207). Yet our research methods, in their most widely used conventional forms, are remarkably ill-suited to this task. Many quantitative studies claim to identify the ‘independent’ effects of ‘variables’ on an outcome, but what effects can really be regarded as ‘independent’ in a complex world, and what causal power can a ‘variable’ have when variables have no existence beyond the cases in which their particular values are embodied? In the social world, intervening to change the value of an ‘independent’ variable is unlikely to produce the change in an outcome variable that conventional statistical models predict because social systems are open systems with multiple interactions that determine a possible range of outcomes. For example, a variable can have a different effect on an outcome depending on what combination of other variables it is part of, and this varies from case to case. We see this in the example of tackling health inequalities discussed later.

Qualitative research is often advocated as the best way to capture the complexity of social phenomena, but even rich case studies full of insight about how things happen are very limited in answering ‘why’ questions without the systematic comparison of cases that would enable us to understand causation, which is essential to policy intervention. Qualitative case studies would be all we could use to understand how systems work if all systems were unique, but they are not because systems group into types. So, another way of putting the challenge for policy-related research is to find the combinations of conditions and outcomes that represent sub-types of the system, where the same kinds of mechanism are at work. Policy action is then about moving systems of a particular type from one sub-type to another: for example, moving households with children from below to above a poverty threshold or organisations from poor to good performance. To elaborate further on this challenge, it is not just to find the methods that can do this but to build ‘middle range theory’. This is theory that highlights ‘the heart of the story by isolating a few explanatory factors that explain important but delimited aspects of the outcomes to be explained’ (Hedström & Ylikoski, 2010, p. 61).

With the social sciences increasingly expected to demonstrate relevance and impact from their public funding, it is in their contribution to achieving policy objectives in complex circumstances that this return on investment is frequently promised. It is often argued that complex policy interventions pose methodological difficulties that have greatly limited the amount of useful evidence on which they can be based, especially evaluations (Coote, Allen, & Woodhead, 2004). However, my argument is that it is the aptness of our methods that is the main problem. This is not an argument for greater methodological sophistication: sophisticated methods may impress academic reviewers but risk a loss of transparency for research users faced with what they may see as arcane and impenetrable publications (Brousselle & Lessard, 2011; Oliver & McDaid, 2002). Policy practitioners work under time pressure, and if there is no return from studying research evidence in the form of clear, actionable findings that improve on pre-existing know-how or official guidance, the motivation to seek and use evidence is unlikely to be there.

The lack of evidence may also reflect the ‘independence’ of academic researchers. Although this should mean an engagement with policy interventions based on standards of research integrity, academic detachment can in fact mean that policy challenges either fail to get attention or are framed and investigated in ways that are not useful or actionable. Other research questions may be regarded as more interesting or important, more worthy of academic recognition or more deliverable and therefore publishable (Coote et al., 2004).

This article sets out to rethink policy-related research as about case studies, although not of the conventional qualitative research type, and not as a detached academic exercise but as a process co-produced with practitioners and their ‘case work’, whether individuals, organisations or places. I draw on complexity theory to understand ‘cases’ as systems, and variables as descriptions of system conditions rather than abstracted measures in an unreal statistical space. I argue that where the policy scenario is one of intervening in multiple systems of a particular type, we can use the method of qualitative comparative analysis (QCA) to derive contextualised explanations of causal processes and why interventions work or not. In concluding, I suggest that QCA also has promise as a way of designing interventions.

Complexity and methods

Quantitative analyses are still often regarded as ‘hard’ evidence, despite the difficulty of interpreting what a correlation coefficient or odds ratio actually implies for practical intervention and the refusal of many aspects of the social world to conform to the assumptions that underpin common quantitative techniques (Cooper & Glaesser, 2010). In particular, great care is needed in characterising the world with means and variances when ‘non-averageness’ is typical and a mean, even qualified by a measure of the spread of values around it, has no embodiment in any actual cases (Chapman, 2004; Liebovitch & Scheurle, 2000; Pawson & Tilley, 1997). Similarly, an effect may be demonstrated statistically, but as already noted, it can still be unclear whether it will occur in a particular case because the context of each case can be so different. In some straightforward situations, this approach may be adequate: a correlation may be strong enough among a homogeneous population of cases for an intervention, which shifts the value of an ‘independent variable’, to achieve a desired outcome across many cases as traced by the ‘outcome variable’. But, overall, a lack of understanding of context as well as a dearth of information about the causal complexity involved are major drawbacks of much quantitative policy research (Ahmed, Boutron, Dechartres, Durieux, & Ravaud, 2010; Exworthy, Bindman, Davies, & Washington, 2006; Hoddinott, Britten, & Pill, 2010; Petticrew et al., 2009).

While it might be argued that knowing that something is probably true or likely to occur is better than nothing, treating context as a confounding factor does not help us explain causal processes, which requires a case-based approach. It is the cases we need to know about, not generalisations abstracted from them. We might, however, also problematise the term ‘cases’ – rather than only challenge the reality of variables – and indeed how cases are framed is an interpretive decision (Byrne & Ragin, 2009; Fischer, 2003). But these are judgements we make to the best of our knowledge. Cases, as social systems, are posited as representations of actual reality, while variables are not real and exist only insofar as they are embodied in cases.

This is one of the most serious criticisms of the claim that randomised controlled trials (RCTs) represent the strongest form of evidence about an intervention, since the evidence is based on relationships between variables and does not apply to any particular case but to population averages. For many actual cases, the outcomes could be quite different (Cartwright, 2007). While RCTs may estimate efficacy (can it work?) from large random samples and techniques based on looking for the independent effects of variables on outcomes, they are much less helpful when it comes to effectiveness (does it work?) for actual cases. Nevertheless, ‘evidence’ from RCTs has justified some major policy and spending commitments in health care, such as the growing use of statins and antihypertensives for the preventative treatment of cardiovascular disease (Järvinen, Sievänen, Kannus, Jokihaara, & Khan, 2011). The reality is that the effectiveness and cost-effectiveness of these commitments remain largely unknown because evidence is needed about what happens to people (cases) receiving these drugs in real-world settings (contexts).

A further problem with variable-based methods is that in commonly averaging across many individuals, they can fail to account for determinants that operate at levels above and beyond individuals (Kelly, 2010). The statistical technique of multi-level modelling addresses this to some extent, including being able to explore interaction across levels of data such as individual pupils and schools, but is still a method concerned with explaining association between individual-level variables (Byrne, 2011). This kind of reductionism runs the risk of mis-directing policy action to a less effective or ineffective level of intervention at the individual level (Ormerod, 2010). Seddon (2005, p. 33) reflects this in his example of managers misconstruing underperformance as a problem needing intervention in the performance of individual workers rather than in ‘the system’ and ‘the way the work works’, which are characteristics of the organisation and not individuals. It is also an important argument with regard to the current policy fashion in the USA and UK of applying behavioural economics to ‘nudge’ individuals into making better choices, rather than regulating behaviour at a collective level. This has largely individualised explanations of behaviour, ignoring causal systems beyond the individual.

People are in fact often supportive of collective action at a larger system level to tackle policy challenges, recognising and accepting that their individual behaviour can be determined by policy action at this level, such as banning smoking in enclosed public spaces (Maryon-Davis & Jolley, 2010). Reductionism to individuals can also run a considerable risk of not seeing an emergent property that becomes a condition of a larger system level, where it has transformative and damaging effects on individuals. It may even be thought that individual actions are avoiding these effects. The global economic crisis that started in 2008 is a case in point. Financiers successfully justified their pyramid selling of credit risk by arguing that it helped to make the financial system more resilient by dispersing risk. Instead, their products were creating a critical vulnerability in whole economic systems by making very high risk so opaque that its escalation was not identified, assessed or acted on (Tett, 2009). This is an example of complex causation, with massive high-risk debt the emergent property. The escalation to a global crisis that first manifested itself in a wave of US mortgage defaults was caused by a combination of conditions: over-dependency on the finance sector for income and profits; the sector's incentive-driven search for profits that built a real estate bubble; and neoliberal government policies thought to be promoting growth and employment with light regulation (Tomaskovic-Devey & Lin, 2011).

Outcomes, therefore, happen as a result of particular causal combinations of conditions. These occur in systems but how these systems are defined is not always straightforward. It may be an empirical decision, such as the operational definition of a household or local economy, or theoretically driven, such as a comparison of national welfare regimes. It is often policy-driven, such as administrative units, professional groups or service users. Meadows’ (2009, p. 11) general description of a system as ‘an interconnected set of elements that is coherently organized in a way that achieves something’ is useful. She adds that, ‘… a system must consist of three things: elements, interconnections, and a function or purpose’. This helps us to think in terms of wholes and their parts at the level where causes are transformatory and produce differences in kind.

Meadows' definition also recognises systems as themselves agents, so by definition, any outcome from an intervention in a system is co-produced by its pre-existing conditions (such as resources, boundaries, ways of working and reasons for doing things), the intervention and the interaction between pre-existing conditions and the intervention.

Complexity and causation

Having made an argument about the system level of intervention needed for many policy challenges, this section develops the notion of a system further by drawing directly on complexity theory. Social systems have a structural character that arises from the persistence and distinctiveness of members' behaviours. These create stable systems – systems of the same type – during normal times. In abnormal times, the system may change type, such as in response to an external stimulus that causes one or more conditions that define the system to alter their state, with an effect that creates a qualitatively different outcome to that before the change. This process is known as emergence and is very different from the incremental change that may be seen during normal times. It is a change in kind.

What is nearly always evident from investigating emergence is that it is seldom simple, from unravelling the chains of cause and effect to issues of framing and interpretation. Applied social science is often most interested in cause and effect in terms of what conditions lead to an outcome. One example we will consider below is the conditions in a local area associated with a particular change in kind: whether health inequality is widening or narrowing. But framing and interpretation can be fraught with difficulties (Fischer, 2003; Nutley, Walter, & Davies, 2007). These range from defining the system of interest, such as in the health inequality example what kind of local area, and with what conditions (see, e.g. Macintyre, Maciver, & Sooman, 1993), to understanding the meaning of phenomena, such as how health inequality is constructed by the nature of policy discourse for the purpose of intervention, and therefore what constitutes widening or narrowing (see, e.g. Graham, 2004). Although policy often resolves these issues with its own definitions, critical debate may continue about how appropriate these are.

The complexity of emergence has become a field of interdisciplinary study in its own right over the past three decades. Complexity theory has identified important features of emergent social phenomena, notably how changes in kind do not happen incrementally in a linear fashion but as discontinuities marked by qualitative transformations from one type to another (Byrne, 1998; Castellani & Hafferty, 2009; Cilliers, 1998; Eve, Horsfall, & Lee, 1997). The future state of a social system, therefore, can rarely be read from linear projections. It will be one of a number of possible future states generally not sitting on linear trajectories: scenarios rather than predictable single futures. These possible scenarios are called ‘attractors’ in complexity theory because they are dynamic but relatively stable states from which a system moves after it is unsettled by an internal or external change and to which it arrives after a transformation. Attractors are types of system state: in other words, distinct combinations of conditions that represent the outcome of a causal process that has operated as a pathway to that outcome. What drives the change is combination, or complex causality. What future attractor comes about depends on the system's initial conditions and then what causal combinations occur as a result of a change such as a policy intervention. QCA brings to this, as we shall see, a means to investigate these combinations empirically by grouping cases into sets and applying logical reasoning to reduce causal combinations of conditions to ‘the heart of the story’ (Hedström & Ylikoski, 2010, p. 61).

Other insights from complexity theory capture the dynamics of social systems very well. These include how the effect of two or more conditions that act together may not be the sum of their separate effects (as assumed by many conventional statistical techniques) but a contingent and emergent outcome of the relationship between them (as described by many qualitative case studies, but often not in a way that is generalisable regarding the mechanisms involved). The same condition may also be associated with different outcomes depending on its combination with other conditions. Complexity theory bridges the quantitative and qualitative, especially with its focus on qualitative states and the thresholds at which systems transition between states that are regarded on objective or normative grounds to be qualitatively different.

Causal pathways, actual or hypothetical, are also ‘theories of change’ (Weiss, 1998). These are ‘if … then’ statements of how and why something is expected to happen that link activities, contexts and outcomes. The task of describing theories of change is a fundamentally qualitative one because these are stories, which is often how scenarios in scenario planning exercises are represented. Although stories can be rich ways of achieving an understanding of what something is like, this is not so true of why something happens. Systematic comparison is often missing, without which we cannot get to the mechanisms of change, making generalisation very difficult (Ragin, 2000). Causal explanations call for a method that is more than narrative. For policy purposes, I suggest an important part of the answer here is tales of two halves: binary scenarios.

Despite the complexity faced by researchers and policy practitioners, it is in fact useful to regard causation as a binary question of two different states: a cause or an outcome is either present or absent. This is illustrated in justice systems where doubt and probability feature but the goal is to establish innocence or guilt, a binary outcome. Policy practitioners generally want to know whether to do (a) or (b), choices that will often need narrative to describe the two options, such as existing practice and a new practice, but which are binary choices between different conditions, just as whether a policy objective is achieved or not is a binary question (or should be, if the policy objective is clearly described).

My position that causation is usefully treated as binary in policy-related research does not mean that it is not combinational. Outcomes often happen from more than one causal condition operating together as sufficient to change an outcome. Single conditions may still be important, either because they are necessary in a combination or even sufficient on their own. But rather than expressing this using odds ratios or descriptions of degree, causal explanations need knowledge about whether a single condition is sufficient (on its own) or necessary (in combination) for an outcome to occur: for a system to change. In this approach, binarisation of conditions as either present or absent not only resonates with how decisions are often made but helps achieve clarity in looking at combinations that would otherwise be very complicated if ordinal or continuous values were used. This will be apparent when we come to Figure 1 below.
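In crisp-set terms, necessity and sufficiency reduce to simple counting over cases. The following is a minimal sketch in Python; the condition/outcome pairs are invented for illustration and are not data from the study:

```python
# Crisp-set tests of necessity and sufficiency for one binary condition.
# Each pair is (condition present?, outcome occurred?) for one case.
cases = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (1, 1)]  # invented data

def sufficiency(cases):
    """Share of cases WITH the condition that show the outcome;
    1.0 would mean the condition is sufficient in these cases."""
    outcomes = [o for c, o in cases if c == 1]
    return sum(outcomes) / len(outcomes)

def necessity(cases):
    """Share of cases WITH the outcome that have the condition;
    1.0 means the outcome never occurred without the condition."""
    conditions = [c for c, o in cases if o == 1]
    return sum(conditions) / len(conditions)

print(sufficiency(cases))  # 0.75: present but not always followed by the outcome
print(necessity(cases))    # 1.0: every outcome case had the condition
```

This pattern – a condition necessary but not sufficient on its own – is exactly what the case study below reports for individual championing.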

Figure 1. A truth table of Spearhead areas, binary conditions and outcomes.

Change is what drives research questions but also policy projects. A concern of policy making is to change the state of systems. Variables can represent the state of a system on key parameters such as income, health or performance, and it is these conditions – as they characterise cases – rather than relationships between variables abstracted from any grounding in cases that are the targets for policy intervention. Systems thinking is therefore a crucial conceptual aid in understanding empirical data about system states. This is put very well in a World Health Organisation report on the topic edited by de Savigny and Adam (2009, p. 19), with its warning about unintended consequences a good introduction to the case study that follows:

Work in fields as diverse as engineering, economics and ecology shows systems to be constantly changing, with components that are tightly connected and highly sensitive to change elsewhere in the system. They are non-linear, unpredictable and resistant to change, with seemingly obvious solutions sometimes worsening a problem.

A case study

By way of illustration, the following example is taken from a study I led with colleagues at Durham University funded by the UK National Institute for Health Research (Blackman, Wistow, & Byrne, 2011). This sought to understand how to narrow geographical inequalities in health in England. The cases were local authority areas with high rates of deprivation and premature mortality. The worst 20% of local authorities based on these measures were designated by the government of the time as ‘Spearhead areas’. This status meant that health improvement initiatives were focused on them in an attempt to narrow their mortality rates compared with the national average (Blackman, 2006). The cases are therefore local systems with elements that included: local services and contextual conditions in the area; interconnections between these; and the joint purpose of these services – as mandated by Spearhead status – to narrow health inequalities.

Some of these areas were succeeding in narrowing their mortality gap while others were not, raising the question of what accounted for this difference in outcomes. The study's research design was based on identifying local conditions thought to be relevant to improving population health and testing whether the presence or absence of these conditions explained whether the mortality gap narrowed over the period 2005–2007 among a sample of 27 Spearhead areas. Analysis was undertaken separately for the major causes of premature mortality; this example uses the cancer analysis. The local conditions were:

  • How well the commissioning of local services conformed to ‘best practice’ in deploying services to tackle cancer mortality.

  • The quality of strategic partnership working across services.

  • The quality of workforce planning.

  • How often local health partnerships reviewed progress with tackling the area's premature mortality rate from cancer compared with the national average (its ‘cancer gap’).

  • Whether tackling the area's cancer gap was individually championed in the area.

  • Whether the area's organisational culture was aspirational or complacent.

  • The area's score on a national index of multiple deprivation.

  • The area's spending on cancer services.

  • The local crime rate.

  • The quality rating of the local NHS Primary Care Trust made by a national audit body.

Some of these conditions were assessed using primary data from a structured survey of key informants in the areas, and some drew on secondary data sources. All were constructed as binary measures of whether the condition was present or absent, such as exemplary or good practice compared with basic or less than basic practice, and a higher or lower index of deprivation. The version of QCA known as crisp-set QCA was used to identify sets of such conditions associated with either a narrowing cancer gap or a not narrowing gap. Crisp-set QCA uses binarised conditions in the analysis, in contrast to fuzzy-set QCA, which uses ordinal or continuous values but, for the reasons discussed above, is not my favoured version. There was extensive consultation with practitioners about what data to collect, what thresholds to use for binarisation and how to interpret the findings, sharing the research design and analysis as co-production.
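The mechanics of this binarisation step can be sketched in a few lines of Python with pandas. The area names, measures and thresholds below are invented stand-ins, not the study's data; the point is only how continuous measures are dichotomised at agreed thresholds and how identical configurations are then tabulated:

```python
import pandas as pd

# Invented raw measures for four hypothetical areas.
raw = pd.DataFrame({
    "area": ["A", "B", "C", "D"],
    "deprivation_index": [38.2, 24.1, 41.5, 19.8],
    "spend_per_head": [61.0, 48.5, 55.3, 44.0],
    "champion": [1, 1, 0, 1],       # already binary (survey item)
    "gap_narrowing": [1, 0, 1, 0],  # the outcome
})

# Crisp-set binarisation: dichotomise each condition at a threshold
# (thresholds here are arbitrary; in the study they were agreed with
# practitioners).
conditions = pd.DataFrame({
    "higher_deprivation": (raw["deprivation_index"] >= 30).astype(int),
    "higher_spend": (raw["spend_per_head"] >= 50).astype(int),
    "champion": raw["champion"],
})

# A truth table groups identical configurations and tallies outcomes.
truth_table = (
    conditions.assign(outcome=raw["gap_narrowing"])
    .groupby(list(conditions.columns))["outcome"]
    .agg(n="size", narrowing="sum")
    .reset_index()
)
print(truth_table)
```

Each row of the resulting table is a configuration of conditions with a count of how many cases share it and how many achieved the outcome, which is the form of data QCA then minimises.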

Before showing how QCA worked in this example, the following section reports conventional regression analyses of the data. These were undertaken for this article to illustrate the different picture that emerges and the limitations of regression in capturing the complexity behind whether the cancer gap narrowed.

A multiple regression was first carried out, using the statistical package SPSS. The outcome variable was the change in relative differences in mortality between a local area and the national average. The independent variables were all retained as continuous or ordinal except for working culture, which was dummy coded because it is a categorical variable. Stepwise analysis eliminated all the variables except for working culture, with an R² of 0.43, pointing to this one factor – whether or not narrowing the cancer gap was individually championed in the area – as the key explanation for whether the gap narrowed. Various interactions were explored and were also eliminated. While ‘explaining’ 43% of the variation in the mortality gap outcome might be regarded as quite a good result, it is impossible to apply this variable-based finding to particular cases. This is because in any particular case the finding may not apply, and the finding itself is based on holding all other conditions constant, an unreal situation. Moreover, most of the variation in the outcome remains unexplained.

We can use the binarised variables in a logistic regression analysis, which is a technique that begins to move us from the unreal world of disembodied variables to the real world of cases. It is still a technique based on ‘explaining’ the values of a single variable, but the use of categorical variables takes us closer to system states as the focus of explanation. The stepwise method in logistic regression produced a best fit model that included three of the variables, defined as follows by the value associated with a narrowing gap: a basic standard of commissioning (this is a surprising finding discussed below); individual championing; and a higher level of spending on cancer. These variables successfully predicted the outcomes for all the narrowing cases and all but two of the not narrowing cases. It therefore appears that these three conditions need to be in place for narrowing to occur. However, this is not correct because some of the narrowing areas have a higher standard of commissioning or lower spend on cancers. All have individual championing, but so do some of the areas where the gap was not narrowing. Logistic regression fails to capture the complexity of what is going on. To do that, it is necessary to get closer to the individual cases, which means turning to a case-based method, QCA.
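For readers who want to see the shape of these two analyses, the following sketch reproduces them with simulated data in Python using statsmodels (the study itself used SPSS). The variable names, effect sizes and data are invented, and no stepwise selection is performed; the sketch simply fits the full models:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 27  # same number of cases as the study; the data here are simulated

# Invented binary conditions and a continuous outcome (change in the gap).
champion = rng.integers(0, 2, n)
spend = rng.integers(0, 2, n)
commissioning = rng.integers(0, 2, n)
gap_change = 0.6 * champion + 0.3 * spend + rng.normal(0, 1.0, n)

X = sm.add_constant(np.column_stack([champion, spend, commissioning]))

# Multiple (linear) regression on the continuous outcome.
ols = sm.OLS(gap_change, X).fit()
print(ols.rsquared)  # share of variance 'explained', analogous to the R² above

# Logistic regression on the binarised outcome (narrowing vs not).
narrowing = (gap_change > gap_change.mean()).astype(int)
logit = sm.Logit(narrowing, X).fit(disp=0)
print((logit.predict(X) > 0.5).astype(int))  # predicted class per case
```

Even where such a model predicts most cases correctly, the coefficients describe variables averaged across cases, not the configurations present in any one case, which is the gap QCA addresses.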

A first step in QCA is to organise the cases and their conditions in what is known as a ‘truth table’. This is shown in Figure 1, where each of the 27 cases is listed together with their conditions (binarised as present or absent) and outcomes (narrowing or not narrowing). Running QCA organises the cases into sets, or combinations of conditions with common outcomes, and enables these to be minimised by logical reduction so that only conditions that consistently differentiate the sets are included (a minimal sketch of this reduction step follows the sets listed below). There can be more than one set associated with the same outcome, and the same condition may behave differently in different sets. This is what we see. Six sets were generated, three associated with narrowing gaps and three associated with not narrowing gaps. The narrowing sets were as follows, with the number of narrowing cases compared with all cases with that combination shown in parentheses:

  1. Individual championing and higher spending on cancers (9/9).

  2. Individual championing and a basic level of workforce planning and less frequent monitoring and higher deprivation and higher crime and an aspirational organisational culture (3/3).

  3. Individual championing and lower deprivation (7/9).
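The logical reduction that produces sets like these can be illustrated in a few lines of Python. This shows only the core combining rule of Boolean minimisation (full QCA software also selects prime implicants and handles contradictory rows); the condition names and truth-table rows are invented:

```python
from itertools import combinations

# Conditions in a fixed order; a value of None means "dropped out".
CONDITIONS = ("champion", "higher_spend", "lower_deprivation")  # invented names

def merge(a, b):
    """Reduction rule A*B + A*~B = A: if two configurations differ in
    exactly one still-specified condition, that condition is redundant."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1 and a[diffs[0]] is not None and b[diffs[0]] is not None:
        reduced = list(a)
        reduced[diffs[0]] = None
        return tuple(reduced)
    return None

def minimise(rows):
    """Merge configurations sharing the same outcome until no pair can
    be combined further (the core step of QCA's logical minimisation)."""
    current = set(rows)
    while True:
        merged, used = set(), set()
        for a, b in combinations(current, 2):
            m = merge(a, b)
            if m is not None:
                merged.add(m)
                used.update({a, b})
        nxt = (current - used) | merged
        if nxt == current:
            return current
        current = nxt

def label(cfg):
    """Readable description of a (possibly reduced) configuration."""
    return " AND ".join(
        name if value == 1 else f"not {name}"
        for name, value in zip(CONDITIONS, cfg)
        if value is not None
    )

# Two toy truth-table rows that both led to a narrowing outcome:
for cfg in minimise([(1, 1, 0), (1, 1, 1)]):
    print(label(cfg))  # "champion AND higher_spend": deprivation drops out
```

Applied to a full truth table, repeated passes of this rule are what strip each set down to ‘the heart of the story’: only conditions that consistently differentiate outcomes survive.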

As indicated by the regression analyses, individual championing appears to be necessary to achieve a narrowing gap since it is in all three narrowing combinations. This finding validates the emphasis placed upon championing in recent Department of Health guidance on reducing cancer inequality in England (National Cancer Action Team, 2010). However, this condition is not sufficient to achieve a narrowing gap since it always exercises its effect in combination with other conditions.

Turning to the three sets with outcomes that were not narrowing, these are not simply the opposite of the narrowing sets but different combinations – to be avoided – that appear to stop narrowing from happening. These were:

  1. Good/exemplary commissioning and good/exemplary strategic partnership working and good/exemplary workforce planning and an aspirational organisational culture (6/6).

  2. Complacent working culture and higher crime and lower spend on cancer and lower PCT rating and lower deprivation (2/2).

  3. Complacent working culture and higher crime and basic workforce planning and less frequent monitoring (4/5).

There are some surprising results here in both the narrowing and not narrowing combinations. Seemingly poor practices – basic workforce planning and less frequent monitoring – appear in narrowing sets. The first combination listed in the not narrowing sets appears to comprise entirely what might be regarded as best practice. Some conditions – an aspirational organisational culture, basic workforce planning, higher crime and lower deprivation – have different effects depending on the combination they are in.

This complexity is a feature of the real world, as are surprising results that initially seem counter-intuitive. The QCA findings are fully discussed in Blackman et al. (2011) and raise interesting questions about whether what is allegedly best practice may actually cause distracting and counter-productive bureaucracy beyond a certain effective level of planning and organising. While ‘cause’ needs to be used with care when discussing results like this based on association, establishing association is necessary for establishing causation, even though the actual causal arguments require theoretical explanation (and in this example included validation drawing on practitioner experience).

QCA successfully handles this kind of complexity, despite the apparent simplicity of its truth tables and the Boolean expressions used to find sets that essentially represent different sub-types of the cases. Things get more complicated with fuzzy-set QCA, which is beyond the scope of this article but, more importantly, does not deliver the decisiveness that is an attraction of crisp-set QCA and its requirement that conditions are binary. However, to regard the crisp-set technique as simple is very misleading, since considerable thought and work have to go into data collection, data reduction, definitions and threshold judgements. This includes judgements about the thresholds at which causal combinations are regarded as sufficient when some contradictory cases are evident (Cooper & Glaesser, 2010; Rihoux, Rezsöhazy, & Bol, 2011).

The technique is not alone among analytical methods in entailing judgement and simplifying assumptions, and has the advantage that these are made explicit. If there are competing explanatory theories, these can be tested with the same method of transparent and systematic comparison based on actual cases rather than reified variables. This improves the robustness of case study research by using a systematic approach akin to the experimental method but, unlike much experimental research, QCA is able to reflect real-world contexts where outcomes often arise from combined rather than independent effects. Where single conditions exercise effects regardless of any combination with other conditions, or in a limited range of such combinations, this is treated as evidence of sufficient or necessary causation rather than an effect artificially isolated by controlling for the effects of other variables.

Conclusions

This article has sought to show how QCA handles the complexity faced by policy practitioners in a theoretically informed way that overcomes the serious limitations of other quantitative and qualitative methods. It captures causal complexity by operationalising it as processes of combination, while working with the binary decision-making options that practitioners also face. It creates opportunities for practitioners to contribute their tacit knowledge and to define important thresholds between options and between policy achievement and failure.

The article has also argued for a systems approach that avoids reductionism by thinking in terms of wholes and their parts at the level where causes are transformatory and produce differences in kind. Thus, the conditions used in the case study of health inequalities were quite coarse-grained, but this was because they were defined at the level of a whole system, the Spearhead area. This was a system with real meaning for practitioners, both because it was their sphere of activity and because it framed outcome measurement.

The descriptors used to define conditions and levels of achievement against them meant that the learning made possible went far beyond conventional performance management and its key performance indicators, because the mechanisms are described. This goes beyond misconceiving policy practice as a technical process based on scientific evidence to recognising in co-production ‘… a complex blend of factual statements, interpretations, opinions, and evaluations’ (Majone, 1989, p. 63). As Head (2008, p. 115) comments: ‘… the fundamental challenge for researchers is to develop new thinking about the multiple causes of problems, opening up new insights about the multiple pathways and levels required for better solutions, and gaining broad stakeholder acceptance of shared strategies and processes’.

Evidence, however, is just one of multiple streams into policy action and is rarely if ever unfiltered by bureaucratic, business and political interests and considerations (Flitcroft, Gillespie, Salkeld, Carter, & Trevena, 2011; Kingdon, 1995; Schön & Rein, 1994; Stevens, 2011). The article has argued that ‘evidence’ itself can have serious limitations, if not actually mis-represent the social world in the case of much variable-based research or provide little by way of practical tools in the case of many intensive qualitative studies. Both of these kinds of research can have their place in exploring data but, by identifying the mechanisms at work using practically based descriptors, QCA can be used to furnish decision-making tools.

It is interesting to note that the technique of logic modelling sometimes used to plan programme interventions has parallels with QCA. Logic models developed as a tool for planning complex, comprehensive programmes aimed at systemic change in an outcome, drawing on theory-based evaluation (Befani, Ledermann, & Sager, 2007; Kellogg Foundation, 2004; Weiss, 1998). A logic model is a roadmap of ‘if … then’ statements describing how a programme is expected to work. These statements link descriptions of resources, activities, outputs, outcomes and impacts, all of which can be framed as conditions for the purpose of translation into a QCA model. Thus, a health programme to introduce ‘cancer equality champions’ into disadvantaged areas not narrowing their cancer inequality gap would be expected to work in certain configurations of conditions, all explicitly stated in an openly available model that could be re-visited to evaluate whether this did indeed work and, if not, in what conditions it failed to do so. This could provide a common reference point both for sharing the assumptions of a policy intervention and for informing debate about whether it works. In addition, QCA offers intriguing possibilities for moving beyond observed configurations to designing new configurations that could be more effective or efficient in achieving policy goals (Fiss, 2007).

I have attempted to demonstrate that between complexity theory and QCA, we have a means of organising our knowledge of social systems in a way that accords well with how the world works. While investigating continuous or ordinal values may be interesting social science in exploring how incremental conditions vary together, for policy and practice purposes there is a case for going binary: this option rather than another. Binarisation does not rule out multiple pathways and there may not be one ‘right’ option, so causal complexity is still captured. Similarly, for any one condition or set, there may be different options for binarisation, using alternative thresholds that produce different outcomes. But overall there is much to be said for the clarity of the crisp-set approach. Not only does it identify what is necessary and sufficient, but in doing so may identify activities thought to be necessary to an outcome but revealed not to be so, and levels of performance thought to be best practice but that are in fact unnecessary or even counter-productive.

Knowledge from investigations using these techniques will always be uncertain given the contingencies of random variation and measurement and coding errors. However, by being explicit about cases, conditions, thresholds and outcomes – discussed and co-produced as part of a joint research and policy endeavour – there is the promise of agreeing that we have the best possible knowledge, theoretically and practically grounded, about how to move a system from one state to another.

Acknowledgements

Thanks are due to Jon Bannister, Irene Hardill, Dave Byrne and four anonymous referees for helpful comments. The National Institute for Health Research (NIHR) Service Delivery and Organisation (SDO) programme provided the funding for the QCA project. Further details can be found at: http://www.sdo.nihr.ac.uk/projdetails.php?ref=08-1716-203. The views expressed in this report are those of the authors and do not necessarily reflect the views of the NHS or Department of Health.

Notes on contributor

Tim Blackman is Professor of Sociology and Social Policy at The Open University, and Pro Vice-Chancellor for Research, Scholarship and Quality. His main interests are in how the social sciences can help to make decisions in complex conditions.

References

  • Ahmed, N., Boutron, I., Dechartres, A., Durieux, P., & Ravaud, P. (2010). Applicability and generalisability of the results of systematic reviews to public health practice and policy: A systematic review. Trials, 11(1), 20. doi: 10.1186/1745-6215-11-20
  • Befani, B., Ledermann, S., & Sager, F. (2007). Realistic evaluation and QCA: Conceptual parallels and an empirical application. Evaluation, 13(2), 171–192. doi: 10.1177/1356389007075222
  • Blackman, T. (2006). Placing health. Bristol: Policy Press.
  • Blackman, T., Wistow, J., & Byrne, D. (2011). A qualitative comparative analysis of factors associated with trends in narrowing health inequalities in England. Social Science & Medicine, 72(12), 1965–1974. doi: 10.1016/j.socscimed.2011.04.003
  • Brousselle, A., & Lessard, C. (2011). Economic evaluation to inform health care decision-making: Promise, pitfalls and a proposal for an alternative path. Social Science & Medicine, 72(6), 832–839. doi: 10.1016/j.socscimed.2011.01.008
  • Bunge, M. (2004). How does it work? The search for explanatory mechanisms. Philosophy of the Social Sciences, 34(2), 182–210. doi: 10.1177/0048393103262550
  • Byrne, D. (1998). Complexity theory and the social sciences. London: Routledge.
  • Byrne, D. (2011). Applying social science. Bristol: Policy Press.
  • Byrne, D., & Ragin, C. (2009). The SAGE handbook of case-based methods. London: Sage.
  • Cartwright, N. (2007). Are RCTs the gold standard? BioSocieties, 2(1), 11–20. doi: 10.1017/S1745855207005029
  • Castellani, B., & Hafferty, F. W. (2009). Sociology and complexity science: A new field of inquiry. Berlin: Springer.
  • Chapman, J. (2004). System failure: Why governments must learn to think differently. London: Demos.
  • Cilliers, P. (1998). Complexity and postmodernism. London: Routledge.
  • Cooper, B., & Glaesser, J. (2010). Contrasting variable-analytic and case-based approaches to the analysis of survey datasets: Exploring how achievement varies by ability across configurations of social class and sex. Methodological Innovations Online, 5(1), 4–23.
  • Coote, A., Allen, J., & Woodhead, D. (2004). Finding out what works: Understanding complex, community-based initiatives. London: King's Fund.
  • Eve, R., Horsfall, S., & Lee, M. (1997). Chaos, complexity, and sociology: Myths, models, and theories. Thousand Oaks: Sage.
  • Exworthy, M., Bindman, A., Davies, H., & Washington, E. A. (2006). Evidence into policy and practice? Measuring the progress of U.S. and U.K. policies to tackle disparities and inequalities in U.S. and U.K. health and health care. The Milbank Quarterly, 84(1), 75–109. doi: 10.1111/j.1468-0009.2006.00439.x
  • Fischer, F. (2003). Reframing public policy: Discursive politics and discursive practices. Oxford: Oxford University Press.
  • Fiss, P. C. (2007). A set-theoretic approach to organizational configurations. Academy of Management Review, 32(4), 1180–1198. doi: 10.5465/AMR.2007.26586092
  • Flitcroft, K., Gillespie, J., Salkeld, G., Carter, S., & Trevena, L. (2011). Getting evidence into policy: The need for deliberative strategies? Social Science & Medicine, 72(7), 1039–1046. doi: 10.1016/j.socscimed.2011.01.034
  • Graham, H. (2004). Tackling inequalities in health in England: Remedying health disadvantages, narrowing health gaps or reducing health gradients. Journal of Social Policy, 33(1), 115–131. doi: 10.1017/S0047279403007220
  • Head, B. W. (2008). Wicked problems in public policy. Public Policy, 3(2), 101–118.
  • Hedström, P., & Ylikoski, P. (2010). Causal mechanisms in the social sciences. Annual Review of Sociology, 36, 49–67. doi: 10.1146/annurev.soc.012809.102632
  • Hoddinott, P., Britten, J., & Pill, R. (2010). Why do interventions work in some places and not others: A breastfeeding support group trial. Social Science & Medicine, 70(5), 769–778. doi: 10.1016/j.socscimed.2009.10.067
  • Järvinen, T. L. N., Sievänen, H., Kannus, P., Jokihaara, J., & Khan, K. M. (2011). The true cost of pharmacological disease prevention. British Medical Journal, 342, d2175. doi: 10.1136/bmj.d2175
  • Kellogg Foundation (2004). Using logic models to bring together planning, evaluation, and action: Logic model development guide. Battle Creek: Kellogg Foundation.
  • Kelly, M. P. (2010). The axes of social differentiation and the evidence base on health equity. Journal of the Royal Society of Medicine, 103(7), 266–272. doi: 10.1258/jrsm.2010.100005
  • Kingdon, J. W. (1995). Agendas, alternatives and public policies. New York: Addison-Wesley.
  • Liebovitch, L. S., & Scheurle, D. (2000). Two lessons from fractals and chaos. Complexity, 5(4), 34–43. doi: 10.1002/1099-0526(200003/04)5:4<34::AID-CPLX5>3.0.CO;2-3
  • Macintyre, S., Maciver, S., & Sooman, A. (1993). Area, class and health: Should we be focusing on places or people? Journal of Social Policy, 22(2), 213–234. doi: 10.1017/S0047279400019310
  • Majone, G. (1989). Evidence, argument and persuasion in the policy process. New Haven: Yale University Press.
  • Maryon-Davis, A., & Jolley, R. (2010). Healthy nudges – When the public wants change and politicians don't know it. London: Faculty of Public Health.
  • Meadows, D. H. (2009). Thinking in systems: A primer. London: Earthscan.
  • National Cancer Action Team (2010). Reducing cancer inequality: Evidence, progress and making it happen. London: Department of Health.
  • Nutley, S. M., Walter, I., & Davies, H. T. O. (2007). Using evidence: How research can inform public services. Bristol: Policy Press.
  • Oliver, A., & McDaid, D. (2002). Evidence-based health care: Benefits and barriers. Social Policy & Society, 1(3), 183–190. doi: 10.1017/S1474746402003020
  • Ormerod, P. (2010). N squared: Public policy and the power of networks. London: Royal Society of Arts.
  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
  • Petticrew, M., Tugwell, P., Welch, V., Ueffing, E., Kristjansson, E., Armstrong, R., & Waters, E. (2009). Cochrane update: Better evidence about wicked issues in tackling health inequities. Journal of Public Health, 31(3), 453–456. doi: 10.1093/pubmed/fdp076
  • Ragin, C. (2000). Fuzzy-set social science. Chicago: University of Chicago Press.
  • Rihoux, B., Rezsöhazy, I., & Bol, D. (2011). Qualitative comparative analysis (QCA) in public policy analysis: An extensive review. German Policy Studies, 7(3), 9–82.
  • de Savigny, D., & Adam, T. (2009). Systems thinking for health systems strengthening. Geneva: World Health Organization.
  • Schön, D. A., & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. New York: Basic Books.
  • Seddon, J. (2005). Freedom from command and control: A better way to make the work work. Buckingham: Vanguard Education.
  • Stevens, A. (2011). Telling policy stories: An ethnographic study of the use of evidence in policy-making in the UK. Journal of Social Policy, 40(2), 237–256. doi: 10.1017/S0047279410000723
  • Tett, G. (2009, March 10). Lost through destructive creation. Financial Times, p. 11.
  • Tomaskovic-Devey, D., & Lin, K.-H. (2011). Income dynamics, economic rents, and the financialization of the U.S. economy. American Sociological Review, 76(4), 538–559. doi: 10.1177/0003122411414827
  • Weiss, C. H. (1998). Evaluation. Upper Saddle River, NJ: Prentice-Hall.