Empirical Papers

Causality in psychotherapy research: Towards evidential pluralism

Pages 1004-1018 | Received 15 Jun 2022, Accepted 17 Dec 2022, Published online: 31 Dec 2022

ABSTRACT

Identifying causal relationships is at the heart of all scientific inquiry, and a means to establish evidence-based practices and to guide policymaking. However, aware of the complexities of interactions and relationships, scientists and academics are cautious about claiming causality. Researchers applying methods that deviate from the experimental design generally abstain from causal claims, reserving them for designs that adhere to the evidential ideals of empiricism (e.g., RCTs), motivated by Humean conceptions of causality. Accordingly, results from other designs are ascribed lower explanatory power and scientific status. We discuss the relevance of other perspectives on causality as well, such as dispositionalism and the power perspectives of various realist approaches, which emphasize intrinsic properties and contextual variations, and an inferentialist/epistemic approach that advocates causal explanations in terms of inferences and linguistic interaction. The discussion is illustrated by the current situation within psychotherapy research and the APA Policy Statement on Evidence-Based Practice. The distinction between difference-making and causal production is proposed as a possible means to evaluate the relevance of designs. We conclude that clarifying causal relationships is an ongoing process that requires the use of various designs and methods, and we advocate a stance of evidential pluralism.

Clinical or methodological significance of this article: We illustrate how various conceptualizations of causality allow the inclusion of a variety of research approaches needed to identify robust causal relationships in psychotherapy. This in turn serves to deepen our understanding of mechanisms of change and outcome, such as the effects of specific techniques, common factors, clinical theories, and the relationship between them.

The limitations of experimental designs such as Randomized Controlled Trials (RCTs) have been increasingly highlighted within the social sciences during the past few decades (e.g., Anjum et al., Citation2020; Carey & Stiles, Citation2016; Clarke et al., Citation2014; Deaton & Cartwright, Citation2018; Gillies, Citation2019; Krause & Lutz, Citation2009; Maxwell, Citation2004, Citation2021; Russo & Williamson, Citation2007; Wampold & Imel, Citation2015). Yet, the very same designs seem to persistently hold their status as ideal for identifying causal relationships, as reflected in for example the American Psychological Association (APA) Policy Statement on Evidence-Based Practice (APA, Citation2005, Citation2006), which states that “RCTs and their logical equivalents (efficacy research) are the standard for drawing causal inferences about the effects of interventions” (APA, Citation2006, p. 274). With reference to recent debates, we will discuss the status of causality claims in psychotherapy research, including the limitations of research designs conforming to the empiricist Humean regularity conception of causality. We will present alternative conceptions of causality, such as that of causal powers/dispositionalism, critical realism, and inferentialism/epistemic causality, and the methodological stance of evidential pluralism as a possible foundation for sound knowledge development within psychotherapy research.

Hume and Empiricism—Causation in Terms of Regularity, Counterfactuals, and Manipulability

Causality has been discussed in philosophy at least since Aristotle, who saw cause (aitios) as the driver of change in nature. As his views are traditionally understood, he saw causal principles as being both intrinsic and extrinsic, implying that no change would happen if either was missing. That is, all things have some intrinsic potential for change and a purpose (or telos) of becoming, whose realization requires the coming together of internal and external types of causes (Broadie, Citation2009). To Hume and later empiricists, no such drivers or purposiveness are constitutive of causation. Rather, Hume’s influential model of causality relies on observable regularities in the relations between discrete events. According to Hume, we cannot have any evidence or knowledge about potential causal mechanisms beyond observed regularities together with two other observable features: temporality (the cause must happen before the effect) and contiguity (the cause and effect must meet in time and space). A cause may thus be defined as

an object followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed. (Hume acc. to Lewis, Citation1973, p. 556)

Hume (Citation1739/Citation1978) recognized our tendency to form ideas of a necessary connection between these objects—a “causal sense”—but argued that this is not an observable feature and therefore cannot be part of an empirically grounded concept of causality. The origin of our knowledge of necessary connections arises out of observed regularity or constant conjunction of certain events across many instances. All scientists can do is observe the repeated regularity of cause and effect and impose meaning. Hume’s deep skepticism of causes spawned three theories of causal identification: regularity (e.g., Psillos, Citation2009), counterfactual reasoning (e.g., Lewis, Citation1973), and manipulability (e.g., Woodward, Citation2003), all three of which constitute the pillars of modern experimental designs. Regularity refers to the repetition of regular patterns, which, it has been argued, may not account for epiphenomena, spurious relationships, or asymmetry (i.e., smoking may cause cancer without cancer causing one to smoke). In his seminal paper on counterfactual reasoning, Lewis (Citation1973) set out to get around the limitations of the regularity conception by picking up on another aspect of Hume’s definition, stating that:

We think of a cause as something that makes a difference, and the difference it makes must be a difference from what would have happened without it. Had it been absent, its effects – some of them at least, and usually all – would have been absent as well. (Lewis, Citation1973, p. 557)

Counterfactual reasoning thus emphasizes the unidirectional dependency of an effect (E) upon a cause (C). With manipulation-based accounts of causality, this dependency is demonstrated by intervening on the target variable, implying that C is identified as a cause if and only if E is changed by manipulating C (Woodward, Citation2003, Citation2009). These three theories have each in their own way nourished crucial refinements of experimental designs (e.g., Illari & Russo, Citation2014; Shadish et al., Citation2002).Footnote1 Irrespective of their distinctions, they share characteristics that reflect the scientific landscape of academia from the Enlightenment of the seventeenth century onwards, often referred to as positivism, a term that has come to serve as a metaphor for the zeitgeist as a whole. According to Hacking (Citation1983), “In the major West European languages ‘positive’ had overtones of reality, utility, certainty, precision … ” (p. 42), overtones that we may say capture the essence of modernity and modern science, and the subsequent ideals of scientific method. While signaling an optimism and forward movement that spawned myriads of methods and empirical research, the same ideals carried with them a mistrust of other aspects of knowledge production, such as untestable propositions, unobservable entities, causes, and deep explanation (Hacking, Citation1983). Correspondingly, claims of causality were linked to demonstrations of regularities, time precedence (the cause precedes the effect in time), and control over confounding variables, ideally free of context, values, or theoretical assumptions about causal mechanisms and purposiveness.
While these characteristics reflect the heritage of empiricism and positivism, counterfactual reasoning and manipulability have also been related to hypothesizing, agency, and rationality (e.g., Woodward, Citation2009), signaling a departure from strict empiricism toward realism (e.g., Hacking, Citation1983) and critical rationalism (e.g., Popper, Citation1959). This reflects the sometimes blurred distinctions between the philosophical underpinnings of causal theories and, in turn, research methods and designs, a point we will get back to.
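The interventionist reading of difference-making can be made concrete with a toy simulation (our sketch; the variable names, effect sizes, and model are invented for illustration, not drawn from the article or the cited works). An unobserved factor confounds the observational association between a cause C and an effect E, while setting C by intervention severs that link and recovers the difference C itself makes:

```python
import random

random.seed(1)

def sample(do_c=None):
    """One simulated case: a hidden factor u confounds c and e.
    All names and coefficients are illustrative assumptions."""
    u = random.gauss(0, 1)                                 # unobserved confounder
    c = u + random.gauss(0, 1) if do_c is None else do_c   # cause, or intervened value
    e = 2.0 * c + 3.0 * u + random.gauss(0, 1)             # effect: c's true contribution is 2.0
    return c, e

# Observational association: mixes c's own effect with u's influence.
obs = [sample() for _ in range(100_000)]
cov = sum(c * e for c, e in obs) / len(obs)
var = sum(c * c for c, _ in obs) / len(obs)
print(round(cov / var, 2))   # regression slope, well above the true effect of 2.0

# Interventional contrast (manipulability): set c by fiat, breaking the c-u link.
n = 100_000
e1 = sum(sample(do_c=1.0)[1] for _ in range(n)) / n
e0 = sum(sample(do_c=0.0)[1] for _ in range(n)) / n
print(round(e1 - e0, 2))     # close to 2.0, the difference c itself makes to e
```

The contrast between the two printed numbers is the point: regularity alone (the observed slope) conflates cause and confounder, whereas the manipulation-based contrast isolates C's contribution, which is exactly what randomized assignment is designed to emulate.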

Humeanism has been highly influential on—if not constitutive of—modern science, as mirrored in several acknowledged criteria for the evaluation of scientific quality, of which those of internal validity are deemed vital to ensure the control needed to identify causal relationships. Within psychotherapy research, as in other disciplines within the social sciences, experiments such as RCTs have come to represent the gold standard of rigorous knowledge production aimed at establishing causality.

The Evidence Crisis in Psychotherapy Research

Within psychotherapy research, we have witnessed a debate on the concept of evidence that flared up during the 1990s and early 2000s (e.g., Bohart, Citation2000; Elliott, Citation1998; Lampropoulos, Citation2000). Besides a growing disaffection with the RCT’s lack of ecological validity and the science-practice gap (cf. APA, Citation2006), a crucial driving force of the debate was the recurring failure to demonstrate comparative effects of various specific models used for specific diagnoses, or effects of certain theory-specific techniques. Rather than documentation of specificity, numerous studies, meta-analyses, and reviews identified factors that cut across the specific treatment models. Factors such as the working alliance between the therapist and the client, motivation, expectations, and various other aspects of the treatment processes consistently explained more of the outcome variance than the theory-specific models or techniques themselves (e.g., Lambert, Citation2013; Laska et al., Citation2014; Wampold & Imel, Citation2015; Wampold & Owen, Citation2021). Overall, the expectations of demonstrable, specific effects of methods or techniques, carefully tested in clinical experiments, were not met. For therapy to work, it matters who the therapist is, as do the client’s particular circumstances, expectations, and wishes. Therapists’ ways of tailoring the techniques to the particular client significantly impact therapeutic processes and outcome, allowing change factors such as client motivation, engagement, and involvement to work (e.g., Bohart & Wade, Citation2013; Stiles, Citation2009b). Wampold (Citation2001) replaced the “medical model” with the “contextual model” to explain these somewhat counter-intuitive findings from the field.
The main point is that contextual factors, such as the therapists and the way they apply the techniques, and the clients’ experiences, responses, and contributions (as conceptualized and measured in terms of the working alliance), seemingly influence the therapeutic effect more than the theories and the techniques themselves.

The contextual model was only one (albeit strong and influential) manifestation of the voices against the gold standard of RCTs within psychotherapy research and the field of mental health (see also Carey & Stiles, Citation2016; Kazdin, Citation1998; Krause & Lutz, Citation2009; Norcross et al., Citation2013; Stiles, Citation2009b; Talley et al., Citation1994). Practical, ethical, and methodological considerations were claimed to restrict the utility and status of RCTs as providers of relevant knowledge about change. Objections typically centered on the separation of factors into “dependent” and “independent” variables. For example, in line with the arguments in favor of the contextual model, Krause and Lutz (Citation2009) pointed out that one of the central premises of the RCT, i.e., the mutual causal independence of the independent variables, is a form of mathematical ideal that does not reflect the realities of psychotherapy. Factors continuously and inevitably influence each other; the effectiveness of a particular treatment will be influenced by the actual performance of the given therapist, who in turn will be influenced by the given client’s responsiveness, which in turn is influenced by the relational context of that therapist, and so forth (see also Stiles, Citation2009b). Overall, a conclusion that may be drawn from the concerns raised by central scholars in the psychotherapy field during the past decades is that, while RCTs and their equivalents are useful for identifying relationships where these conditions make sense, they may not be optimal for explaining the complex and total picture of processes and change in psychotherapy.
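The point about mutually dependent “independent” variables can be sketched in a toy simulation (our illustration only; the model, coefficients, and variable names are invented and not drawn from Krause and Lutz or any cited study). Therapist performance and client engagement adjust to each other session by session, so holding the “technique” fixed does not fix the causal process:

```python
def simulate_dyad(technique_strength, client_responsiveness, sessions=10):
    """Toy reciprocal-influence model: neither variable is a causally
    independent input, because each feeds back on the other."""
    performance = technique_strength     # therapist's delivery this session
    engagement = client_responsiveness   # client's engagement this session
    outcome = 0.0
    for _ in range(sessions):
        performance = 0.7 * performance + 0.3 * engagement  # therapist responds to client
        engagement = 0.7 * engagement + 0.3 * performance   # client responds to therapist
        outcome += 0.1 * performance * engagement           # change emerges from the interplay
    return round(outcome, 2)

# Identical 'treatment' (technique_strength), different relational dynamics:
print(simulate_dyad(1.0, client_responsiveness=1.0))
print(simulate_dyad(1.0, client_responsiveness=0.2))
```

The two runs apply the same nominal treatment yet produce different outcomes, because the outcome is a property of the coupled therapist-client system rather than of the manipulated variable in isolation, which is the substance of the objection to treating such factors as mutually independent.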

The debate on the status of RCTs and experimental designs culminated following the APA Division 12 Task Force on “Promotion and Dissemination of Psychological Procedures.” The Task Force identified 18 “empirically supported treatments” for particular disorders (Chambless et al., Citation1996, Citation1998), which on the one hand instigated enthusiasm about the robust support for the overall effect of psychotherapy, and on the other sparked massive critique for its unilateral trust in RCTs as the proper source of evidence (APA, Citation2006). A new Task Force resulted in the APA’s (Citation2005) Policy Statement on Evidence-Based Practice in Psychology, which reflected a marked turn in favor of multiple research designs and an emphasis on clinical utility. The weight had shifted from a sole emphasis on evidence-based methods to evidence-based practice, leading to the definition: “Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (APA, Citation2005). Besides emphasizing psychotherapy as a contextually based activity, the report and policy statement explicitly highlighted the value of basing evidence on multiple research designs, each applied for its own purpose, and not only RCTs. These included clinical observation, qualitative research, systematic case studies, single-case experimental designs, public health and ethnographic research, process–outcome studies, studies of interventions as these are delivered in naturalistic settings (effectiveness research), RCTs and their logical equivalents (efficacy research), and meta-analyses (APA, Citation2006, p. 274; Levant, Citation2005, pp. 7–8).

However, and of interest for our present purpose, causality was still exclusively associated with experiments, as already noted: “RCTs and their logical equivalents (efficacy research) are the standard for drawing causal inferences about the effects of interventions (context of scientific verification)” (APA, Citation2006, p. 274). Correspondingly, types of evidence were ranked according to their “contribution to conclusions about efficacy,” with “sophisticated empirical methodologies, including quasi-experiments and randomized controlled experiments or their logical equivalents” on top, followed by the clarification that among sophisticated empirical methodologies “randomized controlled experiments represent a more stringent way to evaluate treatment efficacy because they are the most effective way to rule out threats to internal validity in a single experiment” (APA, Citation2006, p. 275). Thus RCTs—and their equivalents—were still identified as most fit to establish causality, and the argument was based on the aim of ensuring internal validity, which equals causality within the experimental paradigm (Shadish et al., Citation2002). Moreover, the previous distinction between efficacy (fit for drawing causal inferences) on the one hand, and clinical utility/generalizability on the other, was maintained:

With respect to evaluating research on specific interventions, current APA policy identifies two widely accepted dimensions […] The first dimension is treatment efficacy, the systematic and scientific evaluation of whether a treatment works. The second dimension is clinical utility, the applicability, feasibility, and usefulness of the intervention in the local or specific setting where it is to be offered. This dimension also includes determination of the generalizability of an intervention whose efficacy has been established. (APA, Citation2006, pp. 274–275)

This distinction between the two dimensions of efficacy and utility/generalizability, implying that they operate separately, is noteworthy, although common and reasonable from an experimental perspective. The distinction may be read as a strong recognition that ignoring a research design’s potential threat to ecological validity may perpetuate an unfortunate science-practice gap. Nevertheless, separating a particular design and utility/generalizability into two independent dimensions maintains an understanding of causality as solely associated with internal validity, in line with the premises of experimental designs. Findings by other designs, however useful, are deemed inferior for explaining the effects of interventions, because the standard criteria for drawing causal inferences about the effects of interventions, and thereby the opportunities for prediction and control, are not met. As we will get back to, other conceptualizations of causality that do not restrict it to internal validity as understood within the experimental tradition pave the way for the identification of causal processes that may be of higher utility.

Recurring Concerns—Improvements and Developments Within the Experimental Tradition—Moving on from Hume’s Regularity Conception

The arguments against the gold standard of RCTs are reflected in numerous publications, including recent ones (e.g., Carey et al., Citation2017; Carey & Stiles, Citation2016; Laska et al., Citation2014; Philips & Falkenström, Citation2021). In their meta-analytic evaluation of recent meta-analyses of the efficacy of psychotherapy and pharmacotherapy for mental disorders in adults, Leichsenring et al. (Citation2022) concluded that, after decades of research and thousands of RCTs demonstrating limited effect sizes, “[a] paradigm shift in research seems to be required to achieve further progress” (p. 141). Recurring concerns have been raised about the RCT’s role as a guarantor of scientific trust within disciplines where the phenomena in question, by definition, include context in all its variants, spanning from mental states to external social structures. The claim of causality has been the subject of debate within as well as without the experimental research literature, and efforts are being made to ensure the sophistication of designs aimed at identifying causal relationships. Experimental designs and mediation analyses have been put under scrutiny, and specific methodological steps have been outlined to ensure proper causal inferences where the requirements of RCTs cannot be met (such as randomization, and with immutable, unmanipulable factors; e.g., Antonakis et al., Citation2010; Bullock et al., Citation2010). Scholars have explicitly warned against avoiding claims of causality (e.g., by resorting to claims of correlations), which may camouflage implicit causal claims (Grosz et al., Citation2020) and hinder sound improvement of designs and methods (e.g., Antonakis et al., Citation2010; Bullock et al., Citation2010; Hernán, Citation2018; Höfler et al., Citation2021). Moreover, concerns have been raised about the unreflective checklist use of “causal criteria” (Höfler, Citation2005).
Regarding psychotherapy research more specifically, see also, Crits-Christoph and Connolly Gibbons (Citation2021) and Baldwin and Goldberg (Citation2021) for reviews of recent sophistication of statistical methods to ensure causal inference in process-outcome research, and Kazdin (Citation2022) for a recent detailed account of the potentials of RCTs, including an evaluation of what random assignment can and cannot do with respect to drawing causal inferences about effects of therapeutic interventions.

Concerns about the utility of RCTs and their equivalents have thus been raised both from within and without the empiricist tradition, nurturing solutions from both sides accordingly. Seemingly, however, the gold standard status of RCTs still thrives, which might be explained by its strong association with internal validity, which in turn equals causal identification within the experimental paradigm (e.g., Shadish et al., Citation2002). We will now turn to realist perspectives in which this premise for causal identification is challenged.

Causality Beyond Hume

Realist Causal Power Perspectives

The Humean skepticism toward identifying underlying causes is challenged by various realist perspectives. Scientific realism is a philosophical perspective characterized by the premise that “the entities, states, and processes described by correct theories do exist” (Hacking, Citation1983, p. 21). As to causality, a realist perspective typically contrasts with an empiricist anti-cause/skeptic stance by its recognition of causal powers or intrinsic capacities/properties of entities. Picking up on Aristotle and the ancient tradition, a relatively recent approach within analytic philosophy holds intrinsic properties—that is, causal powers or dispositions—as vital causal drivers (Bird et al., Citation2012; Cartwright, Citation1989; Groff & Greco, Citation2013; Jacobs, Citation2017; Marmodoro, Citation2010; Meincke, Citation2020). On this view, dispositional properties can belong to a particular thing, individual, process, community, or social system, thus spanning from the very concrete to higher-level, abstract entities. By recognizing an unobservable level of reality, these theories shift the attention away from identifying observable law-like regularities to understanding the underlying dispositions and causal capacities that produce such regularities. A focal assertion is that powers or dispositions are dependent on actualizing factors, which within the various powers/dispositional perspectives are referred to as “mutual manifestation partners,” “support factors,” or “interactive variables” (e.g., Anjum & Mumford, Citation2018a, Citation2018b; Deaton & Cartwright, Citation2018; Mumford & Anjum, Citation2011). These should not be understood as mere conditions for the real cause to work. Rather, they are themselves part of the causal process, together with all the other factors that account for the outcome. Causality thus exists in a unique web of interacting dispositions working as manifestation partners towards the emerging results.
Unlike with the regularity model, there is no temporal priority involved, but rather a dynamic of symmetric, reciprocal relations between properties simultaneously interacting with each other. Hence, causation neither refers to pure observable contingencies nor to some separate, underlying driver, but applies to the whole process of interacting dispositions or powers that altogether are responsible for some particular effect. This denotes a holistic take on causality where powers are a characteristic of a system rather than of a particular object (Illari & Russo, Citation2014). Accordingly, causal powers/dispositionalism advocates process-oriented, complex understandings of the mechanisms of causality (e.g., Anjum & Mumford, Citation2018a, 2018b). By asserting the irreducible existence of dispositional properties, these perspectives call for a clarification of the ontology of causality. These are thus ontological perspectives devoted to the fundamental question of what causation is rather than how to go about studying it. Critical realism, to which we now turn, bears on this ontology while at the same time including an epistemological aspect of relativism.

Critical Realism—Causality at the Level of Generative Mechanisms, Manifested in Open Systems. Critical realism is a philosophical perspective that was developed during the last part of the twentieth century, and which aimed to conceptualize “the independent reality of being […] in the face of the relativity of our knowledge” (Bhaskar, Citation1998, p. x), thus reconciling ontological realism, epistemological relativism, and judgmental rationality. To Bhaskar (Citation1998), “society is both the condition and outcome of human agency and human agency both reproduces and transforms society” (p. xvi). Claiming to bridge the classic dichotomy of positivism and hermeneutics, Bhaskar asserted that:

positivists are wrong to expect the social sciences to find constant conjunctions in the human world, for they are scarce enough in the natural; while hermeneutics are wrong to conclude from the absence of such conjunctions that the human sciences are radically unlike natural sciences. (Bhaskar, Citation1998, p. xv)

As opposed to what he referred to as “the epistemic fallacy” (i.e., that “what is”, ontology, is reduced to “what we know”, epistemology), Bhaskar argued for the intransitive dimension of knowledge, which comprises the mechanisms/objects of knowledge that exist and operate prior to and independently of their discovery, beyond language and experience. These mechanisms may be actualized due to causal powers and abilities that are intrinsic to things. The “things” may be events, structures, intentions, reasons, and so on, that in virtue of their intrinsic structures possess powers and abilities that constitute “the ontological ties that bind the cause and effects together” (Harré & Madden, as cited by Bhaskar & Lawson, Citation1998, p. 8). Put simply, a piece of iron has an intrinsic structure determining its potentials/abilities to influence the world, while a person’s intentions have other potentials. Powers and abilities are intrinsic tendencies that come into play as mechanisms if triggered, and the effects of these mechanisms are actualized (or not) depending on the play of countervailing mechanisms. Hence, a dynamic of intrinsic structures, powers, abilities/capacities, and mechanisms serves to generate or facilitate the phenomena that we experience, or do not experience, but which nevertheless are the causal effects (Bhaskar & Lawson, Citation1998).

While these generative mechanisms reflect the ontology of critical realism, the epistemological aspect of causality is reflected in the concept of “retroduction” (often also referred to as “inference to the best explanation” or “abductive reasoning”), which is the process by which we draw causal inferences or construct a theory of a mechanism such that, “if it [the mechanism] existed and acted in the postulated manner, [it] could account for the phenomenon singled out for explanation” (Lawson, Citation1998, p. 156), which is then tested empirically. As this is a process drawing on researchers’ perspectives, various theories and scientific approaches will likely explain phenomena differently, although they, according to the ontology of critical realism, reflect the same intransitive objects (e.g., Richards, Citation2018). This will typically be the case within the social sciences. We will get back to how this epistemological aspect of causal theory paves the way for conceptual, theoretical, and communicative generalizability, that is, other forms of generalizability than those based on definite concepts, random sampling, and the premise of universalism that mark the empiricist tradition.

Moreover, in “open systems,” such as the social world (as opposed to the “closed systems” of experimental conditions), the generative mechanisms are typically unsynchronized or out of phase with the actual effects, leaving no one-to-one relationship between the (proposed) causal law and the particular manifestation of its effects (Bhaskar & Lawson, Citation1998). We shall shortly see how this asynchrony of causal mechanisms and effects may shed light on, or serve as a proper explanatory framework for, the relationship between processes and change in psychotherapy, including delayed and unconscious effects. In sum, from a critical realist perspective, causal mechanisms operate in a complex dynamic independently of our identification of them, often unsynchronized with our observations of their effects (if observed at all), and likely conceptualized by differing explanatory theories.

To sum up the main points shared by critical realism and dispositionalism: Causal mechanisms may be explicit and identified—or implicit or not-yet-articulated, but still crucial causal drivers. Causes (powers, dispositions) are tendencies that constitutively relate to manifestation partners, which implies a complex web of factors interacting with each other, all of which form part of the causal picture. Moreover, causal or generative mechanisms may be unsynchronized with their manifest causal effects. These are premises that may serve to nuance explanations of the effects of psychotherapy.

For example, as clinicians we may recognize delayed effects and implicit, unconscious processes. Therapy may have effects that are not fully experienced or articulated until a while after the treatment has ended and the client has been wandering about in the world, potentially affecting this world in a slightly different way, in turn actualizing different responses, and gradually becoming more aware of the difference. After having acted and reflected upon these changes, new insights may be integrated, and the client may finally be able to articulate what has happened, at least parts of it. It would not be unreasonable to attribute some of this effect to what happened during therapy (e.g., the emotional climate, the interventions used, responsively tailored to this particular client, etc.), although it might be difficult for the client to identify these factors and their interactions if asked during or right after therapy. The client may resort to formulations that are easy to explicate albeit not capturing the total picture or experience. Or they simply cannot find a proper answer although they have a clear (or vague) feeling that “this was good for me,” “something else,” “my therapist is good,” “totally needed.” Likewise, either as therapists or supervisors, we may experience that clients or former students come to us, revealing that “now, at last, I fully understand what we were talking about.” Change, whether in therapy or training, frequently happens unsynchronized with the actual interventions, and after interaction with a host of other drivers such as time, new actions, relations with others, one’s own thinking, etc. None of these is the “real” causal driver. Their effects are all dependent on the mutual facilitation of the web of other factors.
Recognition of manifestation partners as part of causation itself is vital to a dispositionalist/powers conceptualization of causality and directs attention to the whole process of various constitutive factors, rather than establishing causality at the level of observable contingencies, where one factor has the temporal priority over the other.

Implications of a Powers/Dispositionalist View on Causality. An essential take-home message from powers/dispositionalist perspectives is that the identification of unique causal mechanisms, such as underlying structures and processes, is key to ensuring causal knowledge of a phenomenon. In this view, RCTs can indicate causal processes without thereby capturing the essence of causation, as asserted by Kerry et al. (Citation2012): “RCTs may be very good at displaying symptoms of causation, but they are not constitutive of causation” (p. 8; for a more complete account of the philosophical rationale, see, e.g., Rocca & Anjum, Citation2020; Kerry et al., Citation2012; Lindstad, Citation2020). From a scientific perspective, we recognize a rather radical stance that reflects a focus on the ontology of causation, dealing with the fundamental structures of powers and dispositions. It is important to note, however, that albeit being radical as to causal ontology, proponents of these perspectives do not dismiss RCTs and experimental designs, which still form a vital part of the repertoire of designs needed to identify causal relationships (e.g., Deaton & Cartwright, Citation2018; Rocca & Anjum, Citation2020), a point we will get back to under the heading of difference-making vs. causal production. Nevertheless, the weight on the unique webs of powers and their manifestation partners/support factors—and an emphasis on causal tendencies as opposed to discrete events—favors in-depth studies of how these manifest in single-instance cases, as “[t]his is where the real nature of causation is witnessed. The interaction between causal agents; subtractive and additive forces tending towards and away from an effect; causal powers being passed from one partner to another” (Kerry et al., Citation2012, p. 10).

Relatedly, concerns have been raised that the restrictive demands of experimental designs, aimed at ensuring internal validity, hamper the identification of vital causal processes as these naturally unfold, thus limiting the usefulness of the results. For example, as argued by Cartwright (2014), findings from RCTs may be poor at predicting results in new settings, as you cannot be sure that "enough individuals in the target population have the right arrangement of support factors" (p. 318, our italics). Likewise, Deaton and Cartwright (2018), in their critique of what they observed as an undue trust in RCTs in the social sciences and medicine, raised concerns that such findings have limited predictive value as long as they are separated from support factors, which are responsible for the differing manifestations of effects across situations (see also Cartwright, 2010; Cartwright & Hardie, 2012; Rothwell, 2005, 2006). Here we recognize a prioritization of the unique and local over the general and universal, the latter taking precedence within empiricism. The empiricist ideal of randomization and discrete entities (to control for confounding and to ensure representativeness) reflects a premise of universalism and determinism, most clearly illustrated by Hempel's "covering law" model and the thesis of the "symmetry between explanation and prediction": To explain a phenomenon is to demonstrate empirically that it is an instance of a general law, which in turn ensures prediction (Groff, 2011; Hempel, 1942).[2] Following the argumentation of powers/dispositional views, though, such predictions may be of limited value as, due to the experimental conditions themselves, the essential causal processes are not ensured. The procedures aimed at ensuring internal validity (and thus causality) may hamper the usefulness of the results, that is, their external and ecological validity.
The tension between internal and external validity is well known within experimental research (e.g., Shadish et al., 2002) and seems hard to resolve within the experimental paradigm itself. We will return to how theorizing as a means of generalization is recognized both within and outside the experimental tradition.

Thus, an overall message from the proponents of realist perspectives is that we cannot get to the core of causal explanations without studying phenomena as they appear in contexts that are constitutive of the phenomena themselves. In all, the powers/dispositionalist perspectives display a fundamentally different view of the path through which knowledge of causal relationships develops than that of the empiricist tradition: In accordance with the Aristotelian view, this perspective holds that knowledge of the universal is developed through an inquiry into the particular, rather than the other way around.

Inferential and Epistemic Approaches to Causality—Causality at the Level of Human Mind and Linguistic Interaction

Facing the empiricist demands and the prominent status of the adjacent experimental designs, one stance on the question of causality within the humanistic social sciences has been to lean on the Diltheyan distinction between explanation (Erklärung) and understanding (Verstehen) as two fundamentally different modes of knowing (for a thorough discussion, see von Wright, 1971). While explanation is reserved for experimental designs that provide knowledge about dependent (including causal) relationships between definite variables, understanding has been associated with qualitative, contextualized knowledge developed from language-based, interpretative, hermeneutically oriented approaches. The argument goes that understanding captures the complex, intricate relationships between incidents and experiences that may not fit an approach whose aim is to establish observable, causal relationships between definite variables. In line with this, one would dismiss the claim of causality (e.g., Smedslund, 2015; Valsiner & Brinkmann, 2016), as punctuating a sequence of acts in the way prescribed by the regularity model would hamper access to complex relationships, and thereby violate the phenomena themselves.

While this dismissive stance toward claims of causality has long been embraced by various interpretative, post-structuralist, and postmodern perspectives, a more recent philosophical approach that covers various causal theories may serve to bridge the gap between causality and the broad neo-Kantian tradition (the Geisteswissenschaften) within the humanistic social sciences. In these recently developed theories, causality is placed at the level of the human mind, in terms of inferences, reasoning, and social, linguistic interaction (Illari & Russo, 2014; Reiss, 2015; Williamson, 2013). One example is Reiss's (2012, 2015) inferentialism, which deals with the semantics of causal claims. More specifically, it locates the meaning of causal claims "in their inferential connections to claims about evidence for causal claims and cognitively or practically useful claims about explanation, prediction, and possible interventions" (Reiss, 2012, p. 776). Another example is that of epistemic causality, where:

causality is a feature of the way we represent reality rather than a feature of agent independent reality itself; it is neither reducible to patterns of difference making nor to physical mechanisms. Our causal beliefs help us with our dealings with the world, since they allow us to predict, to explain and to control reality. We have these causal beliefs because of this inferential utility, not because there is some non-epistemic causal relation that is the object of those beliefs. (Williamson, 2013, p. 268)

Both inferentialism and an epistemic account of causality advocate views in line with contextualism (e.g., Kincaid, 2004; Reiss, 2012, 2015), implying that what counts as a valid causal claim will vary according to the local community of researchers participating in the current discourse or interaction. There are no global standards to which all causal claims have to conform:

Standards are always local and contextual. They are local in that they apply first and foremost to specific episodes of scientific reasoning, which obtain in particular fields, periods and sometimes regions, and not to science as a whole. They are contextual in that they are relative to specific purposes of the inquiry, the questions asked, and the background knowledge that can be presumed. (Reiss, 2012, p. 772)

Interestingly, these perspectives might be seen as further developments of Hume's (anti)causation theory, as they elaborate on our tendency to make causal inferences. These theories are compatible with the epistemological aspect of critical realism (Bhaskar, 1998) and draw on the broad tradition of neo-Kantian approaches (Geisteswissenschaften), such as hermeneutics, phenomenology, Wittgensteinian language philosophy, and poststructuralism, which traditionally have not devoted themselves to causality.

Recent Causal Theories: Process, Context, and Theorizing

Although representing different philosophical traditions, the powers or dispositionalist perspectives, critical realism, and the recent inferentialism and epistemic causality share some fundamental arguments regarding causality: Causality is a processual and contextual phenomenon requiring an understanding of complex webs of interrelated factors or concepts that cannot be disentangled without changing the phenomenon in question. There is no global standard by which a causal claim may be justified, as the quality of justification will vary according to the local context, whether that context is a system, an organism, a social community, etc. Variation is the crux, and the abstracted space of conceptualization and theorizing forms a vital part of causality, a point we will return to below.

A question that might be raised is whether recognition of the complex web of constitutive factors in reality requires any research design other than the experiment and its equivalents, as long as they include a necessary set of the presumed interacting factors. Obviously, all kinds of data and findings must be expressed, or operationalized, in some form that might compromise their ecological validity. No finding, however concise or contextually rich, is directly transferable to practice. The question is which form suits best, for which purposes? In the following, we will argue that no design or research method alone takes care of causality. All designs—each in their own way—put constraints on what may be identified. Hence, a valid approach to the intricate networks of human and social phenomena is to critically evaluate the capacity of all designs for causal explanation.

Causal Approaches: Difference-Making vs. Production

As witnessed by the various causal theories, causality means different things and may thus be established in different ways (see also Cartwright, 2004). Here we may be reminded of the distinction between difference-making and production approaches to causality, which according to the "Russo-Williamson Thesis" (RWT) denote different kinds/objects of evidence when deciding causality (Illari & Russo, 2014). While difference-making is associated with, for example, probability raising, counterfactuals, and manipulability, causal production typically deals with processes, mechanisms, and information. The former deals with whether causal relations exist or not, including the strength of the causal relationships, and the latter with how causal relations work. These may be viewed as complementary sources of evidence for building robust, nuanced knowledge of the causal relations of a phenomenon. However, the differences between methods may be blurred, which in many respects suggests a continuum rather than two distinct groups.

While not going into detail on the various possible designs and methods for building knowledge of causal relations, we will outline just a few examples to give a picture of the possible repertoire of difference-making and production-oriented methods and how they may be combined, either within or between studies or research programmes. For example, RCTs are fit for picking out difference-making due to control conditions that allow inferences about the effects of therapeutic interventions. Nevertheless, due to the intricate interactions of theory-specific and common factors in therapy, we can hardly be sure about the specific causal mechanisms of an intervention. To cite Kazdin (2022): "[In] an RCT, we may have evidence that the intervention caused the change, but what facet of that intervention or why the change occurred can remain ambiguous" (p. 9). Thus we may have established that the intervention worked (difference-making) without necessarily knowing how it worked (causal production). As pointed out by Shadish et al. (2002) in their extensive account of experimental and quasi-experimental designs for causal inference: Experimental designs are not fit for causal explanations, that is, the mechanisms reflecting how causal relations work. That must be dealt with by the use of other methods. Before we move on to production-oriented methods, however, we should note variants of difference-making designs that may work well for drawing causal inferences despite lacking the strict conditions of the usual RCTs. As discussed above, the experimental control of RCTs may ensure internal validity at the expense of external and ecological validity. However, pragmatic trials, which allow more flexible use of manuals in response to patient responses than the usual RCTs, are, according to Kazdin (2022), an underused option for easing the transition of results from the experimental to the natural setting.
Observational studies are other examples that may be fit for drawing causal inferences where the experimental control of RCTs, such as random assignment or manipulability, is not feasible for pragmatic or ethical reasons (e.g., Höfler et al., 2021; Kazdin, 2022). Examples of quantitative designs that are perhaps in the twilight zone between difference-making and production studies are variants of observational process studies, such as nonexperimental process-outcome studies (see Baldwin & Goldberg, 2021). One example is lagged multilevel models that examine how alliance at one session predicts outcome at the next, and vice versa (e.g., Falkenström et al., 2013).

Designs fit for identifying causal production are perhaps most typically exemplified by various single-case designs (e.g., Kazdin, 2021) and qualitative methodology (e.g., Maxwell, 2004, 2021). Examples of research projects that reflect production approaches are those of Bohart et al. (2011), applying the Jury Trial Model to evaluate the validity of descriptive and causal statements about psychotherapy process and outcome, and Elliott's (2002) use of the hermeneutic single-case efficacy design (HSCED), which, through a combination of quantitative and qualitative methods, illustrates a rigorous process of identifying causal mechanisms and linking treatment processes to outcome. Mixed-methods designs allow the combination of different methods' individual strengths in identifying causal relationships, which might serve to illuminate more of the complex web of causal drivers, such as RCTs in combination with in-depth qualitative analyses focused on identifying causal relationships. Relatedly, inquiries into so-called paradoxical outcomes, that is, cases where there is a marked difference between clients' subjective reports on outcome and standardized outcome scores, have the potential to reveal causal dispositions other than those captured by predefined, categorical outcome measures (Stänicke & McLeod, 2021). Finally, an elaborate example of identifying sequences of change across studies is that of synthesizing studies by use of "ecological sentence synthesis," as illustrated by Sandelowski and Leeman (2012). Here, findings from different studies are organized into thematic sentences with the following structure: "with this intervention," "these outcomes occur," and so on.

These examples and combinations are not new ideas, and they are by no means exhaustive. The point here is to be aware of the particular causal focus that might be strengthened at every step, whether using one single method/design or combining different methods within a mixed-methods design. In a similar vein, Rocca and Anjum (2020) have offered an overview of the advantages and disadvantages of various research methods in identifying intrinsic dispositions and their manifestation partners, demonstrating the necessity of combining the various methods and letting them play different roles at different stages of the research process. Moreover, from the perspective of critical realism as well as an inferential/epistemic approach to causality, one would emphasize the epistemological aspect as a crucial part of causal identification, whichever research design one is using for a specific analysis. Reiss (2012), for example, argues:

Descriptions of specific applications of methods of causal inference such as experiments, randomised trials, regressions, applications of structural equations models, Bayes' nets, expert judgements, meta-analyses and so on are all possible members of the inferential base for a specific causal claim. (Reiss, 2012, p. 770, italics added)

Thus, claims of causality may be based on various sources that are considered fit to the particular research questions and goals of the particular community of researchers. That is, the standards of credibility will vary, and "Causal claims are not normally established for their own sake but rather because they are considered useful for the attainment of certain purposes" (Reiss, 2012, p. 770).

Acknowledging the need for diversity may help make sense of seemingly counterintuitive empirical findings, such as the overall scarce evidence of specific effects of particular methods and theory-specific interventions (e.g., Barkham & Lambert, 2021; Lambert, 2013; Wampold & Imel, 2015). How do techniques, theories, and common factors (such as principles of change), the client, and the therapist operate together? Goldfried (1980, 2009) and Hatcher and Barends (2006) presented a straightforward model to illustrate the relationship between the various factors accounting for therapeutic change. According to this model, specific techniques, common factors, and clinical theories operate at different levels of abstraction and must be understood as conceptually and empirically interwoven and reciprocally dependent. Rather than simply asking how much factors such as theory-specific techniques and common factors such as the working alliance contribute to change, these factors are perceived as conceptually dependent. There is no working alliance without theory-specific interventions (Bordin, 1979). Such an understanding is not attainable from the experimentally produced findings alone, or by use of the experimental design itself. In whatever fashion we approach the complexities empirically, no design on its own seems to help us understand the full picture of what works in psychotherapy. RCTs have been valuable for establishing the absolute effects of a treatment (i.e., better than controls), the effects of common factors, and also the general lack of comparative effects between different theoretical traditions and theory-specific interventions, to mention just a few examples. Thus, various difference-making designs may be used if the factors are practically identifiable and do not lose their "essence" by being operationalized in definite terms.
Various non-experimental designs help examine the processes (observational process studies, narrative analyses, qualitative inquiries of sessions, etc.). Then again, we will need conceptual clarifications, theorizing, and theory-building studies (e.g., Stiles, 2009a, 2015) to make sense of the whole picture—such as the conceptual relations between techniques, clinical theories, and common factors—which is in line with a realist as well as an inferential/epistemic approach to causality.

Knowledge Transfer—Variation as the Gateway to Theorizing and Generalization

As we have seen, the approaches that differ from Hume's regularity conception all include causal inferences and the abstracted space of the mental world as a vital part of causality itself. We recall the emphasis on "retroduction" (inference to the best explanation) in critical realism, where causal explanations "must refer to something which, if true, would account for the observed pattern, and something which, given background knowledge, could well be true" (Benton & Craib, 2011, p. 36). Likewise, theorizing—allowing abstractions that go beyond data, often a priori—is well recognized in most statistical models and analyses, as they are based on some presumed structure that guides hypothesizing, as witnessed in models based on postpositivism/critical rationalism (Kuhn, 1970; Popper, 1959/2014, 1978) as well as in critical realism. Within inferentialism and epistemic causality, causal identification operates in the conceptual or linguistic space. Thus, we recognize various types of causal explanations, based on different ontologies and epistemologies, but all acknowledging abstract reality as forming part of causality.

Abstractions inevitably imply variation and ambiguity. The more abstract, the more variants of a phenomenon may be included and captured by a concept or a theory. These various manifestations in turn allow the identification of certain common features across the particular variants, serving to elaborate the meaning of the concept (e.g., Blumer, 1954). Herein lies the potential for generalizability. The ambiguity is crucial for the development of an adequate, comprehensive understanding that may be transferred to other, similar situations, either through elaborated conceptualizations or through more or less developed theories. This dynamic of generalizability is widely recognized in the qualitative research literature and referred to by various terms such as "analytic generalization" (Kvale, 1996; Maxwell & Chmiel, 2014; see also Levitt, 2021) and "explanatory power" or "predictive ability" (Strauss & Corbin, 1998). Proponents of the experimental tradition also emphasize causal explanations as a means of knowledge transfer across variations in persons and settings. Referring to Bhaskar, Shadish et al. (2002) describe how knowledge of a complete causal system or of the total set of causally efficacious components "makes it easier to reproduce a given causal connection in a wide variety of forms and settings, including previously unexamined ones […] This is why causal explanation is often considered a primary goal of science" (p. 369).

Within psychotherapy research, Stiles (2009a, 2015) has elaborated on the dynamics of theorizing and empirical observations/analyses, illuminating different strategies for communication between theory and the empirical world, as well as different purposes of research (theory building, enriching, and fact gathering), all of which may apply to quantitative as well as qualitative studies, each with its own implications for generalizability. Common to all these forms of generalizability is that they contrast with the aforementioned principle of "symmetry of explanation and prediction," where the applicability of findings is established in advance according to the premise of covering laws/universalism (Hempel, 1942). The various conceptions of knowledge transfer in terms of analytic generalization, explanatory power, conceptual development, enriching studies, and theory building all place the evaluation of applicability later in the research process—either in the form of developed theories or in elaborated concepts to be received by the relevant audience. The same line of reasoning may apply to causal explanations more specifically; variation through the use of different methods—offering different viewpoints at different points in the research process—allows the development of well-founded causal explanations, which is at the same time integral to theorizing. No one method is in a unique position to establish causality. We need empirical findings and rigorous theoretical arguments and discussions from various perspectives to complete the picture of the causal relationships of a particular phenomenon. To ensure the inclusion of this variety of knowledge sources, we will need to transcend the perspective of empiricism, not to exclude it, but to include approaches based in other philosophical perspectives as well.

Evidential Pluralism—the Benefit of Variation

Speaking from the philosophy of pluralism, one would assert that since the premises on which evaluations are based differ so fundamentally between the various systems of belief, value, and action (spanning from religion to reason to communication, etc.), any attempt at a consensus on truth—or valid evaluations of truth—is utopian (e.g., Rescher, 1993). Rather than aiming for ideal conditions of harmony, one would take a pragmatic position and recognize the unique viability of each system and arrangement—not because any of them is ideal, but because there is no alternative stance without undue conflicts and power struggles. Adopting the same approach to the methodological plurality within psychotherapy research, we would not only advocate the unique contribution of each design and method but also refrain from any attempt at ascribing particular status to one specific design. Similar thoughts are advocated by Rocca and Anjum (2020), Höfler et al. (2021), and Deaton and Cartwright (2018) when they assert that the generalization of causal relationships hinges on prior knowledge accumulated through various methods and theory within a cumulative program, which altogether helps us understand not only "what works"—as established within experimental conditions—but also "why things work" (see also Cartwright, 2004). Although not addressing causality itself, we recognize the same lines of reasoning in Carey et al. (2017) in their arguments for an increased repertoire of research methodologies to improve professional psychology practice, and in Smith et al. (2021) in their pluralistic perspective on research in psychotherapy. Moreover, similar ideas are advocated in the perspectives of methodological pluralism (Howard, 1983; Roth, 1987; Slife & Gantt, 1999), dialectical pluralism (Johnson, 2017; Johnson & Schoonenboom, 2016), and the causal mosaic (Illari & Russo, 2014).
Others have argued for a "realist synthesis" approach (Emmel et al., 2018; Pawson, 2006), a perspective on evidence-based policy aimed at providing a viable alternative to standard systematic reviews. Thus, the examples are numerous, all recognizing the benefit of variation. Likewise, establishing causal relationships is to be conceived of as a processual endeavor involving a multitude of approaches to ensure the variation needed to develop knowledge of a phenomenon and its causal relationships.

Final Considerations

When it comes to research methods and designs, the distinctions between what may count as based in empiricism or variants of positivism and what is based in other philosophical approaches, such as realism, critical rationalism, and dispositionalism, may at times be blurred. For example, counterfactual reasoning (which is frequently tested by the use of experimental designs) may be understood as based on empiricism, but also on realism and critical rationalism (due to the recognition of agency and hypothesizing/imagined conditions). Thus, the boundary between one philosophical perspective and another may be indistinct. As such, one might question the need for any discussion of the philosophical underpinnings at all, as, naturally, there is not always a one-to-one relationship between research designs and underpinning philosophy. Indeed, scientists go on developing proper designs without checking them against philosophical premises, leading to designs and methods that may have no clear philosophical grounding or may fit with several philosophies depending on which aspect of the method one pays attention to. Nevertheless, we would argue that an awareness of the philosophical underpinnings helps ensure that we do not unreflectively and habitually embrace some methods and designs while rejecting others that may be equally valuable, although resting on fundamentally different premises. Moreover, increased awareness of the various philosophical conceptions of causation may inspire the development of novel designs and methods fit to identify causal relations in new and creative ways. It is important to note that the powers and dispositions perspectives are ontological theories of causation, dealing with the fundamental question of what causation is, not necessarily how to go about studying it.
As pointed out by Illari and Russo (2014), with some exceptions, less attention has been paid to the implications of powers/capacities/dispositions perspectives for evidence in scientific practice. As we have seen, Cartwright (e.g., 2014; Deaton & Cartwright, 2018), Maxwell (2004, 2021), and proponents of dispositionalism (e.g., Kerry et al., 2012; Rocca & Anjum, 2020) have raised criticisms of RCTs in several publications in recent years. The development of alternative methods and designs may, however, be an endeavor for the future.

For those who find philosophical subtleties of lesser use in developing proper designs and methods, an increased awareness of the distinction between designs aimed at difference-making and those aimed at causal production may still be a fruitful path forward. Experiments—whether based in empiricism, realism, or critical rationalism—may be classified as designs devoted to difference-making, while designs fit for identifying causal production, that is, causal processes and mechanisms, are typically based not in empiricism but in realism and the various perspectives of the continental philosophy/neo-Kantian tradition. Being attentive to which purpose the various studies are meant to fulfill may increase our understanding of the implications and limitations of our causal claims, as well as sharpen our evaluations by use of relevant criteria.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Notes

1 We are not going into details on the various experimental designs and methods, but for the interested reader we recommend Illari and Russo (2014), "Causality: Philosophical theory meets scientific practice." For further details on the range of experimental designs, we recommend Shadish et al. (2002), "Experimental and quasi-experimental designs for generalized causal inference," and for comprehensive accounts and discussions of the various philosophies of causality, "The Oxford Handbook of Causation" (Beebee, Hitchcock, & Menzies, 2009).

2 We should note that the fundamentality of laws of nature has been disputed and nuanced among scholars adhering to experimental theories of causation, although they may still adhere to determinism (albeit softened) and a reductionist view of nature (e.g., Lewis, 1973; Psillos, 2009). Nevertheless, it is fair to say that the reported principles of knowledge production and generalizability still prevail within the broad experimental tradition, as reflected in the ideal of randomization.

References

  • American Psychological Association. (2005). APA policy statement on evidence-based practice. Retrieved May 2020 from https://www.apa.org/practice/guidelines/evidence-based-statement
  • American Psychological Association. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285. https://doi.org/10.1037/0003-066X.61.4.271
  • Anjum, R. L., Copeland, S., & Rocca, E. (2020). Medical scientists and philosophers worldwide appeal to EBM to expand the notion of ‘evidence’. BMJ Evidence-Based Medicine, 25(1), 6–8. https://doi.org/10.1136/bmjebm-2018-111092
  • Anjum, R. L., & Mumford, S. (2018a). Causation in science and the methods of scientific discovery. Oxford University Press.
  • Anjum, R., & Mumford, S. (2018b). Dispositionalism: A dynamic theory of causation. In J. Dupré, & D. Nicholson (Eds.), Everything flows: Towards a processual philosophy of biology (pp. 61–75). Oxford University Press.
  • Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal claims. A review and recommendations. The Leadership Quarterly, 21, 108. https://doi.org/10.1016/j.leaqua.2010.10.010
  • Baldwin, S. A., & Goldberg, S. B. (2021). Methodological foundations and innovations in quantitative psychotherapy research. In M. Barkham, W. Lutz, & L. G. Castonguay (Eds.), Bergin and Garfield’s handbook of psychotherapy and behavior change. 50th anniversary edition. (pp. 19–49). Wiley.
  • Barkham, M., & Lambert, M. J. (2021). The efficacy and effectiveness of psychological therapies. In M. Barkham, W. Lutz, & L. G. Castonguay (Eds.), Bergin and Garfield’s handbook of psychotherapy and behavior change. 50th anniversary edition (pp. 135–189). Wiley.
  • Benton, T., & Craib, I. (2011). Philosophy of social science. The foundations of social thought (second edition). Palgrave Macmillan.
  • Bhaskar, R. (1998). General introduction. In M. Archer, R. Bhaskar, R. Collier, T. Lawson, & A. Norrie (Eds.), Critical realism: Essential readings (pp. ix–xxiv). Routledge.
  • Bhaskar, R., & Lawson, T. (1998). Introduction. Basic texts and developments. In M. Archer, R. Bhaskar, R. Collier, T. Lawson, & A. Norrie (Eds.), Critical realism: Essential readings (pp. 3–15). Routledge.
  • Bird, A., Ellis, B., & Sankey, H. (Eds.). (2012). Properties, powers and structure. Routledge.
  • Blumer, H. (1954). What is wrong with social theory. American Sociological Review, 19(1), 3–10. https://doi.org/10.2307/2088165
  • Bohart, A. C. (2000). Paradigm clash: Empirically supported treatments versus empirically supported psychotherapy practice. Psychotherapy Research, 10(4), 488–493. https://doi.org/10.1080/713663783
  • Bohart, A. C., Tallman, K. L., Byock, G., & Mackrill, T. (2011). The “research jury method”: The application of the jury trial model to evaluating the validity of descriptive and causal statements about psychotherapy process and outcome. Pragmatic Case Studies in Psychotherapy, 7(1), Article 8, 101–144. https://doi.org/10.14713/pcsp.v7i1.1075
  • Bohart, A. C., & Wade, A. G. (2013). The client in psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change. Sixth edition (pp. 219–257). Wiley.
  • Bordin, E. S. (1979). The generalizability of the psychoanalytic concept of the working alliance. Psychotherapy: Theory, Research and Practice, 16(3), 252–260. https://doi.org/10.1037/h0085885
  • Broadie, S. (2009). The ancient Greeks. In H. Beebee, C. Hitchcock, & P. Menzies (Eds.), The Oxford handbook of causation (pp. 21–39). Oxford University Press.
  • Bullock, J. G., Green, D. P., & Ha, S. E. (2010). Yes, but what’s the mechanism? (Don’t expect an easy answer). Journal of Personality and Social Psychology, 98(4), 550–558. https://doi.org/10.1037/a0018933
  • Carey, T. A., Tai, S. J., Mansell, W., Huddy, V., Griffiths, R., & Marken, R. S. (2017). Improving professional psychological practice through an increased repertoire of research methodologies: Illustrated by the development of MOL. Professional Psychology: Research and Practice, 48(3), 175–182. https://doi.org/10.1037/pro0000132
  • Carey, T., & Stiles, W. B. (2016). Some problems with randomized controlled trials and some viable alternatives. Clinical Psychology and Psychotherapy, 23(1), 87–95. https://doi.org/10.1002/cpp.1942
  • Cartwright, N. (1989). Nature’s capacities and their measurements. Oxford University Press.
  • Cartwright, N. (2004). Causation: One word, many things. Philosophy of Science, 71(5), 805–819. https://doi.org/10.1086/426771
  • Cartwright, N. (2010). What are randomised controlled trials good for? Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 147(1), 59–70. https://doi.org/10.1007/s11098-009-9450-2
  • Cartwright, N. (2014). Causal inference. In N. Cartwright, & E. Montuschi (Eds.), Philosophy of social science (pp. 308–326). Oxford University Press.
  • Cartwright, N., & Hardie, J. (2012). Evidence-based policy. A guide to do it better. Oxford University Press.
  • Chambless, D. L., Sanderson, W. C., Shoham, V., Bennett Johnson, S., Pope, K. S., Crits-Christoph, P., Baker, M., Johnson, B., Woody, S. R., Sue, S., Beutler, L., Williams, D. A., & McCurry, S. (1996). An update on empirically validated therapies. The Clinical Psychologist, 49, 5–18.
  • Chambless, D. L., Sanderson, W. C., Shoham, V., Johnson, S. B., Pope, K. S., Crits-Christoph, P., Baker, M., Johnson, B., Woody, S. R., Sue, S., & Beutler, L. (1998). Update on empirically validated therapies II. The Clinical Psychologist, 51, 3–16.
  • Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the evidence hierarchy. Topoi, 33(2), 339–360. https://doi.org/10.1007/s11245-013-9220-9
  • Crits-Christoph, P., & Connolly Gibbons, M. B. C. (2021). Psychotherapy process-outcome research: Advances in understanding causal connections. In M. Barkham, W. Lutz, & L. G. Castonguay (Eds.), Bergin and Garfield’s handbook of psychotherapy and behavior change. 50th anniversary edition (pp. 263–295). Wiley.
  • Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, 210, 2–21. https://doi.org/10.1016/j.socscimed.2017.12.005
  • Elliott, R. (1998). Editor's introduction: A guide to the empirically supported treatments controversy. Psychotherapy Research, 8(2), 115–125. https://doi.org/10.1080/10503309812331332257
  • Elliott, R. (2002). Hermeneutic single-case efficacy design. Psychotherapy Research, 12(1), 1–21. https://doi.org/10.1080/713869614
  • Emmel, N., Greenhalgh, J., Manzano, A., Monaghan, M., & Dalkin, S. (Eds.). (2018). Doing realist research. Sage.
  • Falkenström, F., Granström, F., & Holmqvist, R. (2013). Therapeutic alliance predicts symptomatic improvement session by session. Journal of Counseling Psychology, 60(3), 317–328. https://doi.org/10.1037/a0032258
  • Gillies, D. (2019). Causality, probability, and medicine. Routledge.
  • Goldfried, M. R. (1980). Toward the delineation of therapeutic change principles. American Psychologist, 35(11), 991–999. https://doi.org/10.1037/0003-066X.35.11.991
  • Goldfried, M. R. (2009). Searching for therapy change principles: Are we there yet? Applied & Preventive Psychology, 13(1-4), 32–34. https://doi.org/10.1016/j.appsy.2009.10.013
  • Groff, R. (2011). Getting past Hume in the philosophy of social science. In P. M. Illari, F. Russo, & J. Williamson (Eds.), Causality in the sciences (pp. 296–316). Oxford University Press.
  • Groff, R., & Greco, J. (Eds.). (2013). Powers and capacities in philosophy. The new Aristotelianism. Routledge.
  • Grosz, M. P., Rohrer, J. M., & Thoemmes, F. (2020). The taboo against explicit causal inference in nonexperimental psychology. Perspectives on Psychological Science, 15(5), 1243–1255. https://doi.org/10.1177/1745691620921521
  • Hacking, I. (1983/2005). Representing and intervening. Introductory topics in the philosophy of natural science. Cambridge University Press.
  • Hatcher, R. L., & Barends, A. W. (2006). How a return to theory could help alliance research. Psychotherapy: Theory, Research, Practice, Training, 43(3), 292–299. https://doi.org/10.1037/0033-3204.43.3.292
  • Hempel, C. G. (1942). The function of general laws in history. The Journal of Philosophy, 39(2), 35–48. https://doi.org/10.2307/2017635
  • Hernán, M. A. (2018). The C-word: Scientific euphemisms do not improve causal inference from observational data. American Journal of Public Health, 108(5), 616–619.
  • Howard, G. S. (1983). Toward methodological pluralism. Journal of Counseling Psychology, 30(1), 19–21. https://doi.org/10.1037/0022-0167.30.1.19
  • Höfler, M. (2005). The Bradford Hill considerations on causality: A counterfactual perspective. Emerging Themes in Epidemiology, 2(1), 1–9. https://doi.org/10.1186/1742-7622-2-11
  • Höfler, M., Trautmann, S., & Kanske, P. (2021). Qualitative approximations to causality: Non-randomizable factors in clinical psychology. Clinical Psychology in Europe, 3(2), 1–12. https://doi.org/10.32872/cpe.3873
  • Hume, D. (1739/1978). A treatise of human nature (L. A. Selby-Bigge, Ed.). Clarendon Press.
  • Jacobs, J. (Ed.). (2017). Causal powers. Oxford University Press.
  • Johnson, R. B. (2017). Dialectical pluralism: A metaparadigm whose time has come. Journal of Mixed Methods Research, 11(2), 156–173. https://doi.org/10.1177/1558689815607692
  • Johnson, R. B., & Schoonenboom, J. (2016). Adding qualitative and mixed methods research to health intervention studies: Interacting with differences. Qualitative Health Research, 26(5), 587–602. https://doi.org/10.1177/1049732315617479
  • Kazdin, A. E. (1998). Research design in clinical psychology (3rd ed.). Allyn & Bacon.
  • Kazdin, A. E. (2021). Single-case research designs: Methods for clinical and applied settings. Oxford University Press.
  • Kazdin, A. E. (2022). Drawing causal inferences from randomized controlled trials in psychotherapy research. Psychotherapy Research. https://doi.org/10.1080/10503307.2022.2130112
  • Kerry, R., Eriksen, T. E., Lie, S. A. N., Mumford, S. D., & Anjum, R. L. (2012). Causation and evidence-based practice. An ontological review. Journal of Evaluation in Clinical Practice, 18(5), 1006–1012. https://doi.org/10.1111/j.1365-2753.2012.01908.x
  • Kincaid, H. (2004). Contextualism, explanation and the social sciences. Philosophical Explorations, 7(3), 201–218. https://doi.org/10.1080/1386979045000258312
  • Krause, M. S., & Lutz, W. (2009). Process transforms inputs to determine outcomes: Therapists are responsible for managing process. Clinical Psychology: Science and Practice, 16(1), 73–81. https://doi.org/10.1111/j.1468-2850.2009.01146.x
  • Kuhn, T. S. (1970). The structure of scientific revolutions. Second edition, enlarged. The University of Chicago Press.
  • Kvale, S. (1996). Interviews. An introduction to qualitative research interviewing. Sage.
  • Lambert, M. J. (2013). The efficacy and effectiveness of psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change. Sixth edition (pp. 169–218). Wiley.
  • Lampropoulos, G. K. (2000). A reexamination of the empirically supported treatments critiques. Psychotherapy Research, 10(4), 474–487. https://doi.org/10.1093/ptr/10.4.474
  • Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence-based practice in psychotherapy: A common factors perspective. Psychotherapy, 51(4), 467–481. https://doi.org/10.1037/a0034332
  • Lawson, T. (1998). Economic science without experimentation. In M. Archer, R. Bhaskar, R. Collier, T. Lawson, & A. Norrie (Eds.), Critical realism: Essential readings (pp. 144–169). Routledge.
  • Leichsenring, F., Steinert, C., Rabung, S., & Ioannidis, J. P. A. (2022). The efficacy of psychotherapies and pharmacotherapies for mental disorders in adults: An umbrella review and meta-analytic evaluation of recent meta-analyses. World Psychiatry, 21(1), 133–145. https://doi.org/10.1002/wps.20941
  • Levant, R. F. (2005). Report of the 2005 Presidential Task Force on Evidence-Based Practice, American Psychological Association. Retrieved May 2020 from https://www.apa.org/practice/resources/evidence/evidence-based-report.pdf.
  • Levitt, H. M. (2021). Qualitative generalization, not to the population but to the phenomenon: Reconceptualizing variation in qualitative research. Qualitative Psychology, 8(1), 95–110. https://doi.org/10.1037/qup0000184
  • Lewis, D. (1973). Causation. The Journal of Philosophy, 70(17), 556–567. https://doi.org/10.2307/2025310
  • Lindstad, T. G. (2020). The relevance of dispositionalism for psychotherapy and psychotherapy research. In R. L. Anjum, S. Copeland, & E. Rocca (Eds.), Rethinking causality, complexity and evidence for the unique patient (pp. 179–199). Springer.
  • Illari, P., & Russo, F. (2014). Causality: Philosophical theory meets scientific practice. Oxford University Press.
  • Marmodoro, A. (Ed.). (2010). The metaphysics of powers. Routledge.
  • Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11. https://doi.org/10.3102/0013189X033002003
  • Maxwell, J. A. (2021). The importance of qualitative research for investigating causation. Qualitative Psychology, 8(3), 378–388. https://doi.org/10.1037/qup0000219
  • Maxwell, J. A., & Chmiel, M. (2014). Generalization in and from qualitative analysis. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 540–553). SAGE.
  • Meincke, A. S. (Ed.). (2020). Dispositionalism: Perspectives from metaphysics and the philosophy of science (Synthese Library 417). Springer.
  • Mumford, S., & Anjum, R. L. (2011). Getting causes from powers. Oxford University Press.
  • Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.). (2013). Evidence-based practices in mental health. Debate and dialogue on the fundamental questions. American Psychological Association.
  • Pawson, R. (2006). Evidence-based policy. A realist perspective. Sage.
  • Philips, B., & Falkenström, F. (2020). What research evidence is valid for psychotherapy research? Frontiers in Psychiatry, 11. https://doi.org/10.3389/fpsyt.2020.625380
  • Popper, K. (1959/2014). The logic of scientific discovery. Martino Fine Books.
  • Popper, K. (1978). Three worlds. The Tanner Lecture on Human Values. Delivered at the University of Michigan April 7, 1978.
  • Psillos, S. (2009). Regularity theories. In H. Beebee, C. Hitchcock, & P. Menzies (Eds.), The Oxford handbook of causation (pp. 131–157). Oxford University Press.
  • Reiss, J. (2012). Causation in the sciences: An inferentialist account. Studies in History and Philosophy of Biological and Biomedical Sciences, 43(4), 769–777. https://doi.org/10.1016/j.shpsc.2012.05.005
  • Reiss, J. (2015). A pragmatist theory of evidence. Philosophy of Science, 82(3), 341–362. https://doi.org/10.1086/681643
  • Rescher, N. (1993). Pluralism: Against the demand for consensus. Oxford University Press.
  • Richards, H. (2018). On the intransitive objects of the social (or human) sciences. Journal of Critical Realism, 17(1), 1–16. https://doi.org/10.1080/14767430.2018.1426805
  • Rocca, E., & Anjum, R. L. (2020). Causal evidence and dispositions in medicine and public health. International Journal of Environmental Research and Public Health, 17(6), 1813. https://doi.org/10.3390/ijerph17061813
  • Roth, P. A. (1987). Meaning and method in the social sciences: A case for methodological pluralism. Cornell University Press.
  • Rothwell, P. M. (2005). External validity of randomised controlled trials: “to whom do the results of this trial apply?”. Lancet, 365(9453), 82–93. https://doi.org/10.1016/S0140-6736(04)17670-8
  • Rothwell, P. M. (2006). Factors that can affect the external validity of randomised controlled trials. PLoS Clinical Trials, 1, e9. https://doi.org/10.1371/journal.pctr.0010009
  • Russo, F., & Williamson, J. (2007). Interpreting causality in the health sciences. International Studies in the Philosophy of Science, 21(2), 157–170. https://doi.org/10.1080/02698590701498084
  • Sandelowski, M., & Leeman, J. (2012). Writing usable qualitative health research findings. Qualitative Health Research, 22(10), 1404–1413. https://doi.org/10.1177/1049732312450368
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  • Slife, B. D., & Gantt, E. E. (1999). Methodological pluralism: A framework for psychotherapy research. Journal of Clinical Psychology, 55(12), 1453–1465. https://doi.org/10.1002/(SICI)1097-4679(199912)55:12<1453::AID-JCLP4>3.0.CO;2-C
  • Smedslund, J. (2015). The value of experiments in psychology. In J. Martin, J. Sugarman, & K. Slaney (Eds.), The Wiley handbook of theoretical and philosophical psychology: Methods, approaches, and new directions for social sciences (pp. 359–373). Wiley-Blackwell.
  • Smith, K., McLeod, J., Blunden, N., Cooper, M., Gabriel, L., Kupfer, C., McLeod, J., Murphie, M.-C., Oddli, H. W., Thurston, M., & Winter, L. A. (2021). A pluralistic perspective on research in psychotherapy: Harnessing passion, difference and dialogue to promote justice and relevance. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.742676
  • Stänicke & McLeod (2021). Paradoxical outcomes in psychotherapy: Theoretical perspectives, research agenda and practice implications. European Journal of Psychotherapy & Counselling, 23(2), 115–138. https://doi.org/10.1080/13642537.2021.1923050
  • Stiles, W. B. (2009a). Logical operations in theory-building case studies. Pragmatic Case Studies in Psychotherapy, 5(3), 9–22. https://doi.org/10.14713/pcsp.v5i3.973
  • Stiles, W. B. (2009b). Responsiveness as an obstacle for psychotherapy outcome research: It’s worse than you think. Clinical Psychology: Science and Practice, 16(1), 86–91. https://doi.org/10.1111/j.1468-2850.2009.01148.x
  • Stiles, W. B. (2015). Theory building, enriching, and fact gathering: Alternative purposes of psychotherapy research. In O. Gelo, A. Pritz, & B. Rieken (Eds.), Psychotherapy research. Foundations, process, and outcome (pp. 159–179). Springer.
  • Strauss, A., & Corbin, J. (1998). Basics of qualitative research. Techniques and procedures for developing grounded theory (2nd ed.). SAGE.
  • Talley, P. F., Strupp, H. H., & Butler, S. F. (Eds.). (1994). Psychotherapy research and practice: Bridging the gap. Basic Books.
  • Valsiner, J., & Brinkmann, S. (2016). Beyond the “variables”: Developing metalanguage for psychology. In S. H. Klempe, & R. Smith (Eds.), Centrality of history for theory construction in psychology (Annals of Theoretical Psychology, Vol. 14) (pp. 75–90). Springer.
  • von Wright, G. H. (1971). Explanation and understanding. Cornell University Press.
  • Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Lawrence Erlbaum Associates.
  • Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate. The evidence for what makes psychotherapy work. Routledge.
  • Wampold, B. E., & Owen, J. (2021). Therapist effects: History, methods, magnitude, and characteristics of effective therapists. In M. Barkham, W. Lutz, & L. G. Castonguay (Eds.), Bergin and Garfield’s handbook of psychotherapy and behavior change. 50th anniversary edition (pp. 297–326). Wiley.
  • Williamson, J. (2013). How can causal explanations explain? Erkenntnis, 78(S2), 257–275. https://doi.org/10.1007/s10670-013-9512-x
  • Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
  • Woodward, J. (2009). Agency and interventionist theories. In H. Beebee, C. Hitchcock, & P. Menzies (Eds.), The Oxford handbook of causation (pp. 234–262). Oxford University Press.