Review Article

Allegiance effects in cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD): a systematic review and meta-analysis

Pages 79-93 | Received 04 Jun 2023, Accepted 18 Apr 2024, Published online: 13 May 2024

ABSTRACT

Objective

We sought to determine whether there is evidence of researcher allegiance bias in the reporting of cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD).

Method

We used a reprint analysis approach – whereby papers were coded for indications of potential bias – to determine the presence and magnitude of researcher allegiance in published randomized controlled trials (RCTs) of CPT.

Results

Twenty trials met inclusion criteria. Evidence of allegiance to CPT rather than the respective comparison conditions was typically small to negligible. A meta-regression analysis of the 17 studies which included an active comparison group did not find an association between allegiance scores and study effect size for the reduction of PTSD symptoms (95% CI: −0.05, 0.19).

Conclusion

There is no evidence at present that the CPT literature has been unduly influenced by allegiance held to CPT or the comparator conditions.

Key Points

What is already known about this topic:

  1. Cognitive processing therapy (CPT) is an empirically supported treatment for posttraumatic stress disorder.

  2. Researcher “allegiance effects” can include allegiance to a given therapeutic modality.

What this topic adds:

  1. Reprint analysis indicated that researcher allegiance to CPT was small to negligible.

  2. A meta-regression of 17 included studies did not find evidence of an association between allegiance scores and study-level effect size.

Introduction

While randomized controlled trials (RCTs) have come to be considered the “gold standard” for measuring the efficacy of a novel therapy, many biases can compromise their integrity. These biases include, at the study level, selection bias and response bias, and, in the broader literature, publication bias. Such biasing factors can distort the evidence presented either within a given paper or via the decision to publish or not publish results, and thus diminish the academic value of a study’s outcomes.

A researcher’s own belief in the superiority of a given therapy included in a study, dubbed “researcher allegiance” (RA), is a key bias that can compromise clinical trials. The therapeutic allegiance of the researcher was first referred to in the psychotherapy literature by Luborsky (1975), who posited that a researcher could inadvertently distort the true effects of psychotherapies depending on their pre-existing belief in the superiority of one of the treatments. In this way, the researcher’s allegiance may lead to an “allegiance bias”, whereby allegiance partially or wholly accounts for treatment effects (Leykin & DeRubeis, 2009).

Definitions of allegiance have varied since this initial hypothesis was proposed. Hollon (1999) described allegiance as an expression of both interest and expertise in one treatment, and neither in the alternative treatment. More recently, Yoder et al. (2019) defined it as a form of conflict of interest in which a higher level of skill or training in one treatment leads to reduced objectivity. As these definitions imply, allegiance can stem from vested interests, such as financial or reputational gain if a study’s results were to support the superiority of a given treatment. If the developer of a given treatment conducted a clinical trial evaluating that treatment, they may favour results that support their therapy. However, allegiance can also stem from more innocuous sources, such as convenience or availability, and thereby have unintended consequences. A researcher can bias a study by selecting a less effective competing treatment, by providing more extensive training to therapists in one condition than the other, by selecting therapists who have had more exposure to or greater skill in one therapy, or by instructing therapists to implement the therapies with different levels of fidelity to the original protocols (Dragioti et al., 2015).

Mechanisms and measurement

With reference to the mechanisms underlying allegiance’s effect on study outcomes, McLeod (2009) suggested that the therapist’s or investigator’s allegiance leads to better treatment protocol adherence and competence through more effective training or greater motivation to learn the treatment, and that investigators may intentionally select highly motivated therapists. Additionally, Yoder et al. (2019) hypothesised that researcher allegiance can also affect outcomes via publication bias, positing that an allegiant author is more likely to elect not to publish the results of a “negative” trial, in which no significant difference was found between the study intervention and control. Yoder et al. (2019) state in their published protocol that they intend to examine this hypothesis by analysing grant applications for indicators of allegiance and comparing rates of non-publication of negative results; however, their findings are yet to be published.

At present, allegiance is very rarely explicitly reported in published research; Dragioti et al. (2015) investigated disclosure of allegiance in published meta-analyses and RCTs and reported that 793 out of 1198 RCTs were “allegiant”, yet only 25 disclosed allegiance. Given this low rate of explicit self-reporting, various methods of coding and measuring allegiance have been devised. Leykin and DeRubeis (2009) used four distinct methods to measure allegiance: “reprint analysis”, which examines attributes in a paper’s introduction and methodology that indicate likely allegiance; examination of previous publications by the authors that may indicate a pre-existing bias; interviews with colleagues of the authors; and interviews with the authors themselves. Among these methods, reprint analysis would seem to be the most valid: compared to the examination of previous papers by authors in a given area, it allows an explicit weighting of each of a range of indicators of potential allegiance (not just past publications), as opposed to a somewhat subjective interpretation of whether previous papers indicate an allegiance. The possibility of reliably identifying allegiance simply through interviewing an author, or a colleague of an author, who may be unconscious of their own bias or reluctant to disclose it seems questionable. Reprint analysis is also the most feasible approach, given that it relies on a composite of indicators available in the published text.

With respect to reprint analysis, inferences about author allegiance are necessarily indirect. Authors and “experts” who have previously published papers outlining the development of a therapeutic intervention could be assumed to have a greater than chance likelihood of endorsing or showing allegiance to an approach in which they have invested substantial time and stand to gain career-related benefits. Likewise, if research team members receive more training in one approach than another, it may reflect an implicit favouritism towards delivering one approach correctly, with a potential assumption that quality control is less critical for a comparator arm.

Evidence for researcher allegiance

Mixed conclusions have been drawn in meta-analyses and reviews examining whether allegiance is associated with treatment outcomes (Munder et al., 2013). One early study by Luborsky et al. (1999) reported that 69% of the variance in outcomes could be attributed to a researcher’s allegiance. However, more recent meta-analyses have yielded inconsistent findings. S. Miller et al. (2008) found significant RA effects in studies of treatments for mental disorders in youth populations, such that entering RA as a covariate reduced heterogeneity in outcomes. However, Tolin (2010), using a similar methodology to investigate studies of cognitive behavioural therapy (CBT) for a range of mental disorders, including anxiety and depression, found that entering RA as a covariate had no significant effect on outcome. Notably, several studies have investigated RA effects across a variety of interventions, potentially obfuscating existing biases by inappropriately combining highly heterogeneous data (Leykin & DeRubeis, 2009).

Several reviews have been more focused with respect to therapy modality. Gaffan et al. (1995) conducted a re-analysis of a meta-analysis of CBT interventions for depression. They concluded that while CBT appeared to be consistently superior in efficacy, half the difference in effect size between CBT and other interventions could be predicted by RA. Using a similar methodology, Goldberg and Tucker (2020) conducted a meta-re-analysis of mindfulness-based interventions (MBIs). While the original meta-analysis (Goldberg et al., 2018) had reported superiority of MBIs compared to other evidence-based treatments, Goldberg and Tucker found that RA was associated with the reported superiority of MBIs and, further, that MBIs did not show superior efficacy where RA was absent or balanced. In the field of PTSD research, Munder et al. (2012) reported a significant RA-outcome association in a meta-analysis of studies of trauma-focused therapy, which explained 12% of the variance in outcomes.

However, even in studies of narrower scope, some findings have been directly contradictory; several of the aforementioned studies detected an association between RA and outcomes in depression-focused therapies, while other meta-analyses failed to establish a similarly robust association. Gaffan et al.’s (1995) meta-re-analysis of cognitive therapy for depression found that in older studies (pre-1987) half of the variation in outcomes was predictable from RA, whereas more recent studies did not demonstrate a strong allegiance effect, and Tolin’s (2010) meta-analysis of CBT for depression found that even when controlling for allegiance, CBT continued to be associated with significantly stronger treatment effects. This suggests that certain psychotherapy literatures may be more prone to RA than others, with mixed findings regarding treatments for depression, and consistent but limited results in other fields, such as PTSD.

In addition to meta-analyses and reviews, a small number of experimental designs have examined allegiance, such as Wilson et al.’s (2011) clinical trial conducted across two sites with self-identified differing allegiances. They found no difference in treatment outcomes between the two sites, indicating no apparent effect of allegiance on outcome.

Given the varied findings across the allegiance literature, it continues to be important to investigate this potentially significant bias in both new and established therapies. “Gold standard” treatments supported by RCTs in many respects have a rigorous evidence base. However, in the clinical psychology literature, there are numerous established therapies for which a non-trivial proportion of the evidence base comprises studies from one particular author or author group. Without investigation of allegiance and other biases, it remains possible that treatment effects are over-estimated.

Researcher allegiance in the posttraumatic stress disorder literature

The findings regarding researcher allegiance may thus vary according to the measurement approach and the domain of the literature assessed, with relatively less consistent evidence for researcher allegiance effects in studies of the depression literature (e.g., Gaffan et al., 1995; Tolin, 2010). With respect to posttraumatic stress disorder (PTSD), Munder et al. (2012) reported a robust RA-outcome association in a meta-analysis of studies of trauma-focused therapy for PTSD, suggesting that the literature in this area may be prone to bias. Interestingly, Munder et al.’s (2013) review of meta-analyses found that the strength of the association between RA and outcome did not vary depending on how RA was measured, suggesting that various methods of coding may be equally capable of detecting allegiance. As such, there may be value in further investigating whether the PTSD literature is characterised by high rates of RA, including in more recently developed interventions.

Cognitive processing therapy (CPT) was developed in the late 1980s by Resick and has in recent decades come to be considered an efficacious treatment for PTSD on the basis of its successful symptom reduction in controlled trials (Asmundson et al., 2019). Not only is it a “gold-standard” intervention, it is also increasingly recommended in treatment guidelines (e.g., National Institute for Health and Care Excellence [NICE], 2018; Phelps et al., 2022). Given that CPT has gained such widespread acceptance as an effective treatment approach, it is important to establish that its evidence base has not been undermined by research biases. While different iterations of CPT have been developed, it remains a relatively homogeneous and well-defined therapeutic approach compared to the wide range of variations on other empirically supported approaches, such as prolonged exposure therapy. For this reason, CPT serves as a good candidate for the examination of allegiance effects in the PTSD literature: its lead proponents and developers can be identified relatively reliably compared with other, more diffuse, therapeutic approaches that have emerged from multiple research traditions.

The present study examined the association between researcher allegiance and the outcomes of RCTs assessing the efficacy of CPT for PTSD. As the CPT literature has not previously been examined in this way, this paper aimed to provide insight into the role of RA within this specific field of research, as well as to contribute to a broader understanding of RA bias in psychotherapy research. CPT was selected as the therapy of focus on the basis that it emerged in recent decades and has come to be considered an efficacious intervention for PTSD (Asmundson et al., 2019; Jericho et al., 2021). To date, no other study has examined the association between RA and outcome measures in the CPT literature.

The existing allegiance literature has provided mixed results regarding allegiance-outcome associations. Without assuming directional hypotheses, the present study intended to explore whether researcher allegiance was associated with direction and magnitude of CPT treatment outcomes (reduction in PTSD symptoms).

Method

This study was pre-registered with PROSPERO, registration ID: CRD42021226145. The review was conducted in line with the PRISMA recommendations, and the PRISMA reporting checklist is included in Supplementary Material.

Inclusion and exclusion criteria

The present study aimed to capture all English-language RCTs that included CPT as a treatment group. Inclusion criteria were: publication in a peer-reviewed journal; study reported in English; inclusion of at least one non-CPT control or comparison group, with participants randomly allocated to condition; use of a structured or semi-structured diagnostic interview; all participants meeting full criteria for PTSD (given the time-frame of the included studies, participants met DSM-III-R, DSM-IV, or DSM-5 criteria); and use of a validated measure for the reporting of PTSD symptom outcomes. We required all participants in each included study to meet criteria for PTSD on the grounds that CPT was developed for the treatment of PTSD, and because studies of a clinical intervention such as CPT using non-clinical samples would likely be a minority in our overall pool of included studies and would introduce unnecessary methodological heterogeneity into the review. We also wanted to ensure that our clinical outcome variable would not be affected by floor effects or a restriction of range, as could occur with non-clinical samples. Studies which reported secondary data were excluded.

Search strategy

For the present review and meta-analysis, a literature search was conducted on 24 February 2021 and repeated on 9 September 2023 using Ovid, including the PsycInfo, Medline, and Embase databases (see full details of the search in Supplementary Material). A research librarian was consulted, and search terms were generated using the PICO framework (S. A. Miller & Forrest, 2001). Our search strategy used the search terms: “cognit* process* therapy” AND “post traumatic stress” OR “PTSD” OR “post-traumatic stress” OR “posttraumatic stress”.

Screening of records and data extraction

See Figure 1 for an overview of the study inclusion process.

Figure 1. PRISMA flow diagram of the study selection process.


The original 2021 systematic search returned a total of 1233 results, 577 of which were identified as duplicates (by Covidence software) and removed, leaving 656 records. The remaining records progressed to title-abstract review, which was completed by the author CM and an additional reviewer. Of the 656 records, 533 (81.25%) were excluded by both reviewers, 65 (9.91%) were included by both reviewers, and 58 (8.84%) were in conflict, yielding a 91% agreement rate and a Cohen’s kappa of 0.64 (considered a “substantial” level of agreement by Landis & Koch, 1977). Conflicts were resolved through discussion between reviewers, and a total of 90 papers were included for full-text review.
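The agreement statistics above can be reproduced from a 2 × 2 table of the two reviewers’ include/exclude decisions. The sketch below assumes, purely hypothetically, that the 58 conflicting records split evenly between the two reviewers (the paper reports only the total conflict count, not the direction of each disagreement):

```python
# Sketch: percent agreement and Cohen's kappa from a 2x2 decision table.
# The 29/29 split of the 58 conflicts is a hypothetical assumption.

def kappa_from_table(both_include, both_exclude, a_only, b_only):
    """Cohen's kappa for two raters making binary include/exclude decisions."""
    n = both_include + both_exclude + a_only + b_only
    p_observed = (both_include + both_exclude) / n
    # Marginal proportions of "include" for each reviewer
    p_a = (both_include + a_only) / n
    p_b = (both_include + b_only) / n
    # Chance agreement: both say include, or both say exclude
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return p_observed, (p_observed - p_chance) / (1 - p_chance)

agreement, kappa = kappa_from_table(65, 533, 29, 29)
print(round(agreement, 2), round(kappa, 2))  # 0.91 0.64
```

Under this assumed split, the reported 91% agreement and kappa of 0.64 are recovered exactly.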

Full-text review was conducted by author CM. A total of 14 papers were included for data extraction and analysis. For a summary of study characteristics, see Table 1.

Table 1. Study characteristics.

This search was repeated in September 2023, returning a total of 1880 results, 1485 of which were identified as duplicates and removed, leaving 395 studies. These records were progressed to title-abstract review which was completed by CM. Thirty-four papers were included for full-text review. Six new papers were included for data extraction and analysis, in addition to the original 14 that were included in 2021. A total of 20 papers were included. Data were extracted (means and SDs for post-treatment PTSD scores) by CM and checked by DB.

Coding procedures

As described earlier, many methods for coding allegiance have been devised, each of which may adequately capture an allegiance-outcome association (Munder et al., 2013), though they differ in the time and resources required. Given the feasibility and time-efficiency of “reprint analysis” compared to contacting authors or their colleagues, much recent allegiance bias research has utilised this method (e.g., Goldberg & Tucker, 2020), whereby the content of the publication is analysed and features indicative of allegiance are coded, providing an allegiance “score”. Gaffan et al. (1995) reported that researcher allegiance has frequently been coded on a two- or three-point scale (positive and negative, or positive, negative, and neutral). Highly simplified approaches to coding allegiance based on single-item assessment of authorship have also been used (e.g., Manea et al., 2017).

Allowing specific details that indicate both positive and negative allegiance to be recorded and synthesised as a continuous score, Yulish et al. (2017) devised a 7-item rating scale that covers authorship, as well as supervision of therapists and any alterations to treatment manuals. These items attempt to target mechanisms of allegiance bias, rather than its possible consequences as targeted by Gaffan et al. (1995). For the present study, Yulish et al.’s (2017) rating scale was selected as it has been used multiple times within the allegiance literature (Goldberg & Tucker, 2020). This rating scale (see Table 2) comprises seven items that code indicators of both “positive” and “negative” allegiance to each therapy included in a study. The authors carefully reviewed each included study and discussed examples of potential allegiance. For example, the following was considered an instance of “advocating” for a particular treatment (adaptive disclosure [AD] in this instance): “An open trial showed that AD was well-received, well-tolerated and led to large effect size reductions in PTSD” (page 3 of Litz et al., 2021).

Table 2. Allegiance scoring tool.

Unlike other rating scales, it includes items that may indicate a non-preference for a given therapy, such as therapists being proscribed from conducting activities that would be considered a routine part of the therapy. It also allows for the rating of both included therapies, as recommended by Yoder et al. (2019). To assist with the interpretation of results, we considered the difference score (i.e., the allegiance score for CPT minus the allegiance score for each respective comparator) to be “small” if it ranged from 0 to 2, “medium” if it ranged from 3 to 4, and “large” if it was greater than 4. These thresholds reflect that rarely would all seven allegiance items be endorsed in any one treatment arm (yielding a score of 7), and that differences between treatment arms would rarely approach the maximum possible in any given arm. While ultimately arbitrary, this categorisation is slightly more conservative than that used by Munder et al. (2011), in that a “large” level of researcher allegiance required a greater degree of discrepancy between allegiance scores for the treatment and comparator arms in the present study.
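The categorisation described above can be expressed as a small helper function. This is an illustrative sketch only; the function name is ours, and the use of the absolute value (so that allegiance favouring either arm is categorised symmetrically) is our assumption rather than something the scoring tool specifies:

```python
def categorise_allegiance_difference(diff):
    """Classify a CPT-minus-comparator allegiance difference score.

    Thresholds follow the scheme in the text: 0-2 small, 3-4 medium,
    >4 large. Taking the absolute value (an assumption) treats a
    difference favouring the comparator the same as one favouring CPT.
    """
    d = abs(diff)
    if d <= 2:
        return "small"
    if d <= 4:
        return "medium"
    return "large"
```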

Data were extracted by the first author. Intent-to-treat (ITT) analyses were favoured whenever reported (n = 10 estimates were ITT). Outcome data were extracted based on availability and measure validity. The primary outcome extracted was the CAPS or CAPS-5 for most studies; however, not all studies reported this measure at pre- and post-intervention. For these studies, the PTSD Checklist-Specific (PCL-S), Modified PTSD Symptom Scale (MPSS-SR), or Impact of Event Scale-Revised (IES-R) was extracted, depending on which primary outcome measure was reported in the study.

Coding for the researcher allegiance scores was completed by the first author and a colleague with 100% agreement. Where an item was not able to be answered based on the information provided in the published text, it was scored 0.

Meta-analysis

A random effects meta-analysis was conducted whereby posttreatment means and pooled standard deviations were used to generate treatment outcome effect sizes between the CPT and comparator conditions using Comprehensive Meta-Analysis version 3 (CMA; Borenstein et al., 2013). In line with current recommended practice, effect sizes were calculated from posttreatment means and standard deviations, rather than pre-post intervention differences (Cuijpers et al., 2017). Where standard error (SE) was reported instead of standard deviation (SD), the SE was converted to SD, and where confidence intervals were reported but not SD, the confidence intervals were converted to SD prior to synthesis of the results (see Higgins et al., 2023). Standardised mean differences were converted to Hedges g as a bias-corrected estimate of effect size that could then be associated with the calculated researcher allegiance scores (a summary of the extracted data is available at: https://osf.io/ntsmy/). A meta-regression analysis was then conducted to determine whether the calculated researcher allegiance scores explained heterogeneity in treatment outcome among studies. Only allegiance, and no other study characteristics, was included as a predictor variable in these analyses, given the relatively low statistical power for the regression (20 studies in total).
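The conversions and effect-size calculation described above can be sketched as follows. These are the standard formulas for two-group posttreatment comparisons; the function names and the illustrative numbers are ours, not values from the included studies:

```python
import math

def se_to_sd(se, n):
    """Invert the standard-error formula: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def ci_to_sd(lower, upper, n, z=1.96):
    """Recover SD from a 95% CI on a mean: CI width = 2 * z * SD / sqrt(n)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

def hedges_g(mean_ctrl, sd_ctrl, n_ctrl, mean_cpt, sd_cpt, n_cpt):
    """Bias-corrected standardised mean difference at posttreatment.

    Defined here so that positive values favour CPT when lower scores
    indicate fewer PTSD symptoms (comparator mean minus CPT mean).
    """
    sd_pooled = math.sqrt(
        ((n_ctrl - 1) * sd_ctrl**2 + (n_cpt - 1) * sd_cpt**2)
        / (n_ctrl + n_cpt - 2)
    )
    d = (mean_ctrl - mean_cpt) / sd_pooled
    j = 1 - 3 / (4 * (n_ctrl + n_cpt - 2) - 1)  # small-sample correction
    return j * d

# Illustrative: a 10-point difference with pooled SD 10 and n = 50 per arm
print(round(hedges_g(60, 10, 50, 50, 10, 50), 3))  # 0.992
```

The correction factor j shrinks Cohen's d slightly, which matters most for the smaller trials in a pooled analysis.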

Results

Following the exclusion of unsuitable studies, 20 studies were included in the review. A study by El Barazi et al. (2022) was excluded on the basis that it compared CPT against a pharmacological intervention, precluding a valid assessment of allegiance for the pharmacological condition. The 20 included studies comprised a total of 3036 participants. Seven of the included studies used mixed trauma samples, nine included only veteran participants, two included only victims of child sexual abuse, one included only victims of adult sexual assault, and one included only victims of military sexual assault. Seventeen studies included an active intervention comparison condition, with some form of exposure therapy being the most common (four studies), and three studies included a waitlist control or otherwise non-active comparison condition.

Scores of allegiance to CPT, without consideration of the respective comparator conditions, varied between 0 and 4 (mean = 1.60). Allegiance to non-CPT conditions ranged between −1 and 3 (mean = 0.55). See Supplementary Material for an item-by-item summary of the scores for each study. This lower level of allegiance to the comparator conditions is consistent with the nature of the included studies, the majority of which included CPT as the experimental condition and whose authors may therefore have had a “stake” in its success.

When difference scores were calculated between CPT allegiance and allegiance scores for each respective comparison condition for the 17 studies which compared CPT to an active control condition (as opposed to wait-list), the mean difference score was 0.88 (SD = 1.65), corresponding to a small sized overall allegiance effect per paper.

Overall, the items that accrued the most positive scores were items 1, 2, and 3, relating to authorship and supervision/training. The quality and quantity of supervision and training, as well as authorship and authors supervising therapists in their own studies, therefore showed the most evidence of difference between conditions, such that training was sometimes more extensive or more closely supervised in the CPT condition than in the comparator condition.

Meta-analysis

The random effects meta-analysis of the 20 included studies indicated a medium-to-large effect size for reductions in PTSD symptoms following CPT (Hedges g = 0.40, SE = 0.12). A forest plot of study effect sizes is provided in Figure 2. Given that the determination of publication bias was not a priority for the present analyses, a funnel plot is not presented here but is available in Supplementary Material. When we repeated the meta-analysis based only on the 17 studies which included an active intervention (including “treatment as usual”), the effect size (Hedges g) favouring CPT was 0.25 (SE = 0.11). When we repeated the analyses based only on the 17 of the 20 included studies which used structured or semi-structured interviews to assess PTSD symptoms, Hedges g was 0.42 (SE = 0.14).
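A random-effects pooled estimate of the kind reported above can be sketched with the DerSimonian-Laird estimator, a common default for this analysis. The effect sizes and variances below are synthetic, chosen only to illustrate the mechanics, and we make no claim that CMA uses exactly this estimator:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect and SE via DerSimonian-Laird tau^2."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se

# Synthetic example: identical study effects pool to that common effect
pooled, se = dersimonian_laird([0.4, 0.4, 0.4], [0.02, 0.03, 0.05])
print(round(pooled, 2))  # 0.4
```

When observed heterogeneity exceeds what sampling error alone would produce, tau² grows and the weights flatten toward equality across studies.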

Figure 2. Forest plot of effect sizes for cognitive processing therapy (N = 20 studies).

The posttreatment scores for Sloan et al. (2018) were at 12 weeks for cognitive processing therapy (CPT) and 6 weeks for written exposure therapy (WET), given that WET was only a 6-week intervention.
The above forest plot includes 17 studies with an active comparison group (including treatment as usual; TAU) and three studies with a waiting list comparison group. The summary effect size should thus be interpreted with caution.
AD = adaptive disclosure; CPT = cognitive processing therapy; DBT = dialectical behaviour therapy; PCT = present-centered therapy; PE = prolonged exposure; TAU = treatment as usual; WA = written accounts; WET = written exposure therapy; WL = waiting list.

The next step was to determine whether ratings of allegiance (on a scale of −4 to 3) were associated with treatment outcomes (Hedges g for each study). A meta-regression was conducted whereby the study effect size (Hedges g) was regressed on the researcher allegiance difference score (i.e., the allegiance score for CPT minus the allegiance score for the control/comparison condition). No other predictor variables were included, to preserve statistical power. The coefficient for the study allegiance difference score in predicting effect size was not significantly different from zero (95% CI: −0.02, 0.26), suggesting that allegiance was not associated with study effect size. See Figure 3 for a regression plot of effect size on allegiance difference scores. Likewise, when the CPT allegiance score alone (rather than the difference score) was considered, the results were also not significant (95% CI: −0.02, 0.26). When we repeated the regression analysis without the three studies which included non-active control comparisons, the results for the remaining 17 studies also indicated that allegiance was not associated with study effect size (95% CI: −0.05, 0.19). The regression coefficient of 0.07 for these 17 studies indicates that study effect size changes by only 0.07 for every 1-unit increase in the researcher allegiance difference score.
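The meta-regression step can be sketched as an inverse-variance weighted least-squares fit of study effect size on the allegiance difference score. For brevity this sketch omits the between-study variance component that a full random-effects meta-regression would add to each weight, and the data are synthetic, generated with a known slope so the fit recovers it:

```python
import numpy as np

def weighted_meta_regression(g, variances, x):
    """WLS intercept and slope for effect size regressed on a moderator,
    weighting each study by the inverse of its sampling variance."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.ones_like(x), x])      # intercept + moderator
    W = np.diag(1 / np.asarray(variances, dtype=float))
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.asarray(g, dtype=float))
    return beta  # [intercept, slope]

# Synthetic allegiance difference scores and exactly linear effect sizes
x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
g = 0.25 + 0.07 * x
beta = weighted_meta_regression(g, [0.02, 0.03, 0.02, 0.04, 0.05], x)
print(np.round(beta, 2))  # [0.25 0.07]
```

A slope of 0.07, as in the 17-study analysis, would mean each 1-unit increase in the allegiance difference predicts only a 0.07 shift in Hedges g.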

Figure 3. Regression plot of effect size on allegiance difference scores.


We conducted three sensitivity analyses to determine whether variations in the calculation of the total score for the allegiance measure changed the association between allegiance and treatment outcome. First, we repeated the analyses with the allegiance total score derived only from the positive allegiance items. Next, we repeated the analyses focusing only on the allegiance scores for CPT (not the comparator/control). Finally, we repeated the analyses after excluding items 1 (whether the author advocates for or developed the treatment) and 5 (whether the supervisor is a recognised expert in the treatment), in case our coding of expert status lacked reliability. In each case, there was no significant association between allegiance score and study effect size. A detailed summary of the results of the sensitivity analyses is reported in Supplementary Material.

Discussion

While allegiance bias has now been investigated across a variety of treatment literatures, this is the first investigation of allegiance bias in the CPT literature. The current study focused on RCTs of CPT given the methodological advantages of the random allocation of participants to CPT or comparator conditions.

A reprint analysis of the included papers found that allegiance scores for CPT were typically greater than those for the respective comparison conditions, but indicated only a small overall allegiance effect when considered as a difference score. Overall allegiance effect scores were driven largely by papers in which the authors were developers of CPT but not of the comparator therapy. The reported supervision and training characteristics of each study also contributed to the scores. With this in mind, efforts to counter allegiance effects might aim to ensure that treatment approaches are rapidly disseminated to non-developer researchers for further efficacy testing, or that comparisons with existing interventions involve developers or proponents of each approach. The non-inferiority trial of Sloan et al. (2018) – in which established experts and proponents of each intervention served as supervisors for each arm – serves as an example of how this might be done.

Our meta-analysis identified no significant relationship between researchers’ allegiance to CPT and its performance in reducing PTSD symptoms in RCTs. Likewise, researchers’ allegiance to the various non-CPT comparator therapies was not associated with therapeutic outcomes. The study therefore failed to detect an “allegiance effect” in either direction: positive bias towards CPT or negative bias against its comparators. Given these findings, there is no evidence that the existing body of clinical research investigating the effects of CPT on PTSD is biased by researcher allegiance, which provides confidence that the benefit derived from CPT in trials is an effect of the therapy itself rather than an artefact of researcher allegiance.

The present study’s failure to find an association between researcher allegiance scores and study outcomes is not an isolated finding. While numerous studies have found a robust association between allegiance and outcome, others have not: Tolin (2010), for example, investigating allegiance in studies of CBT for depression, did not find that allegiance contributed strongly to treatment effects. At the experimental level, Wilson et al.’s (2011) clinical trial, conducted across two sites with differing allegiances, found no difference in treatment outcomes between the sites. As with all null findings, it is important to note that this absence of evidence is not evidence of absence, particularly in allegiance bias research, where measurement is inherently difficult. It does not prove that allegiance is absent from the literature base investigated here; rather, it provides no evidence for concern about the impact of researcher allegiance bias in this literature to date. We acknowledge that our review may have overestimated effect sizes, as only English-language articles were included (Egger et al., 1997), and there is evidence that systematic reviews excluding non-English articles may be of lower quality (see Moher et al., 2003). Future reviews of allegiance should endeavour to include non-English articles.

Regarding existing research on allegiance bias in the PTSD literature, the present study contrasts with Munder et al.’s (2012) meta-analysis of trauma-focused therapies for PTSD, which found that allegiance was a significant moderator of treatment outcomes, accounting for 12% of variance. However, Munder et al.’s review included only two of the 20 CPT trials examined in the present review. It is therefore possible that CPT’s structured, manualised approach is a protective factor against allegiance bias relative to other trauma-focused therapies.

Notably, applying both positive and negative allegiance scoring items to the CPT condition and to the comparator therapy allowed directional allegiance to be assessed for both therapies. Rating the researcher’s allegiance to both interventions, so that directional or balanced allegiance within a study can be detected, was encouraged by Yoder et al. (2019) and was therefore implemented in the present study, broadening the findings to estimates of allegiance to both interventions. The null finding for both CPT and the comparator therapies indicates not only that there is no evidence that researchers biased conditions in favour of their preferred therapy, but also that there is no evidence that they designed studies to undermine the comparator condition by implementing that therapy less rigorously.

While “reprint analysis” to estimate allegiance has become standard in the researcher allegiance literature and was used in the present study, Leykin and DeRubeis (2009) cautioned against the method. They note that authors frequently write a study’s manuscript after seeing the results, so the reprint method may detect the author’s post-result allegiance rather than a true pre-existing allegiance to the intervention. Berman et al. (1985) tested this assertion by examining allegiance in researchers’ current and previous publications: if allegiance were influenced by the findings of the current paper, markers of allegiance in that paper would be expected to differ from those in the author’s previous publications. No significant difference was found, suggesting that researcher allegiance remained stable and supporting the reprint method as an effective measure. However, it remains possible that, where a paper’s introduction and method were written after the results were known, the results influenced the content of these sections. It is therefore recommended that this post hoc writing practice be avoided and that best-practice guidelines be adhered to, so that results do not unduly influence a paper’s reporting.

One inherent challenge of the allegiance bias literature is the difficulty of defining and measuring allegiance. Any measurement of allegiance based on scoring possible indicators within a publication is an estimate rather than a factual statement of allegiance. As no tool for measuring allegiance has yet been empirically validated, it could be questioned whether the method used for scoring allegiance was sufficiently sensitive to detect an allegiance effect in the present review. This limitation is compounded by an absence of detailed coding instructions and descriptors for some items that are difficult to quantify (e.g., how strongly must an author be “advocating” an approach for the advocacy item to be endorsed?). However, the same tool was used by Yulish et al. (2017) and Goldberg and Tucker (2020), who reported that outcomes of problem-focused therapies for anxiety and of mindfulness-based therapies, respectively, could be predicted by allegiance, suggesting that the scoring tool has been sufficiently sensitive to capture this construct in previous studies. It is therefore unlikely that the present study’s failure to find an effect is due to the selection of this scoring tool. Additionally, the use of reprint analysis is consistent with current best practice in researcher allegiance studies, as alternative methods rely on self-report or colleague-report, which fail to capture the potentially implicit nature of allegiance and may underestimate allegiance owing to reluctance to self-disclose bias. It is therefore recommended that reprint coding systems continue to be refined and validated, as Yoder et al. (2019) reported they would undertake in their published protocol. It will also be important that thresholds for the extent of allegiance are validated for any improved tool, rather than remaining arbitrary as they were in the present study. The validation of an allegiance scoring tool would provide significant safeguards for future allegiance research, ensuring that an association can be detected if present and increasing confidence in the accuracy of such studies.

While the selected tool for scoring allegiance had been used previously, it presented challenges in answering all items accurately, primarily where the required data were not reported in the respective publications, leaving several items with too little information to score. For example, only one of the included studies explicitly described its supervisors as “non-expert” in any given therapy (Resick et al., 2002; item 5 in the scoring tool). Neither Yulish et al. (2017) nor Goldberg and Tucker (2020) reported operationalising this item in any way other than self-reported expert status in-text, so, for consistency with previous use of the tool, this item was scored zero for all other studies, as it was not possible to establish with confidence that a supervisor was not of expert skill. Additionally, few studies provided complete descriptions of the type and amount of supervision provided to therapists in each condition; items 2 and 3, relating to supervision, therefore could not always be accurately assessed, and variation may have been underestimated.

As a result of these reporting practices, which made researcher allegiance difficult to quantify, more items were likely scored zero than would have been had all data been reported. While an absence of information about allegiance is not strictly equivalent to a lack of allegiance, allocating a zero score where information was insufficient is conservative in the sense that it does not actively contribute to a total score indicating possible allegiance. These zero scores appear to have contributed to the limited range of total scores (−1 to 4), indicative of a trend towards conservative ratings that may have failed to capture the full variance in allegiances held by the respective study authors. This limitation reflects the insufficient information provided in many publications about therapy supervisors’ level of expertise and experience, and about manual use and therapy instruction. It is therefore recommended that future comparative studies report these details to aid transparency regarding therapeutic practices.

There have been a range of responses to previous findings of allegiance effects. Some authors, including Berman and Reich (2010), have suggested that if an allegiance effect is present it should be statistically corrected for in RCTs, while others (e.g., Wilson et al., 2011) have emphasised reducing the potential for allegiance bias in the first instance. It has been noted that attempting to statistically correct for an association that is not present may introduce bias rather than remove it (Gaffan et al., 1995). The present study’s failure to find an association between allegiance and outcome supports the view that, if an allegiance bias is only sometimes present, generalised recommendations of statistical correction may introduce further bias. Rather, it is recommended that controlled trials be held to high reporting standards, ensuring integrity and transparency in reporting practices, using guides such as the Journal Article Reporting Standards (JARS; Appelbaum et al., 2018). This would also allow more accurate estimates of allegiance to be derived from reprint analysis, which remains the most feasible and efficient method of coding allegiance. Additionally, Munder et al. (2011) found researcher allegiance–outcome associations to be significantly lower in comparisons with high internal validity, noting that allegiance was more predictive of outcomes where there were deficits in experimental control. As such, allegiance is not only more easily measured when rigorous reporting practices are followed; these practices may themselves be a defence against the biasing effects of allegiance.

Summary and recommendations

CPT is an increasingly recommended treatment for PTSD and appears to produce significant therapeutic outcomes and symptom reduction. However, as with all emerging therapies, it is important that biases such as researcher allegiance be considered with respect to the evidence base. The current study did not find an association between researcher allegiance and therapy outcomes, whether in the direction of creating favourable conditions for CPT or unfavourable conditions for the comparator, increasing confidence that the outcomes achieved by CPT in clinical trials are a result of the therapeutic intervention and not an artefact of author allegiance. Nevertheless, there remains a need to better operationalise and measure author allegiance, and future trials, both in the CPT literature and in psychotherapy research more broadly, should implement transparent and thorough reporting practices, not only to allow more accurate estimates of allegiance but also to guard against its potentially biasing effects.

Author contributions

CM devised search terms, conducted the literature search, selected included articles, summarised and synthesised the data and drafted the paper. DB conducted the meta-analysis, assisted with the drafting of the paper and approved the final version of the paper for submission.

Acknowledgments

The authors would like to thank Brittany Smith who served as a co-rater for the inclusion of studies and the calculation of allegiance scores.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data used in the present study are available from the authors upon request.

Supplemental data

Supplemental data for this article can be accessed at https://doi.org/10.1080/13284207.2024.2347643.

References

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA publications and communications board task force report. American Psychologist, 73(1), 3–25. https://doi.org/10.1037/amp0000191
  • Asmundson, G. J., Thorisdottir, A. S., Roden Foreman, J. W., Baird, S. O., Witcraft, S. M., Stein, A. T., Smits, J. A. J., & Powers, M. B. (2019). A meta-analytic review of cognitive processing therapy for adults with posttraumatic stress disorder. Cognitive Behaviour Therapy, 48(1), 1–14. https://doi.org/10.1080/16506073.2018.1522371
  • Berman, J. S., Miller, R. C., & Massman, P. J. (1985). Cognitive therapy versus systematic desensitization: Is one treatment superior? Psychological Bulletin, 97(3), 451–461. https://doi.org/10.1037/0033-2909.97.3.451
  • Berman, J. S. & Reich, C. M. (2010). Investigator allegiance and the evaluation of psychotherapy outcome research. European Journal of Psychotherapy and Counselling, 12(1), 11–21. https://doi.org/10.1080/13642531003637775
  • Bohus, M., Kleindienst, N., Hahn, C., Müller-Engelmann, M., Ludäscher, P., Steil, R., Fydrich, T., Kuehner, C., Resick, P. A., Stiglmayr, C., Schmahl, C., & Priebe, K. (2020). Dialectical behavior therapy for posttraumatic stress disorder (DBT-PTSD) compared with cognitive processing therapy (CPT) in complex presentations of PTSD in women survivors of childhood abuse: A randomized clinical trial. JAMA Psychiatry, 77(12), 1235–1245. https://doi.org/10.1001/jamapsychiatry.2020.2148
  • Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2013). Comprehensive meta-analysis version 3. Biostat.
  • Butollo, W., Karl, R., König, J., & Rosner, R. (2016). A randomized controlled clinical trial of dialogical exposure therapy versus cognitive processing therapy for adult outpatients suffering from PTSD after type I trauma in adulthood. Psychotherapy and Psychosomatics, 85(1), 16–26. https://doi.org/10.1159/000440726
  • Chard, K. M. (2005). An evaluation of cognitive processing therapy for the treatment of posttraumatic stress disorder related to childhood sexual abuse. Journal of Consulting and Clinical Psychology, 73(5), 965. https://doi.org/10.1037/0022-006X.73.5.965
  • Cuijpers, P., Weitz, E., Cristea, I. A., & Twisk, J. (2017). Pre-post effect sizes should be avoided in meta-analyses. Epidemiology and Psychiatric Sciences, 26(4), 364–368. https://doi.org/10.1017/S2045796016000809
  • Dedert, E. A., Resick, P. A., Dennis, P. A., Wilson, S. M., Moore, S. D., & Beckham, J. C. (2019). Pilot trial of a combined cognitive processing therapy and smoking cessation treatment. Journal of Addiction Medicine, 13(4), 322–330. https://doi.org/10.1097/ADM.0000000000000502
  • Dragioti, E., Dimoliatis, I., & Evangelou, E. (2015). Disclosure of researcher allegiance in meta-analyses and randomized controlled trials of psychotherapy: A systematic appraisal. BMJ Open, 5(6), e007206. https://doi.org/10.1136/bmjopen-2014-007206
  • Egger, M., Zellweger-Zähner, T., Schneider, M., Junker, C., Lengeler, C., & Antes, G. (1997). Language bias in randomised controlled trials published in English and German. The Lancet, 350(9074), 326–329. https://doi.org/10.1016/S0140-6736(97)02419-7
  • El Barazi, A., Badary, O. A., Elmazar, M. M., & Elrassas, H. (2022). Cognitive processing therapy versus medication for the treatment of comorbid substance use disorder and post-traumatic stress disorder in Egyptian patients (randomized clinical trial). Journal of Evidence-Based Psychotherapies, 22(2), 63–90. https://doi.org/10.24193/jebp.2022.2.13
  • Forbes, D., Lloyd, D., Nixon, R. D. V., Elliott, P., Varker, T., Perry, D., Bryant, R. A., & Creamer, M. (2012). A multisite randomized controlled effectiveness trial of cognitive processing therapy for military-related posttraumatic stress disorder. Journal of Anxiety Disorders, 26(3), 442–452. https://doi.org/10.1016/j.janxdis.2012.01.006
  • Gaffan, E. A., Tsaousis, J., & Kemp-Wheeler, S. M. (1995). Researcher allegiance and meta-analysis: The case of cognitive therapy for depression. Journal of Consulting and Clinical Psychology, 63(6), 966–980. https://doi.org/10.1037/0022-006X.63.6.966
  • Galovski, T. E., Blain, L. M., Mott, J. M., Elwood, L., & Houle, T. (2012). Manualized therapy for PTSD: Flexing the structure of cognitive processing therapy. Journal of Consulting and Clinical Psychology, 80(6), 968–981. https://doi.org/10.1037/a0030600
  • Goldberg, S. B. & Tucker, R. P. (2020). Allegiance effects in mindfulness-based interventions for psychiatric disorders: A meta-re-analysis. Psychotherapy Research, 30(6), 753–762. https://doi.org/10.1080/10503307.2019.1664783
  • Goldberg, S. B., Tucker, R. P., Greene, P. A., Davidson, R. J., Wampold, B. E., Kearney, D. J. & Simpson, T. L. (2018). Mindfulness-based interventions for psychiatric disorders: A systematic review and meta-analysis. Clinical Psychology Review, 59, 52–60. https://doi.org/10.1016/j.cpr.2017.10.011
  • Higgins, J. P. T., Thomas, K., Chandler, J., Cumpston, M., Page, M. J., & Welch, V. A. (2023). Cochrane handbook for systematic reviews of interventions version 6.4. Retrieved December 22, 2023, from www.training.cochrane.org/handbook
  • Hollon, S. D. (1999). Allegiance effects in treatment research: A commentary. Clinical Psychology Science & Practice, 6(1), 107–112. https://doi.org/10.1093/clipsy/6.1.107
  • Jericho, B., Luo, A., & Berle, D. (2021). Trauma‐focused psychotherapies for posttraumatic stress disorder (PTSD): A systematic review and network meta‐analysis. Acta Psychiatrica Scandinavica, 145(2), 132–155. https://doi.org/10.1111/acps.13366
  • Kearney, D. J., Malte, C. A., Storms, M., & Simpson, T. L. (2021). Loving-kindness meditation vs cognitive processing therapy for posttraumatic stress disorder among veterans: A randomized clinical trial. JAMA Network Open, 4(4), e216604. https://doi.org/10.1001/jamanetworkopen.2021.6604
  • Kelly, U., Haywood, T., Segell, E., & Higgins, M. (2021). Trauma-sensitive yoga for post-traumatic stress disorder in women veterans who experienced military sexual trauma: Interim results from a randomized controlled trial. The Journal of Alternative and Complementary Medicine, 27(S1), 545–559. https://doi.org/10.1089/acm.2020.0417
  • Landis, J. R. & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics Bulletin, 33(1), 159–174. https://doi.org/10.2307/2529310
  • Leykin, Y. & DeRubeis, R. J. (2009). Allegiance in psychotherapy outcome research: Separating association from bias. Clinical Psychology Science & Practice, 16(1), 54–65. https://doi.org/10.1111/j.1468-2850.2009.01143.x
  • Litz, B. T., Rusowicz-Orazem, L., Doros, G., Grunthal, B., Gray, M., Nash, W., & Lang, A. J. (2021). Adaptive disclosure, a combat-specific PTSD treatment, versus cognitive-processing therapy, in deployed marines and sailors: A randomized controlled non-inferiority trial. Psychiatry Research, 297, 113761. https://doi.org/10.1016/j.psychres.2021.113761
  • Luborsky, L. (1975). Comparative studies of psychotherapies: Is it true that everyone has won and all must have prizes? Archives of General Psychiatry, 32(8), 995–1008. https://doi.org/10.1001/archpsyc.1975.01760260059004
  • Luborsky, L., Diguer, L., Seligman, D. A., Rosenthal, R., Krause, E. D., Johnson, S., Halperin, G., Bishop, M., Berman, J. S., & Schweizer, E. (1999). The researcher’s own therapy allegiances: A “wild card” in comparisons of treatment efficacy. Clinical Psychology Science & Practice, 6(1), 95–106. https://doi.org/10.1093/clipsy.6.1.95
  • Manea, L., Boehnke, J. R., Gilbody, S., Moriarty, A. S., & McMillan, D. (2017). Are there researcher allegiance effects in diagnostic validation studies of the PHQ-9? A systematic review and meta-analysis. BMJ Open, 7(9), e015247. https://doi.org/10.1136/bmjopen-2016-015247
  • Maxwell, K., Callahan, J. L., Holtz, P., Janis, B. M., Gerber, M. M., & Connor, D. R. (2016). Comparative study of group treatments for posttraumatic stress disorder. Psychotherapy Theory, Research, Practice, Training, 53(4), 433. https://doi.org/10.1037/pst0000032
  • McLeod, B. D. (2009). Understanding why therapy allegiance is linked to clinical outcomes. Clinical Psychology Science & Practice, 16(1), 69–72. https://doi.org/10.1111/j.1468-2850.2009.01145.x
  • Miller, S. A. & Forrest, J. L. (2001). Enhancing your practice through evidence-based decision making: PICO, learning how to ask good questions. Journal of Evidence Based Dental Practice, 1(2), 136–141. https://doi.org/10.1016/S1532-3382(01)70024-3
  • Miller, S., Wampold, B., & Varhely, K. (2008). Direct comparisons of treatment modalities for youth disorders: A meta-analysis. Psychotherapy Research, 18(1), 5–14. https://doi.org/10.1080/10503300701472131
  • Moher, D., Pham, B., Lawson, M. L., & Klassen, T. P. (2003). The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health Technology Assessment, 7(41), 1–90. https://doi.org/10.3310/hta7410
  • Monson, C. M., Schnurr, P. P., Resick, P. A., Friedman, M. J., Young-Xu, Y., & Stevens, S. P. (2006). Cognitive processing therapy for veterans with military-related posttraumatic stress disorder. Journal of Consulting and Clinical Psychology, 74(5), 898. https://doi.org/10.1037/0022-006X.74.5.898
  • Munder, T., Bruetsch, O., Leonhart, R., Gerger, H., & Barth, J. (2013). Researcher allegiance in psychotherapy outcome research: An overview of reviews. Clinical Psychology Review, 33(4), 501–511. https://doi.org/10.1016/j.cpr.2013.02.002
  • Munder, T., Flückiger, C., Gerger, H., Wampold, B. E., & Barth, J. (2012). Is the allegiance effect an epiphenomenon of true efficacy differences between treatments? A meta-analysis. Journal of Counseling Psychology, 59(4), 631. https://doi.org/10.1037/a0029571
  • Munder, T., Gerger, H., Trelle, S., & Barth, J. (2011). Testing the allegiance bias hypothesis: A meta-analysis. Psychotherapy Research, 21(6), 670–684. https://doi.org/10.1080/10503307.2011.602752
  • National Institute for Health and Care Excellence. (2018). Post-traumatic stress disorder [NICE Guideline No. 116]. https://www.nice.org.uk/guidance/ng116
  • Phelps, A. J., Lethbridge, R., Brennan, S., Bryant, R. A., Burns, P., Cooper, J. A., Forbes, D., Gardiner, J., Gee, G., Jones, K., Kenardy, J., Kulkarni, J., McDermott, B., McFarlane, A. C., Newman, L., Varker, T., Worth, C., & Silove, D. (2022). Australian guidelines for the prevention and treatment of posttraumatic stress disorder: Updates in the third edition. Australian & New Zealand Journal of Psychiatry, 56(3), 230–247. https://doi.org/10.1177/00048674211041917
  • Resick, P. A., Galovski, T. E., Uhlmansiek, M. O. B., Scher, C. D., Clum, G. A., & Young-Xu, Y. (2008). A randomized clinical trial to dismantle components of cognitive processing therapy for posttraumatic stress disorder in female victims of interpersonal violence. Journal of Consulting and Clinical Psychology, 76(2), 243–258. https://doi.org/10.1037/0022-006X.76.2.243
  • Resick, P. A., Nishith, P., Weaver, T. L., Astin, M. C., & Feuer, C. A. (2002). A comparison of cognitive-processing therapy with prolonged exposure and a waiting condition for the treatment of chronic posttraumatic stress disorder in female rape victims. Journal of Consulting and Clinical Psychology, 70(4), 867–879. https://doi.org/10.1037/0022-006X.70.4.867
  • Resick, P. A., Wachen, J. S., Mintz, J., Young McCaughan, S., Roache, J. D., Borah, A. M., Borah, E. V., Dondanville, K. A., Hembree, E. A., Litz, B. T., & Peterson, A. L. (2015). A randomized clinical trial of group cognitive processing therapy compared with group present-centered therapy for PTSD among active duty military personnel. Journal of Consulting and Clinical Psychology, 83(6), 1058–1068. https://psycnet.apa.org/doi/10.1037/ccp0000016
  • Schnurr, P. P., Chard, K. M., Ruzek, J. I., Chow, B. K., Resick, P. A., Foa, E. B., Marx, B. P., Friedman, M. J., Bovin, M. J., Caudle, K. L., Castillo, D., Curry, K. T., Hollifield, M., Huang, G. D., Chee, C. L., Astin, M., Dickstein, B., Renner, K., & Shih, M. (2022). Comparison of prolonged exposure vs cognitive processing therapy for treatment of posttraumatic stress disorder among US veterans: A randomized clinical trial. JAMA Network Open, 5(1), e2136921. https://doi.org/10.1001/jamanetworkopen.2021.36921
  • Simpson, T. L., Kaysen, D. L., Fleming, C. B., Rhew, I. C., Jaffe, A. E., Desai, S., Hien, D. A., Berliner, L., Donovan, D., & Resick, P. A. (2022). Cognitive processing therapy or relapse prevention for comorbid posttraumatic stress disorder and alcohol use disorder: A randomized clinical trial. Public Library of Science ONE, 17(11), e0276111. https://doi.org/10.1371/journal.pone.0276111
  • Sloan, D. M., Marx, B. P., Lee, D. J., & Resick, P. A. (2018). A brief exposure-based treatment vs cognitive processing therapy for posttraumatic stress disorder: A randomized noninferiority clinical trial. JAMA Psychiatry, 75(3), 233–239. https://doi.org/10.1001/jamapsychiatry.2017.4249
  • Sloan, D. M., Marx, B. P., Resick, P. A., Young McCaughan, S., Dondanville, K., Straud, C. L., Mintz, J., Litz, B. T., & Peterson, A. L. (2022). Effect of written exposure therapy vs cognitive processing therapy on increasing treatment efficacy among military service members with posttraumatic stress disorder: Randomized non-inferiority trial. JAMA Network Open, 5(1), e2140911. https://doi.org/10.1001/jamanetworkopen.2021.40911
  • Surís, A., Link‐Malcolm, J., Chard, K., Ahn, C., & North, C. (2013). A randomized clinical trial of cognitive processing therapy for veterans with PTSD related to military sexual trauma. Journal of Traumatic Stress, 26(1), 28–37. https://doi.org/10.1002/jts.21765
  • Tolin, D. F. (2010). Is cognitive–behavioral therapy more effective than other therapies? A meta-analytic review. Clinical Psychology Review, 30(6), 710–720. https://doi.org/10.1016/j.cpr.2010.05.003
  • Watkins, L. L., LoSavio, S. T., Calhoun, P., Resick, P. A., Sherwood, A., Coffman, C. J., Kirby, A. C., Beaver, T. A., Dennis, M. F., & Beckham, J. C. (2023). Effect of cognitive processing therapy on markers of cardiovascular risk in posttraumatic stress disorder patients: A randomized clinical trial. Journal of Psychosomatic Research, 170, 111361. https://doi.org/10.1016/j.jpsychores.2023.111351
  • Wilson, G. T., Wilfley, D. E., Agras, W. S., & Bryson, S. W. (2011). Allegiance bias and therapist effects: Results of a randomized controlled trial of binge eating disorder. Clinical Psychology Science & Practice, 18(2), 119–125. https://doi.org/10.1111/j.1468-2850.2011.01243.x
  • Yoder, W. R., Karyotaki, E., Cristea, I. A., Van Duin, D., & Cuijpers, P. (2019). Researcher allegiance in research on psychosocial interventions: Meta-research study protocol and pilot study. BMJ Open, 9(2), e024622. https://doi.org/10.1136/bmjopen-2018-024622
  • Yulish, N. E., Goldberg, S. B., Frost, N. D., Abbas, M., Oleen-Junk, N. A., Kring, M., Chin, M. Y., Raines, C. R., Soma, C. S., & Wampold, B. E. (2017). The importance of problem-focused treatments: A meta-analysis of anxiety treatments. Psychotherapy Theory, Research, Practice, Training, 54(4), 321–338. https://doi.org/10.1037/pst0000144