Abstract
Objective
The aims of this study were to investigate the congruence and discrepancy between Chinese therapist trainees’ estimates of client working alliance (WA) and their clients’ actual WA ratings, and how this congruence and discrepancy predicted client symptom outcomes.
Methods
Participants were 211 beginning therapist trainees and 1216 clients. Data from their 6888 sessions were analyzed using the Truth and Bias Model and the Response Surface Model.
Results and Conclusions
(i) Chinese trainees’ estimates of client WA were on average significantly lower than clients’ actual WA. (ii) At the between-person level, whether a trainee generally over- or underestimated client WA was not related to the client's initial symptom level or rate of symptom improvement. (iii) At the within-person, between-session level, a session in which a trainee accurately perceived high client WA, compared with a session in which the trainee accurately perceived low client WA, was followed by greater client symptom relief before the next session. As for estimation bias, a session in which the trainee underestimated client WA, rather than overestimated it, was followed by greater client symptom reduction in the next session. Implications for therapist training are discussed.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
Notes
1 To contextualize the lag-1 autocorrelation of TECWA, we also calculated the lag-1 autocorrelations of TWA (r = .62, p < .001) and CWA (r = .63, p < .001), which were similar in magnitude to the autocorrelation of TECWA.
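A within-dyad lag-1 autocorrelation of this kind can be sketched as follows; the data frame, column names, and values are hypothetical stand-ins for the session-level ratings, not the study's data:

```python
import pandas as pd

# Hypothetical session-level data: one row per session, ordered within each
# therapist-client dyad; column names and values are illustrative only.
df = pd.DataFrame({
    "dyad":    [1, 1, 1, 1, 2, 2, 2, 2],
    "session": [1, 2, 3, 4, 1, 2, 3, 4],
    "tecwa":   [3.0, 3.4, 3.2, 3.6, 2.8, 3.1, 3.0, 3.3],
})

df = df.sort_values(["dyad", "session"])
# Lag TECWA by one session within each dyad (each dyad's first session
# has no predecessor and becomes NaN).
df["tecwa_lag1"] = df.groupby("dyad")["tecwa"].shift(1)

# Lag-1 autocorrelation: correlation between a session's rating and the
# previous session's rating, pooled across dyads (NaN pairs are dropped).
r = df["tecwa"].corr(df["tecwa_lag1"])
print(round(r, 2))
```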
2 To align with existing TBM research and inspect possible multicollinearity between TECWA and TWA, we also tested a simpler TBM model that excluded the bias force (i.e., the TWA rating). This model yielded results of the same pattern as the full TBM model: a negative directional bias (G000 = −.391, SE = .066, p < .001) and a significant positive truth force (G100 = .298, SE = .024, p < .001). Further, a multilevel correlational analysis showed that TECWA and TWA correlated only moderately at Level-1 (r1 = .54; r2 = .80 at Level-2 and r3 = .71 at Level-3; all ps < .001), which alleviated the concern about multicollinearity between TECWA and TWA.
3 This was conducted using the Mplus command “MODEL CONSTRAINT,” which created a new variable representing the difference between the bias-force and truth-force estimates. Its significance was tested using the pooled SE computed by Mplus.
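The step described here amounts to a Wald-type z-test on the difference between two coefficients. A minimal sketch, using illustrative values rather than the study's estimates, and simplifying by treating the covariance between the two coefficients as an optional input (Mplus's delta-method computation includes it automatically):

```python
from math import sqrt, erf

def wald_diff(b1, se1, b2, se2, cov=0.0):
    """Two-sided Wald z-test for H0: b1 - b2 = 0.

    With cov=0 the standard error reduces to the 'pooled' form
    sqrt(se1^2 + se2^2); a nonzero cov adds the -2*cov correction
    that a full delta-method computation would include.
    """
    diff = b1 - b2
    se = sqrt(se1**2 + se2**2 - 2 * cov)
    z = diff / se
    # Two-sided p-value from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return diff, se, z, p

# Hypothetical coefficient estimates, for illustration only.
diff, se, z, p = wald_diff(0.30, 0.05, 0.15, 0.04)
print(f"diff = {diff:.3f}, SE = {se:.4f}, z = {z:.2f}, p = {p:.4f}")
```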
4 We also tested an additional model to examine whether the squared directional bias term would predict client symptom relief. If so, it would indicate that therapists' accurately estimating client WA in general (i.e., a directional bias close to zero, rather than a positive or negative overall bias representing average over- or underestimation of client WA) was associated with greater client symptom relief. The “XWITH” command in Mplus was used to create latent variable interaction terms for the random intercept P0CT that represented the directional bias. Because Mplus does not allow this command in a three-level random model, and because the therapist level was just the null model in our original Part 1 analyses, we ran a two-level model (within-person/between-session and between-person) while accounting for therapist-level data nesting using the Huber-White “sandwich” estimator invoked with the TYPE = COMPLEX command (Muthen & Muthen, Citation2017). Results showed that the quadratic term of directional bias was associated with neither client initial CORE-10 score (estimate = .020, SE = .065, t = .305, p = .760) nor client CORE-10 change slope (estimate = .015, SE = .018, t = .864, p = .387).
5 We used the average autoregressive coefficient for CORE-10 from the whole sample, rather than each dyad's specific autoregressive coefficient, because the variance components for the autoregressive coefficient were nonsignificant at both the client level (variance = .005, p = .340) and the therapist level (variance = .000, p = .976). This estimate is similar to the autoregressive effects of client symptoms obtained in existing research using other approaches (e.g., .66 in Fitzpatrick et al., Citation2020, using the Random-Intercept Cross-Lagged Panel Model), which increases confidence in the accuracy of our overall autoregressive effect estimate.
6 We did not directly add the current-session client symptom score to the Level-1 model below because existing studies have pointed out the potential estimation bias introduced by adding a lagged variable as a predictor in a multilevel model (e.g., Falkenstrom et al., Citation2022). However, using only the next-session client symptom level as the dependent variable, without accounting for the current-session symptom level, is also problematic because it omits an important confounding autoregressive effect (current-session symptom to next-session symptom) that is prevalent in psychotherapy research (Falkenstrom et al., Citation2022). Therefore, we elected to estimate an overall “autoregressive effect” and then calculate a residualized score representing the subsequent client symptom score after accounting for the previous symptom level. More discussion of the considerations and limitations of this approach is provided in the Limitations section.
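The residualization described here can be sketched as follows; the autoregressive coefficient, column names, and scores are illustrative (not the study's values), and the intercept is omitted for simplicity:

```python
import pandas as pd

# Assumed overall autoregressive effect of CORE-10, estimated once from
# the whole sample; the value here is illustrative only.
PHI = 0.66

# Hypothetical session-level CORE-10 scores, ordered within each dyad.
df = pd.DataFrame({
    "dyad":    [1, 1, 1, 2, 2, 2],
    "session": [1, 2, 3, 1, 2, 3],
    "core10":  [18.0, 15.0, 14.0, 22.0, 21.0, 17.0],
})

df = df.sort_values(["dyad", "session"])
df["core10_prev"] = df.groupby("dyad")["core10"].shift(1)

# Residualized next-session symptom score: the observed score minus the
# part predicted by the previous session's score. First sessions, which
# have no predecessor, are left as NaN.
df["core10_resid"] = df["core10"] - PHI * df["core10_prev"]
print(df[["dyad", "session", "core10_resid"]])
```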
7 No random effects (i.e., error terms) were specified for the regression coefficients other than the intercept because the random effects were all nonsignificant at both the client and the therapist levels.
8 Nestler et al. (Citation2019) proposed another parameter, a5 = b300 − b500, as an additional criterion and suggested that the agreement effect requires a nonsignificant a5 estimate. This condition was met in our study (a5 = .012, SE = .012, t = .970, p = .332).