Editorial

What is evidence? Bridging the gap between trials and treatment implementation

Pages 717-719 | Received 28 Mar 2023, Accepted 03 Apr 2023, Published online: 26 Jul 2023

How to bridge the gap between trials and the implementation of treatments into services has been discussed for many years and described in detail by Rand Europe (Wooding et al., 2014). For instance, why is family therapy for those with a diagnosis of schizophrenia only rarely used in services despite a high level of evidence (Fadden & Heelis, 2011; James et al., 2006)? Why is there a postcode (zipcode) lottery for prescribing certain medications or for providing enough psychological treatment sessions to offer the hope of a reasonable benefit (MIND, 2019)? We could say it is resources, the people available to provide the treatment, the background culture of a health service, or just the biases of clinicians. Many health service researchers have spent countless years trying to understand these barriers, which are often person-, service-, or culture-specific (Arnaez et al., 2020; Godier-McBard et al., 2022; Silva et al., 2022). Rather than reprising those findings, this editorial will discuss a couple of ways that clinical academics might make it easier to move beneficial treatments into the real world by changing how they evaluate them.

The best evidence

Most regulators demand evidence from randomised controlled trials so that they can be sure they have the highest quality evidence of the effectiveness or efficacy of interventions. Personally, I am not swallowing a pill unless it has received this sort of scrutiny. This evidence is used to make decisions about what health professionals can prescribe or what health services will purchase. These trials are designed to be fair studies with an equal chance of finding a benefit or not, and that means that those running or participating in them should have clinical equipoise (no specific bias), which is very easy to say but much harder to achieve in practice. Allocation to the new treatment or service should be random and carried out so that the process cannot be interfered with, and no one assessing the benefits should know which arm a person has been allocated to. These are the general design “rules”, but they are not always adhered to, and so despite carrying the highest status, such trials can still be subject to bias when the methods used are of low quality.
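As a rough illustration of what tamper-resistant random allocation involves, here is a minimal sketch of permuted-block randomization in Python. The block size and arm labels are illustrative assumptions, not a description of any particular trial; in practice the schedule is generated and held independently of the recruiting clinicians so that upcoming allocations cannot be predicted or interfered with.

```python
import random

def permuted_block_allocation(n_participants, block_size=4,
                              arms=("new treatment", "control"), seed=None):
    """Permuted-block randomization: within each block, the arms appear
    equally often in a random order, keeping group sizes balanced over time."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * per_arm   # e.g. 2 x "new treatment", 2 x "control"
        rng.shuffle(block)             # random order within the block
        schedule.extend(block)
    return schedule[:n_participants]

# An independent statistician would generate and hold this list
print(permuted_block_allocation(8, seed=1))
```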

What affects regulator decisions?

Regulators take notice of the comparison treatment and generally prefer a placebo, which accounts for the non-specific effects of treatment so that the key ingredient of the novel treatment can be assessed. But placebos are almost impossible to construct for psychological treatments or service interventions because even nominally inert comparators can provide a benefit (Bernstein & Brown, 2017; Szabo, 2013; Zhao et al., 2015). Some have also argued that placebos are unethical in the treatment of mental health difficulties (especially when standard care is beneficial) and that the placebo effect is complex (McQueen et al., 2013). Comparisons with treatment-as-usual can provide evidence of whether the new treatment provides a similar benefit but is cheaper. Less favoured is a waitlist control, as individuals know that they will receive treatment later, and this may affect whether they are likely to improve naturally. However, in mental health services a waitlist control often means that the person receives no treatment at all for a considerable time, which is problematic when there are issues of safety.

These methodological issues affect how the strength of the results is interpreted and hence the ease of implementation, but the key to moving a novel treatment quickly from a trial to a service is the choice of the primary outcome: the outcome that captures improvement. It is often set by the research funder or the regulator or, in my experience, by the research team. This outcome is usually an assessment used in similar studies, a valid and reliable measure that provides confidence that any noticeable improvements are real and not just measurement error. But the current choices of primary outcome are not universally the ones that would be chosen by those with lived experience or by those who provide mental health services. The valued outcome is affected by the type of mental health service, the diagnosis or set of symptoms, and the predominant culture. Any outcome needs to be acceptable to the group or groups it is intended for if the study results are going to overcome staff or user resistance.

We already know that some trial designs may or may not be feasible or acceptable, as most funders will expect that information in any application. But we (and the funders) rarely consider whether the choice of primary outcome is appropriate. In the next few paragraphs I want to examine what treatment acceptability might mean from the perspectives of those who will receive the treatment and those who will usually provide it.

Maximising treatment acceptability

There are many reasons why treatments that have shown benefit fail to be adopted by services or are declined by those using them. One downside is side effects, particularly adverse or serious adverse events. Drug trials have always measured these effects, and more recently tests of psychological treatments have also begun to log them. These are usually the effects associated with the treatment or the trial design that have harmful consequences, and this monitoring could also pick up issues that would prevent a treatment being implemented. Those with lived experience, however, are often more concerned with an unwanted effect that is not serious or life-limiting but that has a profound effect on day-to-day life and overall recovery. These specific concerns can also change over time and with the stage of recovery. For instance, a slight shake of the hand might not be important until you want to use a smartphone regularly. What is needed is a measure that can account for this detail and is tied to the experience of the people who use the novel treatment. Our team developed such a measure for antipsychotic medication in a mixed methods study. The final measure not only provides the opportunity to rate side effects by their frequency and the distress experienced, but also lets service users choose three of the most worrying ones to raise in discussions about the type and dose of medication with their doctor. This sort of measure would be a useful addition for defining the adverse effects of a treatment from the point of view of the participant rather than the academic researcher (Wykes et al., 2017). This example was for a drug study, but it is not beyond the wit of academics to develop similar measures for psychological treatments and service evaluations.
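To make the idea concrete, here is a minimal sketch (in Python) of how such patient-centred side-effect data might be structured: each side effect receives a frequency and a distress rating, and the three the person finds most worrying are flagged for the medication review. The item names, the 0-4 scales, and the ranking-by-distress shortcut (standing in for the person's own choice) are all hypothetical illustrations, not the actual content or scoring of the Maudsley Side Effects measure.

```python
# Hypothetical ratings on 0-4 scales (not the actual MSE items or scoring)
ratings = {
    "hand tremor":      {"frequency": 3, "distress": 4},
    "daytime sedation": {"frequency": 4, "distress": 2},
    "dry mouth":        {"frequency": 2, "distress": 1},
    "weight gain":      {"frequency": 1, "distress": 3},
}

# Flag the three most distressing side effects to prioritise in the
# discussion about medication type and dose (a proxy for the user's choice)
top_three = sorted(ratings, key=lambda k: ratings[k]["distress"], reverse=True)[:3]
print(top_three)  # ['hand tremor', 'weight gain', 'daytime sedation']
```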

Who decides how to measure benefit?

It is hard to pinpoint what will be accepted as a measure of outcome by regulators, funders, or the academic community that assesses the final publication. One option is to ask the community that will be affected what treatment benefits might persuade them to adopt a treatment into services. In the past, many studies concentrated on improving global measures of symptoms to demonstrate efficacy; although this might be important for clinicians, it is not universally accepted by people with lived experience. Surveys and studies of acceptable outcomes published in this journal highlight these differences. For instance, Crawford and colleagues (Crawford et al., 2011) used nominal group methods to investigate the views of service users on twenty-four widely used outcome measures. The consensus was that many did not capture their experiences. Patient-rated measures were assessed as more relevant and appropriate than staff-rated measures, and concerns were raised about some widely used measures (e.g. the Global Assessment of Functioning and the European Quality of Life scale). This study shows that many global measures do not capture service user priorities and that we need to look much harder for measures that will be acceptable to both service users and staff.

There is one example of a process for arriving at an acceptable measure in a recently published study of how cognitive remediation should be implemented (Wykes et al., 2023). This study adopted a measure that was acceptable both to clinicians and to those who would receive the treatment. The study tested cognitive remediation (CIRCuiTS™) in first-episode psychosis services in the UK NHS. The treatment improves thinking skills in order to improve functioning, and one option would have been to adopt some of the global measures that fared less well in the Crawford study. But the participants were mostly young people with very varied recovery aims, so a measure of general function was unlikely to capture their disparate goals. The research team consulted clinical staff and those with lived experience about a trial outcome that would persuade them that CIRCuiTS™ was worth investing in. Both groups said that it was whether the treatment helped people attain their personal goals. The research team then chose a measure with face validity and sound psychometric properties that is sensitive to change for individuals, perhaps more so than global measures: the Goal Attainment Scale (Kiresuk et al., 2014; Logan et al., 2022). It has the benefit of capturing the heterogeneous personal goals and aspirations of those in early intervention services, as some wanted to return to education while others aimed for employment or for increasing their social activities. This method of consultation is likely to make the trial outcomes (whatever they are) more acceptable to those providing services. Now I admit that this is a project that I led, and so I know that this outcome was not appealing to many of the reviewers of our final academic paper. Some suggested that it did not properly “measure psychosocial function”, others wanted the more usual measures, and others did not accept it because it was a patient-reported measure, despite many accepted outcomes being patient-reported.
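For readers unfamiliar with goal attainment scaling, each personal goal is rated on a five-point attainment scale (from -2, much less than expected, to +2, much more than expected) and the goals are aggregated into a standardized T-score. A minimal sketch of the standard Kiresuk-Sherman formula follows; the equal weights and the conventional inter-goal correlation of 0.3 are the usual defaults in the literature, and nothing here reflects the specific scoring choices made in the trial above.

```python
import math

def gas_t_score(attainment, weights=None, rho=0.3):
    """Kiresuk-Sherman Goal Attainment Scaling T-score.

    attainment: per-goal levels, each in {-2, -1, 0, +1, +2}
    weights:    optional per-goal importance weights (default: equal)
    rho:        assumed inter-goal correlation (0.3 by convention)
    """
    if weights is None:
        weights = [1.0] * len(attainment)
    weighted_sum = sum(w * x for w, x in zip(weights, attainment))
    denom = math.sqrt((1 - rho) * sum(w * w for w in weights)
                      + rho * sum(weights) ** 2)
    return 50 + 10 * weighted_sum / denom

# Three goals all met exactly as expected gives T = 50 by construction
print(gas_t_score([0, 0, 0]))             # 50.0
# Exceeding expectations on some goals pulls the score above 50
print(round(gas_t_score([2, 1, 0]), 1))   # ~63.7
```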

Other methods of investigating acceptable outcomes include Discrete Choice Experiments (DCEs) and Multi-Criteria Decision Modelling (MCDM). The DCE is a method borrowed from health economics that allows us to discover the preferences that service users and clinicians have for factors or outcomes by asking them to trade these off against each other. This method provides weights for the specific factors that can then be used to choose an outcome while the study is being designed. MCDM is slightly different and less complex, and allows a retrospective view of the outcomes collected in a study. The various groups (service users, health care professionals, service providers, and service payers) can all rate the importance of the outcomes, and these specific or overall weights can be used to understand the trial results in a different way that does not rely solely on the choice of a primary outcome. Both of these methods are gaining traction and are recommended for further investigation by trialists (Khan et al., 2023; McCrone et al., 2021; Sotoudeh-Anvari, 2022).
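As a toy illustration of the MCDM idea, the sketch below reweights a set of trial outcomes by importance ratings from different stakeholder groups. Every number is invented for illustration; real elicited weights and outcome names would of course come from the study and its stakeholders.

```python
# Hypothetical standardized effect sizes from a completed trial
outcomes = {"symptoms": 0.20, "goal_attainment": 0.45, "cost_saving": 0.10}

# Hypothetical importance weights elicited from each stakeholder group
# (each set sums to 1; all values are illustrative, not from any survey)
weights = {
    "service_users": {"symptoms": 0.2, "goal_attainment": 0.6, "cost_saving": 0.2},
    "clinicians":    {"symptoms": 0.5, "goal_attainment": 0.4, "cost_saving": 0.1},
    "commissioners": {"symptoms": 0.3, "goal_attainment": 0.2, "cost_saving": 0.5},
}

# The same trial can look more or less persuasive depending on whose
# priorities are used to aggregate the results
for group, w in weights.items():
    overall = sum(w[k] * outcomes[k] for k in outcomes)
    print(f"{group}: weighted benefit = {overall:.3f}")
```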

In summary, the choice of primary outcomes for testing the effectiveness of treatments must consider the views of those who will receive the treatment and those who provide it. Where the valued outcomes are shared, a global measure may be appropriate, but just as often the aspirations of service users and the services they use are disparate, and we need to be able to capture that variability. We can show services and service users that there is a gap in our treatment offers, and we can show that there are treatments that are effective in resolving some problems, but if we do not have the right evidence, evidence that chimes with their preferences, then we will still be waiting the 19 or so years the Rand report found it takes for effective treatments to be implemented into services.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

Til Wykes acknowledges the support of NIHR Maudsley Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London. The views expressed are those of the author and not necessarily those of the NIHR or the Department of Health and Social Care.

References

  • Arnaez, J. M., Krendl, A. C., McCormick, B. P., Chen, Z., & Chomistek, A. K. (2020). The association of depression stigma with barriers to seeking mental health care: A cross-sectional analysis. Journal of Mental Health, 29(2), 182–190. https://doi.org/10.1080/09638237.2019.1644494
  • Bernstein, M. H., & Brown, W. A. (2017). The placebo effect in psychiatric practice. Current Psychiatry, 16(11), 29.
  • Crawford, M. J., Robotham, D., Thana, L., Patterson, S., Weaver, T., Barber, R., Wykes, T., & Rose, D. (2011). Selecting outcome measures in mental health: the views of service users. Journal of Mental Health, 20(4), 336–346. https://doi.org/10.3109/09638237.2011.577114
  • Fadden, G., & Heelis, R. (2011). The Meriden Family Programme: lessons learned over 10 years. Journal of Mental Health, 20(1), 79–88. https://doi.org/10.3109/09638237.2010.492413
  • Godier-McBard, L. R., Wood, A., Kohomange, M., Cable, G., & Fossey, M. (2022). Barriers and facilitators to mental healthcare for women veterans: a scoping review. Journal of Mental Health, 1–11. https://doi.org/10.1080/09638237.2022.2118686
  • James, C., Cushway, D., & Fadden, G. (2006). What works in engagement of families in behavioural family therapy? A positive model from the therapist perspective. Journal of Mental Health, 15(3), 355–368. https://doi.org/10.1080/09638230600700805
  • Khan, M. U., Balbontin, C., Bliemer, M., & Aslani, P. (2023). Using discrete choice experiment to investigate patients’ and parents’ preferences for initiating ADHD medication. Journal of Mental Health, 32(2), 373–385. https://doi.org/10.1080/09638237.2021.1979495
  • Kiresuk, T. J., Smith, A., & Cardillo, J. E. (2014). Goal attainment scaling: Applications, theory, and measurement. Psychology Press.
  • Logan, B., Jegatheesan, D., Viecelli, A., Pascoe, E., & Hubbard, R. (2022). Goal attainment scaling as an outcome measure for randomised controlled trials: a scoping review. BMJ Open, 12(7), e063061. https://doi.org/10.1136/bmjopen-2022-063061
  • McCrone, P., Mosweu, I., Yi, D., Ruffell, T., Dalton, B., & Wykes, T. (2021). Patient preferences for antipsychotic drug side effects: a discrete choice experiment. Schizophrenia Bulletin Open, 2(1), sgab046. https://doi.org/10.1093/schizbullopen/sgab046
  • McQueen, D., Cohen, S., St John-Smith, P., & Rampes, H. (2013). Rethinking placebo in psychiatry: how and why placebo effects occur. Advances in Psychiatric Treatment, 19(3), 171–180. https://doi.org/10.1192/apt.bp.112.010405
  • MIND. (2019). NHS figures reveal mental health spending postcode lottery. https://www.mind.org.uk/news-campaigns/news/nhs-figures-reveal-mental-health-spending-postcode-lottery.
  • Silva, M., Antunes, A., Azeredo-Lopes, S., Cardoso, G., Xavier, M., Saraceno, B., & Caldas-de-Almeida, J. M. (2022). Barriers to mental health services utilisation in Portugal–results from the National Mental Health Survey. Journal of Mental Health, 31(4), 453–461. https://doi.org/10.1080/09638237.2020.1739249
  • Sotoudeh-Anvari, A. (2022). The applications of MCDM methods in COVID-19 pandemic: A state of the art review. Applied Soft Computing, 126, 109238. https://doi.org/10.1016/j.asoc.2022.109238
  • Szabo, A. (2013). Acute psychological benefits of exercise: Reconsideration of the placebo effect. Journal of Mental Health, 22(5), 449–455. https://doi.org/10.3109/09638237.2012.734657
  • Wooding, S., Pollitt, A., Castle-Clarke, S., Cochrane, G., Diepeveen, S., Guthrie, S., Horvitz-Lennon, M., Larivière, V., Jones, M. M., & Chonaill, S. N. (2014). Mental Health Retrosight: Understanding the returns from research (lessons from schizophrenia): policy report. Rand Health Quarterly, 4(1), 8.
  • Wykes, T., Evans, J., Paton, C., Barnes, T., Taylor, D., Bentall, R., Dalton, B., Ruffell, T., Rose, D., & Vitoratou, S. (2017). What side effects are problematic for patients prescribed antipsychotic medication? The Maudsley Side Effects (MSE) measure for antipsychotic medication. Psychological Medicine, 47(13), 2369–2378. https://doi.org/10.1017/S0033291717000903
  • Wykes, T., Stringer, D., Boadu, J., Tinch-Taylor, R., Csipke, E., Cella, M., Pickles, A., McCrone, P., Reeder, C., Birchwood, M., Fowler, D., Greenwood, K., Johnson, S., Perez, J., Ritunnano, R., Thompson, A., Upthegrove, R., Wilson, J., Kenny, A., Isok, I., & Joyce, E. M. (2023). Cognitive remediation works but how should we provide it? An adaptive randomized controlled trial of delivery methods using a patient nominated recovery outcome in first-episode participants. Schizophrenia Bulletin. https://doi.org/10.1093/schbul/sbac214
  • Zhao, Y., Zhang, J., Yuan, L., Luo, J., Guo, J., & Zhang, W. (2015). A transferable anxiolytic placebo effect from noise to negative effect. Journal of Mental Health, 24(4), 230–235. https://doi.org/10.3109/09638237.2015.1021900
