Editorial

Service user produced knowledge

Pages 447-451 | Published online: 06 Jul 2009

This issue of the Journal contains papers that are authored by, or include substantial contributions from, service users, or consumers as they are sometimes known. The Journal is known for encouraging this sort of endeavour, but it is perhaps now timely to consider where this encouragement might lead and what the persistent barriers might be to the acceptance of “service user produced knowledge”. This is the topic of this editorial.

There is an argument that the knowledge produced by service users is different from the knowledge produced by mainstream researchers. Importantly, the methods appropriate to service user produced knowledge are not the traditional ones used in psychiatry. This is sometimes described as the difference between quantitative and qualitative research, but I will argue that this is a gross oversimplification of what these differences might be.

Randomized controlled trials and medication

Randomized Controlled Trials (RCTs) are argued to be the “gold standard” in medical research and, by extension, in psychiatric research. Part of the rationale for this prized methodology is that it is said to eliminate bias: blinding means that researcher knowledge does not “contaminate” results and renders the researcher neutral with respect to the knowledge generated. There are ways, both simple and more complicated, in which RCTs do not necessarily commend themselves to service user researchers. One is that outcome measures are nearly always devised by clinicians and clinical researchers, who construct them from the perspective of what they see as desirable. This may not correspond to the views of service users themselves. For example, RCTs of medication often use symptom-reduction scales such as the PANSS or BPRS because clinicians believe that symptom reduction is the key to the health of the individual. However, service users may prefer to retain some symptoms if it means they do not have to suffer the debilitating side-effects common to nearly all psychiatric medication.

This raises another point: the requirement in an RCT to have a primary outcome measure. This is a statistical requirement, as trials must be powered and this can only be done for one outcome. Other outcomes may be assessed, for example side-effects or quality of life, but these are relegated to the status of “secondary” outcomes. This can be conceptually misleading. For example, Moncrieff (Citation2008) argues that “primary” and “secondary” effects of medication have exactly the same status: all are effects of the drug. The allocation of the status of “primary” outcome reflects the perspective of clinicians and clinical researchers. It also has the practical effect of encouraging clinicians to persuade people unhappy with their medications to continue with them, because “side-effects” are deemed less important. One consequence, of course, is that many service users refuse to take their medication, or alter and adjust the dose according to a self-monitoring regime.
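The powering requirement can be made concrete with a minimal sketch. The following is a standard normal-approximation sample-size calculation for a two-arm trial powered on a single continuous primary outcome; the effect size and standard deviation are hypothetical, chosen only to show why the design hinges on one, and only one, outcome.

```python
import math

def n_per_arm(delta, sd, z_alpha=1.96, z_power=0.84):
    """Participants needed per arm to detect a mean difference `delta`
    on a continuous outcome with standard deviation `sd`, at two-sided
    alpha = 0.05 and power = 0.80 (normal approximation)."""
    return math.ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

# Hypothetical: power the trial on a 5-point difference on a symptom
# scale with SD 15. Every other outcome (side-effects, quality of life)
# then becomes "secondary" simply because it was not the one powered for.
print(n_per_arm(delta=5, sd=15))  # 142 per arm
```

Powering the same trial on, say, a side-effect measure with a different effect size would yield a different sample size, which is precisely why only one outcome can drive the design.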

Phenomenology

There is something more missing from the kinds of arguments used to justify RCTs in psychiatry. In general medicine, trials can have a fairly obvious and important outcome: survival rates. But psychiatry is not like general medicine, as has often been argued. One does not have to go to the lengths of Thomas Szasz to see that mental health is not only (if at all) a physical problem; it is a phenomenological one. Karl Jaspers argued this as long ago as 1912 (Jaspers, Citation1968 [1912]). Mental health problems occur in a world of meaning, and psychiatric interventions can themselves alter that world. Receiving a diagnosis can be a relief, but it can also have profoundly damaging effects on a person, giving rise to a “master status” whereby the whole person is reduced to their diagnosis alone. Intervening in the belief that the intervention is specific to the diagnosis compounds this. Making assumptions about what the affected person can and cannot do with their lives takes it one step further. If all this counts as “evidence-based medicine” then perhaps the gap between efficacy and effectiveness, the problem of “translation”, arises because so many assumptions are at stake all the way from the bench to the bedside. In other words, RCTs are not neutral in the first place.

Randomized controlled trials of complex interventions

Other interventions are more complex than medications; there is space for only one example. The UK 700 study (Burns et al., Citation1999) concluded that bed-days were not affected by the case-loads of case managers and that, ultimately, small case-loads are not cost-effective. But the study declined to ask participants who had actually been admitted to hospital what they felt about this. It is well known amongst service users and service user researchers that people differ in their attitudes to admission. For some it is a welcome break; for others it is a profoundly damaging experience (the latter is progressively more common as sectioning rates go up and conditions deteriorate). We do not know what the outcome would have been if the researchers had investigated the views of service users. It is not impossible to postulate that those whose case managers had a small case-load had a better experience of hospital care, particularly with regard to admission and discharge, because the case managers had more time to smooth the admission process and more time to implement discharge plans.

User research and its critics

It is possible to produce user-defined outcome measures, and this has been done at the Service User Research Enterprise (SURE) where the author works. It is a long process involving reference groups, focus groups and expert panels, all of whom have experience of the treatment or service in question, facilitated by researchers who also have this experience (e.g., Rose et al., Citation2008). This has been, or is being, done for continuity of care, cognitive therapy for psychosis, cognitive remediation therapy and experience of in-patient care. Although the latter is to be used in an RCT, these endeavours are not looked upon favourably by mainstream researchers. There are at least three reasons for this. First, this approach to research is grounded in the experience of service users, who are deliberately named “experts”. In the Cochrane hierarchy “evidence from experts” is the weakest form of evidence, but it is still evidence. However, the “experts” in question are usually high-status professional ones, and so user-produced evidence counts as no evidence at all. Second, there is no pretence here that researchers are neutral in the sense of trying to remove themselves as much as possible from the research process and its results. Consequently there are objections concerning bias, anecdote and user researchers being “over-involved”. Finally, and this is rarely made explicit, there is a perception that user researchers cannot “do” science. As Foucault has argued, since the Enlightenment, with its premium on “reason”, the mad have been positioned as its opposite, as “unreason” (Foucault, Citation1967). It is therefore a conceptual oxymoron to argue that those characterized by “unreason” can occupy the territory of “reason”. To put it bluntly, logic is closed to illogical people.

Biased and anecdotal?

Let us take the last two points in more detail. Is the knowledge produced by user researchers, and the methods that produce it, characterized by bias, anecdote and over-involvement? The question of bias returns us to what was argued at the beginning of this editorial. I would argue that all research is biased. In fact, I would discard the term itself and replace it with a concept that comes from feminist research, that of “standpoint”. All research comes from a particular standpoint that infuses its epistemology, its methodology and the knowledge produced as a result. Clinical researchers bring to their research a clinician's view of what is beneficial for a particular condition. As I said earlier, this may not match the service user's view of what kind of life they want to lead and the facilitators of, and barriers to, this. Equally, blinding may be appropriate for some types of research, but this does not mean that the research as a whole is neutral and unbiased. The charge of “anecdote” seems to be little more than an undermining of qualitative research, an argument long since obsolete with the advent of mixed methods research. However, one thing needs to be said: qualitative research must be rigorous. Much as we may share experiences with our participants, it is their voices, and they are heterogeneous, that we seek to enter into the arena of knowledge.

Is user research carried out by people who are over-involved? From my personal experience of collaboration in research, I have yet to meet a clinical researcher who does not have an investment, either personal or topical, in their area of research. Most people have some passion for the research that they do (at least some of the time), and that is a good thing. To come back to the idea of “standpoint”, this is something about which researchers need to be explicit, and I would argue that user researchers surpass mainstream researchers in this.

Unreason

Can unreason meet reason? Are we still experiencing the Enlightenment in psychiatry even whilst sociologists describe our society as post-Enlightenment or post-modern? It is my view that psychiatry still has not quite escaped the shackles of Auguste Comte. However, it may be that at this point in time we are experiencing an epistemological shift. A personal story can exemplify this. In 1986 my local user group carried out a piece of research on in-patient views, comparing their experiences in the old asylum and the acute unit which had replaced it. The local managers simply dismissed it out of hand, and one said to me, “why should I listen to a manic depressive like you”, even though said manic depressive had three degrees and the research was good. I would argue that very few mainstream researchers and managers would react like that now, at least explicitly. User involvement in research has been mainstreamed through the setting up of units like INVOLVE by the Department of Health and the commissioning of peer-led evaluations within the UK health services. The question remains whether this embracing of user research is always wholehearted; whether some senior academics believe that their patients, or somebody else's patients, can really enter the portals of science. As long as a user researcher is “somebody's patient”, there may be misgivings about their ability to carry out research. There is also a question of power: mainstream researchers may not wish to relinquish power (see Trivedi & Wykes, Citation2002), or they may not hold user research in high regard and continue to undermine it. This is what Foucault terms the “knowledge/power axis”.

User research and rigour

Because of the criticisms above, user researchers often feel that they have to work harder than anyone else to establish their credibility and the credibility of the knowledge they produce. It is therefore critical that user research is rigorous, but this means rigour in its own terms. An aim of user research is to elicit the views of service users, in their own words, of their distress and the treatments and services they receive. Sometimes RCTs are indeed appropriate. For example, Sutherby et al. (Citation1999) conducted an RCT of Joint Crisis Plans (JCPs), with the outcome measures being admission to hospital either voluntarily or involuntarily. An involuntary admission to hospital is certainly something that most service users seek to avoid, and it was found that such admissions were reduced if the service user held a JCP. This work is currently being expanded and will include a study of what the process and outcome of JCPs mean to those who hold them.

So, service user researchers are not adamantly opposed to RCTs or indeed other forms of quantitative research. The service-user generated outcome measures mentioned above are another example of this. Although the process used to produce them is grounded explicitly in the experience of service users, the outcome is a quantitative scale.

Another approach used by user researchers is a mixed methods one, which combines quantitative and qualitative research. This is becoming common, and it well behoves quantitative researchers to take notice. For example, in a meta-analysis of the views of users who had received ECT, both quantitative and qualitative analyses were used (Rose et al., Citation2003). Forest plots were constructed to examine satisfaction and long-term memory loss, and qualitative analysis was used to enrich the quantitative data.
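For readers unfamiliar with the quantitative side of such a synthesis, the pooling that underlies a forest plot can be sketched in a few lines. This is a generic fixed-effect, inverse-variance pooling of satisfaction proportions with invented study figures; it illustrates the general technique, not the specific analysis of the cited paper.

```python
import math

def pool_proportions(studies):
    """Fixed-effect inverse-variance pooling of proportions.
    `studies` is a list of (events, n) pairs, one per study."""
    weights, estimates = [], []
    for events, n in studies:
        p = events / n
        var = p * (1 - p) / n      # sampling variance of a proportion
        weights.append(1 / var)    # inverse-variance weight
        estimates.append(p)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented data: (number satisfied, sample size) for three studies.
pooled, ci = pool_proportions([(60, 100), (45, 80), (70, 120)])
```

In a forest plot, each study contributes one row (its estimate and confidence interval); the pooled estimate and its interval form the summary diamond at the bottom.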

However, sometimes only qualitative research can fulfil the aim of eliciting the views of service users. Qualitative research is just as complex as quantitative research, and its rigour must be established by doing it well. Data must be collected from appropriately constructed samples, usually by researchers with experience of the treatment or service being evaluated. Data analysis in qualitative research is complex, especially with the advent of software packages which allow very detailed examination of the data collected. It is no good mainstream researchers simply dismissing this out of hand; they should examine, with unbiased eyes, what has been achieved.

Conclusion

Service user produced knowledge, or “evidence”, uses different methods from those of mainstream research and consequently produces a different view of the world of mental health. Mainstream researchers need to take this seriously and not dismiss it with broad and less than serious arguments. It is my view that to resolve such issues we should pay attention to the different epistemologies that underlie the fractures between mainstream and service user research.

References

  • Burns T., Creed F., Fahy T., Thompson S., Tyrer P., White I. Intensive versus standard case management for severe psychotic illness: A randomised trial. The Lancet 1999; 353: 2185–2189
  • Foucault M. Madness and civilisation. Tavistock, London 1967
  • Jaspers K. The phenomenological approach to psychopathology. British Journal of Psychiatry 1968 [1912]; 114: 1313–1323
  • Moncrieff J. The myth of the chemical cure: A critique of psychiatric drug treatment. Palgrave, Houndmills, Basingstoke 2008
  • Rose D., Wykes T., Leese M., Bindman J., Fleischmann P. Patients' perspectives on electroconvulsive therapy: Systematic review. British Medical Journal 2003; 326: 1363–1366
  • Rose D., Wykes T., Farrier D., Dolan A-M., Sporle T., Bogner D. What do clients think of cognitive remediation therapy? A consumer-led investigation of satisfaction and side effects. American Journal of Psychiatric Rehabilitation 2008; 11(2): 181–204
  • Sutherby K., Szmukler G. I., Halpern A., Alexander M., Thornicroft G., Johnson C., Wright S. A study of ‘crisis cards’ in a community psychiatric service. Acta Psychiatrica Scandinavica 1999; 100: 56–61
  • Trivedi P., Wykes T. From passive subjects to equal partners: User involvement in research. British Journal of Psychiatry 2002; 181: 468–472