Research Article

VOICE: Developing a new measure of service users' perceptions of inpatient care, using a participatory methodology

Pages 57-71 | Published online: 18 Jan 2012

Abstract

Background

Service users express dissatisfaction with inpatient care and their concerns revolve around staff interactions, involvement in treatment decisions, the availability of activities and safety. Traditionally, satisfaction with acute care has been assessed using measures designed by clinicians or academics.

Aims

To develop a patient-reported outcome measure of perceptions of acute care. An innovative participatory methodology was used to involve services users throughout the research process.

Method

A total of 397 participants were recruited for the study. Focus groups of service users were convened to discuss their experiences and views of acute care. Service user researchers constructed a measure from the qualitative data, which was validated by expert panels of service users and tested for its psychometric properties.

Results

Views on Inpatient Care (VOICE) is easy to understand and complete and therefore is suitable for use by service users while in hospital. The 19-item measure has good validity and internal and test–retest reliability. Service users who have been compulsorily admitted have significantly worse perceptions of the inpatient environment.

Conclusions

A participatory methodology has been used to generate a self-report questionnaire measuring service users' perceptions of acute care. VOICE encompasses the issues that service users consider most important and has strong psychometric properties.

Introduction

Dissatisfaction with adult acute inpatient care is not a new issue and is well documented both in Britain and internationally. Inpatient wards are often viewed by service users as untherapeutic and unsafe environments (Department of Health, 2002). Limited interaction between staff and service users is commonly reported, and users express a need for good interpersonal relationships and support that is sensitive to individual needs (Edwards, 2008; Ford et al., 1998; Shattell et al., 2008). Poor levels of involvement and a lack of information about medication, care and treatment have also been identified (Walsh & Boyle, 2009). On many wards there is little organised activity and service users experience intense boredom (MIND, 2004). Security is of particular concern: many service users feel they are not treated with respect or dignity, have significant safety concerns and report high levels of verbal and physical violence (MIND, 2004). Although there are objective measures of activities in the inpatient environment, reviewed recently by Sharac et al. (2010), these do not adequately reflect the quality of inpatient care.

Recently there has been a focus on patient-reported outcome measures (PROMS) as a measure of the quality and appropriateness of services and therapies. Despite service user involvement being considered an essential element in improving mental health services (Department of Health, 1999), PROMS are rarely developed using an inclusive methodology, and research suggests user dissatisfaction with many outcome measures currently in use (Crawford et al., 2011). Service users can often have different perspectives from professionals and can provide insight into how services and treatments feel (Rose, 2003). Redefining outcomes according to users' priorities can help to make greater sense of clinical research and develop a more valid evidence base (Faulkner & Thomas, 2002; Trivedi & Wykes, 2002). Studies comparing the impact of traditional and user researchers show some differences in qualitative data analysis (Gillard et al., 2010) but none in quantitative research findings (Hamilton et al., 2011; Rose et al., 2011a, 2011b). Given this, we believe that research methodologies should aim to be as inclusive as possible.

What is needed in the literature on acute care is a psychometrically robust, brief, self-report measure reflecting service users' experiences of care. This type of measure would allow clear measurement of inpatient care changes following specific interventions to improve the environment and therapy provided. Our study was designed to generate such a measure.

Method

Sampling and recruitment

Ethical approval (07/H0809/49) was given for the study to be carried out in four boroughs within an inner city London NHS trust.

For the measure development phase, purposive sampling was adopted to reflect local inpatient demographics and participants were recruited through a local mental health voluntary organisation and community mental health teams across the four boroughs. The only inclusion criterion was that participants had been inpatients in the previous 2 years, although this may have excluded long-term forensic inpatients. Members of the reference group were recruited from local user groups and national voluntary mental health organisations.

Participants for the feasibility study were recruited from acute wards and psychiatric intensive care units, and test–retest participants were engaged on acute and forensic wards. For the larger psychometric testing phase, participants were recruited from acute wards. The inclusion criteria were that the person could provide informed consent and had been present on the ward for at least 7 days during the 4-week data collection phase. Forty-five percent of eligible people on the wards agreed to take part. All potential participants gave written informed consent following an explanation of the study.

Demographic and clinical data for focus group participants were collected on a self-report basis. For the large-scale data collection, age, gender, ethnicity and employment status were collected by self-report, while diagnosis, legal and admission status were taken from NHS records.

Measure generation

The measure Views on Inpatient Care (VOICE) was developed iteratively using an innovative participatory methodology to maximise the opportunity for service user involvement (Rose et al., 2009, 2011a, 2011b). This followed several stages. First, a topic guide was developed through a literature search, reference group and pilot study. Repeated focus groups of service users were then convened to generate qualitative data (Morgan, 1993). One of the groups was specifically for participants who had been detained under the Mental Health Act (1983), as it was anticipated that they may have had different experiences. The data were thematically analysed by service user researchers, who then generated a draft measure which was refined by expert panels of users and the reference group.

Feasibility and acceptability

VOICE was evaluated in accordance with standard criteria for outcome measures (Fitzpatrick et al., 1998; Harvey et al., 2005), which include feasibility, acceptability, reliability and validity.

Psychometric testing

The internal reliability of VOICE was assessed using Cronbach's alpha (Cronbach, 1951), with data from a large sample of inpatients. Test–retest reliability was assessed with inpatients who completed VOICE on two occasions with an interval of 6–10 days, using Lin's concordance coefficient (Lin, 1989) to measure agreement between total scores at the two time points, and kappa and the proportion of maximum kappa (Sim & Wright, 2005) to measure agreement between individual item responses.
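The reliability statistics named above are straightforward to compute; a minimal NumPy sketch of Cronbach's alpha and Lin's concordance coefficient (the kappa statistics for item-level agreement are omitted for brevity):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def lin_concordance(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's (1989) concordance correlation coefficient between two
    score vectors, e.g. total scores at test and at retest."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population variances, per Lin
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, Lin's coefficient penalises any systematic shift between the two occasions, so identical test and retest scores give exactly 1 while a constant offset pulls the value down.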

Criterion validity was assessed by comparing scores on VOICE with responses on the Service Satisfaction Scale: residential services evaluation (Greenfield et al., 2008). This is a derivative measure adapted from the Service Satisfaction Scale-30 (Greenfield & Attkisson, 1989), designed to evaluate residential services for people with serious mental illness. The original SSS-30 has been used in a variety of settings and demonstrates sound psychometric properties (Greenfield & Attkisson, 2004). It was anticipated that some elements of a perceptions measure would overlap with service satisfaction but that there would also be key differences.

We expected differences in views between service users from different populations and clinical settings, so we used one-way analyses of variance to assess whether service users' perceptions differed by borough, gender, ethnicity, age, diagnosis, admission status and legal status. The majority of these analyses were exploratory. However, we had specific hypotheses relating to ethnicity and legal status: we expected poorer perceptions from participants who were compulsorily admitted and from those from minority ethnic communities.
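As an illustration of the analysis-of-variance approach described above, a short sketch with invented score data (the group means, spreads and sizes here are hypothetical, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical total VOICE scores; higher = more negative perception.
voluntary = rng.normal(50, 10, size=60)
detained = rng.normal(58, 10, size=60)

# One-way ANOVA across legal-status groups.
f, p = stats.f_oneway(voluntary, detained)

# With exactly two groups this is equivalent to an independent t-test:
t, p_t = stats.ttest_ind(voluntary, detained)  # F == t**2, same p-value
```

For a factor with two levels, such as legal status here, the F statistic is simply the square of the t statistic, which is why the paper can report a t-test for that comparison.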

Results

Sample characteristics

As the demographic table shows, a total of 397 participants were recruited for the study: 37 for the measure generation phase and 360 for the feasibility study and psychometric testing. Schizophrenia was the most frequent diagnosis for both groups and approximately half of all participants were from black and minority ethnic communities. In the measure development phase, 43% of participants were men and the median age was 45 (range 20–66). In the psychometric phase, 60% of participants were men and the mean age was 40 (range 18–75).

Table. Demographic data.

Measure generation

Thematic analysis of the full data set resulted in an initial bank of 34 items, which were formed into brief statements and grouped into domains. A six-point Likert scale was chosen, ranging from "strongly agree" to "strongly disagree", and optional free-text sections were included to capture additional qualitative data. The items were unweighted and one question was reverse scored. The self-report measure was designed to provide a final total score, with a higher score indicating a more negative perception. The inter-rater reliability of the focus group data coding, using NVIVO7, showed between 97% and 99% agreement. Item reduction, based on relevance and the removal of duplicates, produced 22 items. The expert panels considered the measure to be an appropriate length and breadth and, following some minor changes in wording, the reference group concluded that the measure was appropriate for use by service users in hospital.
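The scoring scheme just described (unweighted six-point items, one reverse-scored item, higher totals indicating more negative perceptions) can be sketched as below; which item is reverse scored is not stated in the text, so the index used here is a hypothetical placeholder:

```python
import numpy as np

SCALE_MIN, SCALE_MAX = 1, 6   # six-point Likert: "strongly agree" .. "strongly disagree"
REVERSE_ITEMS = [4]           # hypothetical index of the reverse-scored statement

def score_totals(responses: np.ndarray) -> np.ndarray:
    """Total score per respondent for an (n_respondents, n_items) matrix;
    a higher total indicates a more negative perception of care."""
    r = responses.astype(float).copy()
    # Flip reverse-scored items so every item points the same way.
    r[:, REVERSE_ITEMS] = SCALE_MIN + SCALE_MAX - r[:, REVERSE_ITEMS]
    return r.sum(axis=1)
```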

Feasibility and acceptability

Feasibility testing took place in two waves (n = 40 and n = 106). In the first wave, 98% of participants found the measure both easy to understand and easy to complete, and in the second, 82% considered the measure to be an appropriate length. Two participants (2%) disliked completing the measure and six (6%) found some of the questions upsetting. VOICE took between 5 and 15 min to complete and was easy to administer. The measure was found to be suitable for completion by participants with a range of diagnoses and at the levels of acute illness found on inpatient units. The Flesch Reading Ease score was 78.8 (readable at ages 9–10), indicating that the measure was easy to understand (Flesch, 1948). Following the feasibility study, one item was removed as it was considered to be a duplicate, leaving the measure with 21 items.
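The readability figure above comes from Flesch's (1948) formula, which combines average sentence length with average syllables per word; a minimal implementation (syllable counts are taken as a given input, since counting them robustly is the hard part in practice):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch (1948) Reading Ease; higher scores mean easier text.
    Scores in the 70-80 band correspond to fairly easy prose."""
    return (206.835
            - 1.015 * (words / sentences)
            - 84.6 * (syllables / words))
```

Longer sentences and more syllables per word both lower the score, which is why a measure written in short, plain statements scores well.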

Psychometric testing

Three hundred and sixty participants took part in testing the psychometric properties of the measure. One hundred and ninety-two of these had full data for all items on the VOICE scale, and 348 participants responded to over 80% of VOICE items. For participants responding to at least 80% of the items, a pro-rated total score was calculated; a response rate below 80% was treated as a missing total VOICE score.
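The pro-rating rule described above (scale the mean of the answered items up to the full item count when at least 80% were answered, otherwise treat the total as missing) can be sketched as:

```python
import numpy as np

def prorated_total(responses: np.ndarray, min_completion: float = 0.8) -> float:
    """Pro-rated total for one respondent's item vector (NaN = unanswered).
    Returns NaN when fewer than `min_completion` of the items were answered."""
    answered = ~np.isnan(responses)
    if answered.mean() < min_completion:
        return float("nan")          # treated as a missing total score
    return responses[answered].mean() * responses.size
```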

Reliability

One hundred and ninety-two participants had complete data on the VOICE scale and were used in assessing the internal consistency. After removing items with poor reliability, a 19-item scale remained with high internal consistency (α = 0.92). The test–retest reliability (n = 40) was high (ρ = 0.88, CI = 0.81–0.95) and there was no difference in score between the two assessments.

Validity

The measure has high face validity. The wide range of items was determined by service users during the focus groups and the measure reflected the domains which they considered most important. The feasibility study participants felt that the measure was comprehensive and therefore had high content validity.

Pearson's correlation coefficient showed a significant association between the total scores on VOICE and the SSS: residential measure (r = 0.82, p < 0.001), indicating high criterion validity.

The ability of VOICE to discriminate between groups is shown in the table below. Bivariate analyses showed significant differences for legal status: participants who had been compulsorily admitted had significantly worse perceptions (t = −3.82, p < 0.001). A multivariate analysis revealed that legal status remained significant even when adjusted for the other factors (p = 0.001).

Table. Differences in mean VOICE scores by demographic and clinical group.

The final measure is provided in the Appendix and at www.perceive.iop.kcl.ac.uk.

Discussion

Using a participatory methodology, we have developed a service-user generated, self-report measure of perceptions of acute care. VOICE (Appendix) encompasses the issues that service users consider most important, has strong psychometric properties and is suitable for use in research settings. The internal consistency is high, which suggests that the items are measuring the same underlying construct. The measure has high criterion validity, and test–retest data show that it is stable over time. The full involvement of service users throughout the development of the measure has ensured that VOICE has good face and content validity and is accessible to the intended client group.

Can VOICE distinguish differences in views?

In this study, detained participants held more negative perceptions of inpatient services. This supports previous studies showing that service users who are admitted involuntarily are less satisfied with their care (Svensson & Hansson, 1994). More recently, lower levels of satisfaction have been linked with the accumulation of coercive events and perceived coercion (Iversen et al., 2007; Katsakou et al., 2010). This presents a more complex picture and one which is worth further analysis.

We anticipated, but did not find, differences on either VOICE or SSS: residential scores by ethnicity. Methodology, timing and setting can all influence research findings (Wensing et al., 1994). Previous quantitative studies have shown differences for legal status but not ethnicity (Bhugra et al., 2000; Greenwood et al., 1999), whereas qualitative research has revealed that black and minority ethnic users hold relatively poor perceptions of acute care (Secker & Harding, 2002; The Sainsbury Centre for Mental Health, 2002). Our study was set in areas of London with high levels of ethnic diversity (Kirkbride et al., 2007; Morgan et al., 2006). Staff demographics tended to mirror those of inpatients, and it may be that services were better tailored towards black and minority ethnic groups. Additionally, interviewing users while in hospital may well have inhibited openness and honesty, particularly on sensitive issues.

Is VOICE different from other measures?

Although the total scores were correlated, there were distinct differences in content between VOICE and the comparison satisfaction measure. We believe this is due to the use of a participatory methodology. In particular, safety and security issues were given more weight in VOICE, and items on diversity were included which did not appear in the conventionally generated measure. Conversely, items regarding the physical environment and office procedures featured in the SSS: residential (Greenfield et al., 2008), but were not deemed as important by the users in our study and therefore were not included in VOICE. Although the issue of discharge planning arose in our focus group data and as an item in the SSS: residential, we did not include it in the measure as the intention was to administer VOICE relatively soon after admission. We do, however, recommend its inclusion in future studies.

It is often assumed that the only construct to measure is satisfaction with acute care. However, there are difficulties in encapsulating complex sets of beliefs, expectations and evaluations in satisfaction measures, and caution should be taken when making inferences from their results as they may not accurately reflect the views of users (Williams, 1994). VOICE is unique in that it captures users' perceptions, and we anticipate this will depict the inpatient experience more accurately.

Strengths and limitations

It is impossible to accurately assess inpatient care without involving the people directly affected by that service. Developing an outcome measure valued by service users is essential in evaluating and developing inpatient services. The main strength of this piece of research is that it fully exploits a participatory methodology: service users were involved in a collaborative way throughout the whole research process. VOICE is the only robust measure of acute inpatient services designed in such a way. This has resulted in a measure which encompasses the issues that service users prioritise and is both acceptable and accessible to people with a range of diagnoses and severity of illness.

This study was not designed to test hypotheses about differences in perceptions between clinical and demographic groups and may not have been large enough to detect such differences. The completion rate was twice that of a similar satisfaction survey (Care Quality Commission, 2009) and higher than in many other studies reported in the literature, suggesting that VOICE is more representative of users' views. We do not have data from non-responders, but we have little reason to consider that they were different from our sample. Our study was conducted in London boroughs with high levels of deprivation, ethnic diversity and psychiatric morbidity (Kirkbride et al., 2007; Morgan et al., 2006) and so may not be directly generalisable to other settings. Additionally, our sample included a high proportion of participants from black and minority ethnic communities. While this is a strength, it may be that different items would have been produced by other groups. We intend to develop versions of VOICE for use in other populations, including Mother and Baby units.

Conclusion

The study has demonstrated that a participatory methodology can generate items which are prioritised by users but not included in traditionally developed measures. VOICE is the first service-user generated, psychometrically robust measure of perceptions of acute care. It directly reflects the experiences and perceptions of service users in acute settings and as such, is a valuable addition to the PROMS library.

Acknowledgements

We acknowledge the financial support of the NIHR Biomedical Research Centre for Mental Health, South London and Maudsley NHS Foundation Trust/Institute of Psychiatry (King's College London).

Declaration of Interest: This article presents independent research commissioned by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research scheme (RP-PG-0606-1050). The views expressed in this publication are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

References

  • Bhugra, D., La Grenade, J., & Dazzan, P. (2000). Psychiatric inpatients' satisfaction with services: A pilot study. International Journal of Psychiatry in Clinical Practice, 4, 327–332.
  • Care Quality Commission. (2009). Mental Health Acute Inpatient Service Users Survey 2009: South London and Maudsley NHS Foundation Trust. London: NatCen.
  • Crawford, M., Robotham, D., Thana, L., Patterson, S., Weaver, T., Barber, R., et al. (2011). Selecting outcome measures in mental health: The views of service users. Journal of Mental Health, 20, 336–346.
  • Cronbach, L. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
  • Department of Health. (1999). National Service Framework for Mental Health. London: HMSO.
  • Department of Health. (2002). Mental Health Policy Implementation Guide: Adult Acute Inpatient Care Provision. London: HMSO.
  • Edwards, K. (2008). Service users and mental health nursing. Journal of Psychiatric and Mental Health Nursing, 7, 555–565.
  • Faulkner, A., Thomas, P. (2002). User-led research and evidence based medicine. British Journal of Psychiatry, 180, 1–3.
  • Fitzpatrick, R., Davey, C., Buxton, M., & Jones, D. (1998). Evaluating patient based outcome measures for use in clinical trials. Health Technology Assessment, 2, 1–86.
  • Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221–233.
  • Ford, R., Durcan, G., Warner, L., Hardy, P., & Muijen, M. (1998). One day survey by the mental health act commission of acute adult psychiatric inpatient wards in England and Wales. British Medical Journal, 317, 1279–1283.
  • Gillard, S., Borschmann, R., Turner, K., Goodrich-Purnell, N., Lovell, K., & Chambers, M. (2010). What difference does it make? Finding evidence of the impact of mental health service user researchers on research into the experiences of detained psychiatric patients. Health Expectations, 13, 185–194.
  • Greenfield, T., & Attkisson, C. (1989). Steps toward a multifactorial satisfaction scale for primary care and mental health services. Evaluation and Program Planning, 12, 271–278.
  • Greenfield, T., & Attkisson, C. (2004). The UCSF client satisfaction scales: II. The service satisfaction scale-30. In M. Maruish (Ed.), Psychological Testing: Treatment Planning and Outcome Assessment (pp. 813–837). London: Lawrence Erlbaum Associates.
  • Greenfield, T., Stoneking, B., Humphreys, K., Sundby, E., & Bond, J. (2008). A randomized trial of a mental health consumer-managed alternative to civil commitment for acute psychiatric crisis. American Journal of Community Psychology, 42, 135–144.
  • Greenwood, N., Key, A., Burns, T., Bristow, M., & Sedgwick, P. (1999). Satisfaction with in-patient psychiatric services. Relationship to patient and treatment factors. The British Journal of Psychiatry, 174, 159–163.
  • Hamilton, S., Pinfold, V., Rose, D., Henderson, C., Lewis-Holmes, E., Flach, C., et al. (2011). The effect of disclosure of mental illness by interviewers on reports of discrimination experienced by service users: A randomized study. International Review of Psychiatry, 23, 47–54.
  • Harvey, K., Langman, A., Winfield, H., Catty, J., Clement, S., White, S., et al. (2005). Measuring Outcomes for Carers for People with Mental Health Problems. London: NCCSDO.
  • Iversen, K., Høyer, G., & Sexton, H. (2007). Coercion and patient satisfaction on psychiatric acute wards. International Journal of Law and Psychiatry, 30, 504–511.
  • Katsakou, C., Bowers, L., Amos, T., Morriss, R., Rose, D., Wykes, T., et al. (2010). Coercion and treatment satisfaction among involuntary patients. Psychiatric Services, 61, 286–292.
  • Kirkbride, J., Morgan, C., Fearon, P., Dazzan, P., Murray, R., & Jones, P. (2007). Neighbourhood level effects on psychoses: Re-examining the role of context. Psychological Medicine, 37, 1413–1425.
  • Lin, L. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45, 255–268.
  • MIND. (2004). Ward Watch: Mind's Campaign to Improve Hospital Conditions for Mental Health Patients. London: MIND.
  • Morgan, D. (1993). Successful Focus Group Interviews: Advancing the State of the Art. London: SAGE Publications.
  • Morgan, C., Dazzan, P., Morgan, K., Jones, P., Harrison, G., Leff, J., et al. (2006). First episode psychosis and ethnicity: Initial findings from the AESOP study. World Psychiatry, 5, 40–46.
  • Rose, D. (2003). Collaborative research between users and professionals: Peaks and pitfalls. The Psychiatrist, 27, 404–406.
  • Rose, D., Evans, J., Sweeney, A., & Wykes, T. (2011a). A model for developing outcome measures from the perspectives of mental health service users. International Review of Psychiatry, 23, 41–46.
  • Rose, D., Leese, M., Oliver, D., Sidhu, R., Bennewith, O., Priebe, S., et al. (2011b). A comparison of participant information elicited by service user and non-service user researchers. Psychiatric Services, 62, 210–213.
  • Rose, D., Sweeney, A., Leese, M., Clement, S., Burns, T., Catty, J., et al. (2009). Developing a user-generated measure of continuity of care: Brief report. Acta Psychiatrica Scandinavica, 119, 320–324.
  • Secker, J., & Harding, C. (2002). African and African Caribbean users' perceptions of inpatient services. Journal of Psychiatric and Mental Health Nursing, 9, 161–167.
  • Sharac, J., McCrone, P., Sabes-Figuera, R., Csipke, E., Wood, A., & Wykes, T. (2010). Nurse and patient activities and interaction on psychiatric inpatients wards: A literature review. International Journal of Nursing Studies, 47, 909–917.
  • Shattell, M., Andes, M., & Thomas, S. (2008). How patients and nurses experience the acute care psychiatric environment. Nursing Inquiry, 15, 242–250.
  • Sim, J., & Wright, C. (2005). The kappa statistic in reliability studies: Use, interpretation and sample size requirements. Physical Therapy, 85, 257–268.
  • Svensson, B., & Hansson, L. (1994). Patient satisfaction with inpatient psychiatric care. Acta Psychiatrica Scandinavica, 90, 379–384.
  • The Sainsbury Centre for Mental Health. (2002). Breaking the Circles of Fear. A Review of the Relationship Between Mental Health Services and African and Caribbean Communities. London: The Sainsbury Centre for Mental Health.
  • Trivedi, P., & Wykes, T. (2002). From passive subjects to equal partners: Qualitative review of user involvement in research. British Journal of Psychiatry, 181, 468–472.
  • Walsh, J., & Boyle, J. (2009). Improving acute psychiatric hospital services according to inpatient experiences. A user-led piece of research as a means to empowerment. Issues in Mental Health Nursing, 30, 31–38.
  • Wensing, M., Grol, R., & Smits, A. (1994). Quality judgements by patients on general practice care: A literature analysis. Social Science and Medicine, 38, 45–53.
  • Williams, B. (1994). Patient satisfaction: A valid concept? Social Science and Medicine, 38, 509–516.

Appendix