Attitudes to patient safety amongst medical students and tutors: Developing a reliable and valid measure

Pages e370-e376 | Published online: 09 Sep 2009

Abstract

Background: Patient safety education is an increasingly important component of medical school curricula.

Aims: This study reports on the development of a valid and reliable patient safety attitude measure targeted at medical students, which could be used to compare the effectiveness of different forms of patient safety education delivery.

Methods: The Attitudes to Patient Safety Questionnaire (APSQ) was developed as a 45-item measure of attitudes towards five patient safety themes. In Study 1, factor analysis conducted on the responses of 420 medical students and tutors, revealed nine interpretable factors. The revised 37-item APSQ-II was then administered to 301 students and their tutors at two further medical schools.

Results: Good stability of factor structure was revealed with reliability coefficients ranging from 0.64 to 0.82 for the nine factors. The questionnaire also demonstrated good criterion validity, being able to distinguish between tutors and students across a range of domains.

Conclusions: This article reports on the first attempt to develop a valid and reliable measure of patient safety attitudes which can distinguish responses between different groups. The predictive validity of the measure is yet to be assessed. The APSQ could be used to measure patient safety attitudes in other healthcare contexts in addition to evaluating changes in undergraduate curricula.

Introduction

The frequency of medical errors and the consequences of error in healthcare are well documented (Leape et al. Citation1991; Vincent et al. Citation2001). Over the last decade, numerous healthcare interventions have been introduced in an attempt to reduce medical errors and to improve patient safety, but a major barrier has been the organisational culture of healthcare environments (Leape & Berwick Citation2005). An essential component of safety culture is the attitude of doctors to medical error, including the disclosure of, and responsibility for, error, and it has been suggested that these attitudes can be improved by appropriate education in medical schools (Aron & Headrick Citation2002; Walton & Elliott Citation2006).

However, despite calls to increase the emphasis on understanding error, systems thinking and patient safety in medical training curricula (Institute of Medicine (IOM) Citation1999; Department of Health 2001), what evidence there is suggests that few training courses in the UK contain taught components on these topics (Wakefield et al. Citation2005). In the US, a survey of medical school programme directors (Rosebraugh et al. Citation2001) revealed that only 16% provided formal lectures about medication errors, despite an acknowledgement of the need for such material: 65% indicated that if short modules were available they would incorporate them into the curriculum. The need to build patient safety into the undergraduate curriculum is a growing concern worldwide (Coyle et al. Citation2005; Halbach & Sullivan Citation2005; Madigosky et al. Citation2006; Walton et al. Citation2006); however, there is also a need to evaluate the efficacy of these curriculum changes. Many components of the undergraduate curriculum in medicine concern the acquisition of information or skills and are delivered through lectures, tutorials, demonstrations or more experiential learning, and are assessed via exams, coursework or practicals at the end of the period of learning. However, patient safety learning often has attitude and behaviour change as an objective, making the assessment of learning outcomes, and of the efficacy of the education, difficult using traditional methods.

Some attempts have been made to evaluate patient safety education. For example, Patey et al. (Citation2007) evaluated the development of a new patient safety module for medical students in the UK and, although the attitude measure they used to evaluate the success of the new module was theoretically derived, it was not validated. Muller and Ornstein (Citation2007) also measured attitudes of medical trainees, including medical students, this time focussing on the perceived consequences of making a medical error with more or less serious outcomes. However, the authors did not attempt to validate their scale and again its focus was narrow, being confined to the making of medical errors. Coyle et al. (Citation2005) investigated the effectiveness of a patient safety programme for medical graduates by collecting pre- and post-intervention attitude and behaviour data. The measure they employed comprised five items, and again these were limited to attitudes and behaviour relating to the reporting of adverse events, only one component of patient safety.

Thus, in the two studies we report here we aim to develop a measure of the patient safety attitudes of undergraduate students and their tutors that has face validity (Study 1), internal reliability (Studies 1 and 2) and criterion validity (Study 2).

Study 1 – Questionnaire design and initial testing

Methods

Participants

Questionnaires were disseminated to all undergraduate medical students in years 1–5 (N = 1226) and all tutors (N = 93) at a large University School of Medicine in the north of England, UK. Respondents were advised their participation was voluntary and were assured their responses would be completely anonymous. A small ‘prize draw’ incentive of £50 (in the form of a book token) was offered to maximise response rates.

Questionnaire development

A 45-item questionnaire was developed which aimed to measure the knowledge of, and attitudes towards, patient safety of both medical students and their tutors on an undergraduate medical training programme at the University of Leeds. The authors identified five broad themes common within the patient safety literature which they agreed represented current thinking in the field. The five themes were: ‘general perception of errors’, ‘error causation’, ‘error improvement strategies’, ‘error reporting’ and ‘learning and teaching issues’. It was anticipated that, had such issues been taught on the course (for example, the difference between individual and ‘systems’ causes of error, the importance of error reporting for organisational learning, and error likelihood), the questionnaire would be sufficiently sensitive to detect this knowledge in both student and tutor samples.

Fifteen questionnaire items from existing validated measures of patient safety attitudes, judged by two psychologists and a medical education expert to reflect one or more of the five themes, were included in the questionnaire. Three of these items were taken from the Operating Room Management Attitude Questionnaire (ORMAQ; Schaefer & Helmreich Citation1993) and reflected healthcare professionals’ general perceptions of error (e.g. error inevitability). Nine items were taken from the Medical Student Survey (Sorokin et al. Citation2005) and reflected student attitudes towards error improvement strategies (e.g. vigilance vs. teamwork training, shift pattern and IT system changes, etc.). A further three items were taken from the Safety Attitudes Questionnaire (SAQ; Sexton et al. Citation2003) (Note 1), reflecting general perceptions of error (e.g. preventability and professionalism issues) and reporting likelihood (Note 2).

A further 30 items, untapped by other sources, were developed independently by two of the authors and reviewed for face validity by a third author, a clinician. These items covered topics such as ‘carelessness’ or ‘unprofessionalism’ as predominant causes of errors (as opposed to systems explanations), working hours, workload planning, error reporting culture and teamwork issues. Mixing positively and negatively worded items reduced the tendency for response bias, as for some items (those indicated by (R)) a high score indicated more negative beliefs about patient safety. These items were then recoded, so that in all analyses a high score on the items and factors is indicative of a more positive attitude to patient safety. A positive attitude to patient safety is one that accords with current thinking about systems causes, rather than individual blame, for error. Thus a person with a set of positive beliefs acknowledges the importance of learning about patient safety, sees an important role for patient involvement, has confidence to report incidents, recognises the influence of local conditions on patient safety, etc. Items were measured on a Likert-type scale from 1 (strongly disagree) to 7 (strongly agree). Items were worded to avoid ambiguity and, since published definitions of terminology have been found to be extremely variable within the patient safety literature (Yu et al. Citation2005), definitions of patient safety terms based on psychological human error theory (Reason Citation1990) were presented within the questionnaire. Medical error was defined as ‘the failure to properly carry out an appropriately planned action (slip) or successfully carrying out an incorrect action (mistake) where there is potential for patient harm’. A mistake was defined as ‘successfully carrying out an action you believe to be correct but which is not’. Finally, an adverse outcome was defined as ‘harm to the patient that is not anticipated in the process of care’.
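
To make the recoding step concrete, the sketch below illustrates reverse-keying 7-point Likert items so that a high score always reflects a more positive patient safety attitude. This is a minimal, hypothetical illustration only; the item names and the list of reverse-keyed items are placeholders, not actual APSQ items.

```python
import pandas as pd

# Hypothetical responses to two 7-point Likert items (1 = strongly disagree, 7 = strongly agree).
# Column names and the reverse-keyed list are placeholders, not actual APSQ items.
responses = pd.DataFrame({
    "item_01": [2, 5, 7],   # positively keyed: agreement reflects a positive safety attitude
    "item_02": [6, 3, 1],   # reverse keyed (marked (R)): agreement reflects a negative attitude
})
reverse_keyed = ["item_02"]

# On a 1-7 scale, recode reverse-keyed items as (max + min) - raw = 8 - raw,
# so that a high score always indicates a more positive attitude to patient safety.
responses[reverse_keyed] = 8 - responses[reverse_keyed]
print(responses)
```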

Table 1.  Reliability coefficients, factor loadings (FL) and inter-item correlations of the nine-factor rotated solution

Table 2.  Means (SD: standard deviations), loading ranges, reliability coefficients and mean inter-item correlations of a nine-factor rotated solution

Table 3.  APSQ-III items and corresponding factor loadings (FL)

Procedure

The Attitudes to Patient Safety Questionnaire (APSQ) was developed to be completed on-line and was therefore disseminated electronically via email, as this was considered the quickest and most cost-effective strategy and the most appropriate for the sample (some of whom were not university based but on clinical placements). Participants were given 4 weeks to complete the questionnaire and were sent a reminder email after 2 weeks. The on-line and anonymous nature of the questionnaire precluded targeted reminders to non-responders. However, it was noted that dissemination of the questionnaire during exam periods may have been responsible for the low response rates observed after the 4 weeks. Therefore, a third email was sent to all participants 2 weeks later, which substantially improved overall response rates. In addition to items on the attitude measure, data were also collected on gender, ethnicity, age and level of training. Ethical approval for both studies was granted by the Institute of Psychological Sciences’ ethics committee at the University of Leeds, and the studies complied with British Psychological Society (2006) ethical guidelines. Anonymity was assured and students were made aware of their right to withdraw. Consent was denoted by completion of the questionnaire, which was done on an entirely voluntary basis.

Statistical analysis

Factor analysis was used to assess the dimensionality of the scale. Item analysis was also conducted based on the item means and standard deviations, Cronbach's alpha and the inter-item correlation.
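
For readers wishing to reproduce the item analysis, the following sketch shows one way to compute Cronbach's alpha and the mean inter-item correlation for a set of items held in a pandas DataFrame. This is not the authors' analysis code; it simply implements the standard formulae, with simulated data standing in for the APSQ responses.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def mean_inter_item_correlation(items: pd.DataFrame) -> float:
    """Mean of the off-diagonal (pairwise) Pearson correlations between items."""
    corr = items.corr().to_numpy()
    return corr[np.triu_indices_from(corr, k=1)].mean()

# Example with simulated data for a hypothetical 4-item scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
scale = pd.DataFrame({f"item_{i}": latent + rng.normal(size=200) for i in range(4)})
print(cronbach_alpha(scale), mean_inter_item_correlation(scale))
```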

Results

A total of 364 students (30% response rate) and 66 tutors (71% response rate) completed the questionnaire: a combined response rate of 33%. Students’ demographic data revealed ages ranging from 18 to 35 years (mean = 21.17 years); the majority were female (n = 232: 63.7%) and described their ethnicity as ‘white British’ (n = 256: 70.3%). This reflects the demographic profile of the medical student body at this university. In terms of year of study, responses came from 57 first-year, 89 second-year, 61 third-year, 76 fourth-year and 52 fifth-year students (29 students did not disclose their year of study). Tutors’ demographic data revealed ages ranging from 24 to 60 years (mean = 39.71 years), an equal number of male and female respondents, and approximately half of the tutors described their ethnicity as ‘white British’ (n = 31, 47%).

Preliminary analysis of the APSQ: Version 1

These results represent the data from 364 undergraduate students from years 1 to 5 and 66 tutors from the University of Leeds’ School of Medicine. Study 1 was the developmental phase of the APSQ and the themes created were deliberately broad, so it was anticipated by the authors that exploratory analysis of these data might yield a factor structure which did not correspond with the original five themes but instead separated into smaller categories. Since it was the intention to conduct a follow-up study to evaluate the overall factor structure as well as the internal consistency of the sub-scales of a refined measure, this was not considered problematic.

In order to assess the factor structure of the questionnaire, the factorability of the 45 APSQ items was first examined. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.74, well above the recommended value of 0.6 (Hutcheson & Sofroniou Citation1999, pp. 224–225), and Bartlett's test of sphericity was also significant (χ2 (990) = 4741.08, p < 0.001) (Field Citation2005, p. 640). Since these values indicated that factor analysis was appropriate for the sample data, principal components analysis with oblique rotation (direct oblimin) was performed on the 45 items. This analysis revealed no significant correlations between factors; therefore, to facilitate interpretation of the factors (the direct oblimin method produced multiple cross-loadings), varimax rotation was applied, using the criteria of eigenvalues >1 (Kaiser Citation1960) and factor loadings (FL) ≥0.3 to sort items into factors (Note 3). This yielded a 13-factor solution which accounted for 60.2% of the variance (after rotation). Scree plot analysis was generally uninterpretable and a clear solution could not be obtained from it. However, factors 10–13, which comprised 9 items, accounted for only 13.6% of the overall variance and were agreed by the authors to be uninterpretable as factors in terms of face validity; they were subsequently removed. Therefore, a nine-factor rotated solution was accepted. After 6 additional items with double or triple loadings were removed from further analysis, the 9 factors accounted for 62.1% of the variance in the data and comprised 30 items. Table 1 shows the item loadings, Cronbach's alpha and mean inter-item correlation for the items comprising each factor.
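
A rough approximation of this analysis can be run with the open-source factor_analyzer package, as sketched below. This is not the software or exact procedure the authors used; the file name is hypothetical, and the package's 'principal' extraction approximates, rather than exactly reproduces, an SPSS principal components analysis.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Respondents-by-items matrix of recoded APSQ responses (hypothetical file name).
df = pd.read_csv("apsq_responses.csv")

# Factorability checks: Bartlett's test of sphericity and the Kaiser-Meyer-Olkin measure.
chi_square, p_value = calculate_bartlett_sphericity(df)
_, kmo_overall = calculate_kmo(df)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}; KMO = {kmo_overall:.2f}")

# Kaiser criterion: retain components with eigenvalues greater than 1.
unrotated = FactorAnalyzer(rotation=None)
unrotated.fit(df)
eigenvalues, _ = unrotated.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Extraction with varimax rotation; items are assigned to factors on loadings >= 0.3.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
print(loadings.where(loadings.abs() >= 0.3))
```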

Since Study 2 involved testing the factor structure of the revised questionnaire (version 2), it was decided that each of the nine scales should comprise no fewer than four items, as recommended to ensure good scale validity and internal consistency. Since the factor analysis had involved the removal of several items, five factors (scales) had fewer than four items remaining. To improve the internal consistency of these scales, a further seven items were developed by the authors to reflect each of these scales, bringing the total number of items in the revised questionnaire (APSQ-II) to 37.

Study 2 – APSQ validation

Methods

Participants

The APSQ-II was administered at two further medical schools (Manchester and East Anglia) to examine the factor structure of the revised questionnaire and the internal consistency of the sub-scales. The criterion validity of the scale was also investigated by comparing the scores of tutors and students on the nine sub-scales. It was predicted that tutors would show attitudes that conformed more closely to systems approaches to patient safety.

Data collection

The APSQ-II was disseminated electronically using the same methods described in Study 1 (via email with an online link) to all undergraduate medical students in years 1–5 (N = 1670) and all tutors involved with teaching some element of the undergraduate medical degree (N = 775) at the Manchester and East Anglia University Schools of Medicine. Participants were given a total of 4 weeks to complete the questionnaire and were sent two email reminders: one 2 weeks after the initial email and, to improve response rates, a second 2 weeks after the 4-week deadline had passed. In line with Study 1, participants were given the option of being entered into a small ‘prize draw’ with the opportunity of winning £50 (in the form of a book token) or remaining anonymous.

Results

A total of 114 students (20% response) and 57 tutors (10% response) at East Anglia and 100 students (9% response) and 30 tutors (15% response) at Manchester completed the APSQ online. Although response rates were low, the combined data yielded an adequate subject-to-variable ratio (Grimm & Yarnold Citation1995, pp 99–136). Demographic data for the total student sample revealed an age range from 18 to 51 years (mean = 24.14 years), the majority were female (n = 135: 63.1%) and described themselves as ‘white British’ ethnicity (n = 157: 59.3%). Responses were spread evenly across the 5 years of study. Demographic data for the tutor sample revealed an age range from 33 to 63 years (mean = 46.99 years), the majority were male (n = 56: 64.4%) and described themselves as ‘white British’ ethnicity (n = 65: 74.7%).

Construct validity and internal consistency of the nine sub-scales

Factorability of the 37 APSQ-II items was first examined. The Kaiser–Meyer–Olkin measure of sampling adequacy was 0.75, and Bartlett's test of sphericity was also significant (χ2 (666) = 2936.38, p < 0.001), suggesting the data were suitable for factor analysis. Data from the 37-item revised APSQ-II were analysed using principal components analysis with varimax rotation (after checking for correlations between factors using the oblimin method described in Study 1). This yielded an 11-factor solution accounting for 63.8% of the variance in the data. As in Study 1, it was not possible to determine a factor solution from scree plot analysis. However, components 10 and 11 comprised only 3 items, accounted for only 7.8% of the overall variance in the data and were considered uninterpretable as factors; they were subsequently removed. Table 2 summarises each of the remaining nine factors’ mean scores, loading ranges, reliability coefficients and mean inter-item correlations. Of the 37 items administered in the APSQ-II, 32 remained in the same factors which emerged from Study 1, indicating good stability of factor structure. After removal of a total of 7 redundant items (3 from factors 10 and 11 and a further 4 which had multiple loadings), the 9-factor, 30-item solution accounted for 63.79% of the variance in the data.

To maximise internal consistency, items within a scale should be highly correlated with one another. The factor loadings (FL) provide an indication of the extent to which each item is a good measure of the construct (i.e. the whole factor). The mean inter-item correlations provide information about the extent to which the items are related to one another; thus the aim is to increase this value. Items were therefore deleted (n = 4) if they had low FL and their removal increased the mean inter-item correlation while having little impact on reliability. The APSQ-III comprises 26 items across 9 key patient safety factors with an overall alpha of 0.73 (Note 4). The final factor structure, describing the loadings of each item on its corresponding factor, is summarised in Table 3.
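
The item-deletion rule described above can be expressed as a simple loop, sketched below. It reuses the cronbach_alpha and mean_inter_item_correlation helpers from the earlier sketch; the loading cut-off, alpha tolerance and minimum scale length are illustrative assumptions, not the authors' exact criteria.

```python
import pandas as pd
# Assumes cronbach_alpha() and mean_inter_item_correlation() from the earlier sketch are in scope.

def prune_scale(items: pd.DataFrame, loadings: pd.Series,
                loading_cutoff: float = 0.4, alpha_tolerance: float = 0.02,
                min_items: int = 2) -> pd.DataFrame:
    """Iteratively drop the weakest-loading item while doing so raises the mean
    inter-item correlation and reduces Cronbach's alpha by no more than the tolerance.
    `loadings` holds each item's loading on the factor the scale represents."""
    scale = items.copy()
    while scale.shape[1] > min_items:
        weakest = loadings.loc[scale.columns].abs().idxmin()
        if abs(loadings.loc[weakest]) >= loading_cutoff:
            break  # all remaining items load adequately on the factor
        reduced = scale.drop(columns=[weakest])
        more_homogeneous = (mean_inter_item_correlation(reduced)
                            > mean_inter_item_correlation(scale))
        alpha_preserved = cronbach_alpha(reduced) >= cronbach_alpha(scale) - alpha_tolerance
        if more_homogeneous and alpha_preserved:
            scale = reduced
        else:
            break
    return scale
```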

Criterion validity

The differences between tutor and student scores on the nine sub-scales were investigated using multivariate analysis of variance. The multivariate F was significant (F (9,245) = 18.77, p < 0.001), revealing differences between tutors and students across these factors. Inspection of the univariate statistics revealed that tutors scored significantly higher on ‘error inevitability’ (F (1,253) = 31.06, p < 0.001), ‘patient involvement’ (F (1,253) = 10.88, p < 0.001), ‘importance of patient safety in the curriculum’ (F (1,253) = 23.33, p < 0.001) and ‘incompetence’ as a cause of errors (F (1,253) = 64.25, p < 0.001). Tutors scored significantly lower on ‘working hours as error cause’ (F (1,253) = 24.52, p < 0.001), ‘error reporting confidence’ (F (1,253) = 19.42, p < 0.001) and ‘patient safety training received’ (F (1,253) = 16.74, p < 0.001). These differences suggest that the tool is able to distinguish the attitudes of the two groups, with tutors showing greater awareness of the fallibility of human performance and of the role of patient involvement and of training of medical staff in patient safety. Conversely, medical trainees reported greater confidence in reporting errors, stronger beliefs that shorter working hours would reduce errors, and a stronger belief that errors result, in part, from incompetence.
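
As a sketch of how such a comparison could be run in Python with statsmodels (not the software the authors used; the data file and sub-scale column names are placeholders, and only some of the nine sub-scales are shown):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

# One row per respondent: a 'group' column ('tutor' or 'student') plus one column
# per sub-scale score. File and column names are hypothetical placeholders.
df = pd.read_csv("apsq2_subscale_scores.csv")
subscales = ["error_inevitability", "patient_involvement", "curriculum_importance",
             "incompetence_as_cause", "working_hours", "reporting_confidence",
             "training_received"]

# Multivariate test of an overall tutor/student difference across the sub-scales.
manova = MANOVA.from_formula(" + ".join(subscales) + " ~ group", data=df)
print(manova.mv_test())

# Follow-up univariate ANOVAs, one per sub-scale.
for scale in subscales:
    fit = ols(f"{scale} ~ group", data=df).fit()
    print(scale)
    print(sm.stats.anova_lm(fit, typ=2))
```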

All student respondents were asked to provide some background information, including the question ‘has patient safety been taught as part of the undergraduate syllabus so far?’. This was used as a second criterion measure, and participants were separated into those who answered yes (n = 102) and those who answered no (n = 88). A significant multivariate difference was found between these two groups (F (9,180) = 4.86, p < 0.001). Univariate statistics revealed significant differences on three sub-scales, with those participants who had been taught about patient safety indicating greater agreement about the importance of patient safety and greater confidence in reporting. Not surprisingly, this group also indicated more positive attitudes about the patient safety training they had received.

Discussion

There are few validated instruments available to assess the attitudes of medical students towards patient safety, yet patient safety education is especially concerned with changes in attitudes and culture. The questionnaire was developed to tap a range of attitudes that denote current ‘systems’ thinking about medical error, including the causes, reporting and management of error. Items were developed by drawing on existing measures and through a wider search of the literature. The face validity of the items was assessed through review by the second and third main authors (Study 1).

The findings reported here provide some evidence for a questionnaire with a robust and stable factor structure across two independent samples. Moreover, the sub-scales identified through factor analysis show moderate to good internal consistency and reasonable mean inter-item correlations. The second study also demonstrated the criterion validity of the tool, in that it can distinguish between groups of tutors and students and between students who have been exposed to patient safety training and those who have not. It is worthy of note, however, that the tutors who responded to this questionnaire did not show consistently more positive beliefs about patient safety. It may be that their somewhat more negative attitudes on some factors are a function of a different set of core values that were dominant at the time of their own training. Alternatively, it might be that their beliefs reflect a better understanding of the organisational culture within the NHS. For example, tutors expressed less confidence in reporting an error without fear of reprisal, a belief that might arise from personal experience of blame. It is anticipated that this questionnaire could be used as a before-and-after measure to assess the success of a change in the curriculum to incorporate patient safety training. With some amendments, e.g. removal of items pertaining to patient safety in the curriculum, the questionnaire could be used to measure attitudes within other populations of qualified healthcare professionals, perhaps as an outcome measure in patient safety intervention studies, where reliable and valid tools are often lacking (Flin et al. Citation2006). With further refinement and validation, the questionnaire could be used as a means of evaluating positive attitudes towards patient safety.

Limitations and further research

The poor response rate, particularly in the second study, means that the findings reported here must be treated with some caution. Low response rates are not surprising given the web-based format of the data collection, in which respondents were allowed complete anonymity, making reminders to non-respondents impossible. Indeed, a large-scale study of web-based questionnaires amongst undergraduate students reported a 17% response rate (Sax et al. Citation2003). The incentive to complete the questionnaire was relatively minor and completion was not linked to course requirements. Despite this, the responses were evenly distributed across years, and the gender, age and ethnicity data were closely aligned to the medical student population as a whole, suggesting that if there were some non-response bias, it may have related to frequency of access to email. However, the absolute values of the responses, i.e. the mean scores on the factors shown in Table 2, may not be generalisable beyond this potentially more conscientious and patient-safety-enlightened population of medical students. Thus, these values have been given little consideration in this paper. Nevertheless, with the exception of error inevitability, for which there is little variation in scoring, the means and standard deviations of the remaining factors indicate good variability in responses. Further research is now needed to demonstrate the test–retest reliability and the predictive validity of the questionnaire in a more representative sample of undergraduate medical students.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

Additional information

Notes on contributors

Sam Carruthers

SAM CARRUTHERS is a research fellow in the Academic Unit of Psychiatry and Behavioural Sciences of the University of Leeds, UK.

Rebecca Lawton

REBECCA LAWTON is a senior lecturer in Health Psychology at the Institute of Psychological Sciences, University of Leeds, UK.

John Sandars

JOHN SANDARS is a senior lecturer in Community-based Education at the University of Leeds, School of Medicine, UK.

Amanda Howe

AMANDA HOWE is a professor of Primary Care at the School of Medicine, Health Policy & Practice, University of East Anglia, UK.

Mark Perry

MARK PERRY is a lecturer in Primary Care and Communication at the School of Medicine, University of Manchester, UK.

Notes

1. The SAQ is a refinement of the Intensive Care Unit Management Attitudes Questionnaire (ICUMAQ; Thomas et al. Citation2003) which was derived from a questionnaire widely used in commercial aviation, the Flight Management Attitudes Questionnaire (FMAQ; Helmreich et al. Citation1993).

2. Although these tools were measures of patient safety attitudes, none were considered appropriate for a UK medical student sample in their entirety; hence only select items were used.

3. Norman & Streiner (2000) recommend a formula for the minimum acceptable FL when the sample size, N, is 100 or more: Min FL = 5.152/√(N − 2). In both Study 1 and Study 2 the calculated value was 0.3 (a worked instance is given after these notes). It should be noted, however, that acceptable FL cut-offs are purely arbitrary and items are generally judged by their interpretability as part of that factor.

4. Multivariate analyses of variance were conducted to investigate the ability of the nine scales to discriminate between the medical schools and the participant group (student vs. tutor). Significant main effects were found for both independent variables (F (9,243) = 2.26, p < 0.05 and F (9,243) = 17.85, p < 0.001, respectively). These findings will be reported in a separate paper.
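
As a worked instance of the Norman and Streiner formula in Note 3 (using the combined Study 2 sample of N = 301 reported above):

\[
\mathrm{Min\ FL} = \frac{5.152}{\sqrt{N-2}} = \frac{5.152}{\sqrt{301-2}} \approx \frac{5.152}{17.29} \approx 0.30
\]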

References

  • Aron DC, Headrick LA. Educating physicians prepared to improve care and safety is no accident: It requires a systematic approach. Qual Saf Health Care 2002; 11: 168–173
  • British Psychological Society. 2006. Code of ethics and conduct. Leicester, UK: British Psychological Society.
  • Coyle YM, Mercer SQ, Murphy-Cullen CL, Schneider GW, Hynan LS. Effectiveness of a graduate medical education program for improving medical event reporting attitude and behavior. Qual Saf Health Care 2005; 14: 383–388
  • Department of Health. 2001. Building a safer NHS for patients: Implementing an organisation with a memory. London: Stationery Office.
  • Field A. Discovering statistics using SPSS. Sage Publications, London 2005
  • Flin R. Operating Room Management Attitude Questionnaire (ORMAQ; University of Texas). Anaesthesia 2003; 58: 233–242
  • Flin R, Burns C, Mearns K, Yule S, Robertson EM. Measuring safety climate in health care. Qual Saf Health Care 2006; 15: 109–115
  • Grimm LG, Yarnold PR. Reading and understanding multivariate statistics. American Psychological Association, Washington DC 1995
  • Halbach JL, Sullivan LL. Teaching medical students about medical errors and patient safety: Evaluation of a required curriculum. Acad Med 2005; 80: 600–606
  • Helmreich RL, Merritt AC, Sherman PJ, Gregorich SE, Wiener EL. The Flight Management Attitudes Questionnaire (FMAQ). NASA/UT/FAA Technical Report 93–4, The University of Texas, Austin, TX 1993
  • Hutcheson G, Sofroniou N. The multivariate social scientist: Introductory statistics using generalised linear models. Sage Publications, Thousand Oaks, CA 1999
  • Institute of Medicine. To err is human: Building a safer health system. National Academy Press, Washington DC 1999
  • Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas 1960; 20: 141–151
  • Leape LL, Berwick DM. Five years after To Err is Human: What have we learned?. J Am Med Assoc 2005; 293: 2384–90
  • Leape LL, Brennan TA, Laird N, Lawthers NG, Localio AR, Barnes BA, Hebert L, Newhouse JP, Weiler PC, Hiatt H. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324: 377–384
  • Madigosky WS, Headrick LA, Nelson K, Cox KR, Anderson T. Changing and sustaining medical students’ knowledge, skills and attitudes about patient safety and medical fallibility. Acad Med 2006; 81: 94–101
  • Muller D, Ornstein K. Perceptions of and attitudes towards medical errors among medical trainees. Med Educ 2007; 41: 645–652
  • Norman GR, Streiner DL. Biostatistics. The bare essentials. BC Decker Inc, London 2000
  • Patey R, Flin R, Cughbertson BH, MacDonald L, Mearns K, Cleland J, Williams D. Patient safety: Helping medical students understand error in healthcare. Qual Saf Health Care 2007; 16: 256–259
  • Reason J. Human error. Cambridge University Press, Cambridge, UK 1990
  • Rosebraugh CJ, Honig PK, Yasuda SU, Pezzullo JC, Flockhart DA, Woosley RL. Formal education about medication errors in internal medicine clerkships. J Am Med Assoc 2001; 286: 1019–1020
  • Sax LJ, Gilmartin SK, Bryant AN. Assessing response rates and nonresponse rates in web and paper surveys. Res High Educ 2003; 44: 409–432
  • Schaefer H, Helmreich R. The Operating Room Management Attitudes Questionnaire (ORMAQ). NASA/University of Texas FAA Technical Report, University of Texas, Austin 1993
  • Sexton JB, Thomas EJ, Grillo SP. The Safety Attitudes Questionnaire (SAQ). Technical Report 03-02. Austin, The University of Texas Center of Excellence for Patient Safety Research and Practice, Texas 2003
  • Sorokin R, Riggio JM, Hwang C. Attitudes about patient safety: A survey of physicians-in-training. Am J Med Qual 2005; 2: 70–77
  • Thomas EJ, Sexton JB, Helmreich RL. Discrepant attitudes about teamwork among critical care nurses and physicians. Crit Care Med 2003; 31(3)956–959
  • Vincent C, Neale G, Woloshynowych M. Adverse events in Bristol hospitals: Preliminary retrospective record review. Brit Med J 2001; 322: 517–519
  • Wakefield A, Attree M, Braidman I, Carlisle C, Johnson M, Cooke H. Patient safety: Do nursing and medical curricula address this theme?. Nurse Educ Today 2005; 25: 333–340
  • Walton MM, Elliott SL. Improving safety and quality: How can education help. Med J Aust 2006; 184: S60–S63
  • Walton M, Shaw T, Barnet S, Ross J. Developing a national patient safety education framework for Australia. Qual Saf Health Care 2006; 15: 437–42
  • Yu KH, Nation RL, Dooley MJ. Multiplicity of medication safety terms, definitions and functional meanings: When is enough enough?. Qual Saf Health Care 2005; 14: 358–363
