Research Article

What are the most important non-academic attributes of good doctors? A Delphi survey of clinicians

Pages e347-e354 | Published online: 27 Jul 2010

Abstract

Background: Although the profession's guidance documents identify a multiplicity of qualities and behaviours considered essential in a good doctor, there is no consensus as to their relative importance, nor agreement as to the core qualities that should be, or could feasibly be, assessed in the limited time of the typical medical school interview.

Aim: The aim of the study was to identify the most important generic attributes of good doctors, in order to inform the content of the undergraduate medical student selection process.

Method: The study used a Delphi survey to systematically gather the opinion of a panel of experts from a range of medical specialties as to the most important core attributes of good doctors. Additionally, a snapshot of opinion was obtained from the attendees of workshops held at a medical school educational conference.

Results: Common core attributes of a good doctor were identified across a number of medical specialties.

Conclusions: Consensus among clinicians from disparate specialties can be reached as to the most important generic attributes of good doctors and can be used to inform the choice of personal qualities and behaviours examined during the undergraduate medical student selection process.

Introduction

A recommendation of the Tooke inquiry was that the medical profession work to reach a consensus about the correct and essential attributes of a future doctor (Tooke Citation2008). To this end, the Medical Schools Council issued a consensus statement that clearly defined the role of a doctor and described a number of abiding attributes required in doctors (MSC Citation2008). The attributes described derive from the outcomes expected of graduates, as outlined in Tomorrow's Doctors (GMC Citation2003), the qualities and behaviours needed in a doctor, as set out in Good Medical Practice (GMC Citation2006) and the attributes identified in the Guiding Principles for the Admission of Medical Students (MSC Citation2006). The latter of these reports identifies the following core attributes:

recognition that patient care is the primary concern of a doctor. Honesty, empathy, integrity, and an ability to recognise one's own limitations and those of others … good communication and listening skills, an ability to make decisions under pressure and remain calm … to cope with stress and have an understanding of professional issues such as teamwork and respect the contribution of other professions … Curiosity, creativity, initiative, flexibility and leadership (MSC Citation2006).

This list is by no means exhaustive. Indeed, Albanese et al. reported that the ‘literature identifies up to 87 different personal qualities relevant to the practice of medicine, and selecting the most salient of these that can be practically measured is a challenging task’ (Albanese et al. Citation2003). In practice, each medical school selects medical students ‘according to a profile of qualities considered by their admissions committee to be important in a medical student and future doctor’ (Powis Citation1998).

At present there is no general agreement among UK medical schools about what the most important non-academic attributes of a good doctor are, or about whether there are generic attributes common to all medical specialties (Parry et al. Citation2006; Patterson & Ferguson Citation2007). However, if consensus can be reached on generic non-cognitive attributes required across all stages of the medical career and common to all specialties, these could be used to ‘guide the design of selection criteria used to recruit to undergraduate medical courses at the outset of training’ (Patterson & Ferguson Citation2007). Such agreement would go some way towards reducing the number of qualities and behaviours to the key ones that could manageably be assessed in the time allotted to the typical medical school interview.

Aim

The purpose of this study was to find the consensus opinion among a sample of clinicians from a range of specialties as to the most important generic attributes of good doctors. The study will produce a ranked and rated list of personal qualities and behaviours that practising doctors consider the most important in a good doctor that can inform the content of the undergraduate medical student selection process.

Methods

The Delphi method

The Delphi survey technique is an anonymous, structured and iterative process carried out over a series of questionnaire rounds to systematically gather and aggregate the opinion of a panel of experts with the aim of reaching a consensus (Hsu Citation2007).

Sample size and selection of the expert panel

There is no agreement in the literature about expert panel sample size. Experts in a particular field are purposefully selected to bring their expertise to bear on specialist issues, based on their knowledge of and practical engagement with the issue under investigation (Akins et al. Citation2005). This study's expert panel was purposefully recruited from trained clinicians who are active and experienced medical educators at the Peninsula Medical School. A letter explaining the aims of the study and the considerable commitment in time required resulted in a sample of 10 participants, nine male and one female. Although the range of specialties of those who agreed to participate is limited, the sample included a paediatrician, a pathologist, a haematologist, an immunologist, a surgeon and five practising general practitioners (GPs).

The Delphi survey

In round one, the panel were sent a questionnaire containing a list of 20 attributes considered to be those of a good doctor, derived by the authors from the following documents: Tomorrow's Doctors (GMC Citation2003), Guiding Principles for the Admission of Medical Students (MSC Citation2006), Good Medical Practice (GMC Citation2006) and Medical students: professional behaviour and fitness to practise (GMC Citation2007). Although in some cases the terminology used to describe a particular attribute was common to each guidance document, in many cases, a different terminology was used to describe the same underlying concept (e.g. probity, honesty, trustworthiness and acting with integrity). Thus, a seeming multiplicity of terms was distilled into a list of 20 attributes.

The decision to provide a pre-selected list of attributes taken from key documents, rather than enable participants to generate items relevant to their specialism themselves, was made pragmatically. The generation of items to include in the survey would have considerably increased the number of survey rounds and commitment required of participants. Difficulty of recruitment and high attrition rates are common as the number of survey rounds increases, such as when potential participants are requested to spend several pre-survey rounds attempting to reach agreement as to what to include (Hardy et al. Citation2004). Additionally, the attributes considered important in a good doctor and outlined in the key guideline documents were selected by experts in the medical profession from a diversity of specialties. Thus, the pre-selected items used in this survey are no more than a tractable distillation of expert opinion on a wide range of attributes considered important in a good doctor, ready for rating and ranking to enable practical use in informing the student selection process.

Each participant was asked to rate the importance of each individual attribute on a five-point Likert scale: 1 (not at all important), 2 (moderately important), 3 (important), 4 (very important) and 5 (always important). Participants were also asked to rank each attribute in order of importance from 1 through to 20, with 1 as the most important and 20 as the least important.

In round two, participants received their individual round one questionnaire containing their rating and ranking of the attributes, along with the aggregate group rating and ranking. The panel was invited to rate and rank the attributes again and those who disagreed with the aggregate group judgement on any particular attribute, or attributes, were invited to give a brief reason for their disagreement.

In the third and final round, the process was repeated. However, participants also received a summary of the brief explanations given by those who disagreed with the aggregate group judgement for particular attributes. They were asked to rate and rank the attributes in the light of the emerging aggregate group rating and ranking and the qualitative feedback from those who dissented from the group aggregate.
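The aggregate group ranking fed back to the panel between rounds can be computed as the mean rank each attribute received, which is the measure of central tendency the study reports using. A minimal sketch of this aggregation, with hypothetical attribute names for illustration:

```python
def aggregate_ranking(rankings):
    """Aggregate individual rankings into a group ranking for feedback.

    rankings: one dict per panellist, mapping attribute -> rank (1 = most
    important). Returns the attributes ordered by mean rank, so the first
    element is the group's most important attribute.
    """
    attributes = rankings[0].keys()
    mean_rank = {a: sum(r[a] for r in rankings) / len(rankings) for a in attributes}
    return sorted(attributes, key=lambda a: mean_rank[a])
```

For example, two panellists who swap their first and second choices still agree that a third attribute is least important, and the aggregate reflects that.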

Additionally, 48 attendees of two workshops held at a medical school education conference were invited to rate and rank the 20 attributes of a good doctor as they appeared in the Delphi survey questionnaire. The workshop survey sample not only included clinicians from a much wider range of specialties than those who made up the Delphi panel, but also included scientists, administrators and support staff working in a medical school (Workshop 1, n = 32, and Workshop 2, n = 16). Whilst the rating and ranking of attributes by the Delphi panel was a three-round anonymous and iterative process, the surveys of workshop attendees were much less formal and involved an element of discussion and debate between individuals.

Data analysis

There is no general agreement in the literature over pre-determined levels of consensus. The criterion of agreement about the rating of each attribute used in this study is based on simple summary statistics and the assumption that the scale upon which the panel express their opinion is measured at the interval level. Thus, the mean, median, range and standard deviation (SD) for the rating of each item were calculated at each round of the survey. The mean, as a measure of central tendency, is taken to represent the panel's group opinion, and the SD, as a measure of spread, the amount of disagreement within the panel (Greatorex & Dexter Citation2000; Holey et al. Citation2007).
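These per-item, per-round summary statistics can be sketched as follows. The attribute name and the field names are ours, chosen for illustration; the calculation treats the Likert ratings as interval data, as the study does:

```python
import statistics

def round_summary(ratings_by_item):
    """Summary statistics for each attribute's Likert ratings in one round.

    ratings_by_item: dict mapping attribute name -> list of panel ratings (1-5).
    """
    summary = {}
    for item, scores in ratings_by_item.items():
        summary[item] = {
            "mean": statistics.mean(scores),      # group opinion
            "median": statistics.median(scores),
            "range": max(scores) - min(scores),   # larger range: outlier views
            "sd": statistics.stdev(scores),       # spread: panel disagreement
        }
    return summary
```

A falling SD and shrinking range across rounds would then evidence convergence towards consensus on that item.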

Similarly, at each round, consensus in ranking was determined using the mean to evidence convergence of views by a movement towards central tendency, and by use of the strength of agreement indicated by comparison of the SD (strength of aggregate judgement) and the range (with larger ranges indicating outlier views). We do however acknowledge that the assumption of an interval scale for Likert-type categories and the use of the mean and SD in the calculation of central tendency is controversial (Jamieson Citation2004); however, it has long been accepted that the validity of parametric statistics ‘is often affected very little by gross departures from these assumptions’ (Pell Citation2005).

Ethical considerations

Ethical approval was sought for this anonymous survey from the Peninsula College of Medicine and Dentistry Ethics Committee, which decided that, as the research did not raise any human participant protection issues, ethical approval was not required.

Results of the Delphi survey

Rating

In the final round (Table 1), ten attributes were rated at five on the five-point Likert scale by the panel and thus considered always important in a good doctor. These ten attributes also comprised the top ten aggregate rankings of the panel in the final round, and all attributes rated below five appeared in the bottom ten of the rankings, evidencing the consistency of judgement between the rating and ranking exercises.

Table 1.  Delphi panel's final round rating and ranking of the attributes of a ‘good’ doctor, rated on a five-point Likert scale (ranked 1 through to 20)

There was remarkable stability in the ratings of the top 10 attributes across the three rounds of the survey, indicating that clinicians, albeit from a limited range of specialties, appear to hold stable judgements when rating the most important attributes required of a good doctor (Table 2).

The strength of agreement between raters is commonly reported as a kappa statistic, which expresses the degree of actual agreement in classification over that which would be expected by chance, with perfect agreement equating to 1 and chance agreement to 0 (Cohen Citation1968). Kappa thus has the advantage of providing more information than a straightforward calculation of the raw proportions of agreement. When more than two raters assign ratings to ordered categorical data, as in this study, a weighted multi-rater kappa statistic is used to assess the magnitude of agreement; weighted kappa takes into account the ordinal nature of the data and assigns less weight to agreement as categories are further apart (Viera & Garrett Citation2005). Weighted multiple-rater kappa estimates, calculated to assess the degree of agreement between raters in respect of the 20 items at each of the three rounds, showed agreement to be fair at rounds one and two, and moderate at round three (weighted kappa = 0.3222, 0.38072 and 0.4193, respectively) (Landis & Koch Citation1977; calculations of weighted kappa are based on the methods outlined in O’Connell & Dobson Citation1984). The increase in kappa across the three rounds reflects the emerging consensus across the group in respect of the top 10 items, whilst the magnitude of the kappa at round three evidences continuing disagreement, particularly about the importance of the remaining attributes.
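The study's estimates use the multi-rater method of O’Connell and Dobson (1984). As a simpler illustration of how weighting discounts disagreement between distant ordinal categories, a linear-weighted Cohen's kappa for two raters can be sketched as:

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, n_categories):
    """Linear-weighted Cohen's kappa for two raters on ordinal categories 1..k.

    Illustrative two-rater version only; the study itself used the
    multi-rater extension of O'Connell & Dobson (1984).
    """
    n = len(ratings_a)
    k = n_categories
    obs = Counter(zip(ratings_a, ratings_b))           # joint classifications
    pa, pb = Counter(ratings_a), Counter(ratings_b)    # marginals per rater
    observed = expected = 0.0  # weighted observed / chance disagreement
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            w = abs(i - j) / (k - 1)  # linear weight: distant categories penalised more
            observed += w * obs.get((i, j), 0) / n
            expected += w * (pa.get(i, 0) / n) * (pb.get(j, 0) / n)
    return 1.0 - observed / expected
```

Full agreement returns 1, while agreement no better than chance returns 0 or below; ratings one category apart are penalised less than ratings at opposite ends of the scale.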

Table 2.  The Delphi panel's rating of the top 10 attributes of a ‘good’ doctor over the three rounds of the Delphi survey on a five-point Likert scale

Stability in the ranking of the top 10 attributes at each of the three rounds evidenced consistency of opinion across the stages of the survey (Table 3).

Table 3.  The Delphi panel's ranking of the top 10 attributes of a ‘good’ doctor over the three rounds of the Delphi survey, ranked 1 through to 20, with 1 the most important

Furthermore, the additional information provided to the panel in the form of brief summaries of reasons for dissent from the emerging group consensus does not appear to have had a dramatic effect on the stability of group opinion in respect of the ranking of the majority of the attributes.

Table 4 summarises the reasons given for dissent from the emerging consensus, offered by panel members when given the opportunity to comment at round two in the knowledge of the round one aggregate group rankings. Out of the 20 attributes, recognition that patient care is the primary concern of a doctor, probity, curiosity, potential for leadership, insight into medicine, pro-social attitude, reflective manner and awareness of ethical issues in professional medical behaviour were all the subjects of expressed dissent from the group consensus. However, only pro-social attitude, reflective manner and awareness of ethical issues in professional and medical behaviour changed ranked position after round two.

Table 4.  Reasons given by Delphi panel members for disagreement with the emerging consensus

Whether the brief reasoned arguments for dissent had an effect on group ranking is a matter of speculation. However, pro-social attitude was elevated two places (7 to 5) despite the argument in the feedback to the panel that medicine needs to accommodate individuals with a range of perspectives. The attribute reflective manner rose three places (15 to 12), perhaps in the light of the argument that a reflective manner is essential for lifelong learning and for maintaining a high standard in many of the other attributes. Awareness of ethical issues in professional and medical behaviour fell three places (12 to 15), possibly influenced by the feedback to the panel, which argued that compliance with guidelines will ensure conformity without awareness.

Results of the workshops

A snapshot of opinion was obtained from the attendees of two workshops held at a medical school conference. Although like is not being compared with like, it can nevertheless be seen that workshop attendees and the Delphi panel of experts ranked the same seven attributes in the ‘top ten’ most important attributes of a good doctor and rated these attributes as either very important or always important (workshop ratings are not included in the table). Stark disagreement about particular rankings is evident in relation to the attribute ability to cope with ambiguity, change, complexity and uncertainty, which was ranked sixth most important by both the Delphi expert panel and those in Workshop 1, but fourteenth most important by those who attended Workshop 2. Disparity in opinion about the importance ranking of the attributes commitment to lifelong learning, awareness of ethical issues in professional medical behaviour, ability to be a team player and ability to make decisions under pressure is also evident among the three samples. Overall, however, there is much agreement, especially in respect of the ‘top ten’ attributes. Given the wider range of medical specialties among the workshop sample, the findings indicate that clinicians from diverse specialties, and those from the wider population of individuals who work in medical education, appear to hold common judgements as to the most important generic attributes required of a ‘good’ doctor.

Discussion and conclusions

It is widely accepted that it takes more than academic ability to make a good doctor (Harris & Owen Citation2007; Patterson & Ferguson Citation2007), and medical schools have been encouraged for some time to select students for broader characteristics and develop selection procedures that assess non-cognitive attributes (GMC Citation1993; World Summit on Medical Education Communique Citation1994; Greengross Citation1997; Core Committee, Institute for International Medical Education Citation2002). Nevertheless, as Reiter and Eva commented, ‘little effort has been spent assessing the relative value of dozens of characteristics that have been identified’ (Reiter & Eva Citation2005). A generalisable aspect and value of this study is its demonstration of the utility of the Delphi survey method in enabling individual institutions to identify the non-cognitive attributes considered most salient by their faculty, stakeholders and community. However, as Albanese et al. point out, ‘it would be of substantial help to have a nationally defined set as a starting point’ (Albanese et al. Citation2003). Our starting point was the UK medical profession's key guidance documents.

The study produced a rated and ranked list of non-academic attributes of a good doctor derived from the medical profession's regulatory documents (Tomorrow's Doctors, GMC Citation2003; Good Medical Practice, GMC Citation2006; Guiding Principles for the Admission of Medical Students, MSC Citation2006; and Medical students: professional behaviour and fitness to practise, GMC Citation2007). There was remarkable stability across the rounds of the Delphi survey in opinion as to the top 10 attributes considered always important in good doctors. The findings indicate that clinicians from several different specialties seem to hold common opinions as to the most important core generic attributes of a good doctor.

It was reassuring that eight of the attributes ranked in the top 10 and rated as ‘always important’ in a good doctor by the Delphi panel were among those qualities and behaviours described as ‘overarching outcomes’ or behaviours students ‘must demonstrate’, by the time they graduate, as set out in the new version of Tomorrow's Doctors (GMC Citation2009).

Limitations of our findings are that the study involved a single institution and a panel of experts from a restricted range of specialties. It is clear that the medical profession still has more fundamental work to do to establish an agreed set of essential attributes of a good doctor. Future research is recommended that surveys the opinions of experts from a much larger sample derived from a wider range of medical specialties and a greater number of institutions.

We recognise that our study imposed a pre-conceived set of attributes on the panellists, who were not able to contribute to the choice of items to be rated and ranked. However, extensive use of Delphi expert panel surveys in health care research has led Keeney et al. to conclude that providing pre-existing information, such as a list of attributes, for rating and ranking in the first round of a Delphi may be more effective and efficient in producing useful results (Keeney et al. Citation2006).

Some question the influence on Delphi reliability of feedback in the form of comments on the reasons for dissent: it is unclear whether the resulting increase in agreement is due to constructive feedback helping the ‘experts who are out of line with the consensus to refine their judgements, or whether such experts have just conformed to the majority view’ (Greatorex & Dexter Citation2000). It is impossible to know for certain, but our view is that these dissenting voices provide valuable insights into the issue under scrutiny that would not otherwise be revealed by the statistically derived consensus.

It could be argued that a further limitation of the study is the questionable face validity of the rating categories and whether a rater actually discerns a difference between, for instance, ‘moderately important’ and ‘important’ on the five-point Likert scale used. As Crewson points out, agreement ‘is likely to be underestimated and not generalisable when rating categories have questionable face validity’ (Crewson Citation2005). Face validity can, however, be supported by using existing measures that previous researchers have shown to produce valid and reliable results. Thus, our choice of the five-point Likert scale anchored by ‘not at all important’ and ‘always important’ was informed by a practical application of the Delphi technique in nursing research (Hardy et al. Citation2004).

This study has contributed to the discourse on the rating and ranking of the non-academic attributes of good doctors (Reiter & Eva Citation2005). However, it is recognised that thorny issues remain to be resolved, such as the stability of attributes over the life course, whether some attributes can be taught or acquired during medical education, and differences of opinion as to the meanings ascribed to attributes and how they can be reliably and validly assessed. Longitudinal research also needs to be undertaken to ascertain whether the selection of students with key non-academic attributes does indeed produce doctors with the defined personal qualities and behaviours set out in Tomorrow's Doctors (GMC Citation2009).

In conclusion, this study reports the consensus opinion of doctors from a range of specialties on the most important non-academic attributes of good doctors. The results can be used to inform the choice of personal qualities and behaviours examined during the undergraduate medical student selection process.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

References

  • Akins R, Tolson H, Cole B, 2005. Stability of response characteristics of a Delphi panel: Application of bootstrap data expansion. BMC Med Res Methodol [online]; 5:37. Available from: <http://www.biomedcentral.com/1471-2288/5/37>. Accessed 2009 May 5
  • Albanese M, Snow M, Schochlak S, Hugget K, Farrell P. Assessing personal qualities in medical school admissions. Acad Med 2003; 78: 149–158
  • Cohen J. Weighted kappa: Nominal scale agreement with provision for scaled agreement or partial credit. Psychol Bull 1968; 70: 213–220
  • Core Committee, Institute for International Medical Education. Global minimum essential requirements in medical education. Med Teach 2002; 24(2)130–135
  • Crewson P. Fundamentals of clinical research for radiologists: Reader agreement studies. Am J Roentgenol 2005; 184: 1391–1397
  • GMC (General Medical Council) 1993, revised 2003. Tomorrow's Doctors. London: General Medical Council. <http://www.gmc-uk.org/education/>. Accessed 2009 Apr 20
  • GMC (General Medical Council) 2006. Good Medical Practice: Regulating Doctors Ensuring Good Medical Practice. London: General Medical Council. <http://www.gmc-uk.org/education/>. Accessed 2009 Apr 20
  • GMC (General Medical Council and Medical Schools Council) 2007. Medical Students: Professional Behaviour and Fitness to Practise. <http://www.gmc-uk.org/education/undergraduate/undergraduatepolicy/>. Accessed 2009 Apr 20
  • GMC (General Medical Council) 2009. Tomorrow's Doctors. London: General Medical Council. <http://www.gmc-uk.org/education/>. Accessed 2009 Apr
  • Greatorex J, Dexter T. An accessible analytical approach for investigating what happens between the rounds of a Delphi study. J Adv Nurs 2000; 32(4)1016–1024
  • Greengross S. What patients want from their doctors. Choosing tomorrow's doctors, I Allen, P Brown, P Hughes. London Policy Institute, London 1997; 12–19
  • Hardy D, O’Brien A, Gaskin C, O’Brien AJ, Morrison-Ngatai E, Skews G, Ryan T, McNulty N. Practical application of the Delphi technique in a bicultural mental health nursing study in New Zealand. J Adv Nurs 2004; 46(1)95–109
  • Harris S, Owen C. Discerning quality: Using the multiple mini-interview in student selection for the Australian National University Medical School. Med Educ 2007; 41: 234–241
  • Holey E, Feeley J, Dixon J, Whittaker V, 2007. An exploration of the use of simple statistics to measure consensus and stability in Delphi studies, BMC Med Res Methodol [online]; 7:52. Available from: <http://www.biomedcentral.com/1471-2288/>. Accessed 2009 May 5
  • Hsu C, 2007. The Delphi technique: Making sense of consensus. J Pract Assess Res Eval [online]; 12(10):4. <http://pareonline.net/getvn.asp?v=12&n=10>. Accessed 2009 May 5
  • Jamieson S. Likert scales: How to abuse them. Med Educ 2004; 38: 1212–1218
  • Keeney S, Hasson F, McKenna H. Consulting the oracle: Ten lessons from using the Delphi technique in nursing research. J Adv Nurs 2006; 53(2)205–212
  • Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159–174
  • MSC (Medical Schools Council) 2006. Guiding Principles for the Admission of Medical Students, Medical Schools Council Publications and Guidance. <http://www.chms.ac.uk/publications.htm>. Accessed 2009 Apr 20
  • MSC (Medical Schools Council) 2008. Consensus Statement, the Role of the Doctor: Past, Present and Future, Medical Schools Council Publications and Guidance. <http://www.chms.ac.uk/publications.htm>. Accessed 2009 Apr 20
  • O’Connell D, Dobson A. General observer-agreement measures on individual subjects and groups of subjects. Biometrics 1984; 40: 973–983
  • Parry J, Mathers J, Stevens A, Parsons A, Lilford R, Spurgeon P, Thomas H. Admissions processes for five year medical courses at English schools: Review. BMJ 2006; 332: 1005–1009
  • Patterson F, Ferguson E. Selection for medical education and training. Association for the Study of Medical Education, Edinburgh 2007
  • Pell G. Use and misuse of Likert scales in letters to the editor. Med Educ 2005; 39: 970
  • Powis D. How to do it: Select medical students. BMJ 1998; 317: 1149–1150
  • Reiter H, Eva K. Applied research: Reflecting the relative values of community, faculty, and students in the admissions tools of medical school. Teach Learn Med 2005; 17(1)4–8
  • Tooke J. Aspiring to excellence: Final report of the independent inquiry into modernising medical careers. London: MMC Inquiry; 2008
  • Viera A, Garrett J. Understanding interobserver agreement: The kappa statistic. Fam Med 2005; 37(5)360–363
  • World Summit on Medical Education Communique. Med Educ 1994; 28(S1)1–3
