Web Paper

Using a structured clinical coaching program to improve clinical skills training and assessment, as well as teachers’ and students’ satisfaction

Patricia Régo, Ray Peterson, Leonie Callaway, Michael Ward, Carol O’Brien & Ken Donald
Pages e586-e595 | Published online: 08 Dec 2009

Abstract

Introduction: The ability to deliver the traditional apprenticeship method of teaching clinical skills is becoming increasingly difficult as a result of greater demands in health care delivery, increasing student numbers and changing medical curricula. Serious consequences globally include: students not covering all elements of clinical skills curricula; insufficient opportunity to practise clinical skills; and increasing reports of graduates’ incompetence in some clinical skills.

Methods: A systematic Structured Clinical Coaching Program (SCCP) for a large cohort of Year 1 students was developed, providing explicit learning objectives for both students and paid generalist clinical tutors. It incorporated ongoing multi-source formative assessment and was evaluated using a case-study methodology, a control-group design, and comparison of formative assessment scores with summative Objective Structured Clinical Examination (OSCE) scores.

Results: Students demonstrated a higher level of competence and confidence, and the formative assessment scores correlated with the Research students’ summative OSCE scores. SCCP tutors reported greater satisfaction and confidence through knowing what they were meant to teach. At-risk students were identified early and remediated.

Discussion: The SCCP ensures consistent quality in the teaching and assessment of all relevant clinical skills of all students, despite large numbers. It improves student and teacher confidence and satisfaction, ensures clinical skills competence, and could replace costly OSCEs.

Introduction

Training in basic clinical skills is considered to be a core element of the undergraduate medical curriculum, yet it has traditionally been based on a largely “see one, do one, teach one” philosophy (McLeod et al. Citation2003; Williams & Klamen Citation2006). Despite their importance, however, medical students’ acquisition of clinical skills may be “haphazard and random” (Wall et al. Citation2006), and their level of achievement dependent on “individual motivation, access to patients and quality of teaching” (Ledingham & Dent Citation2001). Researchers in a number of different countries (Remmen et al. Citation1998; Seabrook Citation2004) have found that students’ acquisition of basic clinical skills suffers because of their limited (supervised) hands-on experience. Simply exposing students to clinical work is insufficient, and the extent to which the “intended” curriculum is congruent with the “curriculum in action” is questionable (Remmen et al. Citation1999).

Notwithstanding the potential educational advantages of learning in context and early patient contact, the hospital is not necessarily an ideal educational setting (Durak et al. Citation2006). Clinicians may be willing to teach, but teaching duties may be neglected because of “heavy service, time constraints, noisy wards and uncooperative patients” (Ahmed & El-Bagir Citation2002). Additionally, teaching quality is inconsistent even within the same discipline, so that students have highly individualistic learning experiences which, a number of researchers report, may or may not cover all syllabi within the clinical skills curriculum (Sachdeva et al. Citation1995; Remmen et al. Citation1999; Ahmed & El-Bagir Citation2002).

The content of teaching by educators who are unaware of what is expected of them may be unconnected to students’ needs (Armstrong et al. Citation2004), and students may not have their clinical skills competence assessed at an appropriate level (Searle Citation2000; Bligh Citation2004). Objective Structured Clinical Examinations (OSCEs) are an accepted method of assessing the “shows how” of clinical competence. They are considered to be reasonably objective as students are assessed by multiple examiners over a range of clinical activities. Be that as it may, as DeLisa points out, a large number and wide range of assessment stations are needed to obtain a comprehensive picture of an examinee's skills, and with a large cohort of students this is logistically and financially unsustainable (examining a cohort of more than 400 students, for example, requires access to sufficient OSCE station developers, clinical examiners, standardized patients, invigilators and administrative staff, as well as premises large enough and appropriate for setting up dozens of OSCE stations). There is also no guarantee that performance in an OSCE will translate into an acceptable performance in a true clinical setting (DeLisa Citation2000). Indeed, medical graduates may commence practice with deficiencies in their clinical skills, despite having passed final-year OSCEs (Fox et al. Citation2000; Remmen et al. Citation2001).

It is now accepted that medical students should receive structured and systematic teaching and assessment in a range of clinical skills (Ledingham & Dent Citation2001). At The University of Queensland, the Year 1 and 2 clinical coaching programs are delivered through the Central and Southern Clinical Divisions. Like other medical schools throughout the world (Remmen et al. Citation1998), The University of Queensland sought to ensure that all students cover a range of clinical skills through devising a list of core competencies. Despite this, and the fact that both students and clinical tutors (“tutors”) were given lists at the beginning of the year, there were reports that many tutors and students did not know explicitly what should be taught or learned (Régo Citation2005).

This article discusses the development, evaluation and outcomes of a successful pilot Structured Clinical Coaching Program (SCCP) which was designed to address existing issues in the teaching of clinical skills (e.g. opportunistic teaching leading to sometimes-incomplete coverage of the clinical skills curriculum, consultants not turning up to teach, consultants not knowing what to teach), and to ensure teaching quality in the face of an already-large and ever-increasing first-year student cohort (N = 321 in 2006 and N = 403 in 2008). Underpinning its development was an awareness that despite having some theoretical knowledge of anatomy and possibly of communication skills, the majority of students would not have had any practical experience and would thus need their learning to be scaffolded (Spouse Citation1998).

The study

The aims of the study were to determine:

  • whether the SCCP model is a reliable and defensible method of summative assessment in terms of:
      • student performance;
      • early identification of at-risk students;
      • inter-rater reliability;

  • the effectiveness of student self- and peer-assessment in the development of competence and confidence in clinical skills;

  • the effectiveness of tutor assessment of their own and others’ groups in terms of:
      • student performance;
      • tutor performance;
      • teaching;
      • reflection;

  • the impact an SCCP has on the quality of clinical education and on student learning.

Methods

Study design

A triangulated case-study methodology (Tellis Citation1997) was chosen because it allows the in-depth description and evaluation of an intervention. A case study is an exploration of a “bounded system”, i.e. bounded by time and place and is designed to adduce the details of a study from the viewpoint of all participants (Cresswell Citation1998; Violato et al. Citation2003).

Student sample

The Research and Control groups (N = 159 and N = 162, respectively) were based at two clinical divisions – Central and Southern Clinical Divisions (CCD and SCD hereafter) – within two major teaching hospitals. Participants were randomly allocated to each site to ensure an even distribution of ages (80% aged between 21 and 25 years), males and females (49% female, 51% male) and academic background (key subject areas: 62% biological science, 8% Arts, 7% Pharmacy). Students were then randomly allocated so that there were five students in each clinical coaching group. With the exception of highly specialized units (e.g. Burns and Transplant units) where students are not placed anyway, the patient profile and faculty-to-student ratios were similar at both sites, and there was no significant difference in summative OSCE scores between the sites in the years prior to the intervention. Some of the skills students covered during the year were assessed in the summative assessment OSCEs and the results were compared with the results from the Research students’ formative assessments. Summative assessment OSCE assessors were external to the intervention and blind to students’ Control or Research group status.

Existing clinical skills program

Two years prior to the intervention, and in an attempt to give students and their clinical tutors a clear indication of what was expected of them, a “Portfolio of Essential Skills” had been devised and given to all students and tutors in Year 1. Before this, they had only been given the broad learning objectives to be achieved within each of the body systems, not the specific skills to be acquired. However, regardless of the clinical division to which they belonged, many clinical tutors did not use the portfolio at all. This was less because of a deficit in the documentation than because of a perceived impracticability in implementing it, given the paucity of patients who fitted the profile for the different weekly Problem-based learning (PBL) cases. Some clinical tutors had also disregarded the clinical skills curriculum developed by the School of Medicine because they felt they knew what the students should be taught. The clinical skills curriculum was in some instances not delivered in full (Régo Citation2005).

Design of the intervention

The SCCP is, as the name suggests, a completely structured program encompassing the supervised hands-on teaching of the theory and practice of clinical skills, both physical examination and history-taking. Five body systems were covered in both the SCCP and the traditional clinical coaching program (Gastrointestinal, Respiratory, Cardiovascular, Musculoskeletal, Neurological). (Communication skills are a separate syllabus and were taught to the entire cohort.)

The structure of the intervention was such that 4 weeks × 1.5 h each week were allocated to each system with the exception of the neurological system which was allocated 5 weeks × 1.5 h. Students in the Research group used each other as patients to practise their physical examination and history-taking skills under the supervision of the tutor. A clinical visit also occurred in week 3. At the end of each system Research students were formatively assessed. For example, at week 3, the tutors carried out a formative assessment of their own five students with respect to physical examination and history-taking skills. At the end of week 4, peer students and a tutor from another group carried out formative assessment within each group, using the same instruments as the original group tutor. The instruments were aligned to the curriculum laid out in the Clinical Skills Manual and explicitly listed the skills required for each system. The only difference between the instrument used by the students and by the other raters was the scale. Peer students, the groups’ own tutors and the peer tutors used a five-point scale of competence (ranging from “outright fail”, “marginal fail”, “pass”, “clear pass”, “high pass”). Research students were given an on-line version of the individual skills listed in the clinical skills manual for each system so that they could self-rate their levels of confidence each week using a five-point scale of confidence (“unseen”, “saw demonstrated”, “have tried”, “becoming confident”, “very confident”).

The Control students were not asked to rate their confidence, in order to replicate as closely as possible the traditional clinical coaching program, of which confidence rating was not a part. In the customary way, Control students practised their examination skills on whatever patients were available on the ward to which they were attached. In contrast to the Research students, who received ongoing feedback through formative assessment at the end of each system, the Control students were only summatively assessed in the normal fashion at the end-of-year exams.

As multi-source feedback is a way of validating each rater's scores, evaluation of the SCCP was designed to elicit feedback from the summative assessment process (the OSCE) and from a formative assessment process incorporating students’ own real-time ratings of their confidence in the various skills, together with formative assessment by their peers, their own tutor and another group's tutor. Additionally, tutors were surveyed before and after the intervention, and group interviews were held with both tutors and students from the Research and Control groups. Tutors from the Control group were surveyed by e-mail, and data from existing reports relating to the previous clinical coaching program were also used in the analysis.

A comparison of the different features of the intervention and the traditional clinical coaching program can be seen in Figure 1.

Figure 1. Comparison of the structured clinical coaching program at the research site and the traditional clinical coaching program at the control site.

Previous experience (Régo & Ozolins 2007) had alerted the research team to the high chance of Research students sharing the intervention teaching materials with Control students. As a consequence and so that the effects of the structure of the program could be evaluated, tutors and students at the Control and Research sites were given the same handbook containing the individual examination and history-taking skills to be learned within each body system (see the Appendix). The lists of these skills were used by all raters in the students’ formative assessments.

Students in the Research group remained in the same clinical coaching group with the same tutor all year. On the other hand, students in the Control group rotated through the various hospital-based clinical departments in the normal fashion every 6 weeks. Whilst the hospital-based specialists were paid to teach students only indirectly, through their employer (which is in turn paid by the School of Medicine), all of the 17 tutors in the Research group were experienced general practitioners who were paid explicitly by the School of Medicine to teach clinical skills. In contrast to other studies (Murray et al. Citation1997; Wallace et al. Citation2001; Grant & Robling Citation2006), the GPs taught in clinical teaching rooms on the UQ School of Medicine campus, and not in the community.

As usual, clinical tutors at the Control site taught students using the patients that were available to them on the ward, whilst the Research tutors demonstrated to and supervised their students as they practised mostly on each other, and sometimes on patients in hospitals or the community.

A follow-up survey was conducted at the beginning of the students’ Year 2. This sought to determine how well they felt they had been prepared by the clinical coaching program they had experienced (either in the CCD or in the SCD), as well as, inter alia, which method (structured versus unstructured) they believed best prepared them for Year 2 clinical skills, and the time they had spent learning from or teaching students from the other clinical division.

Using the statistical package SPSS® (SPSS Inc Citation2005), analysis of variance (ANOVA) was used to compare OSCE and formative assessment scores, and Pearson's r was used to calculate associations between groups and between the different raters. Themes were developed from the qualitative data from tutors and students using a constant comparative method of analysis (Cresswell Citation1998).
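
The sketch below illustrates this style of analysis; it is not the authors' code, and all data are invented for illustration (group sizes and score distributions loosely follow those reported in the Results). It shows a one-way ANOVA comparing OSCE scores between two groups and a Pearson correlation between formative and summative scores.

```python
# Minimal sketch of the reported analyses (hypothetical data, not the study dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated five-point OSCE scores for the two groups (illustration only).
research_osce = rng.normal(loc=3.80, scale=0.30, size=159)
control_osce = rng.normal(loc=3.72, scale=0.30, size=162)

# Simulated formative assessment means for the Research students.
research_formative = research_osce + rng.normal(0, 0.15, size=159)

# One-way ANOVA comparing the two groups' OSCE scores.
f_stat, p_anova = stats.f_oneway(research_osce, control_osce)

# Pearson's r between formative and summative (OSCE) scores.
r, p_corr = stats.pearsonr(research_formative, research_osce)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Pearson's r = {r:.2f}, p = {p_corr:.3f}")
```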

The study was approved by The University of Queensland's ethics committee and the Research students gave written consent to participation. Nonetheless, the only clinical coaching program being offered at the Research site (the CCD) was the intervention. Had any student not wished to participate, he or she would have been transferred to the Control site (the SCD). Consent was not sought from the Control group because it was engaged in the standard clinical coaching program. The only change in the latter from the previous years’ clinical coaching programs was the students’ and teachers’ access to the same (new) teaching and learning materials as the Research group.

Results

The same formative assessment instruments (as described above) were used by the students, their own group's tutor and the peer tutors to assess the students’ clinical skills competence. Each body-system scale proved to be reliable, with Cronbach's alpha coefficients of 0.994 (Neurological), 0.987 (Gastrointestinal), 0.980 (Respiratory), 0.979 (Musculoskeletal) and 0.964 (Cardiovascular). Additionally, when each of the scales was subjected to principal axis factoring, only one factor per system was extracted. The formative assessment scores were also predictive of OSCE scores, and concurrent validity was demonstrated by the congruence of OSCE scores with the formative assessment scores.
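
As an illustration of the reliability statistic reported above, the following sketch (not the authors' analysis; the ratings are invented) computes Cronbach's alpha for one hypothetical body-system scale, treating each listed skill as an item and each student as a row.

```python
# Illustrative Cronbach's alpha calculation (hypothetical ratings, not study data).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = students, columns = skill items (1-5 competence ratings)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    item_var = item_scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical ratings: five students, four skills from one system.
ratings = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 3, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")
```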

All students at the Research site opted to participate in the project. As might be expected, the Research students’ confidence in all body systems rose markedly over the 4 or 5 weeks during which they were taught (Figure 2). (Control students did not measure their confidence formally.) Research students’ self-ratings of their confidence correlated significantly (p < 0.05) with the ratings given to them for competence by the other raters (on average, between a borderline and clear pass – 3.74/5, range 3.68–3.84). The strongest correlation was between the peer tutors’ and peer students’ scores for students’ competence, and the weakest was between the group tutors’ scores for competence and the students’ self-scores for confidence (Table 1).

Figure 2. Comparison of mean scores at the start and finish of each body system (1 = Unseen to 5 = Very confident). Note: SD = Standard deviation.

Table 1.  Correlation of aggregated mean scores for research students (all raters, all systems)

End-of-year assessment scores

The Research students received higher OSCE scores than the Control group (F(303) = 3.92, p = 0.049; means 3.80/5 (SD = 0.30) cf. 3.72/5 (SD = 0.30)), and there was correspondence between the OSCE scores and the mean of the formative assessment scores across all systems for the Research students (Figure 3). The proportion of higher passes (clear–high passes, 4–5/5) was 25.2% (CCD) cf. 18.2% (SCD), the odds of a Research student obtaining a higher OSCE score being nearly twice those of a Control student (1.38/0.72 = 1.92).

Figure 3. Comparison of summative and formative assessment scores.

All of the at-risk students in the Research group (N = 23/153, 15%), who were identified and supported early in the year through referral to appropriate agencies, passed the summative assessment well. However, the School of Medicine had no way of knowing how Control students were faring throughout the year. This was because of the structure of the traditional clinical skills program, that is, large numbers of students rotating through hospital-based clinical departments every 6 weeks, allowing little time for students and tutors to form relationships.

Cost of the SCCP versus the OSCE

Logistical opportunity costs

Logistic organization of the Year 1 OSCE in 2006 entailed marshalling 562 people, including academic and administrative staff and 321 students. Were an OSCE for Year 1 students to be held in 2009, some 681 people (including the 440 Year 1 students) would be involved (Figure 4). On top of that, as student numbers increase even further, it is possible that another, much larger venue would need to be found, and equipment such as sphygmomanometers, screens, beds, tables and chairs hired for 2 or 3 days. This contrasts with arrangements for the SCCP: existing School of Medicine rooms are used for the weekly SCCP sessions, the program employs the necessary number of clinical tutors (N = 36 in 2008), and it requires only the part-time attention of a clinical co-ordinator and an administrative assistant.

Figure 4. Comparison of costs – OSCEs vs. the clinical coaching program.

Personal opportunity costs

Because of the need to use a large hospital outpatient department, OSCEs can only be held on weekends. Thus a major opportunity cost for all concerned in running an OSCE is that of family time. It has been increasingly difficult to find examiners willing to spend their weekends assessing students in an OSCE, whereas the SCCP assessments are done during normal business hours on weekdays by the clinical tutors already being paid to teach in the SCCP. If, because of increasing student numbers, OSCEs had to be held on a weekday as well, there would be considerable financial opportunity costs to the examining consultants as they cancelled their own patients/operating lists (if, indeed, the School were able to obtain their services then).

Financial opportunity costs

As student numbers increase there are economies of scale because some costs remain constant and the relative cost of running an OSCE declines, whilst the cost of running assessments through the SCCP increases slightly as more clinical tutors/assessors are employed. Nonetheless, the saving from not holding an OSCE in 2006 would have been $116,000 ($310,000 versus $194,000 for the SCCP). In 2008, the saving from not holding an OSCE was $93,000, and were the Year 1 OSCE to be run in 2009, the saving would be in the vicinity of $50,000 (Figure 4).

Qualitative results

Overall, Research students participating in group interviews were very enthusiastic about the SCCP. They appreciated both having the same tutor all year and the tutors’ dedication, as well as the structured nature of the program – particularly in contrast to the rest of the PBL medical program:

you know exactly what's expected of you. It's the only time in the entire med course that you know what's expected of you. It's wonderful. (S1)

Well I guess in this one you’ve got excellent resources, you’ve got structure, and you’ve got dedicated tutors. That trilogy certainly provides me with a great deal of warmth and comfort, and I’m thankful for it. (S3)

Additionally, Research students believed being taught by general practitioners (GPs) gave them a major advantage over the Control students, for example:

in some case where they have specialists, the specialists really don’t want to branch too far from their own interests, whereas the GPs seem to have a broad general knowledge and they seem to use a lot of these basic skills in their day-to-day practices … (S4)

This impression was confirmed by students in the Control group themselves:

Often the Southern Division classes were poorly organised with many cancellations/late arrivals without notice. Also different coaches had different levels of expertise or only focussed on their expertise, e.g. CVS (S14).

Much of learning was self-taught, especially in areas above-mentioned (“musculoskeletal/neurological exams”). Every consultant wanted to do cardio or GIT, and despite requests to make a start on other exams. Students from the other program seemed to have a better broader grasp across disciplines/systems (S27).

Being from the Southern School some clinical coaches did not cover elements that we were required to learn; therefore other coaches had to try to bring us up to date. By the end of the year, there were some systems we were unable to cover without coaches (sic) (R31).

As far as the GP tutors were concerned, they preferred having the same group all year. Moreover, having the structure raised their confidence because they knew what they were meant to be teaching. The other tutors’ assessment of their students, using the same instrument and five-point scale (“outright fail”, “marginal fail”, “marginal pass”, “clear pass”, “high pass” – see sample items in the Appendix), also confirmed for them that their students were covering everything they needed to at the right level, for example:

When we started all this, I was quite negative about it because I felt it wasn’t necessary for me. I was following the book. Now I have completely turned around. I don’t think it is a waste of time because you can see whether they’ve got those skills or not. That's what it's for. (T1)

I think it actually helps the tutors. Yes, because the year before when I was teaching, they had missed out on some systems altogether, and that doesn’t happen now. (T3)

Both tutors and students reported the need for more time in each tutorial (from 1.5 h to 2 h), and for 5 or 6 weeks per system.

Discussion

Given the range of elements in the intervention, it is not possible to determine which one factor was responsible for its success. It is absolutely clear, however, that simply making detailed lists of skills to be covered (e.g. a “Portfolio of Essential Skills”) available to students and their clinical teachers is insufficient to ensure the quality of teaching and assessment, or full coverage of a clinical skills curriculum.

Multi-source feedback systems are increasingly being used as a way of determining clinical competence because the perceptions (and assessment scores) of each rater can be validated by the others (Violato et al. Citation2003). In this study, the scores from the summative OSCE and from each of the raters, particularly the two tutors, were largely congruent. Any difference between the tutors’ scores may be explained by the peer tutors examining students a week later than the groups’ own tutors. Although there were some concerns about the validity of self-reporting (Spector Citation1994; Mattheos et al. Citation2004), students’ self-assessment of their confidence largely matched the scores given to them for competence by the other raters. These results give grounds for optimism that our large-scale and expensive summative OSCEs could validly be replaced by ongoing assessment by at least two clinical tutors.

Paying the clinical tutors had the advantage of ensuring they always turned up for their clinical coaching sessions. They could also be, and indeed were, held accountable for their teaching. There is ample evidence that students benefit from formative assessment and from timely and detailed feedback (Johnston & Boohan Citation2000; Bandaranayake Citation2001; Gordon Citation2003; Macmillan & McLean Citation2005; Carless Citation2006), and it is apparent from the Research students’ comments that the weekly guided practice and the formative feedback from two separate tutors were highly valued.

Having one tutor all year enabled Research group tutors to identify and support students having difficulties. In contrast and as is the norm (Sayer et al. Citation2002), there was little opportunity for Control group tutors to assess students’ learning on an on-going basis and to identify vulnerable students.

Above all, the Research group tutors and both the Research and Control students believed the intervention to be superior to the traditional program because it was structured and ensured an equality of experience for all students and coverage of all of the topics in the clinical skills curriculum. Some Control students were not shown how to examine the musculoskeletal or neurological systems, for example, and one Research tutor confirmed that this incomplete coverage of the curriculum had also happened prior to the intervention.

Because the emphasis in Year 1 is on normal structure and function, any consenting patient would have been suitable to help Control students learn history-taking and examination skills on any body system. However, feedback from tutors in the traditional clinical coaching program one year prior to the intervention, and from the e-mailed survey of the tutors at the Control site during the study, showed that many tutors lacked the confidence to coach outside their area of expertise, even though they were not required to teach these skills at a sub-speciality level (Régo Citation2005). Under these circumstances, it is possible that the intended curriculum is not always covered, particularly when students’ learning is not augmented by skills laboratory training, or when students do not believe the acquisition of certain skills to be “compulsory” (Remmen et al. Citation1999) (an issue resolved at The University of Queensland a year before the intervention).

GPs were preferred as teachers because they had dedicated teaching time which they spent on the School of Medicine campus. Unlike hospital specialists, they were also paid directly and were more reliable in attending scheduled sessions. Additionally, GPs are generally more confident than hospital specialists at teaching students basic clinical skills (Murray et al. Citation1997; Johnston & Boohan Citation2000).

At the beginning of the study, some intervention tutors were concerned that it would be uncomfortable for them to assess students from other tutors’ groups, and to be so assessed. On the contrary, they felt affirmed in their teaching, and confident their students were gaining the necessary skills. Additionally, the tutors were helped by the SCCP to give students the essential feedback they needed to develop competency (Carraccio et al. Citation2002), and tutors identified as needing support were offered it.

Limitations of the study

A number of issues need to be taken into account in the analysis of the findings. For example, although the tutors’ regular observation of students practising their clinical skills is superior to a one-off OSCE, it is impossible at the time of writing to know how well students will perform in the real clinical world. This would be worth following up in the Research students’ clinical years. Additionally, it has been suggested that an emphasis on the more easily measured acquisition of clinical skills could inhibit the teaching and assessment of more difficult-to-measure, but equally important, interpersonal skills (Carraccio et al. Citation2002). However, as mentioned above, the SCCP is balanced by the specific teaching of communication skills elsewhere in the medical program. Additionally, students’ awareness of clinical skills coaching was heightened at both the Research and Control sites, to the extent that one-third of the Control students overcame what they perceived to be a deficit in their training by being coached by students involved in the intervention and by “other tutors”. A placebo effect alone may have had a positive impact on the Research students’ results. Nonetheless, teaching their peers would have helped to reinforce the teaching the Research students received from their clinical tutors, whilst possibly bringing some Control students’ skill levels up to the same level (or at least making both groups good at passing OSCEs!).

Conclusion

Accrediting bodies globally set down the core competencies and objectives to meet minimal standards of practice whilst at the same time not connecting what students should master with how they should do so (Armstrong et al. Citation2004). It is currently possible for students to graduate from traditional programs without having been taught, or having practised, some clinical skills. Additionally, the considerable logistic and financial costs involved in mounting a reliable OSCE are unsustainable in the face of large and ever-increasing student numbers.

The apprenticeship model of clinical skills training is successful when students can be closely supervised and their learning is scaffolded by their preceptor (Furmedge Citation2008). As students become more competent, their autonomy should increase and support should be faded so that students eventually become independent practitioners (Balmer et al. Citation2008). However, in the absence of close supervision, as in the current environment, it is imperative that students’ learning of clinical skills be scaffolded. This principle underpinned the development of the SCCP and was almost certainly important to its success.

The SCCP is a structured apprenticeship model. It ensures student and tutor satisfaction, as well as quality teaching and the reliable assessment of every student's competence in all relevant clinical skills, despite large numbers. It is a viable alternative to the long-standing inconsistencies of traditional clinical skills teaching programs and their assessment.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

Additional information

Notes on contributors

Patricia Régo

PATRICIA RÉGO is a member of the Discipline of Medical Education, School of Medicine, The University of Queensland, and evaluation consultant to the Queensland Health Skills Development Centre. Her research interests include: simulation in healthcare, students’ acquisition of clinical skills, and medical education.

Ray Peterson

Associate Professor RAY PETERSON is the Head of the Discipline of Medical Education and Director of the MBBS Program, School of Medicine, The University of Queensland.

Leonie Callaway

Associate Professor LEONIE CALLAWAY is the Head of the Royal Brisbane Clinical School, The University of Queensland, a specialist in Internal and Obstetric Medicine, Royal Brisbane and Women's Hospital, and currently Lead Fellow, Teaching and Learning, Royal Australasian College of Physicians. Her research interests lie in obstetric medicine and medical education.

Michael Ward

Professor Emeritus MICHAEL WARD was previously the Head of the Central Clinical Division, School of Medicine at The University of Queensland. Now he is the Commissioner, Health Quality & Complaints Commission, Queensland Health.

Carol O’Brien

CAROL O’BRIEN is trained in analytical philosophy to research master's level. Her research interests include: the ethics and metaphysics of death and dying; applied ethics; and the moral psychology of Aristotle and Epicurus. She currently works in research higher degree administration in Brisbane, Australia.

Ken Donald

Professor Emeritus KEN DONALD was previously the Head of the School of Medicine, The University of Queensland.

References

  • Ahmed K, El-Bagir M. What is happening to bedside clinical teaching?. Med Educ 2002; 36: 1185–1188
  • Armstrong E, Mackey M, Spear SJ. Medical education as a process management problem. Acad Med 2004; 79: 725–728
  • Balmer D, Serwint J, Ruzek S, Giardino A. Understanding paediatric resident-continuity preceptor relationships through the lens of apprenticeship learning. Med Educ 2008; 42(9)923–929
  • Bandaranayake R. Study skills. A practical guide for medical teachers, J Dent, R Harden. Churchill Livingstone, Edinburgh 2001
  • Bligh J. More medical students, more stress in the medical education system. Med Educ 2004; 38: 460–462
  • Carless D. Differing perceptions in the feedback process. Stud Higher Educ 2006; 31: 219–233
  • Carraccio C, Wolfsthal S, Englander R, Ferentz K. Shifting paradigms: From Flexner to competencies. Acad Med 2002; 77: 361–367
  • Cresswell J. Qualitative inquiry and research design: Choosing among the five traditions. Sage, Thousand Oaks, CA 1998
  • DeLisa J. Evaluation of clinical competency. Am J Phys Med Rehab 2000; 79: 474–477
  • Durak Hİ, Vatansever K, Kandiloğlu G. An early patient contact programme combining simulation and real settings. Med Educ 2006; 40: 1137–1137
  • Fox RA, Ingham Clark CL, Scotland AD, Dacre JE. A study of pre-registration house officers’ clinical skills. Med Educ 2000; 34: 1007–1012
  • Furmedge D. Apprenticeship learning models in residents: Are they transferable to medical students?. Med Educ 2008; 42: 856–857
  • Gordon J. ABC of learning and teaching in medicine: One to one teaching and feedback. BMJ 2003; 326: 543–545
  • Grant A, Robling M. Introducing undergraduate medical teaching into general practice: An action research study. Med Teach 2006; 28: 192–197
  • Johnston B, Boohan M. Basic clinical skills: Don’t leave teaching to the teaching hospitals. Med Educ 2000; 34: 692–699
  • Ledingham I, Dent J. Clinical skills centres. A practical guide for medical teachers, J Dent, R Harden. Churchill Livingstone, Edinburgh 2001
  • Macmillan J, McLean MJ. Making first-year tutorials count: Operationalizing the assessment-learning connection. Active Learn Higher Educ 2005; 6: 94–105
  • Mattheos NA, Nattestad E, Falk-Nilsson E, Attstrom R. The interactive examination: Assessing students’ self-assessment ability. Med Educ 2004; 38: 378–389
  • McLeod PJ, Steinert Y, Meagher T, McLeod A. The ABCs of pedagogy for clinical teachers. Med Educ 2003; 37: 638–644
  • Murray E, Todd C, Modell M. Can general internal medicine be taught in general practice? An evaluation of the University College London model. Med Educ 1997; 31: 369–374
  • Régo P. Evaluation of the clinical coaching program phase I: Report on the use and knowledge of the portfolio of essential clinical skills by clinical coaches. School of Medicine, The University of Queensland, Brisbane 2005
  • Régo P, Ozolins I. IVIMEDS: A short report on an evaluation of the cardiovascular system learning module. Med Teach 2007; 29: 961–965
  • Remmen R, Derese A, Scherpbier A, Denekens J, Hermann I, van der Vleuten C, Royen PV, Bossaert L. Can medical schools rely on clerkships to train students in basic clinical skills?. Med Educ 1999; 33: 600–605
  • Remmen R, Scherpbier A, Derese A, Denekens J, et al. Unsatisfactory basic skills performance by students in traditional medical curricula. Med Teach 1998; 20: 579
  • Remmen R, Scherpbier A, van der Vleuten C, Denekens J, Derese A, Hermann I, Hoogenboom R, Kramer A, van Rossum H, van Royen P, et al. Effectiveness of basic clinical skills training programmes: A cross-sectional comparison of four medical schools. Med Educ 2001; 35: 121–128
  • Sachdeva AK, Loiacono LA, Amiel GE, Blair PG, Friedman M, Roslyn JJ. Variability in the clinical skills of residents entering training programs in surgery. Surgery 1995; 118: 300–309
  • Sayer M, Chaput de Saintonge M, Evans D, Wood D. Support for students with academic difficulties. Med Educ 2002; 36: 643–650
  • Seabrook MA. Clinical students’ initial reports of the educational climate in a single medical school. Med Educ 2004; 38: 659–669
  • Searle J. Defining competency – the role of standard setting. Med Educ 2000; 34: 363–366
  • Spector P. Using self-report questionnaires in OB research: A comment on the use of a controversial method. J Org Behav 1994; 15: 385–392
  • Spouse J. Scaffolding student learning in clinical practice. Nurse Educ Today 1998; 18: 259–266
  • SPSS Inc. 2005. SPSS 14.0 for Windows. Chicago: SPSS Inc.
  • Tellis W. 1997. Application of a case study methodology. Qual Rep 3(3), http://www.nova.edu/sss/AR/QR3-3/tellis.html
  • Violato C, Lockyer J, Fidler H. Multisource feedback: A method of assessing surgical practice. BMJ 2003; 326: 546–548
  • Wall D, Bolshaw A, Carolan J. From undergraduate medical education to pre-registration house officer year: How prepared are students?. Med Teach 2006; 28: 435–439
  • Wallace P, Berlin A, Murray E, Southgate L. Cement: Evaluation of a regional development programme integrating hospital and general practice clinical teaching for medical undergraduates. Med Educ 2001; 35: 160–166
  • Williams RG, Klamen DL. ‘See one, do one, teach one’ – exploring the core teaching beliefs of medical school faculty. Med Teach 2006; 28: 418–424

APPENDIX – Excerpts from body-system examination lists

Respiratory system

  1. Positioning and exposure of chest (sitting, standing)

  2. General inspection (body habitus, general condition, respiratory effort, cough, sputum mug)

  3. Hands, upper limb (pulse, respiratory rate, clubbing, cyanosis, wasting, nicotine stains)

  4. Face (eyes, nose, mouth, tongue, pharynx)

Cardio-vascular system
  1. Positioning and exposure (on bed at 45°, chest and abdomen exposed)

  2. General inspection (general condition, dyspnoea, oedema, cyanosis, pallor)

  3. Hands and upper limbs (pallor, cyanosis, xanthomata, splinter haemorrhages)

  4. Pulse (rate, rhythm, character, radiofemoral delay, radioradial delay)

Cranial nerves
  1. Position – sitting

  2. General inspection (scars, neurofibromas, facial asymmetry, ptosis, proptosis, deviation of eyes, unequal pupils)

  3. First nerve – olfactory (ask the patient if they can smell)

  4. Second nerve – optic (visual acuity, visual fields)

Upper limb
  1. Position – sit upright

  2. Inspect (posture, wasting, fasciculation, pronator drift)

  3. Tone (wrist, elbow)

  4. Power (shoulder abduction and adduction, elbow flexion and extension, wrist flexion and extension, finger extension, flexion and abduction)
