
Understanding patient involvement in judging students’ communication skills in OSCEs

Abstract

Introduction

Communication skills are assessed by medically enculturated examiners using consensus frameworks which were developed with limited patient involvement. Assessments consequently risk rewarding performances which incompletely serve patients' authentic communication needs. Whilst regulators require patient involvement in assessment, little is known about how this can be achieved. We aimed to explore patients' perceptions of students' communication skills, examiner feedback and potential roles for patients in assessment.

Methods

Using constructivist grounded theory, we performed cognitively stimulated, semi-structured interviews with patients who watched videos of student performances in communication-focused OSCE stations and read the corresponding examiner feedback. Data were analysed using grounded theory methods.

Results

A disconnect emerged between participants' and examiners' views of students' communication skills. Whilst patients frequently commented on students' use of medical terminology, examiners omitted to mention this in their feedback. Patients' judgements of students' performances varied widely, reflecting different preferences and beliefs. Participants viewed this variability as an opportunity for students to learn from diverse lived experiences. Participants also envisaged a variety of roles through which patients could enhance the authenticity of assessment.

Discussion

Integrating patients into communication skills assessments could help to highlight deficiencies in students' communication which medically enculturated examiners may miss. Overcoming the challenges inherent in this is likely to enhance graduates' preparedness for practice.

Practice points

  • Patients want their voices to be heard in the assessment of communication skills, as they believe their feedback may better prepare students for practice.

  • A gap between examiners’ and patients’ perceptions of effective communication is apparent.

  • Variability in patients’ judgements may be an opportunity for students to learn how to meet the communication needs of a diverse population.

  • Patients wish to be involved in summative or formative OSCEs, or to have a coaching role.

  • Patient involvement in assessment would be resource intensive, and institutions would need to carefully select a diverse range of patients.

Introduction

Communication skills are the foundation of positive patient-physician relationships, patient satisfaction and patient outcomes (Bennett and Lyons 2011; Skirbekk et al. 2011). The patients of physicians who communicate effectively are more likely to acknowledge and understand their health problems, modify their health behaviours and adhere to the treatments offered, and are better able to manage their health safely and effectively (Berman and Chutka 2016).

The assessment of communication skills is vital for medical education (Choudhary and Gupta 2015). Assessment drives learning, and students study what they expect to be assessed on (Wormald et al. 2009). If communication skills are not assessed, students may assume such skills are unimportant; assessment legitimises the skill (Ranjan et al. 2015). More generally, the content of assessments sends strong signals to students about what faculty perceive to be important, yet there may be discrepancies between faculty's and patients' perceptions of what constitutes good patient-physician communication (Kenny et al. 2010).

Medical schools use assessment frameworks or rubrics, such as the Calgary–Cambridge guide (Kurtz 1989) or the Kalamazoo consensus statement (Makoul 2001), to guide communication skills teaching and assessment. Both of these frameworks were developed by expert medical educators without any direct input from patients (Kurtz and Silverman 1996; Rider et al. 2006). Consequently, by teaching and assessing with frameworks developed without patient involvement, there is a risk that the skills rewarded may not meet the communication needs of patients (Spencer et al. 2000). Whilst patients have been integrated into curriculum design and delivery, they are rarely involved in assessment (Jha et al. 2009). Incorporating patients' views of effective communication into the teaching and assessment of communication skills may better prepare students for practice as, ultimately, it is patients with whom students must effectively communicate (Towle et al. 2010). To the best of our knowledge, there has been no in-depth exploration of patients' views of medical student-to-patient communication within a formal assessment context.

By contrast, there has been considerable research into the assessment of communication skills by simulated or standardised patients (SPs). SPs have contributed extensively to assessment and feedback on students' performance in Objective Structured Clinical Examinations (OSCEs; Wallace et al. 2002). OSCEs are an assessment format orientated towards standardisation (Khan et al. 2013): students move through a series of stations where their competencies are assessed within a simulated environment against defined criteria. Research has shown that SPs' judgements can be valid and reliable within OSCEs (Gormley et al. 2012). SPs, however, do not usually present their own health issues (Cleland et al. 2009; Peters 2019). SPs are typically trained by faculty members, which is likely to engender a degree of enculturation into a medical perspective (Gordon et al. 2012), and, where SPs provide judgements, they are often prevented from offering their authentic opinions by checklists developed through a medically enculturated lens. Consequently, whilst Bleakley (2020) suggested that students are better prepared for practice by feedback from patients who are not immersed within the educational environment, much of SPs' current role is focused on achieving standardisation and may therefore miss this opportunity.

Such assessment of communication skills through a medically enculturated lens is problematic if that lens differs from patients' perspectives (Butalid et al. 2012). Medical schools may be graduating students using assessments which examine communication skills from a medically enculturated perspective rather than from actual patients' perspectives. Given the importance of physicians' communication skills, it is important to identify patients' perspectives on them and to integrate these perspectives into current medical education practice.

Although this argument echoes regulatory bodies' requirements that institutions develop coordinated and sustained programmes of patient involvement across the spectrum of medical education and training (Australian Health Practitioner Regulation Agency 2020; General Medical Council 2020), both are predicated on the assumption that patients want to be involved in the education of physicians. Neither the literature nor the regulatory bodies provide guidance on how this can be done. Research is needed into how an authentic patient voice can inform contemporary assessment within medical education. Therefore, to address gaps in our current understanding, we studied the following questions:

  • What are patients’ perspectives of medical students’ communication skills and how do they perceive examiners’ feedback on students’ performances?

  • Do patients believe that they should be involved in the assessment of students’ communication skills and, if so, how do patients believe they could be integrated within such assessment?

Methods

This study is epistemologically underpinned by social constructionism (Burr 2015). Framed by the research questions, the methods needed to be exploratory; we therefore used principles of constructivist grounded theory to inform data collection and analysis (Charmaz and Belgrave 2012). This methodology is suited to exploring complex phenomena unexplained by pre-existing theory. We sought to gain an in-depth understanding of patients' perceptions of communication skills, and to explore patients' views of being integrated into the assessment of communication skills, with the overarching aim of constructing a theory which depicts this.

Population sampled

We used a theoretical approach to sampling (Watling and Lingard 2012). The initial inclusion criteria were:

  • Individuals who were eighteen years or older.

  • Individuals who were currently using, or had used, healthcare services in the previous twelve months.

  • Individuals who had the capacity to provide informed consent.

  • Individuals who spoke English.

We initially recruited individuals who met the inclusion criteria and then sampled theoretically to collect data which elaborated and refined categories within the emerging theory. Guided by the emergent data, our theoretical sampling focused on including participants of different ages and genders.

Recruitment

We recruited from community groups within North Staffordshire and via Keele Medical School's existing networks within the community. AM gained consent to join group meetings via the groups' gatekeepers. AM gave attendees information leaflets and invited attendees to pass them to peers who had not attended. People who were interested in participating approached AM via the contact details provided on the information sheet. AM then confirmed that they had read the information sheet, checked the inclusion criteria and arranged an interview at a time and place convenient for the participant.

Data collection methods

Data were collected using cognitively stimulated interviews (Beatty and Willis 2007), which are used to study how participants understand and respond to materials (e.g. questionnaires, images or videos) presented by researchers. All interviews were conducted by AM from August to December 2019 and lasted on average 63 minutes (range 42 to 83 minutes). An initial topic guide was developed based on previous literature (see Supplementary Appendix 1) and was updated iteratively between successive interviews.

Participants were shown three videos which depicted differing levels of student performance in three different communication-focused OSCE stations, for which we already had examiner scores and feedback. The examiner feedback was generated within previous research (Yeates et al. in press) and was used as a stimulus within the interviews. The OSCE stations comprised: taking a history from a patient presenting with a fever, taking a paediatric history from a child's mother, and taking a history from an older patient who had fallen. Three different stations were shown to participants to illustrate different clinical situations; each station required students to communicate with an SP. The students and SPs who featured in the videos, and the examiners who had provided feedback, had consented for their performances and data to be used in future research. Videos were chosen which represented a 'good' performance (four marks above the pass mark on a 27-point scale), a 'borderline' performance (at the pass mark) and a 'poor' performance (below the pass mark). Participants were asked to describe their thoughts on the students' communication skills and to provide feedback for each performance. Feedback statements from examiners who had watched the same videos, as part of previous research, were then shown to participants and their views on these were also explored. After 15 interviews, the research team had reached an adequate understanding of the key concepts and determined that theoretical saturation (Saunders et al. 2018) had been reached (i.e. no new data were being elicited which informed theory development). All interviews were audio-recorded, transcribed verbatim and reviewed for accuracy. All identifying information was removed from the transcripts.

Data analysis

The analysis had three phases. Applying principles of constructivist grounded theory (Charmaz and Belgrave 2012), in the first phase AM inductively open-coded the data and used constant comparative analysis of the initial interviews to create focused codes. In the second phase, we met in pairs (AM with each co-author) to combine key concepts and to identify major themes (axial coding). In the final phase of selective coding, we examined the relationships between major themes and constructed our theory explaining patient involvement in the assessment of communication skills. We used NVivo 11 to facilitate data management, interpretation, coding and memo writing. When writing this paper we sensitively edited some extracts of the data to ensure anonymity and readability, taking care to ensure the meaning of the data was not altered in the process.

Reflexivity

Within constructivist grounded theory, results are seen as interpretations constructed through interactions between the researcher and participants within the interviews, and through the shared experience of the research team who engaged in data analysis (Mruck and Mey 2007). It is therefore important that researchers acknowledge their own experiential and theoretical backgrounds. AM has a background in psychology and experience in qualitative methods and patient involvement research, which motivated her to conduct this study. PY and RM are both clinicians with significant experience in medical education research. Because of potential power dynamics, AM conducted all interviews, as participants may have responded differently had clinicians asked them how medical students should communicate. When analysing the data, PY and RM added a clinical perspective based on their experiences; this enabled triangulation of the data between clinicians and a non-clinician.

Ethical approval

Approval was obtained from Keele Medical School's ethical review panel (MH-190062). Written consent was obtained from all participants. The students who appeared in the videos were approached with details about the research, informed of the potential audience for the videos, and each provided consent.

Results

We interviewed fifteen participants (aged between 27 and 81 years; 5 males, 10 females). See Table 1 for an overview of participant characteristics.

Table 1. Participant demographics.

From the data generated within the interviews, we developed a theory which characterises the intricacies of patients assessing communication skills. In the following sections we present the key themes within our theory, supported by anonymised quotations.

A perceived disconnection

When discussing the importance of communication skills, most participants perceived that current clinicians were deficient in such skills, as described by Participant Seven:

[Universities] are churning out really clever doctors but they can’t… they have… their bedside manner is not as good as their medical knowledge and to get that clinical information it comes down to communication skills.

Participants reported that an important feature of patient interaction was the use of appropriate terminology. When commenting upon students' performances, most participants described not knowing the meaning of a number of terms the students used (e.g. a 'productive cough'). Examiners did not comment upon the students' use of medical terminology in the feedback they provided, suggesting that examiners did not, on this occasion, recognise when medical terminology was being used. Participants believed that encouraging students to use terminology which patients understand could reduce miscommunication of information in future clinical situations.

Other divergences between the feedback provided by examiners and by patients were evident. For example, one examiner reported that a particular student had 'good' communication skills, whereas Participant Four disregarded this and stated that the examiner was 'simply wrong.' The difference in judgements could arise because examiners view students' communication skills through a 'medical lens,' whereas participants are influenced by their lived experiences of being a patient. Whilst participants acknowledged that it was the examiner's role to judge medical knowledge, they suggested, given their perceptions of the importance of patient involvement, that patients should provide the judgements and feedback on communication skills.

Participants believed that they would offer realistic judgements and feedback because of their lived experiences as recipients of care. They perceived that examiners would judge students' communication through a 'medical lens' due to their training and professional background. There was an apparent disconnect between the examiners' 'medical lens' and what participants believed to be authentic patient interaction, which Participant Seven described in the following way:

Yeah it just… [Patient involvement] gives the students some you know grasp of reality as well because the patients’ perspectives, you know, they aren’t medically trained like examiners we’re not looking through medical eyes it gives… it’s a bit more real, like it’s a bit more you know, they’re actually interacting with members of the public. So we’re not looking at what blood tests he’s offered, we’re looking at ‘Am I going to get something out of this that will make me feel better?’ and I think erm not that the medics ignore the patient’s side, but you’re just looking through very different eyes at it.

Due to their non-clinical perspectives, patients perceived that their judgements and feedback would better prepare students for the communication aspects of clinical practice, where they will interact with a range of people who are not medically trained.

Variability in patients’ judgements and values

Whilst participants described the potential benefits of patient involvement in the assessment of communication skills, they each had differing views on what constitutes appropriate communication. For example, one student repeated the word 'okay' a number of times throughout their performance; whilst Participant One described this repetitive phraseology as 'annoying,' Participant Eleven suggested:

I think the ‘okays’ came across as positive reinforcement, um, and kept the patient going. She elicited a lot, without talking too much herself, yeah, and that, that is actually quite skilful.

Other divergences of opinion between participants were also evident. Some participants believed that a particular student had clearly structured the order of their questions, whilst Participant Nine described the student's organisation of questions as 'scattered.' Three participants thought a student's speed of questioning was slow, whilst others suggested that the questions were asked at an appropriate speed. Four participants thought that the students should write information on paper as a memory aid, whilst others believed this would be inappropriate and would negatively affect the students' eye contact with patients. Eye contact was an important form of non-verbal communication for participants, as was body language; however, participants interpreted students' body language in different ways. Whilst a number of participants thought that a particular student's body language was open and engaging, Participant Seven described the same student as 'sitting awkwardly and looking between his legs.' It is clear from the data that each participant interpreted student behaviours differently.

The variability in judgements could be explained by factors such as participants' own perspectives, values or previous experiences with healthcare services. Participant Eleven suggested that her positive judgement of a female student's communication skills could reflect her own gender preferences when receiving care:

I think it’s very hard um, with this kind of thing, to totally remove the gender dimension from it, uh-huh, so it could be that part of my reaction to the woman is just in that kind of situation where I’m out of comfort zone and I’m sitting there in a hospital gown with, you know, up on a, some sort of a bed or trolley um, that I personally would feel more comfortable with a woman anyway, so it’s hard to remove that unconscious bias dimension, yeah.

Participant Eleven was female, and no male participant in this sample reported that gender influenced their judgement of students' communication skills. Both male and female participants reflected on previous encounters with healthcare services which influenced the aspects of the students' performances on which they commented. For example, Participant Three used his General Practitioner (GP) as a benchmark for his perceptions of 'good' communication skills, as he perceived the GP to be friendly, engaging and able to build a good rapport with him. Participant Three then looked for these characteristics in the students' performances, commenting when he thought they were performed well or poorly and highlighting when they were missing.

The students' appearances also influenced participants' judgements. Younger participants suggested that body art (e.g. tattoos) would not influence their judgement of students' communication skills, yet older participants suggested that it was inappropriate for future clinicians to have body art and that this would cause them to lose a sense of trust in these students. Most participants judged students' communication as 'good' if the student had managed to instil a sense that the participant could trust them. Participants reported that if students were confident, seemed in control of the situation, re-capped information, actively listened and responded appropriately, then they would trust that student; yet sometimes these behaviours were unreliable proxies. For instance, when commenting upon the performance which received a 'fail' from OSCE examiners, some participants suggested that because the student appeared confident they developed a sense of trust in them.

Participants themselves acknowledged that patients' judgements of communication skills could vary; however, they also reflected on the judgement variability which occurs between trained examiners. When discussing variability in patients' judgements, Participant Nine stated:

I have to put the other point of view to you then, er, how … it’s … you are only in the same situation as you are with your examiners because your examiners will have their own internal bias and that maybe against tattoos or something. They might, it’s no different to the other examiners. Er, I mean it’s not different to the to the professional examiners, they may equally well have the same or different biases.

Participants believed that variability in patients' judgements and values should not prevent them from assessing communication skills, noting that when students are in clinical practice they will be expected to interact with a range of patients who hold different values. Participants went on to suggest ways in which patients could be integrated into the assessment of communication skills.

Integration within assessment

Patient involvement in the assessment of communication skills was deemed important: participants described that medical schools have an institutional responsibility to teach students authentic communication skills because, ultimately, it is the public for whom students will care. There was a sense that patient involvement would be a way of ensuring medical schools are socially accountable for meeting the communication needs of the public.

Most participants believed that patients could be integrated within summative OSCE examinations by being in the room when the examination takes place. This would give the patient and the examiner the same experiential involvement, yet the former would score and provide feedback on the students' communication skills whilst the latter would concentrate on medical knowledge and ask the questions at the end of the station. However, one participant suggested that the presence of two individuals examining students may be distracting and may disadvantage students who are particularly anxious in exam situations. Taking this into account, four participants suggested that patients could instead be involved in formative OSCEs, as this may remove some of the students' stress whilst still giving patients a chance to provide feedback on students' communication skills.

Participants suggested that patients could also coach students. If students' OSCE examinations were filmed, a patient could review a student's performance with them. Participant Nine described that:

[Patients] could guide the student, which then helps them to reflect on their own practice. So, maybe something that actually takes it away from the OSCE situation, and is a, a separate round of things, without that err, pass/fail assessment grading. And they just look at the performances together and things and start from absolute basics … how they sit, you know, in this semi-formal situation, you know, what they are doing with their hands or whether they appear to be distracted, how relaxed or otherwise they appear, how they start the interview with the patient. There, there could be a lot to talk about.

By coaching students, patients could provide individualised feedback on observed communication. The patient could then question, challenge and encourage the student, and, hopefully, this would prepare students for how patients may expect them to interact in clinical situations.

Patient selection and training

To mitigate the impact of differing patient values and experiences of care, some participants suggested that there must be careful selection of patients, as illustrated by Participant Five:

You’ve gotta get somebody who, you don’t want somebody who is a wannabe thespian getting hold of [the students] and being overly dramatic. You can’t force some things, but you can give them a bit of a, a bit of err, helping hand with a few tips and hints, and pull up a few bad habits before they set in and can’t shift ‘em.’

It was important to participants that patients have a clear understanding of their role and of why institutions are involving them in the assessment of communication skills. They perceived that patients involved in assessment would require information about their role, communicated in ways they can access and understand; a lack of clarity regarding role expectations could lead to disengaged patients. To ensure patients have a clear understanding of their role, participants suggested that preparative training would be essential. Within this training, faculty members should communicate information about the assessment, the patients' role, time commitments and the potential burden of involvement. Participants also suggested that the training should remind patients of why equality and diversity are important, as this would, hopefully, prevent negative judgements of students based on their backgrounds. The need for inclusivity and diversity among patients involved in assessment was described by a number of participants, as illustrated by Participant Eight:

The danger of course in recruiting and training and then putting [patients] in is that whether you’re actually getting a cross-section of the population. Erm you know, you’re not probably going to get the vulnerable elderly who will have a very different view from you know, the fit 25-year-old. Erm it’s whether you’re going to get a broad enough cross-section to represent the population, I’m not sure.

Participants suggested that institutions seeking to integrate patients into the assessment of communication skills should try to involve patients who are diverse in socio-demographic terms (e.g. age, gender, ethnicity and social class). Patients of different backgrounds would be able to draw on their own cultural contexts when interpreting students' communication skills, and their feedback may prepare students to meet the needs of a diverse population.

Discussion

Grounded in the data, we developed a theory which describes the potential for patient involvement in the assessment of communication skills. Preparing students for practice is the core component of the model. Patients perceived a disconnect between their own views of appropriate communication and examiners' perceptions. Participants believed that they should be involved in communication assessment as they would offer authentic judgements and feedback which, in their opinion, would better prepare students for practice. When participants went on to provide feedback on students' performances, there was variability between participants due to their own values and experiences of care. This variability was not seen as detrimental to the judgement process; instead, participants suggested that students will need to interact with a diverse population, and variability in feedback may show students that different people have different communication needs, which may better prepare them for practice. Similarly, selecting an inclusive and diverse patient population to involve in assessments may give students the opportunity to receive feedback from patients who represent the diverse society in which they will work. Preparative training opportunities may inform patients that their role is to provide constructive feedback to students, which could serve as a vehicle for improved communication skills within future clinical situations.

Theoretical implications

The divergence in judgements between patients and examiners leads to the bigger question of whose role it is to assess students' communication skills: the patients whom clinicians serve, or the clinicians who are responsible for ensuring students' progress through the course? Whilst power in education is complex (Felton and Stickley 2004), part of its operation can include the ability to determine the behaviours of others or to decide the outcome of a situation (Apple 2013). Whilst faculty members are currently the power-holders in assessing students' communication skills, patients may challenge this by perceiving that they themselves should be involved. Yet giving power to one group may reduce the power of another, and involving patients in the assessment of communication skills may result in faculty members feeling devalued. Despite this, Parsons (1985) recognised that power could also have a collective benefit if two groups collaboratively shared it. Patients viewed their role as providing guidance on students' communication skills whilst faculty members focused (at least in part) on medical knowledge. Whilst such power-sharing must acknowledge and accept the diversity of knowledge and experience of both examiners and patients, it could reduce the (detrimentally high) cognitive load which examiners experience (Tavares and Eva 2014; Byrne and Tanesini 2015) whilst aligning assessment scores and feedback with patients' communication needs.

Examiners appeared to under-recognise occasions when students' use of medical terminology made it difficult for patients to comprehend their questions or explanations. According to the Johari Window model (Luft and Ingham 1961), which helps individuals to understand their relationships with themselves and others, the use of medical jargon may lie in the examiner's 'blind spot': actions and behaviours in an individual's 'blind spot' are known to others (in this instance patients), but not to the person themselves. Examiners may not realise when medical jargon is being used and may believe that students are communicating effectively when they are not. The inclusion of patients in the assessment of communication skills could highlight what is in the examiner's 'blind spot' and provide feedback which enables students to develop accordingly. Whilst the scripting of the patient roles in the cases our study used made medical jargon inappropriate, consideration will be needed in practice as to whether medical jargon is always inappropriate, as some well-informed patients may prefer its use. Equally, some patients who understand jargon may have similar blind spots to examiners. Consequently, further consideration is needed of the interaction between patients' own perspectives and the likely needs of the patient in the assessed scenario; this may be resolved by involving patients in scenario writing.

We observed notable judgemental variability between different patients for the same student performances. As Participant Nine observed, trained examiners are also known to adopt very different perspectives on the same performance (Gingerich et al. 2017; Gingerich 2020). Whether this variability is of concern will depend on the focus and stakes of the assessment context. Within a social constructivist view of assessment (Govaerts and van der Vleuten 2013; Hodges 2013), this variability offers a variety of authentic, unique learning opportunities which students should triangulate as they develop. Most institutions use standardised testing (such as OSCEs) for high-stakes decisions in order to ensure fairness, consistency and a standard blueprint of assessments. In these assessments it would be important to determine methods of managing variability in patients' judgements in order to ensure fairness. One approach would be simply to limit the use of patients' scores and feedback to low-stakes settings, or to provide them alongside summative scores as additional formative information. Whilst pragmatically feasible, this risks devaluing patients' contribution and paradoxically signalling to students that patients' views are unimportant. Lockyer (2003) found that approximately 2–3 times as many patient ratings as physician ratings were required to achieve the same degree of reliability when rating physician performance. Consequently, further work is needed to understand the degree of sampling required to integrate patient voices into high-stakes assessments.
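As a purely hypothetical illustration of why this might be (the figures below are assumptions chosen for arithmetic, not Lockyer's data), the Spearman–Brown prophecy formula relates the reliability $R_k$ of the mean of $k$ independent ratings to the single-rating reliability $r$:

$$R_k = \frac{kr}{1 + (k-1)r} \quad\Rightarrow\quad k = \frac{R(1-r)}{r(1-R)}$$

where $k$ is the number of ratings needed to reach a target reliability $R$. If, say, a single examiner rating had reliability $r = 0.30$ and a single patient rating $r = 0.15$, then reaching $R = 0.70$ would require $k \approx 5.4$ examiner ratings but $k \approx 13.2$ patient ratings: a ratio of roughly 2.4, within the 2–3 fold range Lockyer reported. Greater variability among patient judgements would thus translate directly into larger sampling requirements.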

Implications for practice

Although it is uncommon for institutions to permit the public to make summative assessment judgements, with governing bodies (Australian Health Practitioner Regulation Agency 2020; General Medical Council 2020) encouraging patient involvement within all areas of medical education, it may be time to consider ways in which this can be achieved.

Several practical considerations arise from including patients within the specifics of assessment conduct. Integrating patients could be resource intensive for institutions. Faculty members may need to engage with the local community to recruit diverse patients, avoiding discriminatory involvement (Pandya-Wood et al. 2017) which would fail to assess students' communication across a representative patient population. There could also be financial burdens for patients which may deter them from being involved; to include patients from lower socio-demographic areas, institutions could offer financial incentives (Staley et al. 2014).

Training may be needed before patients are involved in assessments, yet it may paradoxically risk enculturation (Gordon et al. 2012). We suggest that training should empower patients to provide their authentic opinions on students' communication skills within a responsible framework of public involvement. Horgan et al. (2020) developed a training package to underpin patient involvement in mental-health nursing education. Accordingly, training for patients should include induction and orientation, mentorship, role clarity and equitable payment, equality and diversity awareness, and clear guidance on acceptable forms of language within feedback.

Institutions should consider the value of patient feedback at different points in the curriculum. Given its variability, patient feedback, whilst thought-provoking, may be challenging for early learners to process. Conversely, providing more advanced learners with varied messages may stimulate reflection on their ability to adapt within a given context. Another issue is how institutions capture patients' judgements. Involving patients in the co-construction of rating formats could provide a flexible set of criteria that captures authentic judgements whilst still providing sufficient commonality for use within standardised assessments.

The stations used within the OSCE could also be co-produced with patients to give them an understanding of the context within which the student is communicating. Whilst patients suggested they only wanted to judge communication skills and not biomedical knowledge, the two are intrinsically entangled, and understanding and developing the case content may help patients to judge communication skills appropriately. One way to involve patients in developing assessment practices is to create a patient advisory group, similar to those used in nursing education (Scammell et al. 2016). Whilst this may successfully involve patients in assessments, it may be necessary to rotate or replace involved patients before they become enculturated into medical educators' worldview.

Limitations

Whilst smaller sample sizes permit in-depth exploration of phenomena, our findings cannot be extrapolated to the wider population. Our sample predominantly comprised participants who were older, White and British. Whilst this is a limitation, older patients comprise a large proportion of people seeking medical attention (NHS England 2015). Individuals from different demographic backgrounds may have offered different perspectives, which should be explored in future research. The videoed OSCE performances represented only a limited number of students and tasks: whilst three separate tasks were depicted in our videos, different findings might have emerged for a wider sample of students or clinical tasks. Using video (rather than live performances) as stimulus material might have altered patients' perceptions, although recent research has shown equivalence in the scores (Yeates et al. 2019) and judgemental processes (Yeates et al. 2020) of live and video performances filmed by similar methods.

Future research

Whilst our research has identified ways in which patients wish to be integrated into the assessment of communication skills, future research is needed to investigate the effects of these approaches and how they could, or should, be put into practice. As we sampled only patients, future studies should specifically explore faculty members' and students' perceptions of patients being integrated into summative or formative OSCEs, and explore their opinions on patients acting as coaches to students. Future research should also investigate whether patients from various demographic backgrounds have different views on students' communication skills and on their role within communication-based assessment.

Conclusion

Despite patient involvement in medical education being a priority for policy-makers, institutions need to be aware of the challenges of involving patients in assessment. Patients want their voices to be heard within the assessment of communication skills, and their involvement offers students an opportunity to learn how to meet the communication needs of a diverse population. Institutions should work together with patients to value all forms of expertise and build a productive network of engagement.

Author contributions

AM designed the study, obtained ethical approval, coded all interviews, directed the analysis and wrote the first draft of the manuscript. PY provided feedback on the design of the study. PY and RM participated in the analysis of the data and revisions of the manuscript. All authors participated in critically appraising the work for intellectual content and approved the final version to be published.

Ethical approval

Approval was obtained from Keele Medical School’s ethical review panel (MH-190062). Written consent was obtained for all participants.

Glossary

Patient involvement: The involvement of people who are engaged in teaching, assessment or curriculum development because of their lay expertise or experiences of health, illness or disability and who are aware that they have designated educational roles.

Towle A, Bainbridge L, Godolphin W, Katz A, Kline C, Lown B, Madularu I, Solomon P, Thistlethwaite J. 2010. Active patient involvement in the education of health professionals. Med Educ. 44(1):64–74.

Constructivist grounded theory (CGT): A qualitative research methodology that seeks to understand a phenomenon where no adequate prior theory exists. The 'constructivist' element refers to the fact that the evolving theory is constructed between the researcher and the participants.

Charmaz K. 2017. The power of constructivist grounded theory for critical inquiry. Qual Inq. 23(1):34–45.

Supplemental material

Supplemental material for this article is available online.

Acknowledgements

The authors would like to thank all participants who contributed to this study. We would also like to thank Mrs Antonina Mathie for her constructive and thorough feedback.

Disclosure statement

The authors have no conflicts of interest to declare.

Additional information

Funding

The research was funded by a grant awarded to Dr Alice Moult from the Association for the Study of Medical Education (ASME). Peter Yeates is funded by a National Institute for Health Research (NIHR) Clinician Scientist Award. This paper presents independent research funded by the National Institute for Health Research (NIHR) and the Association for the Study of Medical Education (ASME). The views expressed are those of the author(s) and not necessarily those of the Association for the Study of Medical Education (ASME), the NHS, the NIHR or the Department of Health and Social Care.

Notes on contributors

Alice Moult

Dr Alice Moult, PhD, School of Medicine, David Weatherall Building, Keele University, Keele, Staffordshire.

Robert K. McKinley

Professor Robert K. McKinley, PhD, School of Medicine, David Weatherall Building, Keele University, Keele, Staffordshire.

Peter Yeates

Dr Peter Yeates, PhD, School of Medicine, David Weatherall Building, Keele University, Keele, Staffordshire.

References

  • Apple MW. 2013. Education and power. UK: Routledge.
  • Australian Health Practitioner Regulation Agency. 2020. Good medical practice: a code of conduct for doctors in Australia. AHPRA; [accessed 2020 August 21]. https://www.medicalboard.gov.au/Codes-Guidelines-Policies/Code-of-conduct.aspx.
  • Beatty PC, Willis GB. 2007. Research synthesis: the practice of cognitive interviewing. Public Opin Quart. 71(2):287–311.
  • Bennett K, Lyons Z. 2011. Communication skills in medical education: an integrated approach. Educ Res Perspect. 38(2):45.
  • Berman AC, Chutka DS. 2016. Assessing effective physician-patient communication skills: “Are you listening to me, doc?” Korean J Med Educ. 28(2):243.
  • Bleakley A. 2020. Embracing ambiguity: curriculum design and activity theory. Med Teach. 43(1):14–18.
  • Burr V. 2015. Social constructionism. UK: Routledge.
  • Butalid L, Verhaak PF, Boeije HR, Bensing JM. 2012. Patients’ views on changes in doctor-patient communication between 1982 and 2001: a mixed-methods study. BMC Fam Pract. 13(1):80.
  • Byrne A, Tanesini A. 2015. Instilling new habits: addressing implicit bias in healthcare professionals. Adv Health Sci Educ. 20(5):1255–1262.
  • Charmaz K, Belgrave L. 2012. Qualitative interviewing and grounded theory analysis. SAGE Handbook Interview Res. 14(2):347–365.
  • Choudhary A, Gupta V. 2015. Teaching communications skills to medical students: introducing the fine art of medical practice. Int J Appl Basic Med Res. 5(Suppl 1):S41.
  • Cleland JA, Abe K, Rethans JJ. 2009. The use of simulated patients in medical education: AMEE Guide No 42. Med Teach. 31(6):477–486.
  • Felton A, Stickley T. 2004. Pedagogy, power and service user involvement. J Psychiatr Ment Health Nurs. 11(1):89–98.
  • General Medical Council. 2020. Patient and public involvement in undergraduate medical education. GMC; [accessed 2020 March 23]. https://www.gmc-uk.org/-/media/documents/Patient_and_public_involvement_in_undergraduate_medical_education___guidance_0815.pdf_56438926.pdf.
  • Gingerich A, Ramlo SE, van der Vleuten CP, Eva KW, Regehr G. 2017. Inter-rater variability as mutual disagreement: identifying raters’ divergent points of view. Adv Health Sci Educ. 22(4):819–838.
  • Gingerich A. 2020. The reliability of rater variability. J Grad Med Educ. 12(2):159–161.
  • Gordon J, Markham P, Lipworth W, Kerridge I, Little M. 2012. The dual nature of medical enculturation in postgraduate medical training and practice. Med Educ. 46(9):894–902.
  • Gormley G, Sterling M, Menary A, McKeown G. 2012. Keeping it real! enhancing realism in standardised patient OSCE stations. Clin Teach. 9(6):382–386.
  • Govaerts M, van der Vleuten CP. 2013. Validity in work‐based assessment: expanding our horizons. Med Educ. 47(12):1164–1174.
  • Hodges B. 2013. Assessment in the post-psychometric era: learning to love the subjective and collective. Med Teach. 35(7):564–568.
  • Horgan A, Manning F, Donovan MO, Doody R, Savage E, Bradley SK, Dorrity C, O’Sullivan H, Goodwin J, Greaney S, et al. 2020. Expert by experience involvement in mental health nursing education: the co‐production of standards between Experts by Experience and academics in mental health nursing. J Psychiatr Ment Health Nurs. 27(5):553–562.
  • Jha V, Quinton ND, Bekker HL, Roberts TE. 2009. Strategies and interventions for the involvement of real patients in medical education: a systematic review. Med Educ. 43(1):10–20.
  • Kenny DA, Veldhuijzen W, Van Der Weijden T, LeBlanc A, Lockyer J, Légaré F, Campbell C. 2010. Interpersonal perception in the context of doctor–patient relationships: a dyadic analysis of doctor–patient communication. Soc Sci Med. 70(5):763–768.
  • Khan KZ, Ramachandran S, Gaunt K, Pushkar P. 2013. The objective structured clinical examination (OSCE): AMEE guide no. 81. Part I: an historical and theoretical perspective. Med Teach. 35(9):e1437–e1446.
  • Kurtz SM, Silverman JD. 1996. The Calgary–Cambridge Referenced Observation Guides: an aid to defining the curriculum and organizing the teaching in communication training programmes. Med Educ. 30(2):83–89.
  • Kurtz SM. 1989. Curriculum structuring to enhance communication skills development. In: Stewart M, Roter D, editors. Communicating with medical patients. Newbury Park (CA): Sage Publications Inc.
  • Lockyer J. 2003. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof. 23(1):4–12.
  • Luft J, Ingham H. 1961. The johari window. Hum Relat Training News. 5(1):6–7.
  • Makoul G. 2001. Essential elements of communication in medical encounters: the Kalamazoo consensus statement. Acad Med. 76(4):390–393.
  • Mruck K, Mey G. 2007. Grounded theory and reflexivity. In: Bryant A, Charmaz K, editors. The SAGE handbook of grounded theory. London: Sage; p. 515–538.
  • NHS England. 2015. Improving care for older people; [accessed 2020 March 25]. https://www.england.nhs.uk/ourwork/clinical-policy/older-people/improving-care-for-older-people/.
  • Pandya-Wood R, Barron DS, Elliott J. 2017. A framework for public involvement at the design stage of NHS health and social care research: time to develop ethically conscious standards. Res Involv Engagem. 3(1):6.
  • Parsons T. 1985. Talcott Parsons on institutions and social evolution: selected writings. Chicago: University of Chicago Press.
  • Peters G. 2019. The role of standardized patient assessment forms in medical communication skills education. Qual Res Med Healthcare. 3(2):8213.
  • Ranjan P, Kumari A, Chakrawarty A. 2015. How can doctors improve their communication skills? J Clin Diagn Res. 9(3):JE01.
  • Rider EA, Hinrichs MM, Lown BA. 2006. A model for communication skills assessment across the undergraduate curriculum. Med Teach. 28(5):e127–e134.
  • Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, Burroughs H, Jinks C. 2018. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 52(4):1893–1907.
  • Scammell J, Heaslip V, Crowley E. 2016. Service user involvement in preregistration general nurse education: a systematic review. J Clin Nurs. 25(1–2):53–69.
  • Skirbekk H, Middelthon AL, Hjortdahl P, Finset A. 2011. Mandates of trust in the doctor–patient relationship. Qual Health Res. 21(9):1182–1190.
  • Spencer J, Blackmore D, Heard S, McCrorie P, McHaffie D, Scherpbier A, Gupta TS, Singh K, Southgate L. 2000. Patient‐oriented learning: a review of the role of the patient in the education of medical students. Med Educ. 34(10):851–857.
  • Staley K, Buckland SA, Hayes H, Tarpey M. 2014. The missing links’: understanding how context and mechanism influence the impact of public involvement in research. Health Expect. 17(6):755–764.
  • Tavares W, Eva KW. 2014. Impact of rating demands on rater-based assessments of clinical competence. Educ Primary Care. 25(6):308–318.
  • Towle A, Bainbridge L, Godolphin W, Katz A, Kline C, Lown B, Madularu I, Solomon P, Thistlethwaite J. 2010. Active patient involvement in the education of health professionals. Med Educ. 44(1):64–74.
  • Wallace J, Rao R, Haslam R. 2002. Simulated patients and objective structured clinical examinations: review of their use in medical education. Adv Psychiatr Treat. 8(5):342–348.
  • Watling CJ, Lingard L. 2012. Grounded theory in medical education research: AMEE Guide No. 70. Med Teach. 34(10):850–861.
  • Wormald BW, Schoeman S, Somasunderam A, Penn M. 2009. Assessment drives learning: an unavoidable truth? Anat Sci Educ. 2(5):199–204.
  • Yeates P, Cope N, Hawarden A, Bradshaw H, McCray G, Homer M. 2019. Developing a video‐based method to compare and adjust examiner effects in fully nested OSCEs. Med Educ. 53(3):250–263.
  • Yeates P, Moult A, Lefroy J, Walsh-House J, Clews L, McKinley R, Fuller R. 2020. Understanding and developing procedures for video-based assessment in medical education. Med Teach. 42(11):1250–1260.
  • Yeates P, Moult A, Cope N, McCray G, Xilas E, Lovelock T, Vaughan N, Daw D, Fuller R, McKinley R. in press. Measuring the effect of examiner variability in a multiple-circuit Objective Structured Clinical Examination (OSCE). Acad Med.