Research Article

Improving optometry student interpersonal skills through online patient, clinician and student evaluation and feedback

Pages 83-92 | Received 13 Oct 2022, Accepted 19 Mar 2023, Published online: 20 Apr 2023

ABSTRACT

Clinical relevance

Interpersonal skills are crucial for successful clinician-patient interactions. To prepare future optometrists for clinical practice, pedagogical evaluation is important to support the implementation of new strategies for teaching and evaluating interpersonal skills.

Background

Optometry students largely develop their interpersonal skills through in-person patient interactions. Telehealth is increasing, yet strategies to develop the interpersonal skills of students for teleconsulting have not been explored. This study aimed to assess the feasibility, effectiveness and perceived usefulness of an online, multisource (patients, clinicians and students) evaluation and feedback program for developing interpersonal skills.

Methods

Via an online teleconferencing platform, optometry students (n = 40) interacted with a volunteer patient, observed by a teaching clinician. Patients and clinicians evaluated the interpersonal skills of the student in two ways: (1) qualitative written feedback, and (2) quantitative rating (Doctors’ Interpersonal Skills Questionnaire). All students received written patient and clinician feedback after the session, but not their quantitative ratings. A subset of students (n = 19) completed two sessions, self-ratings, and were provided with their written feedback and an audiovisual recording from their first interaction before completing the second session. All participants were invited to complete an anonymous survey at program completion.

Results

Patient and clinician overall interpersonal skills ratings were positively correlated (Spearman’s r = 0.35, p = 0.03) and showed moderate agreement (Lin’s concordance coefficient = 0.34). Student self-ratings did not match patient ratings (r = 0.01, p = 0.98), whereas there was moderate agreement between clinician and student ratings (Lin’s concordance coefficient = 0.30). Ratings improved at the second visit (p = 0.01). Patient ratings were higher than those of clinicians (p = 0.01) and students (p = 0.03). All participants agreed that the program was feasible, useful and effective at fostering good interpersonal skills.

Conclusion

Multisource feedback about interpersonal skills contributes to improvement in student performance. Patients and clinicians can evaluate and provide useful feedback to optometry students about their interpersonal skills using online methods.

Introduction

Interpersonal skills are the verbal and non-verbal social skills used to communicate, interact and build relationships with other people. For all health professionals, good interpersonal skills enable relationships built on ‘respect, trust and effective communication’ and ‘practitioners to work in partnership with patients’.Citation1 Entry-level competency standardsCitation2 require optometrists to incorporate the patient’s perspective to provide integrated, respectful and responsive people-centred eye care.Citation3 Accordingly, significant curricular development has focused on teaching and evaluating interpersonal skills of health professionals, predominantly in medicineCitation4,Citation5 but increasingly recognised in optometry.Citation6–8

In the four-year, masters-level Doctor of Optometry course at The University of Melbourne, students are introduced to interpersonal skills and communication strategies from their first year of study. Understanding how to take a history is taught via lectures in Year 1 and practised and assessed in a pre-clinical environment amongst their peers. In Year 2, students begin to explore the specific history questions required for different presenting complaints and how to approach management discussions. This is complemented by final-year student consultation observations in the clinic, where Year 2 students practise recording the history-taking they observe. Year 3 students begin to see patients individually in a clinical environment and attend 16 hours of seminars covering topics such as understanding self in communication, communication across cultures and communicating bad news. By final year, students are able to formally demonstrate their competency using qualitative information about their in-person interactions through clinical feedback systems, self-evaluation and a final-year portfolio.

However, there have been no reported opportunities in any optometry degree for students to develop their interpersonal skills in a telehealth environment. This is critical to address in contemporary health professional education,Citation9 as telehealth is increasingly used in optometry.Citation10 Furthermore, the important contribution of the ‘patient voice’ to communication and interpersonal skills training has long been recognised in medicine,Citation11–16 and more recently, in optometry.Citation7

Thus, this study aimed to develop a novel, online activity to evaluate and provide feedback about the teleconsulting interpersonal skills of students from three perspectives (multisource, also called 360-degree, feedback) – the patient, the clinician, and the student. The key research questions addressed by this study were: (1) Is there agreement between patients, clinicians, and students in rating and feedback about the interpersonal skills of students? (2) Do the interpersonal skills of students improve with a repeated patient interaction, following the provision of written feedback about an initial patient interaction? (3) Do patients, clinicians and students perceive the activity to be a feasible, useful and effective teaching and learning exercise for developing the interpersonal skills of students?

Because the new learning activity was added as a voluntary exercise for students in their final year of the Doctor of Optometry course at The University of Melbourne, no formal changes were made to the curriculum, and no additional content or theory about interpersonal skills, whether specific to telehealth or more general, was provided to students. As final-year optometry students with at least one year of prior clinical experience seeing patients under supervision, the participants were expected to draw on their varying individual in-person experiences in the clinic, and on their common learning and practice of communication and interpersonal skills built progressively across the course, to complete the activity.

Methods

Participants

Ethics approval was granted by the Human Research Ethics Committee of The University of Melbourne (ID 14513) and the study protocol adhered to the tenets of the Declaration of Helsinki. Recruitment was through word-of-mouth and email invitations sent to final-year Doctor of Optometry students at The University of Melbourne and teaching clinicians (registered optometrists with experience supervising optometry students) at The University of Melbourne, Australian College of Optometry, or private independent practice.

Volunteer patients were recruited through online advertisements posted at The University of Melbourne (e.g. Staff News), local senior community group mailing lists (e.g. University of the Third Age), or from a database of previous volunteers who had consented to being contacted for research opportunities.

All participants provided written informed consent and were required to have English language proficiency and access to an electronic device with microphone and camera. Patients were ‘real’ (not actors) for authenticityCitation15 and were at least 50 years of age to ensure that the mock scenarios would be age-appropriate. In total, 40 optometry students, 33 patients and 21 clinicians participated between October 2020 and March 2022.

Study sessions

Study sessions were conducted using an online teleconferencing platform (Zoom Video Communications Inc., San Jose, CA, USA) and involved one patient (de-identified by renaming), one student (identifiable) and one clinician (anonymous: muted and off camera). One study investigator (BN or JN) hosted each session to provide instructions, time management and technical assistance. For logistical reasons, most sessions lasted one hour to enable one patient and one clinician to evaluate two students (~30 mins each patient-student interaction).

A written and verbal prebriefing familiarised participants with the format and tasks. Students took a history (maximum 15 minutes), assuming they had never met the patient before, and then discussed a fictional scenario and management plan (maximum 15 minutes, without any digital visual aids such as screen sharing). The host chose a plausible scenario from Supplementary Material A (scenarios likely to be encountered in clinical practice),Citation17 based on the patient’s history (e.g. if a patient reported previous cataract surgery, the ‘cataract’ scenario was not chosen). During the patient-student interaction, patients could ask the student optometrist questions or seek clarification (e.g. ask the student to repeat or rephrase information) at any time.

After each session, patients were provided with an information brochure (from Optometry Australia, the key professional organisation for optometrists in Australia) about the scenario discussed and offered a non-compulsory debrief session with the study investigators. Students were allocated to one of two participant groups: (1) group 1 (n = 19), who only received feedback from the patient and clinician; and (2) group 2 (n = 21), who received feedback from the patient and clinician and an audiovisual recording of their patient interaction, completed a self-evaluation (before any clinician or patient feedback was provided, to avoid biasing their self-evaluation), and were offered an opportunity to participate in a second session.

Surveys to evaluate and provide feedback about the patient-student interaction

Two types of data were collected using an online survey platform (Qualtrics, Provo, Utah, USA): (1) quantitative evaluation data to rate the interpersonal skills of students, and (2) qualitative comments about the student’s interpersonal skills (see Supplementary Material B). The first part of the survey was a modified version of the Doctors’ Interpersonal Skills Questionnaire (DISQ)Citation18 previously used by patients to evaluate in-person the interpersonal skills of students.Citation7

Each question in the 12-item DISQ is rated on a 5-point Likert scale (1 = poor, 2 = fair, 3 = good, 4 = very good, 5 = excellent). The twelve item ratings were summed to calculate an overall score (range 12–60), with higher scores associated with better interpersonal skills. The ratings questionnaire also included a reliability check, and question order was randomised. The second part of the survey consisted of two open-text feedback questions: ‘What two things did the student do well?’ and ‘What two things could the student improve?’.
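The scoring procedure described above is a simple summation, sketched below as a minimal illustration (the ratings shown are hypothetical; the DISQ items themselves are listed in Supplementary Material B):

```python
# Minimal sketch of DISQ overall scoring as described above: the twelve
# item ratings (1 = poor ... 5 = excellent) are summed to give an overall
# score in the range 12-60, with higher scores indicating better
# interpersonal skills.

def disq_overall(ratings):
    """Sum twelve 5-point Likert item ratings into an overall DISQ score."""
    if len(ratings) != 12:
        raise ValueError("DISQ requires exactly 12 item ratings")
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("Each rating must be on the 1-5 Likert scale")
    return sum(ratings)

# Example: a student rated 'very good' (4) on every item scores 48.
print(disq_overall([4] * 12))  # -> 48
```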

Survey about the teaching and learning initiative

At program completion, participants received a link to an online, anonymous survey to evaluate the activity as per Schmid et al.Citation7 Patients and clinicians were asked about their experience providing feedback to students (see Supplementary Material C), while students were asked about their experience receiving patient and clinician feedback (see Supplementary Material D).

Data analysis

Quantitative DISQ rating data were tested for normality using a Kolmogorov-Smirnov test. A repeated-measures analysis of variance (RM-ANOVA) was used to compare data between two factors (e.g. student groups, sessions). Post-hoc Bonferroni pairwise comparisons determined the statistical significance of a main effect with more than two factors (e.g. multiple feedback sources). Spearman’s rank correlations were used to evaluate the relationships between patient, clinician and student ratings, and Lin’s concordance correlation coefficientCitation19 (CCC) with 95% confidence intervals (CI) was calculated to assess absolute agreement between two measures.Citation20 A correlation coefficient < 0.20 was considered poor and > 0.80 was considered excellent agreement.Citation21 A chi-square test of proportions was used to compare percentage responses between groups. A p < 0.05 was considered statistically significant.
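Unlike Pearson or Spearman correlation, Lin's CCC penalises systematic offsets between raters, not just scatter. A minimal sketch of the standard formula (the example scores are hypothetical):

```python
# Lin's concordance correlation coefficient (CCC): absolute agreement
# between two paired raters. CCC = 2*cov(x, y) / (var(x) + var(y) +
# (mean(x) - mean(y))^2), so a constant offset between raters reduces
# the coefficient even when the correlation is perfect.
from statistics import mean

def lins_ccc(x, y):
    """Lin's CCC between two equal-length sequences of paired ratings."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sx2 = sum((xi - mx) ** 2 for xi in x) / n   # population variance of x
    sy2 = sum((yi - my) ** 2 for yi in y) / n   # population variance of y
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Perfect agreement yields 1.0; a constant 5-point offset lowers the CCC
# despite a perfect Pearson correlation.
a = [50, 42, 55, 38, 47]                # hypothetical overall DISQ scores
print(lins_ccc(a, a))                   # -> 1.0
print(lins_ccc(a, [v - 5 for v in a]))  # < 1 despite perfect correlation
```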

For the qualitative data (written feedback), a thematic analysis was undertaken. Deductive coding first identified the prominent themes of the twelve DISQ domains (e.g. time given to patient, opportunity for patient to express concerns or fears). Inductive coding was subsequently used as more themes and sub-themes emerged. Two authors (BN and JN) coded the written feedback in NVivo 12 (QSR International Pty Ltd, Doncaster, Victoria, Australia). Discrepancies in coding were resolved by consensus (BN and JN), and the extracted themes were verified by author MC who did not code the data.

Results

All questionnaires (quantitative DISQ ratings, qualitative written feedback, and program evaluation surveys) were completed in full, so all data were included in the analysis.

Patient and clinician quantitative evaluations of the interpersonal skills of students

When comparing patient and clinician ratings of students in group 1 (n = 19) and group 2 (n = 21), patients gave higher overall DISQ scores than clinicians (RM-ANOVA main effect of feedback source: F(1,38) = 7.40, p = 0.01, partial η2 = 0.16), but there was no difference between group ratings (main effect of group: F(1,38) = 2.09, p = 0.16, partial η2 = 0.05). Hence, groups 1 and 2 were combined (n = 40) to determine the level of agreement between patient and clinician ratings of ‘first-time’ patient-student interactions.

Figure 1A depicts the weak positive correlation between patient and clinician overall DISQ scores (Spearman’s r = 0.35, p = 0.03). There was moderate agreement between patient and clinician overall ratings of the interpersonal skills of students (Lin’s CCC = 0.34, 95% CI = 0.07–0.56). Given the Bland-Altman plot shown in Figure 1B, patient and clinician ratings are expected to differ on average by 4 points on the DISQ, and to be within approximately ±17 points of each other (95% limits of agreement).
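The bias and 95% limits of agreement quoted above follow the standard Bland-Altman calculation, sketched below. The paired scores are hypothetical, chosen only to illustrate a negative bias (second rater scoring higher) like the one reported:

```python
# Standard Bland-Altman agreement analysis: the bias is the mean of the
# paired differences, and the 95% limits of agreement are
# bias +/- 1.96 * (sample SD of the differences).
from statistics import mean, stdev

def bland_altman(rater_a, rater_b):
    """Return (bias, (lower_limit, upper_limit)) for two paired raters."""
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired overall DISQ scores (illustration only): the second
# rater consistently scores higher, giving a negative bias.
clinician = [40, 48, 52, 44, 50]
patient = [46, 50, 54, 52, 55]
bias, (lower, upper) = bland_altman(clinician, patient)
```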

Figure 1. Relationship between Doctors’ Interpersonal Skills Questionnaire (DISQ) overall scores evaluating the interpersonal skills of students at their first patient-student interaction by (A) patients and clinicians (n = 40 evaluations), (C) patients and students (n = 21 evaluations), and (D) clinicians and students (n = 21 evaluations). Panel B is a Bland-Altman plot, showing the difference in overall DISQ score between patients and clinicians as a function of average ratings of the interpersonal skills of students. The bias (average difference, horizontal dashed line) was negative (3.75 points), indicating that on average patient ratings were higher than ratings by clinicians. The 95% limits of agreement were from -20.5 to 13 points (horizontal dotted lines).

Table 1 shows the patient and clinician ratings, and agreement thereof, for each of the 12 DISQ domains. Warmth of greeting, concern for patient as a person, explanation skills, and ability to be reassuring showed the highest agreement between patient and clinician ratings (Table 1; Lin’s CCC = 0.32–0.40, demonstrating moderate agreement).Citation21 All other domains showed poor agreement (i.e. clinicians typically gave lower scores than patients), with patients and clinicians disagreeing the most when evaluating their confidence in the ability of students, respect shown to patient, and time given to patient (Table 1; Lin’s CCC = 0.16, demonstrating poor agreement).Citation21

Table 1. Doctors’ Interpersonal Skills Questionnaire (DISQ) ratings by patients and clinicians from 40 ‘first-time’ patient-student interactions. Median and range are listed. Each domain could have a maximum score of 5, the overall score could have a maximum score of 60. Lin’s concordance correlation coefficient (CCC) and 95% confidence intervals are given to indicate agreement between patient and clinician ratings. Asterisks indicate significant agreement.

Student self-evaluations of their interpersonal skills

Figure 1 shows the relationships between patient, clinician and student ratings based on the overall DISQ score. Student self-ratings (n = 21) did not match those of patients (Figure 1C; Spearman’s r = 0.01, p = 0.98; Lin’s CCC = 0.01, 95% CI = −0.31–0.33), with students tending to underrate themselves relative to patient ratings. Similarly, Spearman’s rank correlations and Lin’s concordance between clinician and student overall DISQ ratings did not reach statistical significance (Figure 1D; Spearman’s r = 0.29, p = 0.20; Lin’s CCC = 0.30, 95% CI = −0.11–0.62).

Effect of second session

Of the 21 students in group 2, one student could not attend a second visit and one patient did not complete an evaluation and was lost to follow-up. Thus, in total there were 19 repeat sessions (Figure 2). Overall ratings of the interpersonal skills of students by patients, clinicians and students were higher at the second visit (RM-ANOVA main effect of session: F(1,54) = 7.76, p = 0.01, partial η2 = 0.13). There was also a main effect of feedback source (F(2,54) = 5.39, p = 0.01, partial η2 = 0.17). Post-hoc Bonferroni pairwise multiple comparisons demonstrated that patients rated the interpersonal skills of students higher than clinicians (p = 0.01) and students themselves (p = 0.03) across all visits.

Figure 2. Overall DISQ scores from the A: First session (n = 19 evaluations), and B: Second session (n = 19 evaluations), as rated by patients, clinicians and students. Boxplots show the median, 25th and 75th percentiles, with the whiskers depicting the 10th and 90th percentiles. All outliers are shown as individual symbols. Horizontal dotted lines indicate the minimum and maximum overall DISQ scores possible (i.e. 12 and 60, respectively). Overall ratings of the interpersonal skills of students by patients, clinicians and students were higher at the second visit (RM-ANOVA main effect of session: F(1,54) = 7.76, p = 0.01, partial η2 = 0.13). Asterisks indicate that the post-hoc Bonferroni pairwise comparison was statistically significant at p < 0.05.

Thematic analysis of written feedback

As described in Table 2, the thematic analysis revealed 6 key themes (‘body language’, ‘communication techniques’, ‘confidence in student ability’, ‘manner’, ‘patient-centred approach’ and ‘relationship building’) to describe the feedback provided by patients, clinicians and students in response to the question: ‘What did the student do well?’ Under these 6 key themes, 27 sub-themes were identified. The most commonly encountered sub-themes under each key theme were: ‘smile’ and ‘eye contact’ (under ‘body language’), ‘good explanations or instructions’ (under ‘communication techniques’), ‘thorough’ (under ‘confidence in student ability’), ‘warm and friendly’ (under ‘manner’), ‘answered questions’ and ‘opportunity for patients to ask questions or express concerns’ (under ‘patient-centred approach’) and ‘built rapport’ (under ‘relationship building’).

Table 2. Key themes (italics) and sub-themes (indented) identified across the three sources of written positive feedback about the interpersonal skills of students in response to the open-ended question: ‘What did the student do well?’. Theme totals (italics) are the aggregate of sub-themes.

Table 3 summarises the thematic analysis of the feedback provided in response to the question: ‘What could the student improve?’ Overall, 8 key themes were identified, most of which were similar to the key themes reported in Table 2 (‘body language’, ‘communication techniques’, ‘confidence in student ability’, ‘language and information delivery’, ‘manner’, ‘patient-centred approach’, ‘relationship building’, ‘no improvement needed’). Of the 32 sub-themes, the most commonly encountered sub-themes under each key theme were: ‘poor eye contact and note taking approach’ (under ‘body language’), ‘poor explanations or instructions’ (under ‘communication techniques’), ‘did not fully explore patient history’ (under ‘confidence in student ability’), ‘use of jargon or acronyms’ (under ‘language and information delivery’), ‘not calm or confident’ (under ‘manner’), ‘did not consider the individual situation of the patient’ (under ‘patient-centred approach’), and ‘did not build rapport’ (under ‘relationship building’).

Table 3. Key themes (italics) and sub-themes (indented) identified across the three sources of written negative feedback about the interpersonal skills of students in response to the open-ended question: ‘What could the student improve?’. Theme totals (italics) are the aggregate of sub-themes.

Evaluation of learning and teaching activity

The anonymous program evaluation survey was completed by 31 students (78% response rate), 30 patients (91% response rate) and 17 clinicians (81% response rate). Overall, all participants perceived the activity to be useful, and agreed that patient and teaching clinician feedback is effective at fostering good interpersonal skills in the clinic learning environment (Figure 3). Fewer students agreed that feedback from clinicians was effective at fostering good interpersonal skills (82%), compared to feedback from patients (94%), but these proportions did not differ significantly (Figure 3A; χ2(1, N = 53) = 1.23, p = 0.27).

Figure 3. Reported effectiveness of feedback for fostering good interpersonal skills in the clinic learning environment, from the perspectives of A: Student optometrists, when evaluating feedback from patients (black bars) and clinicians (grey bars), and B: Patients (black bars) and clinicians (grey bars), when evaluating their own feedback.

Figure 4 shows the student perspectives about the activity. Most students agreed that feedback from patients (97%) and clinicians (93%) was useful (Figure 4A), and reported that they reflected on and used the feedback from patients (100%) and clinicians (93%) to improve their clinical competence (Figure 4B). Example quotes of how students demonstrated behaviour change are listed in Supplementary Material E. Students felt similarly anxious knowing they were being evaluated by patients and clinicians (Figure 4C; chi-square test of proportions: χ2(1, N = 53) = 2.29, p = 0.13). Students also self-reported trying harder to positively interact with the patient, knowing they were being evaluated by the patient and clinician (Figure 4D; χ2(1, N = 51) = 0.22, p = 0.64). All patients and clinicians felt that providing feedback helped the student learn and that it was easy to give constructive comment about how the student interacted. Only 3% of patients felt anxious about providing feedback.

Figure 4. Frequency of student optometrists who self-reported A: That the feedback from patients and clinicians was useful, B: That they reflected on the feedback from patients and clinicians and used it to improve their clinical competence, C: Anxiousness knowing that feedback would be given from patients and clinicians, and D: That they tried harder to positively interact with the patient because they knew they were being evaluated by patients and clinicians. In all panels, black bars represent patients and grey bars represent clinicians.

Discussion

This study demonstrates the feasibility and usefulness of an online program to support the development of interpersonal skills of optometry students. Building on previous work examining patient feedback from in-person interactions,Citation7 a novel multisource (patient, clinician and student) evaluation and feedback program, delivered wholly online, was developed. The strengths of teleconferencing technology were capitalised on to enable remote participation (to enhance inclusivity), de-identification and anonymity (to minimise patient and clinician discomfort in providing feedback), and audiovisual recording (as an additional feedback mechanism).

Participants were instructed on how to complete an evaluation (quantitative) and provide feedback (qualitative) in their own time using online surveys. This ensured that common barriers to effective feedback for health professional trainees were addressed, such as insufficient time,Citation22 lack of instructions regarding feedback,Citation22 indirect feedback mechanisms,Citation22 unsafe or inappropriate environment for feedback,Citation22 fear of awkward future interactions,Citation22 concern that scoring poorly would be detrimental to a student’s reputation, academic standing or educational progress,Citation23 and lack of direct observation of student-patient interactions.Citation24 Students had a unique opportunity for explicit, focused observation of their interpersonal skills that enhanced credibility and specificity of feedback. Such focused opportunities are less frequent for interpersonal skills specifically and have not been reported for optometry teleconsulting.

Is there agreement between patients, clinicians, and students in their evaluation of and feedback about the interpersonal skills of students?

Consistent with general practitionerCitation18 and medical residentCitation25 trainees, clinician ratings of the interpersonal skills of students were generally lower than, but moderately correlated with, ratings from patients. Differences of up to 17 points in the overall DISQ score between clinicians and patients (see Figure 1B) may not be acceptable in some evaluation settings, such as for final assessments of student performance. Nevertheless, the level of concordance between patient and clinician ratings of the interpersonal skills of students reported here (Lin’s CCC = 0.34) and in other health professions (medical residents, Lin’s CCC = 0.50)Citation25 implies that both patient and clinician assessments are equally valid and useful for training purposes.

Both the quantitative and qualitative analyses revealed that patients and clinicians do not provide the same evaluation of, and feedback to, students about their interpersonal skills. Because clinicians regularly assess students on the content and clinical knowledge exchanged during patient-student interactions, their evaluations of interpersonal skills could be influenced by incorrect clinical information presented by the student.Citation26 Indeed, the thematic analysis showed that patients thought students were ‘knowledgeable’ and had confidence in what the student was saying, whereas clinicians were more likely to pick up on inaccurate or misrepresented information. This lends further support to the known advantages of multisource feedback (e.g. teachers, patients, family members of patients),Citation27–29 as recommended by the Accreditation Council of Graduate Medical Education Outcome Project.Citation30 Different perspectives capture more information and likely provide a more complete and complementary account of student skills.Citation31 It is therefore recommended that future implementations of the program continue to procure feedback from multiple sources, including the patient voice.Citation31

Similar to previous reports in medical trainees, student self-ratings poorly matched patient ratings of interpersonal skills (Lin’s CCC = 0.11).Citation25 This is in contrast to a previous report of optometry students overestimating their interpersonal skills compared to masked graders.Citation23 On the other hand, patients rated students more highly than clinicians, suggesting a possible leniency in patient evaluations. Higher ratings by patients have also been reported in medical trainee evaluations,Citation18,Citation25 in contrast to other work focusing on medical specialists (e.g. anaesthesiologists)Citation26 where non-blinded teaching staff assigned higher scores than blinded, external evaluators. None of the patients in this study asked for a debrief, but future work could benefit from compulsory debriefings to investigate the reasoning behind individual patient and clinician ratings and possible reluctance to give poor marks.Citation23

Interestingly, while clinician ratings were lower on average compared to patient ratings, it cannot be ascertained from this study whether the grades of clinicians were primarily driven by a lack in the clinical knowledge of students, which was reasonably expected to contribute to their confidence in the ability of students (Item 6 in the DISQ, see Supplementary Material B). While there is a positive correlation between low overall clinician grades and a lack of confidence in the ability of students (Spearman r = 0.76, p < 0.001), this relationship also exists for patient ratings (Spearman r = 0.90, p < 0.001). Thus, it can only be concluded from the current data that a high overall rating of interpersonal skills is, at least partly, driven by increased confidence in the ability of students, regardless of whether the patient perceives, or the clinician assesses, that the student has the requisite and correct clinical knowledge.

Do the interpersonal skills of students improve with a repeated patient interaction?

This study found increased quantitative ratings of the interpersonal skills of students at the second visit, which implies an improvement in performance. A significant proportion of students reported trying harder to positively interact with the patient because they knew they were being evaluated. Awareness of being evaluated could lead to the oft-cited ‘Hawthorne effect’ (despite limited evidence thereof).Citation32 It is not possible in this study to disentangle a possible Hawthorne effect or some kind of ‘participant reactivity’Citation32 from other factors that could contribute to improved student performance (i.e. multisource feedback, active student self-reflection, an audiovisual recording as additional feedback, a practice effect).

Improvement in interpersonal skills over time is unlikely to be attributable to a single feedback mechanism.Citation23,Citation33 Nevertheless, given the depth of student self-reflections and the associated self-reported behaviour change (Supplementary Material E), providing feedback appears to have helped students improve their general interpersonal skills,Citation23 even though a single ‘correct’ behaviour could not simply be learnt and repeated (unlike hand washing for hygiene, for example)Citation32 because each session involved a different clinical scenario and patient.

Note that students were not explicitly asked in their self-evaluation to comment on whether their performance improved at the second session, if applicable, and neither was it revealed to clinicians and patients whether it was the first or second session for the student. This was the rationale for pooling the qualitative feedback from both sessions in the thematic analysis. Nevertheless, in a small number of student responses (n = 6), it was clear that their self-reflection was written in the context of what happened previously. This suggests that students actively tried to address their previous feedback.

Given the improvement in interpersonal skills ratings at the second visit, both qualitative and quantitative data support the overall suggestion that students were motivated to reflect and act on feedback to improve their interpersonal skills. We do not know, however, which feedback methods in particular were useful, as students made no specific reference in the anonymous survey to the written feedback or the audiovisual recording. Future investigations would benefit from a more structured evaluation of each feedback component to ascertain their relative usefulness.

Do patients, clinicians and students perceive the activity to be a feasible, useful and effective teaching and learning exercise for developing the interpersonal skills of students?

The study results indicate that patient and clinician feedback is feasible and is perceived to contribute positively to the learning experiences of students. Consistent with a previous study by Schmid et al.,Citation7 most patients agreed that providing feedback about interpersonal skills was a worthwhile learning activity for students. Similarly, students self-reported that the activity was useful and that they used the information to improve their interpersonal skills (Supplementary Material E). Perhaps unsurprisingly given their chosen vocation, teaching clinicians considered their evaluation and feedback to be useful for student learning. Note that the clinicians in this study participated voluntarily and outside their usual teaching allocations and clinical commitments, which may not be feasible for embedding into an optometry student curriculum without the wider support of a network of clinicians with supervisory experience.

While the main aim of this study was to demonstrate feasibility and positive uptake of a novel teaching activity, the results provide some preliminary insight into potential anxiety-provoking situations for optometry students. The bimodal distribution in the survey responses suggests two categories of students, based on whether awareness of being evaluated causes anxiety. It is not possible from this study to explore whether the anxiety stems from a general anxiety about all clinical and skill evaluations or from the novelty of this online patient interaction. Although scarce, literature in other health professions (e.g. nursing) corroborates our finding that clinical experiences involving instructor observation and evaluation, whether real or perceived, are anxiety-producing for students.Citation34,Citation35 In future, debriefing with students might therefore be useful to better understand the student experience and how best to support students.

Potential future improvements to the online evaluation and feedback program

Patients were free to provide information as they saw fit, rather than being given scripted information. It must be acknowledged that trained, simulated patients (e.g. actors, laypersons, people with lived patient experience) offer advantages for consistency in teaching and evaluation. For example, simulated patients have been found more effective for teaching communication skills to medical students than conventional lectures.Citation36,Citation37 On the other hand, there is conflicting evidence on whether students prefer contact with real versus simulated patients for practising their interpersonal skills.Citation11,Citation15

Another potential program modification is blinded evaluations, given the potential for biases from previous interactions between teaching clinicians and students.Citation26 Teachers might develop preconceived opinions and impressions from prior evaluations that could influence their present evaluation of a student.Citation38 To address this, there is future potential to send video recordings of patient-student interactions for subsequent review by external evaluators.Citation26 The approach adopted in this study can also be used effectively for in-person evaluations, using real-time video monitoring from a separate roomCitation25 or remotely.

This study was not designed to evaluate whether the improvement in interpersonal skills was retained and has lasting effects on student performance, which would require a longitudinal experimental design (e.g. surveying and evaluating optometrists in practice at different post-degree timepoints). The literature on whether short-term training and practice leads to long-term benefits is conflicting,Citation39,Citation40 but future studies could nonetheless address the longer-term impact of experiential learning specific to interpersonal skills.

To control for consistency and reduce variability in overall ratings, possible modifications include compulsory participant training and debriefing, and more structured checklists for feedback provision (based on the themes identified in the thematic analysis). This study was also not designed to evaluate a specific intervention, such as the provision of particular content or theory about adapting interpersonal skills to a telehealth setting. Hence, the baseline level of student knowledge about setting up for online interactions (e.g. ideal camera positions, being attentive to eye contact, checking the microphone), and whether students proactively considered such details in preparation for the online interaction, remain unknown.

To explore these aspects further using an intervention to develop these online skills, future work could take advantage of new short courses and modules that provide specific guidance on telehealth and the digital setting. One such example of broader training for health professionals is a recent ‘telehealth foundations’ online learning experience designed specifically for entry-to-practice health professions students,Citation41 to understand the ethical standards, legal responsibilities, and aspects of privacy, communication and professionalism that are relevant to the digital environment.

Conclusion

This study reports a novel, wholly online, multisource feedback and evaluation program to develop the interpersonal skills of optometry students – an area previously limited to in-person interactions. The program was perceived to be useful for students, feasible for patients and clinicians, and led to self-reported positive behaviour change and improved skills (as evaluated by patients, clinicians and students). Implementation of the program may help better prepare future optometrists for successful clinician-patient interactions, and could form part of the health professionalism curriculum preparing clinicians for in-person and telehealth consultations.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary Data

Supplemental data for this article can be accessed online at https://doi.org/10.1080/08164622.2023.2195049

Additional information

Funding

This work was supported by a University of Melbourne Faculty of Medicine Dentistry and Health Sciences Learning and Teaching Initiative Seed Funding grant to authors BN, MP, JN and AC. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Australian Health Practitioner Regulation Agency and National Boards. Code of conduct; 2022 Jun [accessed 2022 Jun 29]. https://www.ahpra.gov.au/Publications/Code-of-conduct/Shared-Code-of-conduct.aspx.
2. Kiely PM, Slater J. Optometry Australia entry-level competency standards for optometry 2014. Clin Exp Optom 2015; 98: 65–89. doi:10.1111/cxo.12216.
3. World Health Organization. World report on vision. Geneva; 2019. Licence: CC BY-NC-SA 3.0 IGO.
4. Makoul G. Communication skills education in medical school and beyond. JAMA 2003; 289: 93. doi:10.1001/jama.289.1.93.
5. Deveugele M, Derese A, De Maesschalck S et al. Teaching communication skills to medical students, a challenge in the curriculum? Patient Educ Couns 2005; 58: 265–270. doi:10.1016/j.pec.2005.06.004.
6. Sa B, Ojeh N, Majumder MAA et al. The relationship between self-esteem, emotional intelligence, and empathy among students from six health professional programs. Teach Learn Med 2019; 31: 536–543. doi:10.1080/10401334.2019.1607741.
7. Schmid KL, Hopkins S, Huynh T. Involving patients in the development of interpersonal skills of optometry students. Clin Exp Optom 2020; 103: 361–367. doi:10.1111/cxo.12939.
8. Cham KM, Gaunt H, Delany C. Pilot study: thinking outside the square in cultivating ‘Soft skills’-going beyond the standard optometric curriculum. Optom Vis Sci 2020; 97: 962–969. doi:10.1097/OPX.0000000000001594.
9. Noronha C, Lo MC, Nikiforova T et al. Telehealth competencies in medical education: new frontiers in faculty development and learner assessments. J Gen Intern Med 2022; 37: 3168–3173. doi:10.1007/s11606-022-07564-8.
10. Massie J, Block SS, Morjaria P. The role of optometry in the delivery of eye care via telehealth: a systematic literature review. Telemed J E Health 2022; 28: 1753–1763. doi:10.1089/tmj.2021.0537.
11. Clever SL, Dudas RA, Solomon BS et al. Medical student and faculty perceptions of volunteer outpatients versus simulated patients in communication skills training. Acad Med 2011; 86: 1437–1442. doi:10.1097/ACM.0b013e3182305bc0.
12. Haq I, Fuller J, Dacre J. The use of patient partners with back pain to teach undergraduate medical students. Rheumatology (Oxford) 2006; 45: 430–434. doi:10.1093/rheumatology/kei167.
13. Helfer RE, Black MA, Teitelbaum H. A comparison of pediatric interviewing skills using real and simulated mothers. Pediatrics 1975; 55: 397–400. doi:10.1542/peds.55.3.397.
14. Tattersall RL. The expert patient: a new approach to chronic disease management for the twenty-first century. Clin Med (Lond) 2002; 2: 227–229. doi:10.7861/clinmedicine.2-3-227.
15. Bokken L, Rethans JJ, Jobsis Q et al. Instructiveness of real patients and simulated patients in undergraduate medical education: a randomized experiment. Acad Med 2010; 85: 148–154. doi:10.1097/ACM.0b013e3181c48130.
16. Baines R, Regan de Bere S, Stevens S et al. The impact of patient feedback on the medical performance of qualified doctors: a systematic review. BMC Med Educ 2018; 18: 173. doi:10.1186/s12909-018-1277-0.
17. Papanagnou D, Klein MR, Zhang XC et al. Developing standardized patient-based cases for communication training: lessons learned from training residents to communicate diagnostic uncertainty. Adv Simul (Lond) 2021; 6: 26. doi:10.1186/s41077-021-00176-y.
18. Greco M, Cavanagh M, Brownlea A et al. The Doctors’ Interpersonal Skills Questionnaire (DISQ): a validated instrument for use in GP training. Educ Gen Pract 1999; 10: 256–264.
19. Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics 1989; 45: 255–268. doi:10.2307/2532051.
20. Sanchez MM, Binkowitz BS. Guidelines for measurement validation in clinical trial design. J Biopharm Stat 1999; 9: 417–438. doi:10.1081/BIP-100101185.
21. Altman DG. Practical statistics for medical research. London: Chapman and Hall; 1991.
22. Reddy ST, Zegarek MH, Fromme HB et al. Barriers and facilitators to effective feedback: a qualitative analysis of data from multispecialty resident focus groups. J Grad Med Educ 2015; 7: 214–219. doi:10.4300/JGME-D-14-00461.1.
23. Anderson HA, Young J, Marrelli D et al. Training students with patient actors improves communication: a pilot study. Optom Vis Sci 2014; 91: 121–128. doi:10.1097/OPX.0000000000000112.
24. Howley LD, Wilson WG. Direct observation of students during clerkship rotations: a multiyear descriptive study. Acad Med 2004; 79: 276–280. doi:10.1097/00001888-200403000-00017.
25. Millis SR, Jain SS, Eyles M et al. Assessing physicians’ interpersonal skills: do patients and physicians see eye-to-eye? Am J Phys Med Rehabil 2002; 81: 946–951. doi:10.1097/00002060-200212000-00011.
26. Casabianca AB, Berger JS, Papadimos TJ et al. The effect of previous resident interactions on the assessment of interpersonal and communication skills by teaching faculty: are we the best evaluators? J Educ Perioper Med 2015; 17: E001.
27. Donnon T, Al Ansari A, Al Alawi S et al. The reliability, validity, and feasibility of multisource feedback physician assessment: a systematic review. Acad Med 2014; 89: 511–516. doi:10.1097/ACM.0000000000000147.
28. Stevens S, Read J, Baines R et al. Validation of multisource feedback in assessing medical performance: a systematic review. J Contin Educ Health Prof 2018; 38: 262–268. doi:10.1097/CEH.0000000000000219.
29. Chandler N, Henderson G, Park B et al. Use of a 360-degree evaluation in the outpatient setting: the usefulness of nurse, faculty, patient/family, and resident self-evaluation. J Grad Med Educ 2010; 2: 430–434. doi:10.4300/JGME-D-10-00013.1.
30. Holmboe ES, Iobst WF. Accreditation council of graduate medical education assessment guidebook; 2020 [accessed 2022 Jun 29]. https://www.acgme.org/globalassets/PDFs/Milestones/Guidebooks/AssessmentGuidebook.pdf.
31. Cooper C, Mira M. Who should assess medical students’ communication skills: their academic teachers or their patients? Med Educ 1998; 32: 419–421. doi:10.1046/j.1365-2923.1998.00223.x.
32. Paradis E, Sutkin G. Beyond a good story: from Hawthorne effect to reactivity in health professions education research. Med Educ 2017; 51: 31–39. doi:10.1111/medu.13122.
33. Brinkman WB, Geraghty SR, Lanphear BP et al. Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Arch Pediatr Adolesc Med 2007; 161: 44–49. doi:10.1001/archpedi.161.1.44.
34. Kim KH. Baccalaureate nursing students’ experiences of anxiety producing situations in the clinical setting. Contemp Nurse 2003; 14: 145–155. doi:10.5172/conu.14.2.145.
35. Kleehammer K, Hart AL, Keck JF. Nursing students’ perceptions of anxiety-producing situations in the clinical setting. J Nurs Educ 1990; 29: 183–187. doi:10.3928/0148-4834-19900401-10.
36. Geoffroy PA, Delyon J, Strullu M et al. Standardized patients or conventional lecture for teaching communication skills to undergraduate medical students: a randomized controlled study. Psychiatry Investig 2020; 17: 299–305. doi:10.30773/pi.2019.0258.
37. Levenkron JC, Greenland P, Bowley N. Teaching risk-factor counseling skills: a comparison of two instructional methods. Am J Prev Med 1990; 6: 29–34.
38. Murphy KR, Balzer WK, Lockhart MC. Effects of previous performance on evaluations of present performance. J Appl Psych 1985; 70: 72–84. doi:10.1037/0021-9010.70.1.72.
39. Wouda JC, van de Wiel HB. The communication competency of medical students, residents and consultants. Patient Educ Couns 2012; 86: 57–62. doi:10.1016/j.pec.2011.03.011.
40. Fallowfield L, Jenkins V, Farewell V et al. Enduring impact of communication skills training: results of a 12-month follow-up. Br J Cancer 2003; 89: 1445–1449. doi:10.1038/sj.bjc.6601309.
41. Marino R, Merolli M, Capurro D. Information technology self-efficacy and confidence amongst health professions students enrolling in a telehealth educational course. In: eTelemed 2022: The Fourteenth Conference on eHealth, Telemedicine, and Social Medicine. Porto, Portugal: IARIA; 2022. p. 33–37.