Research Articles

How much is too much? Medical students’ perceptions of evaluation and research requests, and suggestions for practice


Abstract

A combination of institutional requirements and research activity means health professions students receive many requests to complete evaluations or participate in research. This study aimed to understand pre-clinical medical students’ perceptions of and attitudes towards these requests. A prospective audit of evaluation and research requests to students during 2022 was undertaken to identify the volume and frequency of requests, and to inform survey development. The online survey included questions about request frequency, volume, and timing. Two student cohorts received 42 and 34 evaluation plus 8 and 10 research requests, respectively. Responses (n = 167, aggregate response rate 28%) showed 70% felt they received too many evaluation requests, 76% indicated evaluation request volume should be limited, and 30% indicated receiving evaluation requests was ‘a little stressful’. Students indicated reasonable evaluation and research request frequency was 1–2 per month, with yearly maximums of 10–19 evaluation and 10 research requests. Students preferred receiving requests at the start of semesters; examination and holiday times were least preferred. Findings indicate students feel over-evaluated with the current evaluation schedule; students suggested the Medical School should regulate requests. The current research schedule was viewed as reasonable. Outcomes may guide institutional practice around request delivery.

Introduction

Students undertaking tertiary education are often asked to volunteer their time for activities that are not part of their formal course requirements. They may be asked to evaluate their experience of course content and the quality of teaching (Schiekirka et al. 2012; Flodén 2017; Constantinou and Wijnen-Meijer 2022), or to participate in research projects (Christakis 1985; Forester and McWhorter 2005; Boileau, Patenaude, and St-Onge 2018; Devine et al. 2019). Medical students are a frequent population of interest for evaluation and research requests because they are an easily accessible cohort (Christakis 1985; Dyrbye et al. 2007; Constantinou and Wijnen-Meijer 2022) enrolled in a course subject to frequent examination and quality-control measures, reflecting the high standards associated with delivering and maintaining a medical degree (Blouin et al. 2018).

Promoting meaningful student engagement with evaluation and research requests is important to make such interactions purposeful. Various strategies have been proposed for ensuring good response rates when asking students to undertake evaluations, including providing time within class to complete the evaluations (Kuch and Roberts 2019; Joubert, Steinberg, and van der Merwe 2022), and providing feedback to ‘close the loop’ once data have been analysed (Spencer and Schmelkin 2002). Studies also indicate that students overburdened with evaluation requests tend to participate less (Spooren and Van Loon 2012; Constantinou and Wijnen-Meijer 2022). What remains unclear from these studies is: how many is too many?

Few studies have examined student response rates alongside actual data on evaluation request volume. Adams and Umbach (2012) showed that response rates for student evaluations of teaching (SETs) decreased over sequential semesters if students were administered 11 or more SETs per semester, but previously published ‘guidelines’ and ‘recommendations’ around SET use contain little information regarding request volume, frequency, or student perspectives on these variables (Oermann et al. 2018; Kreitzer and Sweet-Cushman 2022). Regarding research requests, studies have indicated student support for being participants in research (Forester and McWhorter 2005; Roberts and Allen 2013). While medical students have described a professional responsibility to participate in research (Devine et al. 2019), there are indications that some feel they are approached ‘too frequently’ to be participants in research projects (Joubert, Steinberg, and van der Merwe 2022), and that scheduling of approaches should be considered (Joubert, Steinberg, and van der Merwe 2019).

Despite the observation that ‘too many’ evaluations decrease engagement, no studies appear to have explored what frequency or volume of evaluation requests is viewed as reasonable by medical students. Similarly, no empirical evidence indicates what frequency or volume of research requests is reasonable, an important consideration when medical students are viewed as a ‘vulnerable’ population of research participants (Maggio et al. 2018; Joubert, Steinberg, and van der Merwe 2019; Fisher and Rothwell 2022) and many already experience demanding workloads (Sarpel et al. 2013). Further, it is unknown whether or how research requests and evaluation requests may be viewed differently by the same cohort of students, or how students may react to these requests. The aim of this project was therefore to explore how students perceive and experience evaluation and research requests in the early stages of a medical course. Audits of evaluation and research requests informed the development of a cross-sectional survey, so that information on student attitudes and perspectives was grounded in their ‘lived experience’ of request frequency and volume. The outcomes will assist in determining how to appropriately deliver evaluation and research requests to early-course medical students (Kite, Subedi, and Bryant-Lees 2015; Fisher and Rothwell 2022), and may be useful in guiding such processes in other tertiary education areas.

Methods

Study location and description

The University of Otago is a research-intensive tertiary institution located in Dunedin, New Zealand. The Bachelor of Medicine and Bachelor of Surgery (MB ChB) degree is a six-year undergraduate course with 300 students per cohort, located on the same campus for the first two (pre-clinical) years. Students accepted into this course enter at year two of the programme. This study focused on medical students in the pre-clinical years two and three.

The University of Otago academic year runs congruently with the calendar year (i.e. January to December), and the MB ChB programme for pre-clinical students comprises two semesters of 17 and 13 weeks in length, respectively, separated by a two-week holiday period. A one-week mid-semester break is included within each semester, and one week of pre-examination study falls at the end of semester two. A typical teaching week for years 2 and 3 comprises contact time of approximately 10 lecture hours and 12 hours of small-group tutorial or laboratory work. In addition to personal study time, task-oriented independent (individual or group) learning comprises 6–10 hours per week. During 2022, the majority of the curriculum was delivered face-to-face in semester two, whilst the first half of semester one was affected by the COVID-19 pandemic and teaching was mostly online. Assessment within the programme consists of multiple low-stakes events throughout the year and higher-stakes assessments at the end of the year, with the overall delivery of assessment encompassing the principles that underpin programmatic assessment (van der Vleuten et al. 2012).

Three forms of data collection were involved in this study: a survey, a retrospective audit of research ethics approvals, and a prospective audit of evaluation and research requests. Audit data allowed for benchmarking of the evaluation and research requests students receive, facilitating interpretation of results against the reported student experience, and informed the development of the survey. A retrospective audit of evaluation requests was not possible as mechanisms to extract these data do not exist.

Survey

Ethical approval for the survey was acquired (University of Otago Human Ethics Committee; reference D21/400). An electronic survey link was distributed to the study cohort via student email. Consent was assumed via engagement with the electronic study hyperlink and survey platform. The survey was open for two weeks, with a reminder sent after one week. In addition to the email invitation, students were encouraged to participate via word of mouth and social media platforms. The online survey comprised multiple-choice, Likert-type, and free-text questions. Limited demographic data were acquired (e.g. self-identified gender and admission pathway). Survey responses were anonymous.

Survey data were analysed for differences between student perceptions of research and evaluation requests in relation to volume and frequency, and for student recommendations about timing, volume, and frequency. Statistical analyses included t-tests for continuous data, and Chi-square or Wilcoxon rank-sum tests for categorical variables; significance was set at p < 0.05. Free-text responses to the question ‘Are there any other comments you would like to make about evaluation and/or research requests?’ were analysed by thematic analysis to explore patterns within the data (Braun and Clarke 2006).

Prospective audit data—evaluation and research requests

An audit of the evaluation and research requests students received from January to December 2022 was performed by a year 2 student (author WG) and a year 3 student (author JT). Records included request type (evaluation or research), date of request, how the request was delivered, and the estimated duration to complete the request to an adequate standard.

Retrospective audit data—research ethics approvals

Research requests that medical students receive must be granted ethical approval by the University of Otago Ethics Committee. A retrospective audit of the titles and home departments of all University of Otago ethics approvals from 2017 to 2021 was performed to identify the potential number of research requests medical students may have received in previous years. Research was deemed ‘likely’ to include medical students if the project title included the term ‘medical student’.

Results

Survey—demographic information and response rate

The aggregate response rate was 28% (year 2, n = 76, 26%; year 3, n = 91, 31%). In general, item response distributions showed no significant difference between years 2 and 3, and unless otherwise stated, data for both years were combined. Respondents’ gender (p = 0.76) and entry pathway to medical school (p = 0.43) were representative of the cohort.

Survey—evaluation and research request frequency and volume

Evaluation request frequency for years 2 and 3 differed across the year and peaked at different times (). Both year 2 (50%) and year 3 (51%) students indicated that at these peaks the frequency of evaluations was too high, but was otherwise reasonable throughout the year. By comparison, averaged across both years, 33% of students indicated the overall frequency was reasonable and 16% that it was not. Correlating the frequency data (year 2 peak months March–June, year 3 peak months June–September; an average of six requests per month) with the perception data indicates that 50% of students consider more than five evaluation requests per month too high.

Table 1. Prospective audit data for all evaluation and research requests, and mode of request, for year 2 and 3 medical students at the University of Otago during 2022.

The current research request volume was viewed as reasonable by 96% of students, as was the frequency (96%). An enforced limit on request frequency was favoured more for evaluations than for research (69% versus 31%; p < 0.001). Of those who thought the frequency of evaluation or research requests should be limited, 71% indicated the frequency of evaluations should be no more than one per fortnight, whilst for research requests 84% suggested no more than one to two per month was reasonable.

More students thought the Medical School should limit the number of evaluation requests than the number of research requests (76% versus 32%; p < 0.001). Of those who thought the number of evaluation or research requests should be limited, approximately equal numbers thought up to 10 (41%) or between 10 and 19 (50%) evaluation requests were reasonable across the year, whilst for research up to 10 requests were rated as reasonable by 48% of respondents.

When asked to rank when they would prefer to receive requests, students most favoured early in either semester or the end of semester one, with the end of semester two and examination periods least favoured.

Survey—experiences and impressions of evaluation and research requests

Students consistently viewed research requests differently to evaluation requests. They were more likely to engage with research requests (44%) than evaluation requests (27%), whilst 23% stated they had no real interest in engaging with evaluation requests compared to 6% for research requests.

More students felt they received too many evaluation requests (70%) than too many research requests (12%; p < 0.001), and fewer students rated the number of evaluation requests as reasonable compared to the number of research requests (36% versus 85%; p < 0.001). Students were more likely to rate their first reaction to receiving an evaluation request as stressful (31%) than their reaction to receiving a research request (20%; p = 0.02).

Survey—free text responses

There were 19 year 2 and 21 year 3 free-text responses; responses from both years were pooled for analysis. The dominant theme, associated with the majority of comments, centred on improving the delivery of evaluations, including suggestions to promote engagement with or delivery of the evaluations (Table 2). This included comments on evaluation length, delivery, and timing. Other minor themes included facilitating access to research opportunities, and the negative personal impact of receiving evaluations, including guilt and stress.

Table 2. Examples of free-text comments and associated themes from year 2 and 3 students to the question ‘Are there any other comments you would like to make about evaluation and/or research requests?’.

Prospective and retrospective audit data—evaluation and research requests

During 2022, year 2 students received 42 evaluation and 8 research requests, while year 3 students received 34 evaluation and 10 research requests (Table 1). Requests were delivered to students via multiple modes; student email was the most prevalent for both evaluation and research requests (Table 1), with social media invitations least frequently used.

The estimated duration to complete all evaluation requests was 290 minutes (year 2) and 235 minutes (year 3), with an average of 6.9 minutes per request (range 5–25 minutes). The estimated duration to complete all research requests was 220 minutes (year 2) and 200 minutes (year 3), with an average of 23.3 minutes per request (range 5–70 minutes).

From 2017 to 2021 there were 2950 University of Otago ethics approvals for research projects. On average, 10.8 approvals per year ‘likely’ included medical students as participants (Table 3). These data indicate that the volume of research requests in 2022 was likely consistent with previous years.

Table 3. Retrospective audit data of University of Otago research ethics approvals to identify projects where year two and three medical students may have been participants.

Discussion

This study examined pre-clinical medical students’ perceptions of the volume and frequency of evaluation and research requests, finding that many feel over-evaluated and view the current delivery of evaluation requests as unreasonable. Conversely, existing trends around research requests were mostly viewed as reasonable. Interestingly, some students find the receipt of these requests stressful, and many suggested the Medical School needs to regulate how evaluation and research requests are delivered. Students suggested volumes and frequencies of requests they regard as reasonable, as well as indicating preferences for when and how these should be delivered throughout the year.

Student perspectives on evaluation requests

While the use of evaluations to gauge faculty performance continues to be debated (Deaker, Stein, and Spiller 2016; Hornstein 2017), they are accepted by staff despite their limitations (Stein et al. 2013) and remain ubiquitous within the tertiary environment for purposes such as assessing teaching performance (Flodén 2017) and undertaking course evaluation (Spooren and Van Loon 2012). However, getting students to engage with evaluation requests remains an issue, and a lack of student participation may adversely influence the validity of the information gathered. Leaving students to identify which requests to engage with is currently the status quo, and global observations of poor student response rates to evaluation requests (Adams and Umbach 2012; Spooren and Van Loon 2012; Constantinou and Wijnen-Meijer 2022) suggest change is required to increase participation. While students see the necessity of engaging with evaluations (Kite, Subedi, and Bryant-Lees 2015), there are many reasons why they choose not to, including perceptions around whether responses lead to subsequent improvement in teaching or the course (Spencer and Schmelkin 2002; Spooren and Christiaens 2017; Monzón, Suárez, and Paredes 2022), the mode of evaluation delivery (e.g. online) (Plante, Lesage, and Kay 2022), student characteristics (Spooren and Van Loon 2012), lack of knowledge around how responses are used (Stein et al. 2021), and a lack of time (Adams and Umbach 2012; Kite, Subedi, and Bryant-Lees 2015; Stein et al. 2021). Responses in this study are congruent with these previous findings, including reference to the number of requests, timing, and mode of delivery, with email requests singled out as problematic despite reports of some students preferring this mode of delivery (Schiekirka et al. 2012).

Yet despite the volume of research undertaken around student engagement with evaluations, few studies have explored how the volume or frequency of evaluation requests is perceived by students. A single study (Adams and Umbach 2012) showed that student response rates to evaluations drop off when 11 or more are delivered per semester, and findings from this current study indicate a similarly pessimistic trend: the majority of students (63%) perceived 30 to 40 evaluations per year as not reasonable. At the present volume and frequency, 23% of medical students indicated they had no real interest in engaging with evaluation requests, and 70% believed they receive too many. These data agree with others indicating that students overburdened with evaluation requests tend to participate less (Spooren and Van Loon 2012).

How can evaluations be delivered in a manner that is in keeping with student perspectives? Interestingly, no empirical evidence exists to illuminate the student voice around evaluation request frequency. Here, many suggested the Medical School should limit evaluation numbers to between 10 and 19 requests a year; students also suggested that a frequency of no more than one request a fortnight was reasonable, and indicated that the peak evaluation request frequencies they experienced (different between years 2 and 3; on average more than five per month) were too high ().

An interesting aspect of the data is that some students (31%) indicated receiving these requests was stressful, a finding supported by free-text responses. In an environment in which the contribution of medical schools to adversely influencing student mental health remains under scrutiny (Smith et al. 2007; D’Eon et al. 2021), tertiary institutions must remain mindful of perceived stressors (Slavin, Schindler, and Chibnall 2014; Slavin 2016), and this finding requires further exploration to determine how such stress manifests and could be mitigated. Trialling information sessions for students at the start of a course, to raise awareness of evaluation and research requests and any responsibilities around participation, may help alleviate uncertainty and potentially decrease any stress experienced on receiving them.

Student perspectives on research requests

Students are often willing to be participants in research (Brewer and Robinson 2018; Joubert, Steinberg, and van der Merwe 2022), and may be motivated by the potential benefits of participation (Roberts and Allen 2013), while the time required will also affect participation (Sarpel et al. 2013; Devine et al. 2019). In this study, free-text responses suggest interest in receiving research requests, highlighting that some students are genuinely interested in participating in research, a response also noted elsewhere (Fisher and Rothwell 2022).

Findings indicate that the majority of students (85%) felt they were receiving a reasonable number of research requests throughout the year (roughly one per month), and only 20% of students found receiving these requests stressful. Does this indicate increased attention should be paid to how students receive requests? Arguments for overseeing research requests include medical students being a vulnerable population (Ridley 2009; Bradbury-Jones et al. 2011; Walsh 2014; Maggio et al. 2018; Joubert, Steinberg, and van der Merwe 2019) that has the potential to be viewed as under pressure to participate (Bartholomay and Sifers 2016; Joubert, Steinberg, and van der Merwe 2022) or to experience ‘undue influence’ (Callahan, Hojat, and Gonnella 2007; van den Broek, Wouters, and van Delden 2015), despite some students not perceiving themselves this way (Sarpel et al. 2013). Yet despite the widespread identification of students as ‘vulnerable’ in their role as research subjects, no guidelines or data exist to indicate what frequency or volume of research requests is reasonable, even with the suggestion that scheduling of approaches needs to be considered (Joubert, Steinberg, and van der Merwe 2019, 2022).

Approximately one third of students thought the Medical School should limit the number of research requests to no more than ten per year, and that frequency should be limited to one to two requests per month. However, 85% suggested the volume (between eight and ten per year) and frequency () they experienced was reasonable, indicating that students are not concerned about the current volume or frequency of research requests. Therefore, to decrease the negative impact of receiving research requests (Fisher and Rothwell 2022), a frequency of one to two research requests per month, with a maximum of ten requests per year, can be recommended for similar cohorts of students. The indication that some students are stressed by receiving these requests requires further exploration to see how this may inform request delivery.

Are evaluation and research requests viewed differently by students?

It is clear that decision-making by students around engaging with both research (Fisher and Rothwell 2022) and evaluation (Hoel and Dahl 2019) requests is multifactorial. Certainly, timing of requests likely contributes (Sarpel et al. 2013; Kite, Subedi, and Bryant-Lees 2015), and participants indicated that academically quieter periods of the year were preferred for receiving requests. This seems logical, as students may not feel burdened or overloaded in these periods, and fits with previous suggestions around avoiding busy times (Spooren and Van Loon 2012; Fisher and Rothwell 2022). However, other factors likely contribute to student perspectives around receiving requests.

Students consistently indicated more negative views on engaging with evaluation requests, with almost a quarter indicating they had no real interest in engaging with them. Conversely, students preferred to engage with research requests. Request volume and frequency appear likely contributors to these perceptions: students rated the current number of evaluation requests as too many and unreasonable compared to research requests, despite the total time needed to complete all evaluation or all research requests being similar (290 or 235 versus 220 or 200 minutes, respectively, for the two student cohorts). Consistent with this, students also more strongly favoured the Medical School regulating the number and frequency of evaluation requests, despite the average time to complete a research request (23.3 minutes) being much longer than for an evaluation request (6.9 minutes). Together, these observations support the notion that request frequency and volume, rather than total time burden, drive perceptions of reasonableness. Currently, at this institution, the volume of evaluation and research requests to students is not restricted.

Findings also raise important questions and challenges. Students were significantly more likely to rate the receipt of evaluation requests as stressful than the receipt of research requests, underscoring an important and novel observation around student reactions. How this ‘stress’ should be interpreted is unclear, and requires further exploration in the context of student health.

There is little available evidence to indicate whether the effects on students of these two different types of request should be viewed concurrently or separately; however, at face value these data suggest decisions around engaging with either type of request may be distinctly different. Comparison between evaluation and research requests in this study is hindered by the disparity between the numbers of each, though findings indicate volume and frequency as likely metrics that influence student perspectives, with student stress, spare time, and assessment proximity all being factors that may affect perceptions around request delivery. Furthermore, inter-institutional differences, with variation in and across these variables, including different ethnic, cultural, and social influences, may also affect student perceptions. These data are, however, a useful benchmark in an area where few currently exist, and may serve as a convenient reference point for justifying the regulation of requests.

Suggestions for good practice in similar student cohorts

In light of these findings, further areas that may improve the practice of request delivery are presented. There have been calls for international standards for research involving students (Leentjens and Levenson 2013) and for further guidance around the use of evaluations (Turner, Hatton, and Valiga 2018), and perhaps within such standards the frequency and volume of requests should be considered. Based on the current data, for students experiencing similar coursework, load, and environmental stressors, the following may guide good practice around evaluation and research requests:

  • a frequency of evaluation requests above five per month is not recommended,

  • the frequency of evaluation requests should be limited to one per fortnight,

  • a reasonable number of evaluation requests is no more than 10 to 19 per year,

  • the frequency of research requests should be no more than one per fortnight,

  • a reasonable number of research requests is no more than 10 per year.

Email approaches are problematic, with students suggesting these are easily overlooked, and other solutions are required to increase engagement. Requests should not be timed during busy or stressful periods of the year (Spooren and Van Loon 2012; Joubert, Steinberg, and van der Merwe 2022). Regardless of any decision to limit requests, as good practice a register of requests should be maintained and monitored, to evaluate the volume and frequency of requests in the context of the extant learning environment.

Limitations

The students who collected the prospective audit data (authors WG, JT) were assumed to have received a similar number of requests to other students in their cohort. However, some requests may have been sent only to specific students, meaning prospective audit data may be slightly over- or underestimated. The retrospective audit was limited to University of Otago ethics applications, and so may not have included requests from sources external to the University; however, anecdotally, very few requests for medical student involvement are of this type. It was also not possible to retrospectively audit evaluation requests, as no record or screening of evaluation requests exists, meaning it is unclear whether the audited year is wholly representative of other years. There is also the potential for non-response bias (Plante, Lesage, and Kay 2022), a consideration for studies where response rates may be modest.

Conclusion

This work is the first to explore the impact, perceptions, and attitudes around evaluation and research requests in the same cohort of health professions students. It supplements and extends understanding of evaluation and research requests to students, exploring situational perspectives based on extant audit data to illuminate differences in how students view the two types of request. Data indicate that yearly volumes of 30 to 40 evaluation requests, and evaluation frequencies of more than five per month, are viewed by students as too high and not reasonable. Student-nominated ‘reasonable’ parameters include no more than 19 evaluation or 10 research requests per year, at a frequency of no more than one per fortnight for either. Students indicated that email requests can be problematic, and preferred request delivery in academically quieter periods of the year. Interestingly, some students indicate their first reaction to receiving requests can be stressful, suggesting that institutions should be aware of how many and what type of requests students receive. Findings may provide a useful benchmark and inform institutional guidelines around request delivery, in concert with other data around programme delivery, though how transferable these findings are to other programmes, institutions, or student cohorts is not yet clear. Data have shown that evaluation and research requests are viewed differently, but not exactly why, and further studies are required to elucidate how these two types of request affect student perceptions.

Disclosure statement

The authors declare no potential conflicts of interest.

References

  • Adams, M. J. D., and P. D. Umbach. 2012. “Nonresponse and Online Student Evaluations of Teaching: Understanding the Influence of Salience, Fatigue, and Academic Environments.” Research in Higher Education 53 (5): 576–591. doi:10.1007/s11162-011-9240-5.
  • Bartholomay, E., and S. Sifers. 2016. “Student Perception of Pressure in Faculty-Led Research.” Learning and Individual Differences 50: 302–307. doi:10.1016/j.lindif.2016.08.025.
  • Blouin, D., A. Tekian, C. Kamin, and I. B. Harris. 2018. “The Impact of Accreditation on Medical Schools’ Processes.” Medical Education 52 (2): 182–191. doi:10.1111/medu.13461.
  • Boileau, E., J. Patenaude, and C. St-Onge. 2018. “Twelve Tips to Avoid Ethical Pitfalls When Recruiting Students as Subjects in Medical Education Research.” Medical Teacher 40 (1): 20–25. doi:10.1080/0142159X.2017.1357805.
  • Bradbury-Jones, C., S. Stewart, F. Irvine, and S. Sambrook. 2011. “Nursing Students’ Experiences of Being a Research Participant: Findings from a Longitudinal Study.” Nurse Education Today 31 (1): 107–111. doi:10.1016/j.nedt.2010.04.006.
  • Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. doi:10.1191/1478088706qp063oa.
  • Brewer, G., and S. Robinson. 2018. “‘I Like Being a Lab Rat’: Student Experiences of Research Participation.” Journal of Further and Higher Education 42 (7): 986–997. doi:10.1080/0309877X.2017.1332357.
  • Callahan, A., M. Hojat, and J. Gonnella. 2007. “Volunteer Bias in Medical Education Research: An Empirical Study of over Three Decades of Longitudinal Data.” Medical Education 41 (8): 746–753. doi:10.1111/j.1365-2923.2007.02803.x.
  • Christakis, N. 1985. “Do Medical Student Research Subjects Need Special Protection?” IRB 7 (3): 1–4. doi:10.2307/3563627.
  • Constantinou, C., and M. Wijnen-Meijer. 2022. “Student Evaluations of Teaching and the Development of a Comprehensive Measure of Teaching Effectiveness for Medical Schools.” BMC Medical Education 22 (1): 113–114. doi:10.1186/s12909-022-03148-6.
  • D’Eon, M., G. Thompson, A. Stacey, J. Campoli, K. Riou, M. Andersen, and N. Koehncke. 2021. “The Alarming Situation of Medical Student Mental Health.” Canadian Medical Education Journal 12 (3): 176–178. doi:10.36834/cmej.70693.
  • Deaker, L., S. Stein, and D. Spiller. 2016. “You Can’t Teach Me: Exploring Academic Resistance to Teaching Development.” International Journal for Academic Development 21 (4): 299–311. doi:10.1080/1360144X.2015.1129967.
  • Devine, L., S. Ginsburg, T. Stenfors, T. Cil, H. McDonald-Blumer, C. Walsh, and L. Stroud. 2019. “Professional Responsibilities and Personal Impacts: Residents’ Experiences as Participants in Education Research.” Academic Medicine: Journal of the Association of American Medical Colleges 94 (1): 115–121. doi:10.1097/ACM.0000000000002411.
  • Dyrbye, L., M. Thomas, A. Mechaber, A. Eacker, W. Harper, F. S. Massie, Jr., D. Power, and T. Shanafelt. 2007. “Medical Education Research and IRB Review: An Analysis and Comparison of the IRB Review Process at Six Institutions.” Academic Medicine: Journal of the Association of American Medical Colleges 82 (7): 654–660. doi:10.1097/ACM.0b013e318065be1e.
  • Fisher, J., and C. Rothwell. 2022. “Detached or Devotee? Medical Students and Research.” The Clinical Teacher 19 (2): 92–99. doi:10.1111/tct.13470.
  • Flodén, J. 2017. “The Impact of Student Feedback on Teaching in Higher Education.” Assessment & Evaluation in Higher Education 42 (7): 1054–1068. doi:10.1080/02602938.2016.1224997.
  • Forester, P., and D. McWhorter. 2005. “Medical Students’ Perceptions of Medical Education Research and Their Roles as Participants.” Academic Medicine 80 (8): 780–785. https://journals.lww.com/academicmedicine/fulltext/2005/08000/medical_students__perceptions_of_medical_education.16.aspx.
  • Hoel, A., and T. I. Dahl. 2019. “Why Bother? Student Motivation to Participate in Student Evaluations of Teaching.” Assessment & Evaluation in Higher Education 44 (3): 361–378. doi:10.1080/02602938.2018.1511969.
  • Hornstein, H. 2017. “Student Evaluations of Teaching Are an Inadequate Assessment Tool for Evaluating Faculty Performance.” Cogent Education 4 (1): 1304016–1304018. doi:10.1080/2331186X.2017.1304016.
  • Joubert, G., W. Steinberg, and L. van der Merwe. 2019. “The Selection and Inclusion of Students as Research Participants in Undergraduate Medical Student Projects at the School of Medicine, University of the Free State, Bloemfontein, South Africa, 2002–2017: An Ethical Perspective.” African Journal of Health Professions Education 11 (2): 57–62. doi:10.7196/AJHPE.2019.v11i2.1081.
  • Joubert, G., W. Steinberg, and L. van der Merwe. 2022. “Medical Students as Research Participants: Student Experiences, Questionnaire Response Rates and Preferred Modes.” African Journal of Health Professions Education 14 (3): 106–110. doi:10.7196/AJHPE.2022.v14i3.1588.
  • Kite, M., P. C. Subedi, and K. Bryant-Lees. 2015. “Students’ Perceptions of the Teaching Evaluation Process.” Teaching of Psychology 42 (4): 307–314. doi:10.1177/0098628315603062.
  • Kreitzer, R. J., and J. Sweet-Cushman. 2022. “Evaluating Student Evaluations of Teaching: A Review of Measurement and Equity Bias in SETs and Recommendations for Ethical Reform.” Journal of Academic Ethics 20 (1): 73–84. doi:10.1007/s10805-021-09400-w.
  • Kuch, F., and R. Roberts. 2019. “Electronic in-Class Course Evaluations and Future Directions.” Assessment & Evaluation in Higher Education 44 (5): 726–731. doi:10.1080/02602938.2018.1532491.
  • Leentjens, A., and J. Levenson. 2013. “Ethical Issues Concerning the Recruitment of University Students as Research Subjects.” Journal of Psychosomatic Research 75 (4): 394–398. doi:10.1016/j.jpsychores.2013.03.007.
  • Maggio, L., A. Artino, K. Picho, and E. Driessen. 2018. “Are You Sure You Want to Do That? Fostering the Responsible Conduct of Medical Education Research.” Academic Medicine: Journal of the Association of American Medical Colleges 93 (4): 544–549. doi:10.1097/ACM.0000000000001805.
  • Monzón, N., V. Suárez, and D. Paredes. 2022. “Is My Opinion Important in Evaluating Lecturers? Students’ Perceptions of Student Evaluations of Teaching (SET) and Their Relationship to SET Scores.” Educational Research and Evaluation 27 (1–2): 117–140. doi:10.1080/13803611.2021.2022318.
  • Oermann, M. H., J. L. Conklin, S. Rushton, and M. A. Bush. 2018. “Student Evaluations of Teaching (SET): Guidelines for Their Use.” Nursing Forum 53 (3): 280–285. doi:10.1111/nuf.12249.
  • Plante, S., A. Lesage, and R. Kay. 2022. “Examining Online Course Evaluations and the Quality of Student Feedback: A Review of the Literature.” Journal of Educational Informatics 3 (1): 21–31. doi:10.51357/jei.v3i1.182.
  • Ridley, R. 2009. “Assuring Ethical Treatment of Students as Research Participants.” The Journal of Nursing Education 48 (10): 537–541. doi:10.3928/01484834-20090610-08.
  • Roberts, L., and P. Allen. 2013. “A Brief Measure of Student Perceptions of the Educational Value of Research Participation.” Australian Journal of Psychology 65 (1): 22–29. doi:10.1111/ajpy.12007.
  • Sarpel, U., M. Hopkins, F. More, S. Yavner, M. Pusic, M. Nick, H. Song, R. Ellaway, and A. Kalet. 2013. “Medical Students as Human Subjects in Educational Research.” Medical Education Online 18 (1): 1–6. doi:10.3402/meo.v18i0.19524.
  • Schiekirka, S., D. Reinhardt, S. Heim, G. Fabry, T. Pukrop, S. Anders, and T. Raupach. 2012. “Student Perceptions of Evaluation in Undergraduate Medical Education: A Qualitative Study from One Medical School.” BMC Medical Education 12: 45–47. doi:10.1186/1472-6920-12-45.
  • Slavin, S. J. 2016. “Medical Student Mental Health: Culture, Environment, and the Need for Change.” JAMA 316 (21): 2195–2196. doi:10.1001/jama.2016.16396.
  • Slavin, S. J., D. L. Schindler, and J. T. Chibnall. 2014. “Medical Student Mental Health 3.0: Improving Student Wellness through Curricular Changes.” Academic Medicine: Journal of the Association of American Medical Colleges 89 (4): 573–577. doi:10.1097/ACM.0000000000000166.
  • Smith, C. K., D. F. Peterson, B. F. Degenhardt, and J. C. Johnson. 2007. “Depression, Anxiety, and Perceived Hassles among Entering Medical Students.” Psychology, Health & Medicine 12 (1): 31–39. doi:10.1080/13548500500429387.
  • Spencer, K., and L. P. Schmelkin. 2002. “Student Perspectives on Teaching and Its Evaluation.” Assessment & Evaluation in Higher Education 27 (5): 397–409. doi:10.1080/0260293022000009285.
  • Spooren, P., and W. Christiaens. 2017. “I Liked Your Course Because I Believe in (the Power of) Student Evaluations of Teaching (SET). Students’ Perceptions of a Teaching Evaluation Process and Their Relationships with SET Scores.” Studies in Educational Evaluation 54: 43–49. doi:10.1016/j.stueduc.2016.12.003.
  • Spooren, P., and F. Van Loon. 2012. “Who Participates (Not)? A Non-Response Analysis on Students’ Evaluations of Teaching.” Procedia—Social and Behavioral Sciences 69: 990–996. doi:10.1016/j.sbspro.2012.12.025.
  • Stein, S., A. Goodchild, A. Moskal, S. Terry, and J. McDonald. 2021. “Student Perceptions of Student Evaluations: Enabling Student Voice and Meaningful Engagement.” Assessment & Evaluation in Higher Education 46 (6): 837–851. doi:10.1080/02602938.2020.1824266.
  • Stein, S. J., D. Spiller, S. Terry, T. Harris, L. Deaker, and J. Kennedy. 2013. “Tertiary Teachers and Student Evaluations: Never the Twain Shall Meet?” Assessment & Evaluation in Higher Education 38 (7): 892–904. doi:10.1080/02602938.2013.767876.
  • Turner, K. M., D. Hatton, and T. M. Valiga. 2018. “Student Evaluations of Teachers and Courses: Time to Wake up and Shake Up.” Nursing Education Perspectives 39 (3): 130–131. doi:10.1097/01.NEP.0000000000000329.
  • van den Broek, S., R. Wouters, and H. van Delden. 2015. “In Response to ‘Medical Education Research: Is Participation Fair?’” Perspectives on Medical Education 4 (3): 158–159. doi:10.1007/s40037-015-0188-6.
  • van der Vleuten, C. P. M., L. W. T. Schuwirth, E. W. Driessen, J. Dijkstra, D. Tigelaar, L. K. J. Baartman, and J. van Tartwijk. 2012. “A Model for Programmatic Assessment Fit for Purpose.” Medical Teacher 34 (3): 205–214. doi:10.3109/0142159X.2012.652239.
  • Walsh, K. 2014. “Medical Education Research: Is Participation Fair?” Perspectives on Medical Education 3 (5): 379–382. doi:10.1007/s40037-014-0120-5.