
Maximizing the value of feedback for individual facilitator and faculty development in a problem-based learning curriculum

Jacqueline van Wyk & Michelle McLean
Pages e26-e31 | Published online: 03 Jul 2009

Abstract

Background: Recruiting and retaining facilitators in problem-based learning requires considerable staff development. Providing meaningful feedback to individual facilitators should contribute to improved management of the tutorial group.

Aim: To ascertain the value ascribed by facilitators to feedback they received (based on student input) regarding their performance in the small group tutorial in a new problem-based learning curriculum.

Methods: Thirty-seven facilitators from a purposive sample, selected for their facilitation experience during the 2001–2003 period, completed a comprehensive survey regarding their experiences. The aspect currently being reported deals with the perceived usefulness of the feedback they received from students and from Faculty following the evaluation of their participation in the small group tutorial. Data are reported for medically qualified and non-medically qualified facilitators.

Results: Clinical (50%) and, more notably, non-clinical (70%) facilitators found the feedback (individual facilitator and general report) useful. Facilitators generally preferred the qualitative comments provided by students in the open-ended section of the evaluation to the Likert-scale items. Student comments were valued for the specific direction they offered facilitators to reflect and improve on their management of the small group. For this feedback to be more useful, however, facilitators believed that it needed to be completed by more students, and that students needed to engage critically with the criteria and reflect honestly on their experiences. In addition, facilitators requested that feedback reports be made available sooner, so that they could improve their facilitation skills for the next group of students.

Conclusions: Both qualitative and quantitative feedback are important for facilitator development and training. While quantitative feedback is important for summative purposes (e.g. quality assurance and promotion), individual student comments provide more formative feedback, allowing facilitators to reflect on and improve their management of the small group. In order for the feedback to be valid, the majority of students had to participate. Facilitators should receive feedback in time to allow them to modify their activities for the new group.

Introduction

A key feature of problem-based learning (PBL) is the small-group tutorial, which requires either a tutor (content expert) or a facilitator (process but not content expert) to oversee individual and group learning. Much has been documented concerning the role of the tutor/facilitator in a PBL curriculum, with debates involving issues such as the content expert vs. the non-content expert and the influence of process variables and tutor characteristics on the functioning of and learning within the small-group tutorial (Dolmans et al. 2002). Notwithstanding divergent views and evidence on the profile of an ‘ideal’ facilitator/tutor to oversee the PBL tutorial, a committed tutor/facilitator has an important role to play in student learning, as his/her participation has a direct bearing on the functioning of the small group (van Berkel & Schmidt 2000; Pinto et al. 2001; Cotterell et al. 2004; Dolmans & Ginns 2005; Jung et al. 2005). As early as 1990, causal modelling studies demonstrated that tutor characteristics were one of three main factors affecting the success of the small-group tutorial in PBL (Gijselaers & Schmidt 1990).

For PBL to be successful and sustainable, a pool of well-trained, motivated and enthusiastic PBL educators is required at the beginning of each year. For academics whose education and training are steeped in traditional epistemology, shifting from a teacher-centred to a student-centred curriculum may require a change of mindset. Since new pedagogies such as PBL require a realignment of the roles of students and staff, traditionalists may resist the change as it threatens their authority and status. There is consensus that in preparing faculty for PBL, considerable time and energy need to be devoted to training staff for their new roles as educators and facilitators (Olmesdahl 1997; Mennin & Krackov 1998; Harden 1999; Bernier et al. 2000; Bland et al. 2000; Murray & Savin-Baden 2000; Farmer 2004). Even experienced facilitators have identified a need for continuous support and training (Tremblay et al. 2001). In Bland and colleagues’ (2000) review of the literature on successful curriculum reform, human resource development (training, support and reward) ranked second only to leadership as a factor most likely to ensure success. To foster a sense of ownership of their new roles in PBL and to ensure continued staff commitment to facilitation, considerable faculty input is therefore required in terms of initial and subsequent staff development.

In order to recruit and sustain a pool of committed facilitators, the overall facilitation experience must be rewarding and facilitators need to be regularly apprised of their performance. Since facilitation of the small-group tutorial is a mainstream academic activity in a PBL curriculum, facilitator performance must be evaluated. Such evaluation serves several purposes. Not only is it important for quality assurance, but it can also contribute to academic promotion and tenure. Formatively, it may be used to improve individual facilitator and general faculty skills. To enable faculty to develop and to reward staff appropriately, this feedback therefore needs to be sufficiently informative, valid and reliable (Marsh 1987; Dolmans & Ginns 2005).

Several recent articles have made meaningful contributions to facilitator evaluation. The validated and reliable 11-item questionnaire of Dolmans and Ginns (2005, p. 536) provides a streamlined facilitator evaluation, while Cotterell and colleagues’ (2004) study, demonstrating a possible halo effect in student responses to Likert-style items in facilitator evaluation, should alert practitioners to the limitations of such quantitative methods.

The present contribution formed part of a comprehensive evaluation of facilitators following the first three years of PBL implementation (2001–2003) at a South African medical school. The overarching objective of the survey was to determine the factors that might contribute to sustaining facilitator enthusiasm and commitment to the new programme. The aspect of the survey presented here reports on the value ascribed by facilitators (often not reported in the literature, as Murray & Savin-Baden [2000] point out) to the individual (based on student evaluation) and general (a faculty newsletter) feedback reports they received from the Medical Education Unit (MEU) regarding their abilities to facilitate learning and manage the small-group tutorial. Also reported are the measures adopted by the MEU, in response to issues raised by facilitators in this survey, to maximize the benefit of this feedback for individual facilitators and for Faculty.

Methodology

Institutional setting

In January 2001, the Nelson R. Mandela School of Medicine implemented a five-year PBL curriculum. Until 2005, when a Foundation theme was introduced, students were introduced to PBL in their first year during a three-week Orientation in which they became familiar with student life and the PBL philosophy. During the last week of Orientation, small-group tutorials, guided by trained and experienced facilitators, were conducted to enable students to understand the role of their facilitator (process but not content expert), as well as the roles of the Chair and Scribe within the small group. Students were guided through an eight-step tutorial process (e.g. generating hypotheses and learning goals; evaluating the group process). They were also introduced to the library and the computer LAN, where they learnt to undertake manual and computer searches for self-directed study. This Orientation was followed by the first of the six six-week themes comprising the first academic year.

Facilitators and facilitator training

At least twice during the year, clinical and non-clinical faculty members and external professionals are recruited for facilitator training. The latter group includes current and retired clinicians, as well as medical scientists, educationists and other healthcare professionals with at least a Masters degree. During a 2–3 day workshop, they are introduced to the educational principles underpinning PBL, the learning expected of students and how to facilitate this for individual learners and groups. Trainee facilitators work through the eight-step tutorial process, where they role-play the duties of a prospective facilitator and those of students, i.e. Chair, Scribe and individual group members. Trainers and peers provide continuous formative feedback in this mock setting. As it is not possible to recruit 21 discipline experts for the ±210 students admitted to each academic year, the faculty trains process experts, i.e. facilitators, who are provided with the learning objectives for the theme and each case before the theme begins. Taking cognizance of the diversity and varied levels of educational preparedness of students entering the faculty, facilitators are tasked with the responsibility of ensuring a learning environment conducive to open discussion. They are expected to encourage critical thinking, monitor adherence to the eight-step approach to the PBL tutorial and promote self-direction in individual students as well as cooperation amongst group members.

Facilitator evaluation

At the time of the survey, theme evaluation involved each student anonymously completing two separate questionnaires, one relating to the theme in general (e.g. content, experiences, resources) and the other pertaining to their facilitator's performance. In the latter instance, students responded on a five-point Likert scale to 12 statements, based on a questionnaire designed by Dolmans and colleagues (1994), evaluating mainly facilitators’ behavioural and affective abilities. Students were asked to rate, for example, the facilitator's punctuality and ability to create a safe learning environment, and to promote individual and group learning and critical thinking. The questionnaire included an open-ended section after the Likert-scale items, in which students could provide written comments. Student responses to each item (including the open-ended section) of the evaluation form for each PBL group were analysed, and an individual facilitator report, including the verbatim comments offered by students, was provided to each facilitator within two weeks of completion of the theme (i.e. a fortnight into the next theme). A general newsletter (compiled by staff of the MEU), highlighting the main issues raised by students in the different PBL groups, accompanied this individual facilitator report.
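To make the reporting pipeline concrete, the sketch below (not part of the original study) shows one way the per-item Likert scores and verbatim comments described above might be aggregated into an individual facilitator report. The item labels, data structures and function names are illustrative assumptions; only the five-point scale, the 12-statement format and the pass-through of verbatim comments come from the text.

```python
from statistics import mean

# Illustrative item labels only; the actual instrument had 12 statements
# based on Dolmans and colleagues (1994).
ITEMS = ["punctuality", "safe_learning_environment", "promotes_critical_thinking"]

def facilitator_report(responses):
    """Aggregate one PBL group's evaluations of a facilitator.

    responses: list of dicts, one per student, e.g.
    {"ratings": {"punctuality": 4, ...}, "comment": "free text"}.
    """
    report = {"n_students": len(responses), "item_means": {}, "comments": []}
    for item in ITEMS:
        scores = [r["ratings"][item] for r in responses if item in r["ratings"]]
        report["item_means"][item] = round(mean(scores), 2) if scores else None
    # Verbatim comments are passed through unedited, as in the study.
    report["comments"] = [r["comment"] for r in responses if r.get("comment")]
    return report

group = [
    {"ratings": {"punctuality": 5, "safe_learning_environment": 4,
                 "promotes_critical_thinking": 4}, "comment": "Kept us on track."},
    {"ratings": {"punctuality": 4, "safe_learning_environment": 5,
                 "promotes_critical_thinking": 3}, "comment": ""},
]
print(facilitator_report(group))
```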

The survey

Apart from the need for ongoing quality assurance, a comprehensive survey (January 2004) of facilitators who had volunteered during the first three years (2001–2003) of the new PBL curriculum was prompted by several issues: the absence of a faculty policy defining the expected staff facilitation ‘norm’; the withdrawal of payment for faculty facilitators no longer involved with the traditional curriculum being phased out; an increasing shortage of facilitators as each year of the new programme was implemented; the increased recruitment of external facilitators to meet faculty needs; the relaxation of the qualification requirements for facilitators; and the request from students for only clinicians to serve as facilitators. Although the survey contained both closed-ended (Yes/No) and open-ended (Explain your answer) statements, it was largely a qualitative investigation, seeking to gather information on facilitator perceptions of their experiences. According to Dolmans and co-workers (2002), this type of qualitative study is generally lacking in facilitator research. The survey was piloted with MEU staff to check for ambiguity. The present contribution reports on one aspect of this comprehensive survey: facilitator perceptions of the value of the feedback they received for each theme.

Participants

At the time of the survey, 106 academic staff and private individuals had facilitated the 36 themes of the first three years (2001–2003). As the researchers wished to survey facilitators with some experience, purposive sampling excluded those who had facilitated one theme only (n = 27). Also excluded were staff from the MEU (e.g. the authors) (n = 7) and facilitators who had resigned (n = 11). Of the 61 eligible facilitators, 60.6% (n = 37) responded to the survey. Of these, faculty staff comprised ±74%. Approximately 48% of respondents were clinicians.

Data analysis of open- and closed-ended queries

When facilitator responses to the closed-ended (Yes/No) query relating to the usefulness of the feedback were analysed, a third category emerged: some facilitators indicated that the feedback was variably useful (see Table 1).

Table 1.  Facilitator perceptions of the usefulness of feedback (n = 37)

With regard to facilitator explanations for their responses (Yes/No and Sometimes useful) to this closed-ended query, one author (van Wyk) extracted the main issues raised by each facilitator. Some facilitators provided more than one explanation. Responses were then categorized into themes, which were discussed by both authors until agreement was reached (see Table 2).
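As an illustration of this tabulation, here is a minimal sketch (with hypothetical data; the study's actual counts appear in Table 1) of how the Yes/No/Sometimes responses could be tallied overall and by facilitator qualification:

```python
from collections import Counter

# Hypothetical tally of the closed-ended (Yes/No/Sometimes) responses,
# split by facilitator qualification; the real counts are in Table 1.
responses = [
    ("clinician", "Yes"), ("non-clinician", "Yes"), ("clinician", "Sometimes"),
    ("non-clinician", "Yes"), ("clinician", "No"),
    # ... one (group, answer) tuple per facilitator (n = 37 in the study)
]

overall = Counter(answer for _, answer in responses)
by_group = Counter(responses)  # counts per (qualification, answer) pair

n = len(responses)
for answer in ("Yes", "No", "Sometimes"):
    count = overall.get(answer, 0)
    print(f"{answer}: {count}/{n} ({100 * count / n:.1f}%)")
print(by_group)
```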

Data are presented for medically and non-medically qualified facilitators.

Limitation

By excluding the facilitators who had overseen one PBL group only, we may have captured only the sentiments of the committed teachers.

Results

The majority (62.2%) of facilitators found the feedback useful (Table 1). This was particularly so for non-clinicians, who made up 60.9% of this group (vs. 39.1% clinicians). Only 16.2% of the facilitators (mainly clinicians) indicated that the feedback had no value, while 21.6% found it to be variably useful.

In terms of the usefulness of the feedback, facilitators identified two main, related issues (see Table 2): individual student comments provided specific direction for improvement, and the feedback identified their shortcomings as facilitators. Other reasons offered included that facilitators could have their performance validated by students and that it was rewarding when students appreciated their efforts.

With regard to the reasons offered by facilitators for the feedback being useful or only sometimes useful, a main theme emerging from the study related to the importance of qualitative feedback, i.e. that student comments were preferred to the Likert scores. Almost 22% of facilitators indicated that individual student comments provided specific direction for improvement, and ±16% of facilitators requested more detailed written feedback.

Facilitators highlighted several shortcomings of the facilitator evaluation. Their first request was for the reports to be received earlier, before the start of the next theme, so that they could modify their facilitation skills appropriately for the next group. They also asked that students be more responsible, more constructively critical and more honest in evaluating their facilitators.

Discussion

Implementing a PBL curriculum requires considerable planning, foresight and communication. Sustaining the reform is perhaps even more difficult than the initial implementation (Mennin & Krackov 1998; Robins et al. 2000). Evaluation of different aspects of the innovation should therefore be a priority for curriculum designers, in order to ascertain the factors that may or may not be contributing to the sustainability of the reform. As facilitators are essential to the success of PBL, their perceptions of what they found rewarding form a critical component of this evaluation. The present study suggests that not only is feedback regarding their performance in the small-group PBL tutorial an important contributor to facilitators' commitment to the programme, but that the quality of that feedback also matters. While it may be necessary to collect and analyse facilitator evaluation data quantitatively for summative purposes such as quality assurance, promotion and perhaps tenure, facilitators found the qualitative component of the evaluation (i.e. individual student comments) to be more useful formatively than the Likert-scale items. These written comments identified specific shortcomings, which at least one facilitator admitted he/she was not always able to recognize unaided. For at least one facilitator (a non-clinician), this feedback provided ‘a non-threatening way of identifying shortcomings’.

Reward is motivating. Being appreciated by students undoubtedly contributes to facilitator motivation and enthusiasm, as is evidenced by one facilitator's comment: ‘It is the only reward (apart from being paid) for the work done. It is nice to know that my work is appreciated and valued.’ In line with Williams and colleagues’ (1999) theory of self-determination, the reward of receiving positive, or even negative but constructive, feedback allows for reflection, which might then serve to reinforce facilitator commitment to and intrinsic motivation for student learning. Perceptions of appropriate reward (e.g. appreciation by students, promotion) might then also foster a more scholarly approach to teaching and learning amongst facilitators. Besides reinforcing the scores achieved in the closed-ended section, student comments often confirm affective and behavioural qualities of facilitators that students appreciated (e.g. a sense of humour; bringing personal experience to the tutorial) (Pinto et al. 2001).

Facilitator perceptions of the usefulness of the feedback they received have provided the MEU with considerable insight into their individual needs, as well as those of Faculty. Facilitator input has also provided specific direction for streamlining the process of facilitator evaluation, in order to maximize its usefulness both formatively and summatively. To this end, facilitators informed us that for feedback to be of more value, it had to represent the views of the majority of students in the group (see Table 2). On this issue, Dolmans & Ginns (2005) recommend that at least 60% of students in the group should complete the evaluation for it to be reliable. But, as one facilitator in the present study acknowledged, students are overburdened with evaluation, which is not uncommon in a new curriculum (McLean 2004; Dolmans & Ginns 2005).
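To illustrate the Dolmans & Ginns (2005) recommendation, a report-issuing rule might check the completion rate before treating the feedback as representative. Only the 60% threshold comes from the cited paper; the function and its use are assumptions for illustration:

```python
# Threshold from Dolmans & Ginns (2005): at least 60% of a group's students
# should complete the evaluation for the feedback to be considered reliable.
MIN_COMPLETION = 0.60

def evaluation_is_reliable(n_completed: int, group_size: int) -> bool:
    """Return True if enough students completed the form for the
    feedback to be treated as representative of the group."""
    return group_size > 0 and n_completed / group_size >= MIN_COMPLETION

print(evaluation_is_reliable(6, 10))  # True: 60% just meets the threshold
print(evaluation_is_reliable(5, 10))  # False: below the threshold
```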

Thus, in terms of the MEU streamlining facilitator evaluation, only relatively inexperienced facilitators and those specifically requesting feedback (usually for promotion) are now evaluated. As one facilitator in the present study suggested, experienced facilitators with a proven track record are probably able to self-evaluate their performance. In addition, since modifications to themes in the later years of curriculum implementation are invariably cosmetic, only two or three students per group are now randomly assigned to evaluate one theme per year. This serves two purposes: first, it alleviates some of the burden of evaluation experienced by students, which may contribute to poor response rates in a new programme (McLean 2004; Dolmans & Ginns 2005); second, and more importantly, it has allowed more emphasis to be placed on facilitator evaluation during each theme.
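The random assignment described above amounts to a simple draw per group. The sketch below is a hypothetical illustration (the roster names and the helper function are invented; only the "two or three students per group" rule comes from the text):

```python
import random

def pick_theme_evaluators(group_roster, k=3):
    """Randomly assign up to k students from one PBL group to complete
    the general theme evaluation (two or three per group, per the text)."""
    return random.sample(group_roster, min(k, len(group_roster)))

group = [f"student_{i:02d}" for i in range(1, 11)]  # a hypothetical 10-student group
print(pick_theme_evaluators(group))
```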

Since student comments were viewed by facilitators as a means of improving practice and participation in the PBL tutorial, it is imperative, as some facilitators indicated, that students spend time reflecting on their experiences before completing the evaluation. There was a perception amongst some facilitators that not all students attached the same importance to facilitator evaluation, as some rushed through the questionnaire. Others complained that students were not always truthful. For example, feedback for one facilitator indicated non-punctuality, which the facilitator claimed was not the case. Such a situation may reflect a facilitator's experiences with a non-cohesive group or with difficult students, as alluded to by one facilitator: ‘Students who always want to get away early and not go through the eight steps properly, end up writing false reports, hoping to tarnish an individual.’

Some facilitators asked that students be more critical in their evaluation. As the facilitator evaluation questionnaire guarantees anonymity, students are at liberty to be frank about their facilitator's ability. We believe that by reducing the number of students undertaking general theme evaluation, as discussed earlier, students can now spend more time on facilitator evaluation. In addition, since facilitator evaluation is now undertaken in the fourth rather than the final (sixth) week of the theme, the stress associated with the End of Theme Test, also scheduled in the final week, is minimized. Holding the evaluation and the assessment in the same week may have been one reason for facilitator claims that students were not sufficiently responsible when completing (or not completing) the evaluation. It is anticipated that by scheduling the evaluation a fortnight earlier, students may become more reflective about their experiences in the group.

Scheduling the evaluation in the fourth week also satisfies the request of several facilitators for earlier feedback. Receiving their facilitator reports a fortnight into the next theme was counter-productive, as it left them no time to reflect on student comments and suggestions and to modify their facilitation style for their new group of students.

Conclusions

Both qualitative and quantitative information is needed for comprehensive facilitator evaluation, as each serves a different purpose. Quantitative evaluation satisfies faculty management in terms of objectively rewarding staff achievement and for quality assurance purposes. Student comments, on the other hand, provide more specific direction for individual facilitator reflection and can direct faculty efforts at staff development. The authors therefore propose that a short Likert-scale questionnaire, accompanied by an open-ended section for student comments, as presented by Dolmans & Ginns (2005), will serve formative and summative purposes without overburdening students. Depending on the value attached to formative feedback, the open-ended section could be expanded, as one facilitator suggested: ‘I feel they [the students] should be asked how the facilitator could do better or how the process can be improved.’ External validation of student evaluation (currently undertaken by an MEU member), again a suggestion from a facilitator, should also be included from time to time.

Facilitator input, as gleaned from this study, has provided direction for the development of faculty facilitation skills, which may in fact be different for medically qualified and non-medically qualified facilitators. This consideration certainly warrants further investigation. Workshops have been organized to address specific issues identified by facilitators (e.g. assessing students, dealing with difficult or non-compliant students). Such staff development and support, together with the more refined facilitator feedback, will undoubtedly contribute to facilitator commitment towards student learning, and ultimately to the sustainability of the programme.

Since a facilitator interacts with students at the heart of where learning in all its facets (e.g. social constructivism, self-directed learning, problem-solving, critical thinking) takes place in PBL (i.e. in the small-group tutorial), it is not difficult to understand why committed facilitators, who have embraced their task of promoting student learning, might wish to become scholars of teaching and learning themselves. In so doing, as some accrediting bodies have requested (e.g. the Health Professions Council of South Africa 2004), academic staff become experts not only in their own discipline but also in medical education. This call for a more professional and scholarly approach to teaching and learning requires, however, that medical faculties provide ongoing support and training, and attach the same value to teaching as they do to research.

Ethical considerations

Ethical approval for this study was obtained from the Bioethics, Medical Law and Research Ethics Committee of the Faculty of Health Sciences (R202/04).


Notes on contributors

Jacqueline van Wyk

JACQUELINE VAN WYK is an educationalist and staff developer at the Nelson R. Mandela School of Medicine.

Michelle McLean

MICHELLE MCLEAN was a Professor in Histology at the time of the study.

References

  • Bernier GM, Adler S, Kanter S, Meyer WJ. On changing curricula: lessons learned at two dissimilar medical schools. Acad Med 2000; 75: 595–601
  • Bland CJ, Starnaman S, Wersal L, Moorhead-Rosenberg L, Zonia S, Henry R. Curricular change in medical schools: how to succeed. Acad Med 2000; 75: 575–594
  • Cotterell S, Wimmer M, Linger B, Shumway J, Jones E. Using problem-based learning evaluations to improve facilitator performance and student learning. Int Assoc Med Sci Educ 2004; 14: 58–63
  • Dolmans DHJM, Gijselaers WH, Moust JHC, De Grave WS, Wolfhagen IHAP, van der Vleuten CPM. Trends in research on the tutor in problem-based learning: conclusions and implications for educational practice and research. Med Teach 2002; 24: 173–180
  • Dolmans D, Ginns P. A short questionnaire to evaluate the effectiveness of tutors in PBL: validity and reliability. Med Teach 2005; 27: 534–538
  • Farmer EA. Faculty development for problem based learning. Eur J Dent Educ 2004; 8: 59–66
  • Gijselaers WH, Schmidt HG. Development and evaluation of a causal model of problem based learning. In: Nooman ZM, Schmidt HG, Ezzat ES, editors. Innovation in Medical Education: An Evaluation of its Present Status. Springer, New York 1990; 95–113
  • Harden RM. Stress, pressure and burnout in teachers: is the swan exhausted? Med Teach 1999; 21: 245–246
  • Health Professions Council of South Africa. Standards for the Accreditation of Undergraduate Medical Education in South Africa. Medical and Dental Professional Board, Pretoria 2004
  • Jung B, Tryssenaar J, Wilkins S. Becoming a tutor: exploring the learning experiences and needs of novice tutors in a PBL programme. Med Teach 2005; 27: 6–12
  • Marsh HW. Students’ evaluations of university teaching: research findings, methodological issues, and directions for future research. Int J Educ Res 1987; 11: 253–288
  • McLean M. Sustaining a problem-based learning curriculum: advice in hindsight. Med Teach 2004; 26: 726–728
  • Mennin S, Krackov SK. Reflections on relevance, resistance, and reform in medical education. Acad Med 1998; 73: S60–S64
  • Murray I, Savin-Baden M. Staff development in problem based learning. Teach High Educ 2000; 5: 107–126
  • Olmesdahl PJ. Rewards for teaching excellence: practice in South African medical schools. Med Educ 1997; 31: 27–32
  • Pinto PR, Rendas A, Gamboa T. Tutor's performance evaluation: a feedback tool for the PBL learning process. Med Teach 2001; 23: 289–293
  • Robins LS, White CB, Fantone JC. The difficulty of sustaining curricular reforms: a study of ‘drift’ at one school. Acad Med 2000; 75: 801–805
  • Tremblay M, Tryssenaar J, Jung B. Problem based learning in occupational therapy: why do health professionals choose to tutor? Med Teach 2001; 23: 561–566
  • Van Berkel HJM, Schmidt HG. Motivation to commit oneself as a determinant of achievement in problem-based learning. High Educ 2000; 40: 231–242
  • Williams GC, Saizow RB, Ryan RM. The importance of self-determination theory for medical education. Acad Med 1999; 74: 992–995
