Web paper

Can students differentiate between PBL tutors with different tutoring deficiencies?

D.H.J.M. Dolmans, PhD, A. Janssen-Noordman & H.A.P. Wolfhagen
Pages e156-e161 | Published online: 03 Jul 2009

Abstract

Many medical schools evaluate the performance of their tutors by using questionnaires. One of the aims of these evaluations is to provide tutors with diagnostic feedback on strong and weak aspects of their performance. Although everyone will agree that students are able to distinguish between poor and excellent tutors, one can question whether students are also able to differentiate between tutors with different tutoring deficiencies, i.e. tutors who perform badly on a specific key aspect of their performance. The aim of this study was to investigate to what degree students are able to differentiate between tutors with different tutoring deficiencies, how effective tutors with different deficiencies are perceived to be and what kind of tips students give for improvement of a tutor's behaviour. Based on students' ratings on a tutor evaluation questionnaire, tutors were grouped according to their deficiencies and the average overall tutor performance score was computed for each group with a particular deficiency. In addition, students' tips for improvement given in the open-ended question at the end of the questionnaire were analysed. The results demonstrated that on average one out of five tutors showed a deficiency on only one key aspect. Tutors who did not stimulate students towards active learning were perceived as least effective. Furthermore, students' tips for improvement could be categorized into four groups: tutors who do not evaluate adequately, tutors who are too directive, tutors who are too passive and tutors who lack content knowledge. The results of this study demonstrate that students are not only able to distinguish between poor and excellent tutors, but are also able to diagnose tutors with different tutoring deficiencies and can provide tutors with specific feedback to improve their performance.

Introduction

Many medical schools with a problem-based curriculum evaluate the performance of their tutors. In most cases, a questionnaire completed by students is used. These questionnaires are preferably based on theories of effective tutoring. Several evaluation questionnaires have been described in the literature. De Grave et al. (Citation1998, Citation1999) developed a questionnaire that measures whether a tutor stimulates students to elaborate, directs the learning process, and stimulates the integration of knowledge, interaction and accountability. Leung et al. (Citation2003) developed a questionnaire that focuses on four types of teaching behaviour: the assertive, suggestive, collaborative and facilitative tutor. Hendry et al. (Citation2002) developed a tutor feedback questionnaire aimed at measuring the group process, the clinical reasoning process and independent study. Dolmans et al. (Citation2003) developed a questionnaire that measures whether tutors stimulate students towards active or constructive learning, self-directed learning, contextual learning and collaborative learning.

Although the instruments described here differ from each other, they also have commonalities, i.e. they measure key aspects of the performance of a tutor in PBL. A central theme covered in all these questionnaires is the type and degree of direction given by the tutor. A tutor should be neither too directive nor too passive. Hendry et al. (Citation2003) reported that a dominant tutor causes tension and conflict in groups, which leads to lack of commitment, cynicism or absenteeism among students and as such hinders the learning process. The quiet or passive tutor, who is probably trying not to teach, also hinders the learning process. In other words, a good PBL tutor ‘knows’ when and how to intervene (Maudsley, Citation2002).

Tutor evaluation questionnaires such as those described above are usually aimed at providing tutors with diagnostic feedback on strengths and weaknesses in their performance. An assumption behind the use of these questionnaires is that students are able to give a tutor diagnostic feedback on weak and strong aspects of his or her performance. But one can question whether students are indeed able to give such diagnostic feedback.

In an earlier study, Steinert (Citation2004) asked students in focus groups to describe what makes for an effective small-group tutor. The results demonstrated that students' responses could be divided into three main categories. The first dealt with the tutor's personal attributes, such as creating a non-threatening atmosphere. The second dealt with the tutor's facilitation skills, i.e. allowing the group to work independently. The third dealt with the tutor's knowledge concerning the goals of small-group teaching and the content area under discussion. These findings suggest that students do indeed have clear perceptions of effective tutors. However, although everyone will agree that students are able to distinguish between poor and excellent tutors, one can question whether students are able to differentiate between tutors with different tutoring deficiencies, i.e. tutors who perform badly on a specific key aspect of their performance. Are students able to give tutors specific feedback on strong and weak aspects of their performance? For example, can students identify tutors who do not stimulate them to work together effectively but whose score is average or high on all other aspects of their tutoring? Or can students identify tutors with other deficiencies, such as tutors who do not stimulate students towards active learning? In addition, it would be interesting to know how students perceive the overall effectiveness of tutors with a particular deficiency. Which tutors are perceived as less effective by students: tutors with a deficiency in stimulating students towards active learning, or tutors with a deficiency in stimulating students to work together effectively? Finally, it is interesting to know what tips students give to improve a tutor's performance.

The following research questions were formulated for this study:

  1. To what degree do students differentiate between tutors with different tutoring deficiencies?

  2. How effective are tutors with different deficiencies in the perception of the students?

  3. What tips are given by students to improve a tutor's performance?

The answers to these questions can provide insight into the degree to which students are able to differentiate between tutors with different tutoring deficiencies and into the overall perceived effectiveness of tutors with different deficiencies, and can yield suggestions for improving a tutor's performance.

Method

Setting

The study was conducted within the problem-based curriculum of the Medical School at Maastricht University, in the academic year 2002–03. Students meet twice per week in two-hour sessions in tutorial groups of about 10 students, in which they discuss problems. A faculty member, referred to as the tutor, guides each tutorial group. In each six-week course, about 23 to 30 tutorial groups are involved, each guided by a tutor. Each tutor is obliged to follow a two-day course on the principles behind problem-based learning. Furthermore, it is mandatory to have followed a two-day tutor-training course before guiding a small group.

Subjects

Data were collected on the tutors' performance in the tutorial groups during 22 six-week courses in the academic year 2002–03: six courses in the first year, five in the second year, six in the third year and five in the fourth year. One course in the second year and one course in the fourth year were excluded from this study because they were elective courses. Each group consisted of 9 or 10 students, and at least six students per group completed the instrument. The average response rate at the student level was above 90%. The number of tutorial groups included in the study was 573, also a response rate above 90%. Each tutor guided on average two tutorial groups within one academic year.

Instrument

The instrument consists of 11 items. At the end of each course, students are asked to indicate how much they agree with each statement on a scale from 1 to 5 (1 = strongly disagree, 5 = strongly agree). Five factors are assumed to underlie the 11 items. Items 1, 2 and 3 represent factor 1, active learning; items 4 and 5 represent factor 2, self-directed learning; items 6 and 7 represent factor 3, contextual learning; items 8 and 9 represent factor 4, collaborative learning; and items 10 and 11 represent factor 5, intrapersonal behaviour of the tutor. An example of an item within the factor active learning is "the tutor stimulated us to understand underlying mechanisms/theories". An example of an item within the factor contextual learning is "the tutor stimulated us to apply knowledge to the problem discussed". The names of the factors and their underlying items are reported in Appendix 1. Students are also asked to give an overall judgement of the tutor's performance on a scale from 1 to 10, where 6 is 'sufficient' and 10 is 'excellent' (question 12). Furthermore, students are asked to give tips for improvement. The questionnaire has been validated in earlier studies (Dolmans et al., Citation2003; Dolmans & Ginns, Citation2005).
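To make the factor structure concrete, the item-to-factor mapping and the factor scoring described above can be written out as a minimal sketch in Python. The sketch is not part of the original study; the factor labels, function names and example values are illustrative only.

# Minimal sketch of the questionnaire's five-factor structure (not from the paper).
# Item numbers (1-11) and factor names follow the Instrument section above.
FACTORS = {
    "active_learning":         [1, 2, 3],   # factor 1
    "self_directed_learning":  [4, 5],      # factor 2
    "contextual_learning":     [6, 7],      # factor 3
    "collaborative_learning":  [8, 9],      # factor 4
    "intrapersonal_behaviour": [10, 11],    # factor 5
}

def factor_scores(item_means: dict[int, float]) -> dict[str, float]:
    """Average a tutorial group's per-item means (scale 1-5) into five factor scores."""
    return {
        name: sum(item_means[i] for i in items) / len(items)
        for name, items in FACTORS.items()
    }

# Hypothetical example: one tutorial group's average ratings on items 1-11.
group_item_means = {i: 3.8 for i in range(1, 12)}
print(factor_scores(group_item_means))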

Analyses

Average scores were computed per tutorial group for each item. Average scores were available for 573 tutorial groups and their corresponding tutors; thus the data were aggregated at the tutorial-group level. For each tutor the average score on each of the five factors was also computed. Subsequently, each factor score was categorized as relatively low, average or relatively high; three equal groups were made per factor. Based on these scores, tutors were grouped according to their deficiencies. In addition, the average overall tutor performance score was computed for each group of tutors with a particular deficiency. Finally, students' tips for improvement given in the open-ended question at the end of the questionnaire were analysed. Not all student comments were used in this analysis: only comments given to the tutors with the relatively lowest overall performance scores within each deficiency category were included. The eight lowest-performing tutors within each category were selected and the tips given by their students were analysed. The tips were categorized by the first author and this classification was subsequently checked by the second author. Tips within each category were collected and are presented in the Results section.
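The categorization step ("three equal groups were made per factor") can be approximated as follows. The paper does not specify the exact cut-off procedure or tie handling, so this Python sketch simply uses tertile cut-offs; the function names and the random demo data are illustrative assumptions, not the authors' code.

# Minimal sketch (assumption): split each factor score into three equal groups
# across all tutorial groups and flag a deficiency where a score falls in the
# lowest third, as described in the Analyses section.
import numpy as np

def tertile_labels(scores: np.ndarray) -> np.ndarray:
    """Label each score as 0 = relatively low, 1 = average, 2 = relatively high."""
    cutoffs = np.quantile(scores, [1 / 3, 2 / 3])
    return np.digitize(scores, cutoffs)

def deficiencies(factor_matrix: np.ndarray) -> np.ndarray:
    """Boolean matrix (groups x factors): True where the factor score is in the lowest third."""
    labels = np.column_stack(
        [tertile_labels(factor_matrix[:, f]) for f in range(factor_matrix.shape[1])]
    )
    return labels == 0

# Demo with random data standing in for the 573 tutorial groups x 5 factor scores.
rng = np.random.default_rng(0)
demo_scores = rng.uniform(2.5, 5.0, size=(573, 5))
flags = deficiencies(demo_scores)
print("Tutors scoring low on exactly one factor:", int((flags.sum(axis=1) == 1).sum()))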

Results

The descriptive statistics are presented in Table 1. The average scores on the 11 items vary between 3.4 (SD = 0.5) and 4.2 (SD = 0.7). The highest scoring item deals with the tutor's motivation (item 11). The two lowest scoring items deal with stimulating students to search for various resources by themselves (item 5) and with constructive feedback given by the tutor on the tutorial group's work (item 8). At the level of the five factors, the highest scoring factor deals with the tutor's intrapersonal behaviour (factor 5); the average score on this factor is 4.0 (SD = 0.6). The lowest scoring factors deal with self-directed learning and collaborative learning (factors 2 and 4); the average score on both is 3.4 (SD = 0.5 and 0.6, respectively). The average overall score for the tutors' performance is 7.4 (SD = 1.0, scale 1–10). Only 6% of the tutors scored below 6, which is considered insufficient on a 10-point scale; 17% of the tutors had a score between 6 and 7, 47% had a score between 7 and 8 and 30% had a score higher than 8.

Table 1.  Mean score (scale 1–5) and corresponding standard deviation (SD) for all the 11 items and the five factors (F1 to F5), 2002–03 (n = 573)

Table 2 shows how many tutors had a deficiency on one or more factors. As can be seen, 10% of the tutors scored relatively low on all five factors and 19% scored relatively low on three or four of the five factors. In total 118 tutors (21%) scored relatively low on one factor and average or high on the other four factors. Of these, 33 tutors (6%) scored low on factor 2 and 34 (6%) on factor 4; 27 tutors (5%) scored low on factor 3; 16 tutors (3%) scored low on factor 5; and only 8 tutors (1%) scored low on factor 1. Furthermore, 10% of the tutors scored high on all five factors and 9% scored high on four out of five factors.

Table 2.  Number and percentage of tutors with a deficiency and the corresponding overall score for the tutors’ performance (scale 1–10)

In addition, the average overall tutor performance score was computed for the groups of tutors with a tutoring deficiency. As can be seen in Table 2, the highest score was received by tutors scoring high on all five factors, as expected. The average score was 8.5 (SD = 0.4). The lowest score was received by tutors scoring low on all five factors; the average score was 5.8 (SD = 1.3). The average score for tutors with a deficiency in only one out of five factors varied between 7.2 and 7.9. The highest score was given to tutors who did not stimulate students towards collaborative learning (7.9). The lowest score was given to tutors who did not stimulate students towards active learning (7.2). Tutors who were less motivated received a score of 7.5 and tutors who did not stimulate students towards self-directed learning or contextual learning scored 7.7.

As described before, students were asked to give the tutor tips for improvement. Students' comments were analysed. Four categories of tips could be distinguished: (1) tutors who do not evaluate adequately, (2) tutors who are too directive, (3) tutors who are too passive and (4) tutors who are not content experts. Examples of tips relating to these four categories are presented in Table 3.

Table 3.  Tips given by students to improve a tutor's performance: divided into four categories

Conclusion and discussion

The aim of this study was threefold. The first aim was to find out to what degree students differentiate between tutors with different tutoring deficiencies. The results demonstrate that 1 out of 10 tutors was judged as performing low on all five factors and another 1 out of 10 as performing high on all five factors. One out of five tutors (21%) had a deficiency on only one factor. Another 19% of the tutors scored low on three or four of the five factors. These findings imply that students are not only able to distinguish between poorly and excellently performing tutors, but are also able to distinguish between tutors with different deficiencies.

The second aim was to investigate how effective tutors with different tutoring deficiencies are perceived to be. The results demonstrated that tutors scoring relatively low on all five factors had the lowest overall performance score, as expected (5.8), and tutors scoring relatively high on all five factors had the highest score (8.5). Of all tutors with a deficiency in only one factor, tutors with a deficiency in active learning (factor 1) had the lowest overall performance score (7.2). These tutors do not stimulate students towards active learning and probably have a tutoring style that is more teacher centred than student centred. The positive news is that this group of tutors was very small (n = 8; 1%). Of all tutors with a deficiency in only one factor, tutors with a deficiency on factor 4, i.e. tutors who stimulate students less towards collaborative learning, had the highest overall performance score (7.9). An explanation for this finding might be that students do not experience this deficiency as a shortcoming as long as the tutorial group does its work on its own, i.e. a tutorial group with no problems of group dynamics. In other words, if a tutorial group performs well, students probably do not really mind when the tutor does not give constructive feedback or evaluate the group's cooperation regularly.

The third aim was to find out what tips students give to improve a tutor's performance. Four categories of comments were reported: tutors who do not evaluate adequately, tutors who are too directive, tutors who are too passive and tutors who are not content experts. The finding that some tutors do not evaluate adequately is consistent with the relatively low score on factor 4, collaborative learning, and especially the item dealing with giving students constructive feedback. This deficiency has been reported in previous studies. Kaufman & Holmes (Citation1996), for example, reported that tutors encountered difficulties with evaluating students in tutorial groups and with handling difficult situations in the group. The other categories are also consistent with earlier findings indicating that the best tutor ‘knows’ when and how to intervene (Maudsley, Citation2002).

Some shortcomings of this study should be mentioned. Tutors with a deficiency were defined as tutors scoring relatively low on one or more factors. From an absolute point of view, only 6% of the tutors had an insufficient score, i.e. below 6 on a 10-point scale. Thus, tutors defined in this study as having a deficiency merely performed somewhat lower on a specific factor and should not necessarily be regarded as poor tutors. Another shortcoming is that only student ratings were available; no gold standard was available that could serve as a reference point. In future research, peer tutors could be asked to evaluate the performance of tutors.

This study has demonstrated that students are not only able to distinguish between poor and excellent tutors, but are also able to diagnose tutors with different tutoring deficiencies. An important implication is that medical schools should pay more attention during faculty development activities, such as tutor-training courses, to teaching tutors to evaluate their own performance and the tutorial group's performance on a regular basis. Tutors should be better trained in how to provide students with constructive feedback. Asking for feedback often feels threatening to staff members, but tutors should bear in mind that students can provide them with useful tips to improve their performance. In other words, tutors can learn a lot from evaluations.

Additional information

Notes on contributors

DIANA DOLMANS and INEKE WOLFHAGEN are Associate Professors and AMEIKE JANSSEN is Assistant Professor. The authors are educational psychologists working at the Department of Educational Development and Research of the University of Maastricht, the Netherlands. They are involved in a project on programme and teacher evaluation and quality assurance at the Medical School of Maastricht University.

References

  • De Grave WS, Dolmans DHJM, van der Vleuten CPM. Tutor Intervention Profile: reliability and validity. Medical Education 1998; 32: 262–268
  • De Grave WS, Dolmans DHJM, van der Vleuten CPM. Profiles of effective tutors in problem-based learning: scaffolding student learning. Medical Education 1999; 33: 901–906
  • Dolmans DHJM, Wolfhagen HAP, Scherpbier AJJA, van der Vleuten CPM. Development of an instrument to evaluate the effectiveness of teachers in guiding small groups. Higher Education 2003; 46: 431–446
  • Dolmans DHJM, Ginns P. A short questionnaire to evaluate the effectiveness of tutors in PBL: validity and reliability. Medical Teacher 2005; 27(6): 534–538
  • Hendry GD, Phan H, Lyon PM, Gordon J. Student evaluation of expert and non-expert problem-based learning tutors. Medical Teacher 2002; 24(5): 544–549
  • Hendry GD, Ryan G, Harris J. Group problems in problem-based learning. Medical Teacher 2003; 25(6): 609–616
  • Kaufman DM, Holmes DB. Tutoring in problem-based learning: perceptions of teachers and students. Medical Education 1996; 30: 371–377
  • Leung KK, Lue BH, Lee MB. Development of a teaching style inventory for tutor evaluation in problem-based learning. Medical Education 2003; 37: 410–416
  • Maudsley G. Making sense of trying not to teach: an interview study of tutors’ ideas of problem-based learning. Academic Medicine 2002; 77(2): 162–172
  • Steinert Y. Student perceptions of effective small group teaching. Medical Education 2004; 38: 286–293

Appendix 1 Short Tutor Evaluation Questionnaire, Maastricht Medical School, 2002–03
