Research Article

The relationship between the monitored performance of tutors and students at PBL tutorials and the marked hypotheses generated by students in a hybrid curriculum

Article: 1270626 | Received 15 Oct 2016, Accepted 23 Nov 2016, Published online: 06 Jan 2017

ABSTRACT

Introduction: There have been a number of published studies examining the link between the effectiveness of the problem-based learning (PBL) process and students’ performance in examinations. In a hybrid PBL/lectures curriculum, the results of such studies are of limited use because of the difficulty in dissociating the knowledge gained at lectures from that gained through PBL-related activities. Hence, the objectives of this study were: (1) to develop an instrument to measure the performance of tutors and students at PBL tutorials, and (2) to explore the contribution of such performances to the marks attained by students from the hypotheses generated at PBL tutorials.

Methods: A monitoring instrument for assessing the performances of non-expert tutors and students at tutorials was developed and validated using principal component analysis and reliability analysis. Also, a rubric was formulated to enable a content expert to assign marks to the quality of hypotheses generated.

Results: The monitoring instrument was found to be valid and reliable. There was a significant correlation between the performance of tutors at tutorials and hypotheses marks. In contrast, there was no significant correlation between the performance of students and hypotheses marks.

Discussion: The monitoring instrument is a useful tool for improving the PBL process, especially where the medical programme depends on non-expert PBL tutors. In addition to ensuring good PBL processes, it is important that students achieve the desired output at PBL tutorials by producing hypotheses that help them understand the basic sciences underlying the clinical cases. The latter is achieved by the use of an open-ended rubric by a subject expert to assign marks to the hypotheses, a method that also provides additional motivation to students to develop relevant and detailed hypotheses.

Introduction

Problem-based learning (PBL), which has been a major part of medical education for half a century, was promoted initially to improve students’ application of knowledge in diagnosing and managing clinical problems [Citation1]. PBL, when properly applied, has been found to be an effective approach in medical education because it promotes constructive, self-directed, collaborative, and contextual learning [Citation2,Citation3]. However, there has been some debate on the superior effectiveness of PBL in the learning of basic medical knowledge and clinical skills [Citation4,Citation5]. Additionally, cracks tend to develop and weaken the effectiveness of the PBL process and its outcome unless there is keen vigilance over the medical school’s PBL programme [Citation6]. How students learn contributes significantly to what they learn [Citation7]; hence, the focus of PBL should be on both the delivery process (how students learn) and the content (what students learn). Assessment, monitoring, and programme evaluation all contribute to improving what and how students learn [Citation7]. A number of publications have reported on the effectiveness of the PBL process by measuring students’ satisfaction with the process through surveys [Citation8,Citation9], or the knowledge acquired by students as measured by examinations [Citation10–Citation12]. However, there is hardly any reported work in which an independent PBL specialist monitors the performance of tutors and students during PBL tutorial sessions and examines how their performance affects the quantity and quality of hypotheses generated by the tutorial group.

The medical school at the University of the West Indies, Trinidad and Tobago, uses a hybrid system of PBL and lectures/laboratory practicals. The school follows the seven-step systematic approach to PBL developed at the University of Limburg, Maastricht [Citation1]. The medical school uses non-expert tutors with MD (or equivalent) or PhD backgrounds who have some understanding of the content of the cases. A PBL group, which meets once a week, comprises 11–13 students and the tutor. A PBL session lasts approximately three hours and comprises two phases: (1) a ‘problem-analysis phase’, during which students brainstorm a new problem, develop relevant hypotheses based on prior knowledge, and generate learning objectives for self-study; and (2) a ‘reporting phase’, during which students discuss the objectives generated from the previous problem and revise the previous hypotheses. In the first two years of our medical programme, students are expected to focus their learning on identifying key issues in the problem and providing detailed explanations for the issues identified; they are not expected to provide a diagnosis or a management plan for patients in the problem. The quantity and quality of relevant hypotheses generated by a tutorial group are marked by a content expert, and the results are given to the students before the next tutorial session.

The medical school recognizes three key elements that are essential to a successful PBL programme: (1) the problem used to stimulate learning, (2) the tutors as facilitators of learning, and (3) the group work (or teamwork) that ensures interaction amongst the students [Citation13]. The current study focused on the latter two elements with the following objectives: (1) to validate a monitoring instrument to measure the performance of tutors and students in a tutorial, i.e., how students learn; (2) to determine the extent to which the performance of tutors and students in a tutorial influenced the hypotheses generated by students, i.e., what students learn.

Methods

Developing a monitoring instrument

The study was conducted in accordance with the guidelines of the university’s ethics committee. The first part of the study was to validate a suitable monitoring instrument and determine its reliability. The items on the monitoring instrument were determined by medical education experts using information from published articles on the expected roles of tutors and students; the latter included the group leader, the record keeper, and the other students in the group [Citation14–Citation21]. Following feedback from experienced PBL tutors, the medical education experts settled on two constructs with 21 items: 11 items for evaluating the performance of students during PBL tutorial sessions and 10 items for evaluating the performance of tutors during the tutorials. The two constructs and the items in the instrument are listed in Table 1. To test the validity of the instrument further, it was given to 40 randomly chosen PBL tutors, who rated the extent to which each item was appropriate for the construct being examined. The participants were asked to evaluate the items using a 5-point Likert scale (1: strongly disagree; 2: disagree; 3: undecided; 4: agree; 5: strongly agree). The data obtained were subjected to an exploratory factor analysis (principal component analysis) with oblique (direct oblimin) rotation; the latter was chosen on the assumption that the factors would not be independent. The internal consistency of the items in each construct was determined by calculating Cronbach’s alpha reliability coefficients.
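For illustration, these validation steps can be sketched outside SPSS. The following is a minimal Python sketch, assuming the ratings arrive as a table with one Likert-scale column per item; the file name, column layout, and the pandas/factor_analyzer toolchain are assumptions for illustration, not part of the original study.

```python
# A minimal sketch of the validation analysis; the study itself used
# SPSS v22, and the file name and column layout here are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer, calculate_kmo

# One row per tutor respondent, one column per 5-point Likert item (21 items)
ratings = pd.read_csv("tutor_ratings.csv")

# Kaiser-Meyer-Olkin measure of sampling adequacy (acceptable if >= 0.5)
_, kmo_overall = calculate_kmo(ratings)
print(f"KMO = {kmo_overall:.2f}")

# Principal component extraction with oblique (direct oblimin) rotation,
# retaining the two factors suggested by the scree plot
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="oblimin")
fa.fit(ratings)
loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                        columns=["factor1", "factor2"])
# Items are retained if they load at 0.5 or more on either factor
print(loadings[loadings.abs().max(axis=1) >= 0.5])

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct (>= 0.7 indicates a reliable scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```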

Table 1. Summary of exploratory factor analysis of PBL monitoring instrument.

Monitoring performance of tutors & students and marking hypotheses generated by students

In the second part of the study, the instrument (with the same 5-point Likert scale as above) was used by an independent medical education specialist to monitor 23 PBL groups in years 1 and 2 (basic sciences) of the medical programme. The hypotheses generated by each PBL group for a particular clinical problem were marked by an independent content expert to generate a hypotheses mark for the group. Each course had five to eight clinical problems, depending on its length. Table 2 shows the rubric used to determine the hypotheses mark obtained by a group of students for a PBL problem, with the maximum mark being 10. The average hypotheses mark for a group was the mean of all the marks obtained during the course. Pearson’s product-moment correlation analysis was used to determine how the performance of tutors and students influenced the average hypotheses mark for the group. All statistical analyses were performed using SPSS (version 22) software.
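As an illustration of this analysis (the study itself used SPSS v22), the sketch below runs one-tailed Pearson tests in Python on simulated placeholder data drawn from the group means and standard deviations reported in the Results; because the placeholders are uncorrelated by construction, it will not reproduce the reported coefficients.

```python
# A sketch of the correlation analysis on simulated placeholder data;
# the study used SPSS v22 with the real per-group means.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Stand-ins for the 23 PBL groups' mean scores, drawn from the means and
# s.d. values reported in the Results (uncorrelated by construction)
tutor_scores = rng.normal(4.3, 0.4, 23)      # tutor performance, 5-point scale
student_scores = rng.normal(3.7, 0.4, 23)    # student performance, 5-point scale
hypotheses_marks = rng.normal(8.3, 0.8, 23)  # hypotheses mark, out of 10

# One-tailed Pearson product-moment tests for a positive association
# (the 'alternative' keyword requires SciPy >= 1.9)
for label, scores in (("tutor", tutor_scores), ("student", student_scores)):
    r, p = pearsonr(scores, hypotheses_marks, alternative="greater")
    print(f"{label} performance vs hypotheses mark: r = {r:.2f}, p = {p:.3f}")
```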

Table 2. Rubric used to mark the hypotheses generated by PBL groups.

Results

Validation of monitoring instrument

Responses to the questionnaire were obtained from 31 tutors, giving a response rate of 78%. Table 1 shows the results of the principal component analysis of the 21 items with oblimin rotation. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.60, above the recommended acceptable limit of 0.5 [Citation22]. The scree plot showed an inflexion that justified retention of two factors. Table 1 shows the eigenvalues and the percentage of variance accounted for by each factor, together with the rotated factor loadings of all items; loadings of 0.5 or greater are in bold. Ten items clustered around factor 1, and another 10 items around factor 2. One item had a factor loading of less than 0.5 and was therefore excluded from the reliability analysis and from part 2 of the study.

A Cronbach’s alpha coefficient of 0.7 or more indicates a reliable scale [Citation22,Citation23]. The two factors, each using the 10 items in its construct, had coefficients of approximately 0.87 and 0.86 (Table 1). None of the items in either factor had a corrected item-total correlation of less than 0.4, and deleting any item did not increase the alpha coefficient for its construct. Hence, the reliability of the instrument was found to be acceptable, and the items with bold rotated factor loadings in Table 1 were used for part 2 of the study.
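The item-level checks reported here (corrected item-total correlation and alpha-if-item-deleted) can be sketched as follows; this is an illustrative Python version reusing the hypothetical cronbach_alpha helper from the earlier sketch, not the SPSS procedure the study used.

```python
# A sketch of the item-analysis checks; 'construct' is a hypothetical
# DataFrame holding the 10 items of one construct, one column per item.
import pandas as pd

def item_analysis(construct: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and Cronbach's alpha if item deleted."""
    rows = []
    for item in construct.columns:
        remaining = construct.drop(columns=item)
        rows.append({
            "item": item,
            # correlation of the item with the sum of the *other* items;
            # a value below 0.4 would flag a weak item
            "corrected_item_total_r": construct[item].corr(remaining.sum(axis=1)),
            # alpha recomputed without the item; an increase over the
            # construct's overall alpha would argue for deleting the item
            "alpha_if_deleted": cronbach_alpha(remaining),  # defined above
        })
    return pd.DataFrame(rows)
```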

Influence of performance of tutors and students in a tutorial on the hypotheses generated by students

In the second part of the study, the monitored performances of tutors and students in 23 PBL groups were compared with the hypotheses marks attained by the groups of students. The mean score for student performance on the 5-point Likert scale was 3.7 ± 0.4 (mean ± s.d.), whilst that for tutor performance was 4.3 ± 0.4. The mean hypotheses mark for all student groups, out of a maximum of 10, was 8.3 ± 0.8 (mean ± s.d.). Pearson’s product-moment correlation analysis showed a significant correlation between the monitored tutor performance and the hypotheses mark attained by students (r = 0.44; p = 0.02, one-tailed). In contrast, the correlation coefficient between student performance and the hypotheses mark was very low and not significant (r = 0.08; p = 0.35, one-tailed). Additionally, there was a significant correlation between the monitored tutor performance and student performance (r = 0.43; p = 0.02, one-tailed).

Discussion

There have been several published assessment methods for PBL, many of which assess the process whilst others assess the outcome, e.g., knowledge content [Citation8–Citation11,Citation24]. In order to improve the effectiveness of PBL as a learning tool, it is necessary to assess both the process and the outcome.

The instrument we developed for assessing the process initially had two constructs and 21 items. Following a factor analysis, one item was deleted because of low factor loading; thus, the final instrument has 20 items, 10 of which clustered around students’ performance at PBL tutorials and the other 10 around tutors’ performance. The results of this part of the study indicate that the instrument has both construct validity and internal consistency (reliability) and could be used to monitor the performances of tutors and students during PBL tutorials.

Schmidt [Citation1] proposed that information processing theory in educational psychology formed the bedrock of PBL, i.e., activation of prior knowledge, encoding specificity, and elaboration of knowledge. Additionally, cooperative learning (rather than competitive learning) is essential for the success of PBL [Citation2,Citation25]. Cooperative learning is promoted when students have shared goals and rewards and optimize their complementary roles to achieve them. Another key advantage of PBL is that it promotes the development of clinical reasoning skills during the early stages of medical training. Familiarity with clinical cases leads to the formation of clearly defined illness scripts that enable clinicians (especially specialists) to diagnose a clinical case quickly. However, when confronted with unfamiliar cases (especially complex ones), doctors tend to unravel the case by generating hypotheses in the form of self-explanations [Citation26,Citation27]. Hypothesis generation involves constructing linkages amongst items in the case and connecting them with the underlying mechanisms/reasons from biological, psychological, social, ethical, and legal perspectives. We promoted collaborative learning and the development of clinical reasoning skills in our programme by introducing a system in which the written hypotheses from PBL groups for each problem are marked by a content expert each week and the marks are given back to the group, thus ensuring immediate feedback to students on their output. The marks attained by the group formed part of each student’s continuous assessment. This approach not only encouraged students to work together but also gave the faculty a sense of what students had learnt and an opportunity to address possible gaps in knowledge across the groups.

Some studies have found a significant correlation between the score assigned to a student by a tutor and the student’s marks on traditional examinations, e.g., multiple choice questions, whilst others have found no such correlation [Citation10–Citation12]. The design of the current study differs from the other published ones in that: (1) we examined the two separate constructs that contribute to the PBL process, i.e., the performances of students and of tutors; (2) we reduced bias in scoring students and tutors at PBL tutorials by using the same medical education expert to do the scoring; (3) the marks obtained were directly associated with the PBL process because they were based solely on the hypotheses generated by the group at PBL, and not on examination scores, which are affected by other forms of learning in a hybrid medical education curriculum; and (4) the marks were attained by the whole group and not by individual students. Our study has added another dimension to the debate through the finding that the performance of tutors correlated significantly with the quantity and quality of hypotheses generated by students.

In the current study, the hypotheses marks achieved by students were not influenced by the variation in students’ performance at tutorials. This suggests that the quantity and quality of the hypotheses generated by students depend on factors other than the students’ performance in the PBL tutorial process. For example, the hypotheses generated would have reflected the depth of the students’ prior knowledge or the depth of the knowledge gained during the self-study step of the PBL process. This line of reasoning could be supported by the diverse online learning resources and technological tools currently available to medical students [Citation28].

A number of studies have demonstrated that both expert and non-expert tutors influence the PBL process and have highlighted the importance of having quality tutors [Citation5,Citation29,Citation30]. In our programme, PBL tutors are trained to be effective facilitators by the medical school’s Centre for Medical Sciences Education. Hence, it was reassuring to note that tutor performance in this study correlated significantly with students’ performance during tutorials.

Conclusion

An instrument has been developed to monitor the performance of tutors and students at PBL tutorials. The 20-item instrument with two constructs was shown to have good validity and reliability. Whilst having a good PBL process is essential, it is also important that students achieve the desired output at PBL tutorials by producing hypotheses that are relevant to the clinical case being discussed. Hence, we have also provided a rubric for assessing the hypotheses generated by students during PBL tutorials. The results of this study emphasize the important role of tutors in facilitating the performance of students at tutorials and in getting them to engage in discussions that produce excellent hypotheses. The tutors, albeit non-expert, were found to be capable of influencing not only how students learn but also what they learn.

Conflict of interest and funding

The authors did not receive funding from industry or elsewhere for conducting this study and therefore declare that they have no conflict of interest in the results of the study.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Schmidt HG. Problem-based learning: rationale and description. Med Educ. 1983;17:11–16.
  • Dolmans DH, De Grave W, Wolfhagen IH, et al. Problem-based learning: future challenges for educational practice and research. Med Educ. 2005;39:732–741. DOI:10.1111/j.1365-2929.2005.02205.x
  • Gwee MC. Problem-based learning: a strategic learning system design for the education of healthcare professionals in the 21st century. Kaohsiung J Med Sci. 2009;25:231–239.
  • Colliver JA. Effectiveness of problem-based learning curricula: research and theory. Acad Med. 2000;75:259–266.
  • Norman GR, Schmidt HG. Effectiveness of problem-based learning curricula: theory, practice and paper darts. Med Educ. 2000;34:721–728.
  • Azer SA, McLean M, Onishi H, et al. Cracks in problem-based learning: what is your action plan? Med Teach. 2013;35:806–814. DOI:10.3109/0142159X.2013.826792
  • Wilkes M, Bligh J. Evaluating educational interventions. BMJ. 1999;318:1269–1272.
  • Addae JI, Wilson JI, Carrington C. Students’ perception of a modified form of PBL using concept mapping. Med Teach. 2012;34:e756–762. DOI:10.3109/0142159X.2012.689440
  • Gari Calzada MA, Iputo JE. Student opinions on factors influencing tutorials at Walter Sisulu University, South Africa. MEDICC Rev. 2015;17:13–17.
  • Von Bergmann H, Dalrymple KR, Wong S, et al. Investigating the relationship between PBL process grades and content acquisition performance in a PBL dental program. J Dent Educ. 2007;71:1160–1170.
  • Whitfield CF, Xie SX. Correlation of problem-based learning facilitators’ scores with student performance on written exams. Adv Health Sci Educ Theory Pract. 2002;7:41–51.
  • Yaqinuddin A, Kvietys P, Ganguly P, et al. PBL performance correlates with content acquisition assessment: A study in a hybrid PBL program at Alfaisal University. Med Teach. 2012;34:83. DOI:10.3109/0142159X.2012.640721
  • Dolmans DHJM, Wolfhagen IHAP, Ginns P. Measuring approaches to learning in a problem based learning context. Int J Med Educ. 2010;1:55–60. DOI:10.5116/ijme.4c50.b666
  • Allareddy V, Havens AM, Howell TH, et al. Evaluation of a new assessment tool in problem-based learning tutorials in dental education. J Dent Educ. 2011;75:665–671.
  • De Grave WS, Dolmans DHJ, van der Vleuten CPM. Profiles of effective tutors in problem-based learning: scaffolding student learning. Med Educ. 1999;33:901–906.
  • Dolmans DH, Gijselaers WH, Moust JH, et al. Trends in research on the tutor in problem-based learning: conclusions and implications for educational practice and research. Med Teach. 2002;24:173–180. DOI:10.1080/01421590220125277
  • Dolmans DHJM, Wolfhagen IHAP. Improving the effectiveness of tutors in problem-based learning. Med Teach. 1994;16:369. DOI:10.3109/01421599409008275
  • Rosado Pinto P, Rendas A, Gamboa T. Tutors’ performance evaluation: a feedback tool for the PBL learning process. Med Teach. 2001;23:289–294. DOI:10.1080/01421590120048139
  • Valle R, Petra L, Martinez-Gonzalez A, et al. Assessment of student performance in problem-based learning tutorial sessions. Med Educ. 1999;33:818–822.
  • Visschers-Pleijers AJ, Dolmans DH, De Grave WS, et al. Student perceptions about the characteristics of an effective discussion during the reporting phase in problem-based learning. Med Educ. 2006;40:924–931. DOI:10.1111/j.1365-2929.2006.02548.x
  • Visschers-Pleijers AJSF, Dolmans DHJM, Wolfhagen IHAP, et al. Exploration of a method to analyze group interactions in problem-based learning. Med Teach. 2004;26:471–478. DOI:10.1080/01421590410001679064
  • Field A. Discovering statistics using IBM SPSS statistics. 4th ed. London: SAGE Publications Ltd.; 2012.
  • Pallant J. SPSS Survival Manual. 1st ed. Buckingham: Open University Press; 2002.
  • Yamashita M, Norose T, Hayakawa T. Introduction and evaluation of an integrated program to develop pharmacotherapeutic management skills. Yakugaku Zasshi. 2016;136:361–367. DOI:10.1248/yakushi.15-00231-1
  • Qin Z, Johnson DW, Johnson RT. Cooperative versus competitive efforts and problem solving. Rev Educ Res. 1995;65:129–143. DOI:10.3102/00346543065002129
  • Chamberland M, Mamede S, St-Onge C, et al. Students’ self-explanations while solving unfamiliar cases: the role of biomedical knowledge. Med Educ. 2013;47:1109–1116. DOI:10.1111/medu.12253
  • Chamberland M, St-Onge C, Setrakian J, et al. The influence of medical students’ self-explanations on diagnostic performance. Med Educ. 2011;45:688–695. DOI:10.1111/j.1365-2923.2011.03933.x
  • Jin J, Bridges SM. Educational technologies in problem-based learning in health sciences education: a systematic review. J Med Internet Res. 2014;16:e251. DOI:10.2196/jmir.3240
  • Jaiprakash H, Min AK, Ghosh S. Increased correlation coefficient between the written test score and tutors’ performance test scores after training of tutors for assessment of medical students during problem-based learning course in Malaysia. Korean J Med Educ. 2016;28:123–125. DOI:10.3946/kjme.2016.18
  • Jones RW. Problem-based learning: description, advantages, disadvantages, scenarios and facilitation. Anaesth Intensive Care. 2006;34:485–488.