
Exploring the validity and reliability of a questionnaire for evaluating veterinary clinical teachers’ supervisory skills during clinical rotations

Pages e84-e91 | Published online: 28 Jan 2011

Abstract

Background: Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources.

Aim: We examined the validity and reliability of the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers during short clinical rotations in veterinary education.

Methods: We examined four sources of validity evidence: (1) Content was examined based on theory of effective learning. (2) Response process was explored in a pilot study. (3) Internal structure was assessed by confirmatory factor analysis using 1086 student evaluations and reliability was examined utilizing generalizability analysis. (4) Relations with other relevant variables were examined by comparing factor scores with other outcomes.

Results: Content validity was supported by theory underlying the cognitive apprenticeship model on which the instrument is based. The pilot study resulted in an additional question about supervision time. A five-factor model showed a good fit with the data. Acceptable reliability was achievable with 10–12 questionnaires per teacher. Correlations between the factors and overall teacher judgement were strong.

Conclusions: The MCTQ appears to be a valid and reliable instrument to evaluate clinical teachers’ performance during short rotations.

Introduction

Clinical rotations in hospitals are an important part of the undergraduate curriculum in both medical and veterinary education. They provide an authentic learning environment in which students can participate in patient care and learn to integrate theory and clinical practice. Clinical rotations in a medical and in a veterinary curriculum are quite similar. However, rotations are often shorter in the veterinary setting because they are repeated for different animal species. For example, students not only complete an anaesthesia rotation at the Companion Animal Department, they are also required to complete this rotation at the Equine Health Department.

Although routine clinical work provides students with opportunities to learn about history taking, physical examination, clinical reasoning and professionalism (Smith & Walsh 2003; Spencer 2003; Daelmans et al. 2004; Ramani & Leinster 2008), the clinical learning environment is not always used optimally for student learning (Spencer 2003; van der Hem-Stokroos et al. 2005; Leinster 2009).

Less than optimal use of learning opportunities in clinical settings is related to students’ learning not being the prime objective of the hospital organization (Spencer 2003; Dolmans et al. 2004). Clinical staff in teaching hospitals face competing demands from patient care, patient safety, administration, research and teaching (Prideaux et al. 2000; Lane & Strand 2008). As a result they do not always have sufficient time for teaching and observing students, which in turn can diminish the quality of clinical education (Spencer 2003; Ramani & Leinster 2008).

Most clinical teachers are well prepared for and dedicated to their tasks in patient care, but their training in teaching skills is usually scant despite their general enthusiasm for teaching. The fact that this can be detrimental to their effectiveness as teachers (Spencer 2003; Steinert 2005) has given rise to the growing prominence of faculty development in medical education (Prideaux et al. 2000; Steinert et al. 2006).

Institutions that invest in faculty development need instruments to measure teaching effectiveness, both to identify areas of clinical teaching where training is needed and to measure the return on their investment in faculty development (Litzelman et al. 1998). Even more importantly, measurement of teaching effectiveness can provide feedback to guide, support and motivate clinical teachers to improve their teaching (Copeland & Hewson 2000; Snell et al. 2000; Ramani & Leinster 2008).

Stalmeijer et al. (2008) designed the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers’ supervisory skills during undergraduate clinical rotations in the medical curriculum. The MCTQ is based on the cognitive apprenticeship model and measures students’ evaluations of a teacher as a good role model and supervisor. The cognitive apprenticeship model focuses on the teacher's cognitive processes during performance of complex tasks and comprises six teaching strategies (modelling, coaching, scaffolding, articulation, reflection and exploration) which help to make explicit teachers’ cognitive processes when supervising students (Collins et al. 1991; Stalmeijer et al. 2008). Stalmeijer et al. added learning climate as a seventh domain to this model, because a positive learning climate is considered essential for successful learning in the clinical workplace (Kilminster & Jolly 2000; Stalmeijer et al. 2008).

Stalmeijer et al. (2010) demonstrated the validity of the MCTQ for use in clinical rotations lasting between 4 and 6 weeks. The MCTQ can thus be regarded as a valuable instrument for evaluation, feedback, self-assessment, self-reflection and faculty development of clinical teachers. In this study, we investigated whether the MCTQ also provides reliable and valid information about clinical teachers’ supervisory competence in a veterinary learning environment with relatively short rotations.

When assessing the validity of an instrument like the MCTQ in a different context than the one for which it was developed, it is important to collect validity evidence from a broad range of sources (Snell et al. 2000; Beckman et al. 2005). Beckman et al. investigated the validity and reliability of a range of similar clinical teaching assessment instruments (Beckman et al. 2004, 2005) using the model of standards published by the American Educational Research Association and the American Psychological Association, which distinguishes five sources of validity evidence (American Educational Research Association and American Psychological Association 1999; Downing 2003; Beckman et al. 2005):

  1. Content: the relationship between an instrument's content and the construct it is intended to measure.

  2. Response process: analysing the response process affords insight into factors affecting the data collected with an instrument and can be elicited by asking respondents to articulate their thought processes during completion of an instrument. These factors (for example the wording of items) may be irrelevant to the construct being measured but can be a source of unwanted variance. Other aspects of response process are instrument delivery method, scoring and reporting. Evidence about response process can be used to control or eliminate all possible sources of error associated with the administration of an instrument.

  3. Internal structure and reliability: this addresses whether the data generated by an instrument fit the underlying construct. It concerns the uni-dimensionality of the sub-scales, the reliability of the scores and the statistical and psychometric characteristics of an instrument.

  4. Relations to other variables: the relations between instrument scores and other variables with relevance to the construct being measured also provide evidence of validity.

  5. Consequences: the effect of the use of an instrument on those being evaluated is another source of validity evidence.

In this study, we examined the first four sources of validity evidence to determine the validity and reliability of the MCTQ in the veterinary education setting.

Methods

Context

We conducted our study at the Faculty of Veterinary Medicine, Utrecht University, the Netherlands (FVMU), between November 2008 and September 2009. FVMU offers a 6-year undergraduate curriculum consisting of 4 years of preclinical training and 2 years of clinical clerkships.

The first clerkship year follows the Uniform Clinical Rotation Programme, which involves 30 weeks of rotations in different clinical departments. In the second clerkship year, students undertake rotations in disciplines related to their chosen animal species track.

All clinical departments participated in the study: the Equine Health Department, the Companion Animal Health Department, the Farm Animal Health Department and the Pathology Department. These departments provide all the patient-based clinical training in both the first and the second clinical year. In each department, different disciplines (e.g. surgery or gynaecology) also contribute to the clerkship. Rotations last from 1 day to 6 weeks and approximately 190 staff members have roles in clinical teaching. Individual clinical teachers’ supervisory skills during undergraduate clinical rotations are not evaluated systematically at FVMU.

Content

The MCTQ addresses seven domains (the six teaching strategies of the cognitive apprenticeship model and general learning climate, GLC) in 24 items. Content validity is ensured by the fact that the instrument is based on the cognitive apprenticeship model, which is underpinned by theory of effective apprenticeship learning. The MCTQ's content validity is further supported by the involvement of different groups of stakeholders in its development (Collins et al. 1991; Stalmeijer et al. 2008, 2010) and the fact, noted in the Introduction, that clinical rotations in the veterinary context are quite similar to those in the medical context.

Response process

Response process was investigated in a pilot study among 28 students who had practical experience with clinical teaching. The students participated when they had almost or completely finished one of the following rotations: the first- and second-year Equine Health rotations, the first- and second-year Companion Animal Health rotations, the first- and second-year Farm Animal Health rotations, and the first-year Pathology rotation. The students were asked to fill in the instrument individually, thinking aloud as they did so, and then to discuss the relevance and wording of the MCTQ items and identify factors that affected their answers. These group discussions lasted 90 min. We also asked their opinions on how to administer and distribute the instrument so as to ensure the highest response rate and accuracy.

The students gave consent to audiotape and transcribe the discussions. The transcripts were analysed by the authors TB, AJ and PB, who individually assigned codes to all issues of interest. They met several times to discuss the coding until consensus was reached on the emerging themes. If necessitated by the outcome of the analysis, supplemental items would be added to the questionnaire.

Internal structure

To explore the third source of validity evidence (internal structure and reliability), we distributed a version of the MCTQ, modified to accommodate the results of the analysis of the response process (Appendix 1), to all fifth- and sixth-year students on clinical rotations during the 2008/2009 academic year. Students who commenced their rotations during the study period were also invited to participate. Approximately 350 students were invited to take part in the study.

The MCTQ asks students to rate the performance of one clinical teacher by indicating their agreement, on a 5-point Likert scale (1 = fully disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = fully agree), with 24 statements relating to different teaching strategies and learning climate. Additionally, the students are asked to give an overall judgement of the clinical teacher's performance on a 10-point scale. Two open-ended questions invite suggestions for ways in which teaching might be improved. The students in our study were also asked to answer any supplemental items emerging from the pilot study.

The students could evaluate several clinical teachers, using a separate questionnaire for each teacher. They were instructed to complete the questionnaire as soon as possible after a student–teacher encounter. Several reminders were sent in the form of an electronic newsletter.

Participation was voluntary. Students were asked to fill out their name and student number, but could also choose to complete the questionnaire anonymously. Confidentiality was assured and the questionnaire was coded in such a way that data could not be traced back to individual students. This information was provided to students and teachers via a website and e-mail.

Because we were interested in the relationships between teaching strategies at the level of individual teachers, we analysed the aggregate (mean) scores of teachers for whom we had received four or more completed questionnaires.
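As an illustration of this aggregation step, the following sketch assumes the responses sit in a pandas DataFrame with one row per completed questionnaire and hypothetical column names (`teacher_id` plus one column per item); the actual data layout is not described in the article:

```python
import pandas as pd

# Hypothetical layout: one row per completed questionnaire,
# a teacher_id column, and one column per MCTQ item.
df = pd.read_csv("mctq_responses.csv")  # assumed file name

# Keep only teachers with four or more completed questionnaires.
counts = df.groupby("teacher_id")["teacher_id"].transform("size")
df = df[counts >= 4]

# Aggregate to mean item scores per teacher for the analyses below.
teacher_means = df.groupby("teacher_id").mean(numeric_only=True)
```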

We used AMOS 18.0 to conduct confirmatory factor analysis to investigate the construct validity of the instrument. We started from the cognitive apprenticeship model which underlies the MCTQ (see Appendix 1 for the domains and reworded items; the ‘miscellaneous’ items were not included in this analysis). Statistical analysis was performed using robust maximum likelihood estimation, a method less sensitive to violations of the normality assumption than other estimation methods. Inspection of modification indices revealed several items that had a negative impact on the fit of the model. We used several fit indices and criteria proposed by Byrne (2001) to determine the fit of the model: (1) χ2 divided by the degrees of freedom (CMIN/df) is <2; (2) the goodness-of-fit index (GFI) is >0.80; (3) the comparative fit index (CFI) is >0.90; and (4) the standardized root mean square residual (SRMR) is <0.10 (Byrne 2001).
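The study itself used AMOS; purely as an open-source sketch of the same kind of analysis, a confirmatory factor model could be specified as follows with the semopy package. The factor and item names below are placeholders (the published 15-item assignment is given in Table 1), semopy's default ML objective is used rather than AMOS's robust estimator, and the exact names of the reported fit statistics may differ by semopy version:

```python
import pandas as pd
import semopy

# Placeholder five-factor measurement model in lavaan-style syntax;
# the actual item-to-factor assignment is given in Table 1.
MODEL_DESC = """
GLC            =~ glc1 + glc2 + glc3 + glc4
Modelling      =~ mod1 + mod2 + mod3
CoachScaffold  =~ cs1 + cs2 + cs3
Articulation   =~ art1 + art2
Exploration    =~ exp1 + exp2 + exp3
"""

data = pd.read_csv("teacher_mean_scores.csv")  # assumed: one row per teacher
model = semopy.Model(MODEL_DESC)
model.fit(data)

# calc_stats reports chi-square, degrees of freedom, CFI and GFI,
# from which CMIN/df can be derived; column labels may vary by version.
stats = semopy.calc_stats(model)
print(stats.T)
print("CMIN/df:", stats.loc["Value", "chi2"] / stats.loc["Value", "DoF"])
```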

Cronbach's alphas were computed for each scale (subset of items associated with a factor) to determine internal consistency. A coefficient of 0.70 or higher was considered acceptable.
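Cronbach's alpha has a closed form, α = k/(k−1) × (1 − Σσ²_item / σ²_total), so it is easy to compute directly; a small self-contained implementation, with an illustrative call on hypothetical GLC item columns, might look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of total score
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Illustrative call; column names are placeholders, not the published items.
# alpha_glc = cronbach_alpha(teacher_means[["glc1", "glc2", "glc3", "glc4"]].to_numpy())
```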

We wanted to determine the number of student ratings required to provide individual teachers with reliable feedback on the factors resulting from the confirmatory factor analysis, as well as on the overall judgement. Therefore, the inter-rater reliability of the teacher evaluation was assessed in a generalizability study for each factor score (mean of the subset of items associated with a factor) of the rating form, and for the overall judgement score. Variance component analysis was then applied to the sample of rating forms, decomposing the total variance of each score into the variance component of interest (teacher variance) and error variance components (rater variance, and rater–teacher interaction variance) (Crossley et al. 2002). As indices of inter-rater reliability, we estimated the generalizability coefficient (G-coefficient) and the standard error of measurement (SEM) for each factor as well as for the overall judgement, using the urGENOVA software (Brennan 2001). A G-coefficient of 0.60 was considered acceptable but indicative of a need for improvement, and 0.80 was considered very reasonable (Gronlund 1988). The SEM was used to estimate confidence intervals for the individual factor scores. An SEM of ≤0.26 was considered adequate at the 95% confidence level, taking into account that we decided to accept a maximum ‘noise level’ of 1.0 on a 5-point scale (1.96 × 0.26 × 2 ≈ 1). An SEM of ≤0.52 (0.26 × 2) was considered satisfactory for the overall judgement because of the use of a 10-point scale.
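The study used the urGENOVA software; to make the logic concrete, here is a rough one-facet sketch in which raters are treated as nested within teachers (in such a nested design the rater variance and the rater–teacher interaction cannot be separated, so they are pooled into a single error component). All names are illustrative:

```python
import numpy as np
import pandas as pd

def g_study(df: pd.DataFrame, score: str, n_raters: int):
    """One-facet nested G-study (raters : teachers) via ANOVA estimators.

    Returns the G-coefficient and SEM for a mean of `n_raters` ratings.
    Rater and rater-teacher interaction variance are confounded here.
    """
    groups = df.groupby("teacher_id")[score]
    n_i = groups.size().to_numpy(dtype=float)        # ratings per teacher
    N, t = n_i.sum(), len(n_i)
    grand_mean = df[score].mean()

    ss_within = ((df[score] - groups.transform("mean")) ** 2).sum()
    ss_between = (n_i * (groups.mean().to_numpy() - grand_mean) ** 2).sum()

    ms_within = ss_within / (N - t)                  # pooled error variance
    ms_between = ss_between / (t - 1)
    n_tilde = (N - (n_i ** 2).sum() / N) / (t - 1)   # weight for unbalanced data

    var_error = ms_within
    var_teacher = max((ms_between - ms_within) / n_tilde, 0.0)

    g = var_teacher / (var_teacher + var_error / n_raters)
    sem = float(np.sqrt(var_error / n_raters))
    return g, sem
```

With these quantities, the noise-level check in the text is simple arithmetic: a 95% confidence interval spans mean ± 1.96 × SEM, so an SEM of 0.26 gives a full interval width of about 1.96 × 0.26 × 2 ≈ 1.0 on the 5-point scale.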

Relations to other variables

We used SPSS 17.0 to perform bivariate correlation analyses on the factor scores, the overall judgement score and any supplemental items emerging from the pilot study, in order to explore the relations of these items with the MCTQ outcomes. Bivariate correlations between the factor scores and the overall judgement score were used to assess criterion validity.
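The study used SPSS; the equivalent computation is a plain Pearson correlation, sketched here with scipy on the hypothetical `teacher_means` table from the aggregation sketch above (all column names are placeholders):

```python
from scipy.stats import pearsonr

# Placeholder column names for the five factor scores and the two criteria.
factors = ["glc", "modelling", "coach_scaffold", "articulation", "exploration"]
for criterion in ["overall_judgement", "supervision_time"]:
    for f in factors:
        r, p = pearsonr(teacher_means[f], teacher_means[criterion])
        print(f"{f} vs {criterion}: r = {r:.2f}, p = {p:.3g}")
```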

Results

Response process

All students agreed that the MCTQ is a useful and relevant instrument for providing feedback to teachers. The pilot study led to the rewording of the questionnaire instructions and reformulation of a few items to reduce ambiguity. These changes were mostly prompted by the veterinary context. More than 50% of respondents, for example, interpreted a safe learning environment as an environment where there was little chance of being bitten by an animal.

In the pilot study, supervision time was consistently mentioned by the students as a factor that affected their ratings. This comment reflected the local context at FVMU, where students are assigned to departments rather than to individual supervisors, and consequently are supervised by one teacher for only a few hours. While this enables students to observe how different faculty members work within a discipline, the downside is that students do not have the same supervisor for a prolonged period of time. Students said that brief exposure to a supervisor made them hesitant to give an evaluation and prevented judgement of certain aspects of the questionnaire. This result of the pilot study led to the addition of a single supplemental item on supervision time, to enable evaluation of the effect of supervision time on the response process. The item read: ‘How many hours of actual contact time did you have with your supervisor?’

According to the majority of the students in the pilot study, filling in paper forms at the clinic – the customary method of collecting student feedback at FVMU – was not sufficiently trusted to safeguard anonymity. They suggested that a web-based questionnaire which they could access at home would be the safest and most accurate method of collecting evaluation data.

The instrument was therefore turned into a web-based questionnaire and placed on a custom-built website with easy access for the participating students. When we were gathering data for the analysis of internal structure, we sent an e-mail to all fifth- and sixth-year students participating in clinical rotations during the 2008/2009 academic year (N = 350), inviting them to fill in the questionnaire on the website.

Internal structure

Of the 1223 completed questionnaires we received, 33 were excluded because of incomplete data. The remaining 1190 questionnaires evaluated 163 different teachers. Fifty-three teachers were excluded from the analysis because we received fewer than four evaluations for them, leaving 1086 questionnaires evaluating 110 teachers. Because not all students filled out their name and student number, we cannot provide an exact response rate, but at least 208 different students filled out their student number, giving a response rate of at least 59%.

All item scores ranged between 1 and 5. ‘The clinical teacher allowed me to perform tasks independently’ was the highest scoring item (M = 4.11, SD = 0.57). The lowest scoring item was ‘The clinical teacher stimulated me to formulate my own goals’ (M = 2.90, SD = 0.49).

The confirmatory factor analysis showed that the original model – based on all domains of the cognitive apprenticeship model (see Appendix 1) – did not fit the data. After eliminating and reorganizing the items in accordance with the modification indices, the results of the confirmatory factor analysis revealed a reasonably good fit for a five-factor model comprising 15 items (Table 1).

Table 1.  Mean scores (5-point scale), standard deviations and Cronbach's alpha (N = 110) for the factors and the individual items of the MCTQ

The five-factor model (Table 2) yielded the following results: (1) χ2 divided by degrees of freedom (CMIN/df) is 2.594; (2) the GFI is 0.82; (3) the CFI is 0.93; and (4) the SRMR is 0.049. Only the first statistic did not meet the criterion for a reasonable fit. Models with 1, 2, 3 or 4 factors did not show a better fit (Table 2).

Table 2.  Fit indices and criteria for models with 1, 2, 3, 4 and 5 factors

Cronbach's alpha reliability coefficients for factors 1–5 were all above 0.70, at 0.96, 0.86, 0.87, 0.88 and 0.90, respectively (Table 1). The mean factor scores ranged from 3.31 for Exploration (factor 5, scale 1–5) to 3.99 for GLC (factor 1, scale 1–5), with corresponding standard deviations of 0.45 and 0.62 (N = 110).

We examined the inter-rater reliability of the five factor scores and the overall judgement in a generalizability analysis. Table 3 presents the G-coefficients and SEM per factor as a function of the number of student ratings per clinical teacher. Six to eight student ratings were required to obtain reliable outcomes for the overall judgement of one teacher; between 10 and 12 were needed for reliable outcomes for each factor. The bivariate correlation coefficients for the five mean factor scores ranged from r = 0.47 to r = 0.71 (all p < 0.001).

Table 3.  The generalizability coefficient (G-coefficient) and SEM as a function of the number of student ratings (N) for the five factors (scale 1–5) and the overall judgement (scale 1–10)

Relations to other variables

Table 4 shows the means and standard deviations for overall judgement and supervision time and their correlations with the five factors. Mean supervision time was 12.87 h (SD = 12.81).

Table 4.  Mean scores and standard deviations (SD) for overall judgement (1–10) and supervision time and the correlations with the five factors

The correlations between the five factors and overall judgement are large and significant (r = 0.84, 0.79, 0.70, 0.58 and 0.62; all p < 0.001). Although the correlations of the five factor scores with supervision time are statistically significant, they are small (r = 0.10, 0.15, 0.20, 0.12 and 0.13; all p < 0.001).

Discussion

The purpose of this study was to validate the MCTQ as an instrument for measuring clinical teacher effectiveness in a veterinary education context with relatively brief clinical rotations. We sought evidence concerning the first four sources of evidence from the model of standards proposed by Beckman et al. (2005).

The results of the pilot study among senior students to test the applicability of the MCTQ in a veterinary context show that students consider all domains and items relevant and useful for providing feedback to clinical teachers. Studies of the validity of the MCTQ conducted by Stalmeijer among educationalists, doctors and students in a medical education setting (Stalmeijer et al. 2008, 2009) showed that the cognitive apprenticeship model offers a useful framework for the MCTQ. We therefore conclude that we have met the requirements of the first source of validity evidence: content.

Our analysis of the response process in the pilot study revealed one theme that, according to the students, affected the rating process (Beckman et al. 2005): supervision time. Students felt that the short periods of supervision by one teacher, which are characteristic of the FVMU programme, prevented them from giving well-considered ratings for some aspects of the MCTQ. This is in line with the finding of Stalmeijer et al. (2009) that scaffolding, reflection and exploration can be properly evaluated only in longer rotations with one clinical supervisor.

The validity evidence related to response process in this study led to the addition of one item on supervision time to enable measurement of its effects. We also improved the response process, at the suggestion of students, by replacing paper forms filled in at the clinic with Internet-based rating forms.

As for the third source of validity evidence, internal structure (Beckman et al. 2005), the results of the confirmatory factor analysis show that a five-factor model comprising 15 items fits the data reasonably well, with 3 of 4 statistical criteria being met. Only the χ2 divided by degrees of freedom (CMIN/df = 2.594) did not meet the criterion for a reasonable fit (<2). We do not consider failure to meet this criterion problematic: Byrne's (2001) fit indices are quite strict, and other authors propose CMIN/df < 5 as the criterion for an adequate fit (McDonald & Ho 2002). The alpha-coefficients demonstrate acceptable levels.

The five-factor structure we found is in line with findings reported by Stalmeijer et al. (2010) in a medical context. In our study, coaching and scaffolding aggregated into one factor, as did exploration and reflection. This does not surprise us greatly, since coaching and scaffolding are both concerned with task performance: giving students the opportunity to perform a task, helping students when they are performing a task and giving students feedback during and following task performance. Exploration is about setting and pursuing learning goals, and setting (learning) goals is an integral part of reflection (Korthagen & Vasalos 2005).

The findings of the generalizability study indicate that at least 10–12 student responses are needed per teacher to obtain reliable data at the factor level. Fewer responses (6–8) suffice for reliable overall judgement. In most clinical settings these numbers are quite feasible.

Based on the results, we conclude that the MCTQ has a valid internal structure for evaluating clinical teachers based on student ratings.

As for the relations between the MCTQ scores and other variables with relevance to teacher effectiveness (Beckman et al. 2005), we found large correlations between overall judgement and each of the factor scores. This adds support to the validity of the instrument.

The absence of strong correlations between supervision time and factor scores suggests that, in contrast to the opinions voiced by the students in the pilot study, supervision time does not have an important effect on the outcomes of the MCTQ. We hypothesize that this finding can be attributed to the fact that the specific behaviours reflected by the items in the MCTQ can be displayed by teachers even when their time for supervision is limited.

Based on the above findings, we conclude that this study has met the requirements with regard to four sources of evidence for the validity of an instrument like the MCTQ. We therefore conclude that the 5-domain, 15-item MCTQ, as shown in Table 1, appears to be an instrument with a strong theoretical foundation, which is valid and reliable for the evaluation of clinical teachers in a context with short clinical rotations.

There are some limitations to the study. For one thing, all measurements are based on student perceptions, which is a potential source of resistance from teachers when the MCTQ is used for faculty development purposes (Stalmeijer et al. 2009). We therefore recommend using other evaluations in addition to student perceptions, such as teachers’ self-evaluations and observations. Further research should compare these evaluations with those of students.

Another drawback is that the analysis made no distinction between ratings from fifth- and sixth-year students. This is a common limitation in clinical teaching assessments (Beckman & Mandrekar 2005). However, the applicability of the cognitive apprenticeship model may vary with the stage of education: students who have just started clinical rotations, for example, require more observation and support than more advanced students. This should be a topic of future research.

With regard to further research, we would also recommend studies to address the fifth source of validity evidence put forward by the American Educational Research Association and the American Psychological Association (1999). Such studies should investigate which factors influence the outcomes of the MCTQ and the implications for practice. Study questions to be addressed are: Do clinical teachers accept their scores on the MCTQ? How can they be stimulated to reflect on their scores? Does their behaviour really improve as a result of receiving feedback? In conclusion, our study shows that the MCTQ provides teaching staff with a good basis for reflection on their clinical teaching effectiveness. Further studies should focus on the implementation of the MCTQ in practice and its effectiveness in improving teaching.

Acknowledgement

The authors would like to thank Mereke Gorsira for editing the final version.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

References

  • American Educational Research Association and American Psychological Association. Standards for educational and psychological testing. American Educational Research Association, Washington, DC 1999
  • Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessments of clinical teaching? J Gen Intern Med 2005; 20(12):1159–1164
  • Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? A review of the published instruments. J Gen Intern Med 2004; 19(9):971–977
  • Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: Construct validity of a multi-dimensional scale. Med Educ 2005; 39(12):1221–1229
  • Brennan RL. Generalizability theory. Springer, New York 2001
  • Byrne BM. Structural equation modeling with AMOS: Basic concepts, applications, and programming. Lawrence Erlbaum, Mahwah, NJ 2001
  • Collins A, Brown JS, Holum A. Cognitive apprenticeship: Making thinking visible. Am Educ 1991; 15(3):6–11
  • Copeland HL, Hewson MG. Developing and testing an instrument to measure the effectiveness of clinical teaching in an academic medical center. Acad Med 2000; 75(2):161–166
  • Crossley J, Davies H, Humphris G, Jolly B. Generalisability: A key to unlock professional assessment. Med Educ 2002; 36(10):972–978
  • Daelmans HEM, Hoogenboom RJI, Donker AJM, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Effectiveness of clinical rotations as a learning environment for achieving competences. Med Teach 2004; 26(4):305–312
  • Dolmans DHJM, Wolfhagen HAP, Gerver WJ, De Grave W, Scherpbier AJJA. Providing physicians with feedback on how they supervise students during patient contacts. Med Teach 2004; 26(5):409–414
  • Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ 2003; 37(9):830–836
  • Gronlund NE. How to construct achievement tests. Prentice-Hall, Englewood Cliffs, NJ 1988
  • Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: A literature review. Med Educ 2000; 34(10):827–840
  • Korthagen FAJ, Vasalos A. Levels in reflection: Core reflection as a means to enhance professional growth. Teach Teach Theory Pract 2005; 11(1):47–71
  • Lane IF, Strand E. Clinical veterinary education: Insights from faculty and strategies for professional development in clinical teaching. J Vet Med Educ 2008; 35(3):397–406
  • Leinster S. Learning in the clinical environment. Med Teach 2009; 31(2):79–81
  • Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med 1998; 73(6):688–695
  • McDonald RP, Ho MH. Principles and practice in reporting structural equation analyses. Psychol Methods 2002; 7(1):64–82
  • Prideaux D, Alexander H, Bower A, Dacre J, Haist S, Jolly B, Norcini J, Roberts T, Rothman A, Rowe R, et al. Clinical teaching: Maintaining an educational role for doctors in the new health care environment. Med Educ 2000; 34(10):820–826
  • Ramani S, Leinster S. AMEE guide no. 34: Teaching in the clinical environment. Med Teach 2008; 30(4):347–364
  • Smith BP, Walsh DA. Teaching the art of clinical practice: The veterinary medical teaching hospital, private practice, and other externships. J Vet Med Educ 2003; 30(3):203–206
  • Snell L, Tallett S, Haist S, Hays R, Norcini J, Prince K, Rothman A, Rowe R. A review of the evaluation of clinical teaching: New perspectives and challenges. Med Educ 2000; 34(10):862–870
  • Spencer J. ABC of learning and teaching in medicine: Learning and teaching in the clinical environment. BMJ 2003; 326(7389):591–594
  • Stalmeijer RE, Dolmans DHJM, Wolfhagen HAP, Muijtjens AMM, Scherpbier AJJA. The development of an instrument for evaluating clinical teachers: Involving stakeholders to determine content validity. Med Teach 2008; 30(8):272–277
  • Stalmeijer RE, Dolmans DHJM, Wolfhagen HAP, Scherpbier AJJA. Cognitive apprenticeship in clinical practice: Can it stimulate learning in the opinion of students? Adv Health Sci Educ 2009; 14(4):535–546
  • Stalmeijer RE, Dolmans DHJM, Wolfhagen HAP, Muijtjens AMM, Scherpbier AJJA. The Maastricht Clinical Teaching Questionnaire (MCTQ) as a valid and reliable instrument for the evaluation of clinical teachers. Acad Med 2010; 85(11):1732–1738
  • Steinert Y. Staff development for clinical teachers. Clin Teach 2005; 2(2):104–110
  • Steinert Y, Mann K, Centeno A, Dolmans DHJM, Spencer J, Gelula M, Prideaux D. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach 2006; 28(6):497–526
  • van der Hem-Stokroos HH, van der Vleuten CPM, Daelmans HEM, Haarman HJTM, Scherpbier AJJA. Reliability of the clinical teaching effectiveness instrument. Med Educ 2005; 39(9):904–910

Appendix 1

The modified Maastricht Clinical Teacher Questionnaire

Modelling

The clinical teacher …

  1. demonstrated how different skills should be performed.

  2. explained, while performing a task, which aspects were important and why.

  3. created sufficient opportunities for me to observe him or her.

  4. was a role model for me.

Coaching

The clinical teacher …

  1. observed me while I was performing a task.

  2. provided me with constructive and concrete feedback during or following direct observation.

  3. gave me a better insight into aspects of my performance that needed improvement.

Scaffolding

The clinical teacher …

  1. adjusted his/her teaching activities to my level of experience and competence.

  2. allowed me to perform tasks independently.

  3. was supportive when I experienced difficulties with a task.

  4. gradually decreased the amount of guidance in order to bolster my independence.

Articulation

The clinical teacher …

  1. asked me to explain my reasoning and actions.

  2. alerted me to gaps in my knowledge and skills.

  3. asked questions to increase my knowledge and understanding.

  4. stimulated me to ask questions to increase my knowledge and understanding.

Reflection

The clinical teacher …

  1. stimulated me to think about my own strengths and weaknesses.

  2. stimulated me to think about how to improve my own strengths and weaknesses.

Exploration

The clinical teacher …

  1. stimulated me to formulate my own goals.

  2. stimulated me to achieve my own goals.

  3. challenged me to explore new tasks and possibilities.

General Learning Climate

The clinical teacher …

  1. established an environment where I felt free to ask questions or make comments.

  2. took enough time to supervise me.

  3. showed an interest in me as a student.

  4. treated me with respect.

Miscellaneous

  1. Give an overall mark (1–10) for this doctor as a clinical teacher.

  2. What are the strengths of this clinical teacher? (open-ended question)

  3. Which aspects of the performance of this clinical teacher can be improved? (open-ended question)

  4. How many hours of actual contact time did you have with your supervisor?
