From good to excellent: Improving clinical departments’ learning climate in residency training

Abstract

Introduction: The improvement of clinical departments’ learning climate is central to achieving high-quality residency training and patient care. However, improving the learning climate can be challenging given its complexity as a multi-dimensional construct. Distinct representations of the dimensions might create different learning climate groups across departments and may require varying efforts to achieve improvement. Therefore, this study investigated: (1) whether distinct learning climate groups could be identified and (2) whether contextual factors could explain variation in departments’ learning climate performance.

Methods: This study included departments that used the Dutch Residency Educational Climate Test (D-RECT) through a web-based system in 2014–2015. Latent profile analysis was used to identify learning climate groups and multilevel modeling to predict clinical departments’ learning climate performance.

Results: The study included 1730 resident evaluations. Departments were classified into one of four learning climate groups: substandard, adequate, good and excellent performers. The teaching status of the hospital, departments’ average teaching performance and the percentage of time spent on educational activities by faculty predicted departments’ learning climate performance.

Discussion: Clinical departments can be successfully classified into informative learning climate groups. Given this informative grouping, with its potential for cross-group learning, departments could embark on targeted performance improvement.

Introduction

Popular media suggest that an unsafe learning climate in postgraduate medical education (PGME) leads to dysfunctional behavior of staff and might contribute to adverse events in patient care (Knoop Citation2016). Similarly, international literature stresses that clinical departments’ learning climate is crucial to the quality of PGME (Genn Citation2001a, Citation2001b; World Federation for Medical Education Citation2003; Weiss et al. Citation2013), using key components such as faculty’s teaching performance and residents’ career satisfaction, learning styles and knowledge (Daugherty et al. Citation1998; Delva et al. Citation2004; Cross et al. Citation2006; Shimizu et al. Citation2013; Lombarts et al. Citation2014) to demonstrate this claim. Research into residents’ preparedness for practice and professional development (Cross et al. Citation2006; Brown et al. Citation2007; Dyrbye and Shanafelt Citation2016) suggests the significance of a supportive learning climate for the patient care provided by residents. It follows that high-quality learning climates are of paramount importance to both residents and patients. However, the learning climate is a complex and multidimensional construct (Roff et al. Citation1997; Clapham et al. Citation2007; Boor et al. Citation2011; Riquelme et al. Citation2013; Colbert-Getz et al. Citation2014) that reflects the formal and informal context of learning (Roff and McAleer Citation2001; Schonrock-Adema et al. Citation2012), including residents’ perceptions of departments’ common policies, practices and procedures (Lombarts et al. Citation2014) as well as the overall atmosphere (Genn Citation2001b; Schonrock-Adema et al. Citation2012). When learning climate dimensions are distinctly represented across clinical departments, there might be multiple learning climate groups that require differing efforts to achieve learning climate improvement. Insights into such grouping might contribute to a better understanding of the phenomenon of learning climate and support improvement of departments’ learning climate performance.

Worldwide attention to learning climates in PGME has led to the development of evaluation tools aiming to chart the strengths and weaknesses of departments’ learning climate (Soemantri et al. Citation2010; Colbert-Getz et al. Citation2014). One of the best-researched tools to date is the Dutch Residency Educational Climate Test (D-RECT) (Boor et al. Citation2011; Silkens et al. Citation2016), developed in the Netherlands and rapidly spreading to other nations including Ireland, Australia, Germany, Pakistan and Colombia (Pinnock et al. Citation2013; Bennett et al. Citation2014; Iblher et al. Citation2015; Amin et al. Citation2016). Residents use the instrument to provide valid and reliable feedback (Boor et al. Citation2011; Silkens et al. Citation2016) on nine dimensions of learning climate (Silkens et al. Citation2016). Research into the use of the D-RECT to predict outcomes has mainly focused on the overall construct of learning climate rather than on individual domains. One such example is a study showing that a more supportive learning climate, as indicated by higher overall D-RECT scores, was associated with better outcomes such as a better quality of life for residents (van Vendeloo et al. Citation2014). An exception is a study that investigated the association between learning climates (the overall construct and individual domains) and the teaching performance of individual faculty (Lombarts et al. Citation2014). That study reported positive associations between overall learning climate and teaching performance, and more specifically positive relations between three domains (namely coaching and assessment, work being adapted to residents’ competence, and formal education) and faculty’s teaching performance (Lombarts et al. Citation2014).

Our aim is to enable improvement of departments’ learning climate performance. Predicting departments’ learning climate performance should yield insights essential for improvement. For example, Piek et al. (Citation2015) found that the accreditation status of gynecology residency training programs and the years of training completed by residents were important predictors of learning climate. Other studies compared gender groups, specialties and types of hospitals for differences in their overall learning climate (D-RECT) scores (van Vendeloo et al. Citation2014; Amin et al. Citation2016). We still need comprehensive studies of factors that predict learning climate performance and give departments crucial insights into how to enhance their learning climate. Therefore, this study aimed to shed light on which predictors are relevant for departments’ learning climate performance by investigating: (1) whether distinct learning climate groups could be identified across clinical departments that provide residency training, and (2) the extent to which contextual factors could explain variation in the learning climate performance of clinical departments.

Methods

Setting

In the Netherlands, PGME is arranged in eight geographical regions, each consisting of a coordinating academic hospital and multiple regional affiliated teaching hospitals that provide PGME. These regional hospitals are either top clinical hospitals (highly specialized referral centers that provide top clinical care, scientific research and PGME) or general hospitals (providing patient care and PGME). Residents are trained at a department by a team of clinical teachers that shares joint responsibility for PGME. Each training program is headed by a program director appointed, and periodically assessed, by the Royal Dutch Medical Association.

Dutch PGME requirements dictate that departments are responsible for guaranteeing a supportive learning climate for residents (Directive of the Central College of Medical Specialists Citation2009). As a result, many departments throughout the Netherlands choose to evaluate their learning climate by administering the D-RECT. The aim of the D-RECT is to provide insight into departments’ performance concerning their learning climate and to initiate quality improvement activities.

Study population and data collection

We included 254 departments that requested and performed a D-RECT evaluation via a web-based system between January 2014 and December 2015. When a department used the D-RECT more than once during this period (151 departments), only the most recent data were included in the study. An exception was made if the most recent evaluation yielded insufficient data (<3 completed evaluations per department); in that case, the most recent evaluation period with sufficient data was included (14 departments). Upon a D-RECT request, residents trained at the department were invited by e-mail to fill out the questionnaire during a pre-determined period (commonly a one-month timeframe). The number of residents in training at a department varied. Residents received up to three reminders through automatically generated e-mails.

Data on hospital- and departmental-level contextual factors were gathered through the same web-based system that administered the D-RECT. Besides learning climate evaluations through the D-RECT, the system offers departments the opportunity to evaluate the individual teaching qualities of faculty using the well-researched System for Evaluation of Teaching Qualities (SETQ) (Lombarts et al. Citation2009; Boerebach, Lombarts, et al. Citation2014). The SETQ was developed to evaluate, provide feedback on and improve the teaching effectiveness of faculty in residency programs. For all 254 departments that used the D-RECT in 2014 or 2015, we derived demographic contextual data from SETQ evaluations performed between January 2011 and December 2016 (the SETQ data were not necessarily collected simultaneously with the D-RECT data, as we assumed these demographic variables to be rather stable over the studied timespan). For departments that used the SETQ prior to or concurrently with the D-RECT (because we assumed causality in relation to D-RECT performance), we computed a SETQ performance score indicating the average teaching performance of faculty at the department (Boerebach, Arah, et al. Citation2014).

Measurements

The first version of the D-RECT was developed by Boor et al. (Citation2011). The instrument was updated and extensively validated for the Dutch context (Silkens et al. Citation2016). The most recent version of the D-RECT consists of 35 items grouped into nine domains: educational atmosphere, teamwork, role of specialty tutor, coaching and assessment, formal education, resident peer collaboration, work is adapted to residents’ competence, accessibility of supervisors and patient sign-out (Silkens et al. Citation2016). All items were rated on a 5-point Likert scale (1 = totally disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = totally agree), with an additional “not applicable” option.

Concerning contextual factors at the hospital level, we registered the type of hospital (academic/general/top clinical) for every participating hospital in the study. At the department level, we charted the type of specialty (surgical/nonsurgical) and how often the department had participated in the D-RECT before. From the participants in the D-RECT, we estimated the following departmental ratios: junior doctors (years 1, 2 and 3) versus senior doctors (years 4, 5 and 6), residents versus nonresidents (doctors not in training or fellows) and sex (male/female). From the participants in the SETQ, we estimated the following departmental contextual data: faculty’s sex ratio (male/female), their average age and the percentage of time faculty spent on teaching and other education-related activities; we also computed a SETQ performance score indicating the average teaching performance of faculty at the department. An overview of the predictors used and data collection details is provided in Table A1 (Supplementary Appendix).

Table 1. Characteristics of the study population.

Ethics

The institutional ethical review board of the Academic Medical Center of the University of Amsterdam provided a waiver declaring the Medical Research Involving Human Subjects Act (WMO) did not apply to the current study. Filling out the D-RECT and SETQ was voluntary and anonymous for all participants.

Analysis

The study population was described using descriptive statistics and frequencies. A cutoff of 50% missing data was determined, and resident evaluations exceeding this cutoff (missing 17 or more questions) were excluded. Since learning climate is a department-level construct, we conducted the analyses on data aggregated to department-level means. To attain reliable department scores for the overall learning climate, departments with fewer than three resident evaluations (Silkens et al. Citation2016) were excluded from further analysis. Subscale scores were computed by averaging the items within subscales. Departments with missing data for one or more subscales were excluded.
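To make these steps concrete, the following minimal sketch (in R, the software also used for the profile analysis) illustrates the exclusion, aggregation and scoring logic. The data frame evals, its columns (item_1 ... item_35, department_id) and the two-domain subscale key are hypothetical stand-ins; the actual D-RECT item-to-domain mapping is described in Silkens et al. (Citation2016).

library(dplyr)

item_cols <- paste0("item_", 1:35)

dept_scores <- evals %>%
  # exclude evaluations missing 17 or more of the 35 items (>50% missing);
  # "not applicable" responses are assumed to be recoded to NA upstream
  filter(rowSums(is.na(across(all_of(item_cols)))) < 17) %>%
  group_by(department_id) %>%
  # keep departments with at least three completed evaluations
  filter(n() >= 3) %>%
  # aggregate to department-level means, the unit of analysis
  summarise(across(all_of(item_cols), ~ mean(.x, na.rm = TRUE)))

# subscale scores: average of the items within each subscale
# (illustrative two-domain key; the real instrument has nine domains)
key <- list(resident_peer_collaboration = c("item_1", "item_2"),
            patient_signout             = c("item_34", "item_35"))
for (s in names(key)) {
  dept_scores[[s]] <- rowMeans(dept_scores[key[[s]]])
}
# exclude departments with missing data for one or more subscales
dept_scores <- dept_scores[complete.cases(dept_scores[names(key)]), ]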

To address our first research aim, we used a person-centered approach. Traditional research into learning climate instruments for PGME has mainly focused on the soundness of items used and uncovering domains in instruments by using variable-centered approaches such as factor analysis and structural equation models (Schonrock-Adema et al. Citation2012). In contrast, a person-centered approach, such as cluster analysis or finite mixture analysis, can consider heterogeneity of departments and as such provide insight into the grouping of departments (Muthén and Muthén Citation2000; Oberski Citation2016). Therefore, the D-RECT data were analyzed using a Gaussian finite mixture model (latent profile analysis (LPA)) fitted based on an Expectation Maximization (EM) algorithm (Oberski Citation2016). LPA has similarities with probabilistic cluster analysis but is known to have several advantages over the more traditional nonhierarchical approaches like K-means (Vermunt and Magidson Citation2004; Oberski Citation2016). The purpose of LPA is to group data based on the assumption that there are unobserved latent variables that can be derived from the observed data. LPA is suited for continuous observed values and applicable to the current study since data is continuous after aggregation to department level (Vermunt and Magidson Citation2004). Model fit was assessed using the Bayesian information criteria (BIC) for which a smaller value indicates a better model fit. Models with components varying in shape, volume and axis alignment were estimated (Fraley and Raftery Citation2002). It was determined that classes needed to contain at least 5% of the sample in order to prevent extracting too many classes (Hipp and Bauer Citation2006).
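As an illustration, a minimal sketch of this analysis with the mclust package (the software cited below for the latent profile analysis) could look as follows; dept_subscales is a hypothetical data frame holding the nine aggregated D-RECT subscale scores per department.

library(mclust)

# fit Gaussian finite mixture models with 1 to 9 latent groups via EM;
# mclust estimates parameterizations whose components vary in volume,
# shape and orientation (VEI, the model retained in this study, is one)
fit <- Mclust(dept_subscales, G = 1:9)

summary(fit)                # selected parameterization and number of groups
plot(fit, what = "BIC")     # compare model fit across group counts
fit$parameters$pro          # estimated probabilities of group membership
table(fit$classification)   # departments assigned to each group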

To address our second research aim, we built a random intercept multilevel model. By using a random intercept model, the analysis accounted for the hierarchical clustering of departments within hospitals. Supplementary Table A1 provides an overview of the predictors used in the model. Resulting associations were reported through regression coefficients (b). The latent profile analysis was performed using the mclust package in R statistical software version 3.3.1 (Scrucca et al. Citation2016). Multilevel modeling was performed using SPSS version 23 (IBM Corp. Citation2015).
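Although the multilevel model itself was fitted in SPSS, an equivalent random intercept specification can be sketched in R with the lme4 package. The sketch below is illustrative only; the variable names (drect_mean, hospital_type, setq_score, pct_edu_time, hospital_id, dept_data) are hypothetical stand-ins for the predictors listed in Supplementary Table A1.

library(lme4)

# the random intercept per hospital accounts for the clustering of
# departments within hospitals; the fixed effects are the contextual
# predictors of the departments' mean D-RECT score
model <- lmer(
  drect_mean ~ hospital_type + setq_score + pct_edu_time +
    (1 | hospital_id),
  data = dept_data
)

summary(model)                    # regression coefficients (b) per predictor
confint(model, method = "Wald")   # approximate 95% confidence intervals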

Results

The results represent 211 clinical departments in 36 hospitals, based on responses from 1730 residents who completed the D-RECT questionnaire between January 2014 and December 2015. Of these 211 departments, 179 used the SETQ between January 2011 and December 2016 (for which departmental demographic data could be derived from the SETQ) and 175 used the SETQ prior to or concurrently with the D-RECT (for which a departmental SETQ performance score could be derived from the SETQ). Detailed information about the study sample is presented in Table 1.

Latent profile analysis

A four-group model fit the data best (BIC = −1486.7, VEI model) (Supplementary Figure A1). The estimated probabilities of membership in groups one, two, three and four were 0.20, 0.43, 0.32 and 0.05, respectively. Group one (classified as substandard performers) contained 19% of the sample (N = 41) and included the departments scoring lowest on all D-RECT subscales. Group two (adequate performers) contained 43% of the sample (N = 90) and included departments performing average on all D-RECT subscales. Group three (good performers) contained 33% of the sample (N = 69) and included departments scoring above average on all D-RECT subscales. Group four (excellent performers) contained 5% of the sample (N = 11) and included departments scoring very high on all D-RECT subscales. The sample means per group are summarized in Table 2.

Table 2. Summary statistics for the subscales and mean learning climate (D-RECT) score per group.

Multilevel modeling

Multilevel modeling showed that the teaching status of the hospital (for academic vs. top clinical hospitals: b = −0.21; 95% CI: −0.34 to −0.09; p < 0.01), the departmental SETQ performance score (b = 0.70; 95% CI: 0.51 to 0.90; p < 0.01) and the percentage of time spent on educational activities by faculty (b = −0.01; 95% CI: −0.02 to −0.00; p < 0.05) were significant predictors of the total mean D-RECT score. Detailed information on all predictors is provided in Table 3.

Table 3. Regression coefficients (95% confidence intervals) and p values derived from multilevel models of the predictors of learning climate scores.

Discussion

In answer to the first research question (Can distinct learning climate groups be identified across clinical departments providing residency training?), our study identified four learning climate groups across departments: substandard, adequate, good and excellent performers. For the second research question (To what extent can contextual factors explain variation in the learning climate performance of clinical departments?), multilevel modeling identified departments’ overall teaching performance as the strongest predictor of a supportive learning climate. Departments with strong overall faculty teaching performance, departments from top clinical hospitals and departments whose faculty spent less time on educational activities were more likely to score high on the D-RECT.

Explanation of findings

The identified learning climate groups seem to distinguish themselves by their scores on the total set of D-RECT domains, rather than by a varying representation of individual domains. This implies that a department scoring low on one domain is likely to score low on all D-RECT domains, and will thus classify as a substandard performer. Similarly, a high score on one domain most likely means overall high learning climate scores and an excellent performer classification. Classification of departments into four performance categories seems above all an intuitive categorization of performance data. In research fields such as economics and econometrics, statistical methods like quantile regression are applied to performance data to obtain information on specific data points in the distribution, instead of relying solely on the conditional mean (Buchinsky Citation1994; Eide and Showalter Citation1998); the underlying idea is that individual quantiles might behave differently from each other and provide different information in analyses. The benefit of LPA is that, instead of using an outcome variable to classify data into previously designed categories, we could use the whole range of D-RECT domain scores to determine the statistically underlying structure of the data. Therefore, knowing that departments naturally split into four distinct and coherent groups is meaningful for analyses linking learning climate to educational and patient outcomes.

Although domain scores varied between learning climate groups, the pattern of domain scores was similar across groups. Therefore, one could claim that overall learning climate performance determines a department’s grouping. The value of D-RECT domain scores therefore lies in the response pattern within groups, rather than between groups. Knowledge about which domains scored well or poorly provides information on which aspects do and do not need immediate attention when aiming to improve departments’ overall learning climate performance (Bennett et al. Citation2014; Amin et al. Citation2016). When reviewing the groups’ domain scores (Table 2), it becomes evident that some aspects consistently score higher within groups (e.g. resident peer collaboration and accessibility of supervisors) and others lower (e.g. coaching and assessment and patient sign-out). In other words, whether an excellent or a substandard performer, departments should work on coaching residents, providing useful feedback and creating opportunities to learn from patient sign-out. Which domains score well or poorly is likely to depend on the context of PGME and to vary between health care systems.

How departments should manage the uncontrollable

Mixed modeling shows that departments in top clinical hospitals perform 0.21 units higher on overall learning climate than departments in academic hospitals when all other parameters in the model are kept constant. Although the type of hospital is beyond departments’ control, they should understand why residents rate one hospital type better than another and subsequently manage conditions in such a way that residents’ needs are likely to be met. We theorize, building on previous research (Schultz et al. Citation2004), that variation between hospitals is related to the complexity of patient care, patient mix, residents’ responsibilities, opportunities to practice routine and non-routine procedures, and supervising styles. We suggest academic departments consider these environmental limitations for residents and manage them wherever possible.

What is within departments’ control

Mixed modeling identified two factors predicting departments’ learning climate performance that were within departments’ control: the teaching performance of faculty and the percentage of time spent on educational activities by faculty. First, raising the overall teaching performance of faculty (SETQ performance score) by one unit while keeping all other variables constant increased a department’s D-RECT performance by 0.70. This implies that departments wishing to improve their learning climate should facilitate the enhancement of faculty’s teaching performance. Teaching faculty should stimulate discussion and learning, treat residents well, communicate and provide feedback (Lombarts et al. Citation2009). Second, our model showed that when the percentage of time spent on educational activities by faculty at the department increased by one unit while all other variables were kept constant, the department’s learning climate score deteriorated by 0.01. We acknowledge the small effect and, as such, consider this result less relevant. To explain the direction of the effect, we theorize that for educational activities without immediate supervisor-resident contact (e.g. preparing lectures and classes), the learning climate might not immediately benefit from more time spent on these activities. In the end, the learning climate in PGME is inherently linked to workplace-based learning; residents might perceive a department’s learning climate as more supportive when more time is spent on supervisor-resident contact, but not when more time is spent on other educational activities.
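To illustrate the size of these effects with hypothetical numbers: under the fitted model, a department raising its average SETQ performance score from, say, 3.8 to 4.0 would be predicted to gain 0.2 × 0.70 = 0.14 points on its overall D-RECT score, whereas increasing the percentage of faculty time spent on educational activities by five points would be predicted to lower it by only 5 × 0.01 = 0.05 points, all else being equal.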

Strengths and limitations of the study

This study adds to the existing literature by deepening our understanding of the phenomenon of learning climate in PGME through novel analysis techniques. We used the D-RECT, a well-researched and robust tool for measuring departments’ learning climate in the studied context (the Netherlands). The instrument is currently being researched worldwide, resulting in its availability in multiple languages (including Spanish and English) and for use in a wide variety of health care systems. Previous generalizability studies of the D-RECT in the Dutch context imply that each domain has its own cutoff value for establishing a reliable subscale score (Silkens et al. Citation2016). It was not feasible to apply the most restrictive cutoff in the current study; however, to support our trust in the reliability of the data, we applied a cutoff providing a reliable overall learning climate score (a minimum of three resident evaluations per department) and additionally combined data from a large number of departments to construct the D-RECT subscale scores. We consider the multicenter approach (covering many departments) a strength of this study, supporting the generalizability of the results to teaching hospitals and residency programs in the Netherlands. The web-based system, though covering multiple regions in the Netherlands, unfortunately does not include D-RECT data from all eight regions, since in some parts of the Netherlands the D-RECT is administered on paper.

Implications for educational and clinical practice

Concerning educational practice, our results imply that clinical departments should shift from quality assurance towards a continuous improvement mindset for learning climates. To support continuous improvement of learning climates, Dutch accreditation standards require every teaching hospital to have a hospital-wide education committee (Directive of the Central College of Medical Specialists Citation2009). A core task of these committees is to safeguard learning climates in their teaching hospital. Using our results, committees can now group the clinical departments providing PGME in their teaching hospital. The outcomes of such a grouping could provide insight into the performance of clinical departments across the teaching hospital and support subsequent interventions undertaken by the committees (e.g. pairing substandard and excellent performers to facilitate improvement through the exchange of (best) practices). Given that supportive learning climates are considered crucial to the quality of PGME (Genn Citation2001a, Citation2001b), we recommend that learning climate performance indicators be included in accreditation standards worldwide.

As previous research suggests an association between supportive learning climates and the professional development of residents as caregivers (Cross et al. Citation2006; Brown et al. Citation2007; Dyrbye and Shanafelt Citation2016), the excellent performing learning climate group identified in our results would be most desirable for clinical practice. Although it seems intuitive to advise clinical practice to invest effort in achieving excellent learning climates, a study performed by Smirnova et al. (Citation2017) warns that too much attention to learning climates may lead to impaired patient outcomes. As such, we advise clinical practice to strike a balance between a learning climate that guarantees optimal learning opportunities for residents and one that is safe for the patients treated by residents.

Implications for research

Future research should examine the meaning of learning climate groups for the performance of departments on various educational and patient care outcomes, especially since we theorize that each group might perform differently on such outcomes. Furthermore, research might look into the development of learning climate groups over time and, especially, whether learning climate interventions lead to improved learning climate performance of clinical departments.

Conclusions

This study identified four natural groups of clinical departments based on their learning climate performance, ranging from substandard to excellent performers. Departments from top clinical hospitals, departments with high overall faculty teaching performance and departments whose faculty spent slightly less time on educational activities were more likely to perform better on their learning climate. We aim to contribute to learning climate improvement in PGME and to stimulate departments to not just be good, but to be excellent.

Ethical approval

The institutional ethical review board of the Academic Medical Center of the University of Amsterdam confirmed that the Medical Research Involving Human Subjects Act (WMO) did not apply to the current study on the 23rd of December 2015 (W15_360 # 15.0407) and, as such, provided a waiver for the current study.

Glossary

Learning climate: “The learning climate in postgraduate medical education consists of shared resident perceptions of the formal and informal aspects of education, including perceptions of the overall atmosphere as well as policies, practices, and procedures within the teaching hospital”.

Notes on contributors

Milou Silkens, MSc, is a PhD student at the University of Amsterdam, the Netherlands. Her PhD project focuses on measuring and explaining learning climates in postgraduate medical education as well as on the relation between these learning climates and patient safety.

Saad Chahine, PhD, works as an assistant professor at Western University in Canada. He is an expert on theories of assessment and learning in medical education. With his elaborate understanding of statistical validity, he aims to contribute to robust statistical interpretations of learning in medical education.

Kiki Lombarts, PhD, is professor of Professional Performance at the University of Amsterdam, the Netherlands. Through her research, she contributes to the understanding of the functioning of medical specialists in their roles as teachers and doctors.

Onyebuchi Arah, PhD, is a professor at the Department of Epidemiology at the University of California, Los Angeles, the United States. His main interests are in epidemiologic methodology, biostatistics, and causal and bias analysis in relation to public health. Furthermore, he takes an interest in the professional performance of medical specialists.

Supplemental material

Supplementary_files.doc

Acknowledgements

The authors would like to thank the respondents who freed up their time to participate in this study. Furthermore, the authors are grateful for the critical feedback provided by the Professional Performance research group.

Disclosure statement

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

Additional information

Funding

This project was financed by a grant provided by the Dutch Ministry of Health, Welfare and Sport. The ministry had no role in the study design, data collection, analysis, interpretation and reporting of data.

References

  • Amin MS, Iqbal U, Shukr I. 2016. Residency educational climate in a Pakistani postgraduate medical institute. PAFMJ. 66:606–612.
  • Bennett D, Dornan T, Bergin C, Horgan M. 2014. Postgraduate training in Ireland: expectations and experience. Ir J Med Sci. 183:611–620.
  • Boerebach BC, Arah OA, Heineman MJ, Busch OR, Lombarts KM. 2014. The impact of resident- and self-evaluations on surgeon's subsequent teaching performance. World J Surg. 38:2761–2769.
  • Boerebach BC, Lombarts KM, Arah OA. 2014. Confirmatory Factor Analysis of the System for Evaluation of Teaching Qualities (SETQ) in Graduate Medical Training. Eval Health Prof. 39:21–32.
  • Boor K, van der Vleuten C, Teunissen P, Scherpbier A, Scheele F. 2011. Development and analysis of D-RECT, an instrument measuring residents' learning climate. Med Teach. 33:820–827.
  • Brown J, Chapman T, Graham D. 2007. Becoming a new doctor: a learning or survival exercise? Med Educ. 41:653–660.
  • Buchinsky M. 1994. Changes in the US wage structure 1963–1987: Application of quantile regression. Econometrica. 62:405–458.
  • Clapham M, Wall D, Batchelor A. 2007. Educational environment in intensive care medicine—use of Postgraduate Hospital Educational Environment Measure (PHEEM). Med Teach. 29:e184–e191.
  • Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. 2014. Assessing medical students' and residents' perceptions of the learning environment: exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 89:1687–1693.
  • Cross V, Hicks C, Parle J, Field S. 2006. Perceptions of the learning environment in higher specialist training of doctors: Implications for recruitment and retention. Med Educ. 40:121–128.
  • Daugherty SR, Baldwin DC Jr, Rowley BD. 1998. Learning, satisfaction, and mistreatment during medical internship: a national survey of working conditions. JAMA. 279:1194–1199.
  • Delva M, Kirby J, Schultz K, Godwin M. 2004. Assessing the relationship of learning approaches to workplace climate in clerkship and residency. Acad Med. 79:1120–1126.
  • Directive of the Central College of Medical Specialists. 2009. Utrecht: Royal Dutch Medical Association.
  • Dyrbye L, Shanafelt T. 2016. A narrative review on burnout experienced by medical students and residents. Med Educ. 50:132–149.
  • Eide E, Showalter MH. 1998. The effect of school quality on student performance: A quantile regression approach. Econ Lett. 58:345–350.
  • Fraley C, Raftery AE. 2002. Model-based clustering, discriminant analysis, and density estimation. J Am Stat Assoc. 97:611–631.
  • Genn JM. 2001a. AMEE Medical Education Guide No. 23 (Part 1): Curriculum, environment, climate, quality and change in medical education-a unifying perspective. Med Teach. 23:337–344.
  • Genn JM. 2001b. AMEE Medical Education Guide No. 23 (Part 2): Curriculum, environment, climate, quality and change in medical education - a unifying perspective. Med Teach. 23:445–454.
  • Hipp JR, Bauer DJ. 2006. Local solutions in the estimation of growth mixture models. Psychol Methods. 11:36.
  • Iblher P, Zupanic M, Ostermann T. 2015. The Questionnaire D-RECT German: adaptation and test-theoretical properties of an instrument for evaluation of the learning climate in medical specialist training. GMS Z Med Ausbild. 32. https://doi.org/10.3205/zma000997.
  • IBM Corp. 2015. IBM SPSS Statistics for Windows (Version 23.0). Armonk, NY: IBM Corp.
  • Knoop B. 2016. Opleidingsklimaat KNO-afdeling AMC 'niet veilig' [Learning climate at the AMC ENT department 'not safe']. https://www.medischcontact.nl/nieuws/laatste-nieuws/artikel/opleidingsklimaat-kno-afdeling-amc-niet-veilig.htm
  • Lombarts KM, Bucx MJ, Arah OA. 2009. Development of a system for the evaluation of the teaching qualities of anesthesiology faculty. Anesthesiology. 111:709–716.
  • Lombarts KM, Heineman MJ, Scherpbier AJ, Arah OA. 2014. Effect of the learning climate of residency programs on faculty's teaching performance as evaluated by residents. PLoS One. 9:e86512.
  • Muthén B, Muthén LK. 2000. Integrating person‐centered and variable‐centered analyses: Growth mixture modeling with latent trajectory classes. Alcohol Clin Exp Res. 24:882–891.
  • Oberski D. 2016. Mixture models: latent profile and latent class analysis. In: Modern statistical methods for HCI. Switzerland: Springer.
  • Piek J, Bossart M, Boor K, Halaska M, Haidopoulos D, Zapardiel I, Grabowski J, Kesic V, Cibula D, Colombo N, et al. 2015. The work place educational climate in gynecological oncology fellowships across Europe: the impact of accreditation. Int J Gynecol Cancer. 25:180–190.
  • Pinnock R, Welch P, Taylor-Evans H, Quirk F. 2013. Using the DRECT to assess the intern learning environment in Australia. Med Teach. 35:699.
  • Riquelme A, Padilla O, Herrera C, Olivos T, Roman JA, Sarfatis A, Solís N, Pizarro M, Torres P, Roff S. 2013. Development of ACLEEM questionnaire, an instrument measuring residents' educational environment in postgraduate ambulatory setting. Med Teach. 35:e861–e866.
  • Roff S, McAleer S. 2001. What is educational climate? Med Teach. 23:333–334.
  • Roff S, McAleer S, Harden RM, Al-Qahtani M, Ahmed AU, Deza H, Groenen G, Primparyon P. 1997. Development and validation of the Dundee ready education environment measure (DREEM). Med Teach. 19:295–299.
  • Schonrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. 2012. Key elements in assessing the educational environment: where is the theory? Adv in Health Sci Educ. 17:727–742.
  • Schultz KW, Kirby J, Delva D, Godwin M, Verma S, Birthwhistle R, Knapper C, Seguin R. 2004. Medical students' and residents' preferred site characteristics and preceptor behaviours for learning in the ambulatory setting: a cross-sectional survey. BMC Med Educ. 4:12.
  • Scrucca L, Fop M, Murphy TB, Raftery AE. 2016. mclust 5: clustering, classification and density estimation using Gaussian finite mixture models. R J. 8:289–317.
  • Shimizu T, Tsugawa Y, Tanoue Y, Konishi R, Nishizaki Y, Kishimoto M, Shiojiri T, Tokuda Y. 2013. The hospital educational environment and performance of residents in the General Medicine In-Training Examination: a multicenter study in Japan. Int J Gen Med. 6:637–640.
  • Silkens ME, Smirnova A, Stalmeijer RE, Arah OA, Scherpbier AJ, Van Der Vleuten CP, Lombarts KM. 2016. Revisiting the D-RECT tool: validation of an instrument measuring residents' learning climate perceptions. Med Teach. 38:476–481.
  • Smirnova A, Ravelli ACJ, Stalmeijer RE, Arah OA, Heineman MJ, van der Vleuten CPM, van der Post JAM, Lombarts KMJMH. 2017. The association between learning climate and adverse obstetrical outcomes in 16 Nontertiary Obstetrics-Gynecology Departments in the Netherlands. Acad Med. https://doi.org/10.1097/ACM.0000000000001964.
  • Soemantri D, Herrera C, Riquelme A. 2010. Measuring the educational environment in health professions studies: a systematic review. Med Teach. 32:947–952.
  • van Vendeloo SN, Brand PL, Verheyen CC. 2014. Burnout and quality of life among orthopaedic trainees in a modern educational programme: importance of the learning climate. Bone Joint J. 96-B:1133–1138.
  • Vermunt JK, Magidson J. 2004. Latent class analysis. In: The Sage encyclopedia of social sciences research methods. Thousand Oaks: Sage Publications.
  • Weiss KB, Bagian JP, Nasca TJ. 2013. The clinical learning environment: the foundation of graduate medical education. JAMA. 309:1687–1688.
  • [WFME] World Federation for Medical Education. 2003. Postgraduate Medical Education: WFME Global Standards for Quality Improvement. Copenhagen: WFME.