Journal of Education for Teaching
International research and pedagogy
Volume 49, 2023 - Issue 3
Research Article

Assessing social configurations in teacher learning groups: the ‘Dimensions of Social Learning Questionnaire’

Pages 521-533 | Received 18 Dec 2020, Accepted 22 Mar 2022, Published online: 03 Nov 2022

ABSTRACT

Increasingly, teacher learning groups (TLGs) are being deployed as a way to realise high-quality educational designs. There is a need for monitoring and for insights into the development of TLGs. Therefore, the present study develops the ‘Dimensions of Social Learning Questionnaire’ (DSL-Q), which can be used to map the social configuration of TLGs. This article describes the validation of the questionnaire for student teachers, teacher educators and in-service teachers (n = 488) by means of successive exploratory and confirmatory factor analyses, resulting in an instrument with good psychometric properties. The final version of the questionnaire contains 13 items, divided into three factors: practice integration, long-term orientation and goals, and shared identity and equal relationships. The instrument is suitable for quantitative research to gain more insight into the conditional and outcome variables of social learning. Future research can incorporate theoretical dimensions regarding the value of the social learning process and its outcomes as well as insights concerning socially shared regulation of learning.

1. Introduction

In contemporary education, we expect teachers to anticipate educational change, preferably with colleagues (Hargreaves et al. 2013). Therefore, teacher collaboration is increasingly being initiated by schools to provide opportunities for teachers to play an active role in constructing knowledge together (Van Schaik et al. 2019). Within the general concept of teacher collaboration, the literature uses a variety of terms to represent forms of collective learning, such as professional learning communities, communities of practice, teams and groups (Vangrieken et al. 2015). Although the terms represent different foci, they all refer to teacher collaboration as a social learning process that takes place within a social configuration that is embedded in schools and linked with school-wide capacity for improvement (Sleegers 2013; Vangrieken et al. 2015; Van Schaik et al. 2019).

One of the manifestations of collective learning is teacher learning groups (TLGs), which are built upon notions of network learning (i.e. development of connections), community learning (i.e. identity development) and team learning (i.e. formal learning structures) as the main perspectives (Van Schaik et al. 2019; Vrieling-Teunter, Van den Beemt, and de Laat 2016). The term TLG is defined broadly because collective learning often demonstrates mixed phenomena of social learning in groups of learners, which are found to influence a group’s ability to put theory into practice (Doornbos and De Laat 2012). TLGs can be defined as social configurations where teachers undertake learning activities in collaboration with colleagues, resulting in a change in cognition and/or behaviour at the individual and/or group level (Doppenberg, Bakx, and Den Brok 2012). TLGs are generally heterogeneous in composition (Vrieling-Teunter, Hebing, and Vermeulen 2021), involving student teachers, teacher educators and in-service teachers. Working in TLGs intertwines school development and teacher professionalisation, which explains their increasing popularity (Van Schaik et al. 2019). Teachers in TLGs use different sources to collectively construct knowledge, such as colleagues’ practical knowledge, educational research literature, external expert knowledge, and collaborative research activities (Van Schaik et al. 2019). In this way, teachers deal with educational change and problems that are too complex to solve individually (Hargreaves et al. 2013).

Co-construction of knowledge in TLGs is not easy and requires facilitation (De Jong, Meirink, and Admiraal 2021). The presence of support from a TLG facilitator in activities, such as planning the meetings and reminding teachers of previous agreements, is important for collective learning (Hanraets, Hulsebosch, and De Laat 2011; De Jong, Meirink, and Admiraal 2021). Therefore, Vrieling-Teunter, Van den Beemt, and de Laat (2016) developed the ‘Dimensions of Social Learning (DSL) Framework’ (Table 1) to help TLGs assess their social configuration and improve learning processes within TLGs. In line with the rationale of TLGs, the framework is informed by a literature review that applied notions of network, community and team perspectives, all highlighting that learning is a process of constructing meaning in interaction with the social context (Wenger, Trayner, and De Laat 2011). For TLGs, it can be useful to view their social configurations from an overarching social learning point of view. For instance, a shared vision is important for TLGs because it gives direction to activities and binds members in the purposefulness of their actions (Wenger, Trayner, and De Laat 2011). The DSL Framework includes four dimensions, each consisting of two to four indicators (Table 1), showing the extent to which TLGs present specific attitudes and social behaviour. In this way, the framework gives a ‘snapshot’ of TLGs’ social learning at a certain point in time. Below, the framework is briefly outlined; for more details, see the review by Vrieling-Teunter, Van den Beemt, and de Laat (2016).

Table 1. DSL Framework Including the Original 30 Items.

The first dimension, Practice, indicates the necessity for a relationship between the knowledge created and shared in the group and teachers’ day-to-day activities. This dimension distinguishes two indicators: (1a) ‘integrated or non-integrated activities’, representing the extent to which TLG knowledge and activities are integrated in teachers’ practice; and (1b) ‘temporary or permanent activities’, which describes the social learning attitude as reflected in the duration or sustainability of learning activities. The second dimension, Domain and value creation, refers to the sharing of experience and expertise among group members. Its key indicators are: (2a) ‘sharing or broadening/deepening knowledge and skills’, reflecting the extent to which the group develops collective knowledge and skills through dialogue; and (2b) ‘individual or collective value creation’, which describes the developed level of shared value, such as group ownership, mutual inspiration, or positive interdependence. When group members work interdependently with a shared purpose and responsibility for collective success, the group can demonstrate a Collective identity. This third dimension is characterised by (3a) ‘shared or unshared identity’, which is related to group history and social and cultural background; (3b) ‘weak or strong ties’, which reflects the sense and intensity of general contact among group members; and (3c) the extent to which group members perceive each other as ‘task executors or knowledge workers’. The final dimension, Organisation, exhibits how the group is organised and can be indicated by (4a) the extent to which the group shows ‘directed or self-organised activities’; (4b) the focus on ‘local or global activities’; (4c) the presence of ‘hierarchic or equal relationships’; and (4d) the extent to which the group shows a shared interactional repertoire, reflected in ‘shared or non-shared interactional norms’.
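
For readers who want to work with the framework programmatically (for instance, to group questionnaire items per dimension), the structure described above can be captured in a simple mapping. The following Python sketch only encodes the dimension and indicator labels given in the text; it is an illustration, not part of the original study:

```python
# Structure of the DSL Framework as described above: four dimensions,
# each with two to four indicators (labels taken from Table 1 / the text).
DSL_FRAMEWORK = {
    "Practice": [
        "integrated or non-integrated activities",                 # 1a
        "temporary or permanent activities",                       # 1b
    ],
    "Domain and value creation": [
        "sharing or broadening/deepening knowledge and skills",    # 2a
        "individual or collective value creation",                 # 2b
    ],
    "Collective identity": [
        "shared or unshared identity",                             # 3a
        "weak or strong ties",                                     # 3b
        "task executors or knowledge workers",                     # 3c
    ],
    "Organisation": [
        "directed or self-organised activities",                   # 4a
        "local or global activities",                              # 4b
        "hierarchic or equal relationships",                       # 4c
        "shared or non-shared interactional norms",                # 4d
    ],
}

# Example: count indicators per dimension (2 + 2 + 3 + 4 = 11 indicators).
for dimension, indicators in DSL_FRAMEWORK.items():
    print(f"{dimension}: {len(indicators)} indicators")
```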

Earlier findings of De Laat, Vrieling-Teunter, and Van den Beemt (2016) within Dutch pre-service teacher education showed that the framework suited the analysis of TLGs’ social configurations. Based on the framework, TLGs were able to form a picture of the social configuration of the group. This led to greater awareness of the social learning processes and supported TLGs in their social learning activities. Although the original framework indicators were visible and recognisable, they were formulated in a manner that appeared too abstract for independent use by TLG facilitators. Therefore, De Laat, Vrieling-Teunter, and Van den Beemt (2016) developed the framework into a biographical interview format. Biographical interviews (Bornat 2008) are interviews during which respondents are asked to look back over a period of time and narrate experiences related to a certain topic, in this case the TLG.

The study of De Laat, Vrieling-Teunter, and Van den Beemt (2016) also showed that the participating teachers were looking for an instrument that could be used to follow the group process and development over a longer period of time, with repeated measurements that are less time-consuming than the interviews. Therefore, the indicators of the DSL Framework were translated into 30 descriptions that were applied in practice during several TLG projects (Table 1). Validation of these 30 items is necessary for broader use than the current self-diagnostic purpose in TLGs. Thus, there is a need for an instrument that draws a picture of the social configuration of TLGs and is efficient enough to enhance teachers’ response rate. The instrument must be applicable as a survey instrument both when the social configuration is a dependent variable, to gain more general insight into conditions that are related to social configurations, and when it is an independent variable, to gain more clarity about how social configurations affect the outcomes of TLGs. In this way, a broader understanding is created of how TLGs function for the various participants. Overall, the aim of this study is to validate the quantitative instrument that assesses social configurations in TLGs, as a next stage in the development of a reliable instrument.

2. Method

2.1. Samples

The DSL Questionnaire (DSL-Q) was distributed in TLG meetings within four pre-service teacher education colleges in different regions of the Netherlands. These institutes formed TLGs on various educational topics (e.g. giftedness), all consisting of student teachers of different years (from now on, students), teacher educators, and in-service teachers (from now on, teachers). The data were gathered anonymously in compliance with ethical norms; all participants provided active informed consent and participated voluntarily. The participants could withdraw from the study at any time. Responses were gathered from academic year 2014/15 to 2019/20 (219 students, 269 teachers). The DSL-Q aimed to measure social configurations of TLGs via 30 questions (Table 1). Participants indicate the applicability of their experiences in a TLG (e.g. ‘discussion about group products’) on a four-point scale (1 ‘not applicable’ to 4 ‘entirely applicable’). The items are hypothesised to measure 11 distinct indicators, which in turn measure four higher-order dimensions (Table 1).

Although it is important for students to join TLGs as equal participants to prepare them for their profession, they still have to develop the skills and competencies needed to function in these constellations of working and learning together (Elster et al. 2014). Thus, students have yet to develop the necessary skills for social learning and may have a different role than the more experienced participants. For example, students as novices may not see themselves as being equal to the teachers, which may influence their perceptions of the social configuration (Table 1, indicator 4c, ‘the presence of hierarchic or equal relationships’). Therefore, it is interesting to consider possible differences in perception between students and teachers in the analyses of the DSL-Q. Gender might also be of importance in exploring validity in educational settings (Martin 2007). Because gender data were missing for a large group of respondents, a three-group distinction was made in the analyses (male, female, missing). Another interesting angle is potential differences between institutes because of different cultures and experiences; therefore, institute was included as a grouping variable in the factor analyses. The TLG level could also be interesting as a grouping variable but was not included in the analyses because it did not meet sample size requirements for separate analyses (Maas and Hox 2005).

2.2. Data analysis

2.2.1. Confirmatory factor analysis for initial model

To examine how well the acquired data fit the initial model (Table 1), confirmatory factor analysis (CFA) was conducted with Jamovi (v1.2) and Mplus (v8.5), using the maximum likelihood estimation method. The sample consisted of 488 respondents: 219 students (41 male, 99 female, 79 no gender data) and 269 teachers (48 male, 221 female). Model fit was determined by the χ² statistic, normed chi-square (NC), Root Mean Square Error of Approximation (RMSEA), Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and Standardized Root Mean Square Residual (SRMR). CFA models were compared by computing Δχ², where p ≤ .05 indicates a significant improvement of the model.
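
The analyses reported here were run in Jamovi and Mplus. Purely as an illustration of what such a maximum likelihood CFA looks like in code, the sketch below uses the Python package semopy; the factor and item names are placeholders (not the actual DSL-Q items), and `data` is assumed to be a pandas DataFrame with one column per item:

```python
# Illustrative CFA sketch with semopy (not the software used in the study).
# Factor and item names are placeholders for the hypothesised structure.
import pandas as pd
import semopy

MODEL_DESC = """
practice_integration =~ item1 + item2 + item3 + item4 + item5
long_term_orientation =~ item6 + item7 + item8
shared_identity =~ item9 + item10 + item11 + item12 + item13
"""

def fit_cfa(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the measurement model by maximum likelihood and return fit statistics."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                    # maximum likelihood estimation by default
    return semopy.calc_stats(model)    # includes chi2, DoF, CFI, TLI, RMSEA

# stats = fit_cfa(data)
# print(stats[["chi2", "DoF", "CFI", "TLI", "RMSEA"]])
```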

The χ² test should be non-significant, with smaller values indicating better model fit (Gatignon 2010). As χ² is highly dependent on sample size, NC (χ²/df) is also considered, with values of 3.00 or less indicating a better fit (Hair et al. 2010). For RMSEA, values indicate a good (<0.05), fair (0.05–0.08), mediocre (0.08–0.10), or poor (>0.10) fit (Hu and Bentler 1999). For CFI and TLI, values above 0.90 and 0.95 are considered acceptable and excellent fits, respectively (McDonald and Marsh 1990). For SRMR, a value below 0.08 is considered a good fit (Hu and Bentler 1999). As the research was carried out in a real-life educational context, subsamples were based on cohorts within academic years, which did not allow for (e.g.) random sampling.
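
These cut-off values amount to a simple decision rule; the helper below merely encodes the thresholds cited above (Hu and Bentler 1999; Hair et al. 2010; McDonald and Marsh 1990) so that reported fit indices can be labelled consistently:

```python
# Classify model fit according to the cut-off values cited in the text.
def classify_fit(chi2: float, df: int, rmsea: float, cfi: float,
                 tli: float, srmr: float) -> dict:
    nc = chi2 / df  # normed chi-square

    if rmsea < 0.05:
        rmsea_label = "good"
    elif rmsea <= 0.08:
        rmsea_label = "fair"
    elif rmsea <= 0.10:
        rmsea_label = "mediocre"
    else:
        rmsea_label = "poor"

    def comparative(value: float) -> str:
        # CFI/TLI: above 0.90 acceptable, above 0.95 excellent.
        if value >= 0.95:
            return "excellent"
        if value >= 0.90:
            return "acceptable"
        return "not acceptable"

    return {
        "NC": ("acceptable" if nc <= 3.00 else "too high", round(nc, 2)),
        "RMSEA": (rmsea_label, rmsea),
        "CFI": (comparative(cfi), cfi),
        "TLI": (comparative(tli), tli),
        "SRMR": ("good" if srmr < 0.08 else "poor", srmr),
    }

# Example with the final CFA model reported in the Results section:
# classify_fit(chi2=151, df=62, rmsea=0.08, cfi=0.96, tli=0.95, srmr=0.05)
```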

2.2.2. Subsample 1 exploratory factor analysis

Subsample 1 consisted of 258 respondents: 49 students (11 male, 37 female, 1 missing gender information) and 209 teachers (35 male, 174 female). This sample was used to perform exploratory factor analysis (EFA) with the maximum likelihood extraction method. Parallel analysis was used to determine the number of factors (Zwick and Velicer 1986), and oblimin rotation was applied (Costello and Osborne 2005).
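
As an illustration of this step only (the study itself used Jamovi and Mplus), a maximum likelihood EFA with oblimin rotation and a basic Horn-style parallel analysis can be scripted in Python with the factor_analyzer package; `data` is assumed to be a pandas DataFrame with one column per item:

```python
# Sketch of the EFA step: parallel analysis to choose the number of factors,
# then a maximum likelihood EFA with oblimin rotation.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def parallel_analysis(data: pd.DataFrame, n_iter: int = 100, seed: int = 0) -> int:
    """Retain factors whose observed eigenvalue exceeds the mean eigenvalue
    obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]
    random_eigs = np.zeros((n_iter, k))
    for i in range(n_iter):
        random_data = rng.normal(size=(n, k))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(random_data.T)))[::-1]
    return int(np.sum(observed > random_eigs.mean(axis=0)))

def run_efa(data: pd.DataFrame) -> pd.DataFrame:
    """Run ML EFA with oblimin rotation and return the factor loadings."""
    n_factors = parallel_analysis(data)
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin", method="ml")
    fa.fit(data)
    return pd.DataFrame(fa.loadings_, index=data.columns)
```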

2.2.3. Subsample 2 confirmatory factor analysis

Subsample 2 consisted of 230 respondents: 170 students (30 male, 62 female, 78 missing gender information) and 60 teachers (13 male, 47 female). This sample was used for CFA, to confirm the validity of the factor structure found in the EFA.

3. Results

3.1. Initial model fit

The 30 items from the initial model showed a poor fit, with χ² = 2057 (df = 399, p < .001), NC = 5.16, RMSEA = 0.09, CFI = 0.73, TLI = 0.71, and SRMR = 0.11. Therefore, subsamples were created to perform exploratory factor analyses (EFA) to explore factor structures, and CFA to confirm the structures found in the EFA.

3.2. Subsample 1: EFA

The initial 30 items were subjected to EFA, which led to a three-factor solution (Table 2). The Kaiser–Meyer–Olkin (KMO) overall measure of sampling adequacy was .866, with no items below .70 (‘great’; Hutcheson and Sofroniou 1999). Correlations between items were sufficiently large for factor analysis, as indicated by Bartlett’s test of sphericity (χ² = 2610, df = 435, p < .001). Items with factor loadings ≤0.40 were considered to load insufficiently (Costello and Osborne 2005) and were removed one by one until all remaining loadings were ≥0.40. The remaining 22 items were again subjected to EFA, leading to a three-factor solution; the KMO overall measure was .868, with no items ≤.70, and sphericity was again sufficient (Bartlett’s test: χ² = 2182, df = 231, p < .001). This three-factor solution accounted for 50.8% of the variance (Table 2). From a content perspective, three principles guided item selection: (1) the selected items had to do justice to the three factors; (2) the instrument, intended for use in practice, had to be quick to administer and therefore not contain any overlapping items; and (3) the items had to be evenly distributed over the factors.
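
The adequacy checks and the one-by-one removal of weakly loading items described above can be expressed as a small loop; the sketch below assumes the factor_analyzer package and placeholder item columns, and is illustrative rather than the procedure’s original implementation:

```python
# Sketch: sampling adequacy checks and iterative removal of items whose
# highest factor loading falls below the 0.40 threshold.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

def prune_items(data: pd.DataFrame, n_factors: int = 3,
                threshold: float = 0.40) -> pd.DataFrame:
    """Drop, one at a time, the item with the weakest maximum loading until
    every remaining item loads at least `threshold` on some factor."""
    chi2, p_value = calculate_bartlett_sphericity(data)
    _, kmo_total = calculate_kmo(data)
    print(f"Bartlett chi2 = {chi2:.0f} (p = {p_value:.3f}); overall KMO = {kmo_total:.3f}")

    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin", method="ml")
        fa.fit(data)
        max_loading = np.abs(fa.loadings_).max(axis=1)
        if max_loading.min() >= threshold:
            return data  # all remaining items load sufficiently
        weakest = data.columns[int(np.argmin(max_loading))]
        data = data.drop(columns=[weakest])
```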

Table 2. Subsample 1: EFA Highest Factor Loadings.

Factor 1 consisted of 12 items, from the indicators ‘shared or unshared identity’ (items 13 and 14), ‘weak or strong ties’ (item 17), ‘task executors or knowledge workers’ (items 19 and 20), ‘directed or self-organised activities’ (items 21 and 22), ‘hierarchic or equal relationships’ (items 26 and 27), and all three items from the indicator ‘shared or non-shared interactional norms’ (items 28 to 30). Factor 2 consisted of seven items, four of which (items 1–4) originally related to the indicator ‘integrated or non-integrated activities’; the other three items (9–11) were derived from the indicator ‘sharing or broadening/deepening knowledge and skills’. Factor 3 consisted of three items (5–7), all related to the indicator ‘temporary or permanent activities’. Correlations between the factors indicate that they measure similar but separate dimensions of social learning (Table 3).

Table 3. Subsample 1: Final EFA Correlation Matrix.

3.3. Subsample 2: CFA

3.3.1. Model fit

Table 4 provides fit indices for the CFA models. The final EFA model was first tested on subsample 2. Model fit for CFA model 1 was not acceptable; four items (10, 19, 22, 28) were therefore removed based on item content and modification indices, leading to a second CFA model. This model showed a better fit, and Δχ² showed a significant improvement, although the fit measures were still not acceptable. After a third iteration based on item content and modification indices, five more items were removed (9, 13, 20, 21, 30), leading to CFA model 3. This model showed a good to excellent fit: χ² = 151, p < .001, df = 62, RMSEA = 0.08 (fair fit), CFI = 0.96 (excellent fit), TLI = 0.95 (excellent fit), SRMR = 0.05 (good fit).

Table 4. Subsample 2 CFA Fit Indices for the DSL-Q.

3.3.2. Gender, role, and institution

Examining psychometric differences between genders, roles, and institutions can provide evidence of the DSL-Q’s stable validity independent of specific contexts. This was done by running CFA model 3 with gender, role, and institution as grouping variables (Table 5). For gender and role, the results show good to excellent model fit for all subgroups, indicating that CFA model 3 is stable regardless of gender or role in a TLG. Sample sizes for institution were insufficient to examine model fit for all four institutions separately. Therefore, institutions 2, 3, and 4 were combined and compared with institution 1. Model fit showed an overall acceptable fit for institution.

Table 5. CFA Model 3 Fit Indices, Grouping Variables are Gender, Role, and Institution.

Possible measurement (in)variance was also examined for gender, role, and institution (Table 6). A stepwise approach (configural, then metric, then scalar model) was used in which equality constraints were added (first equal factor loadings, then equal factor loadings and equal intercepts; cf. Van de Schoot, Lugtig, and Hox 2012). For role and gender, significant differences were found between the metric and the configural model, as well as between the scalar and the metric model, indicating that full measurement invariance did not hold for these groups. For institution, the metric model did not differ significantly from the configural model, and the scalar model did not differ significantly from the metric model. This means there is good measurement invariance between the different institutions.
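
Each of these stepwise comparisons is a chi-square difference test between nested models; the small helper below shows the computation (the numbers in the commented example are hypothetical, not the values in Table 6):

```python
# Chi-square difference test for nested measurement invariance models
# (configural vs. metric vs. scalar).
from scipy.stats import chi2 as chi2_dist

def delta_chi2_test(chi2_constrained: float, df_constrained: int,
                    chi2_free: float, df_free: int):
    """Return (delta chi2, delta df, p-value) for two nested models."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    p_value = chi2_dist.sf(d_chi2, d_df)
    return d_chi2, d_df, p_value

# Hypothetical example: metric model (equal loadings) vs. configural model.
# d_chi2, d_df, p = delta_chi2_test(chi2_constrained=210.0, df_constrained=130,
#                                   chi2_free=180.0, df_free=120)
# A p-value <= .05 means the added equality constraints significantly worsen
# fit, i.e. measurement invariance at that level does not hold.
```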

Table 6. CFA Model 3 Measurement (in)variance between Groups.

3.3.3. Internal reliability

Internal reliability for CFA model 3 is expressed by Cronbach’s α. Values of α ≥ .90 are considered excellent, between .80 and .90 good, between .70 and .80 acceptable, and between .60 and .70 questionable; however, the more items a scale contains, the higher the alpha should be (Field 2018). All factors show good internal reliability (Table 7).
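
Cronbach’s α for each factor follows directly from the item scores; a minimal computation (the item column names are placeholders) is:

```python
# Minimal Cronbach's alpha for one factor's items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per item, one row per respondent."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example for the hypothetical 'practice integration' items:
# alpha = cronbach_alpha(data[["item1", "item2", "item3", "item4", "item5"]])
```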

Table 7. CFA Models 1, 2, and 3: Means, Standard Deviations, and Measures of Internal Reliability.

Finally, Table 8 presents the final DSL-Q.

Table 8. Final DSL-Q Based on CFA Model 3.

4. Discussion

The current study is part of a long-term project in which a number of steps have already been taken to better facilitate TLGs. First, four dimensions of social learning were extracted from a literature review to develop an instrument. Second, 30 items were constructed to enable validation for research on a larger scale. Third, the instrument was administered to respondents who had attended at least one TLG. To validate it thoroughly, two samples were used. First, after insufficient fit in the initial CFA, EFA (subsample 1) resulted in 22 items divided over three factors. Then, to confirm the EFA findings, CFA resulted in good fit indices and improved reliability after removing another nine items. However, measurement invariance did not fully hold for role and gender, suggesting that item interpretation or weight may have differed between these groups. This could be due to small group sizes. Future research can therefore explore how DSL-Q constructs and measurements relate to theoretically linked constructs and other instruments, for empirical validation with larger groups. Furthermore, future research could examine alternative models to fit the data, which may provide additional insights when measuring dimensions of social learning.

The validated DSL-Q, with 13 items, provides a picture of TLGs on three distinct factors. Factor 1 (practice integration, five items) originates from the ‘practice’ dimension, with the indicator ‘integrated or non-integrated activities’. Factor 2 (long-term orientation and goals, three items) also stems from the ‘practice’ dimension, with the indicator ‘temporary or permanent activities’. Factor 3 (shared identity and equal relationships, five items) concerns a selection of core items from the original dimensions of ‘collective identity’ and ‘organisation’.

The final DSL-Q does not include the original dimension of ‘domain and value creation’ with its underlying indicators (‘sharing or broadening/deepening knowledge and skills’, and ‘individual or collective value creation’), except for the item ‘adjustment of group products after discussion or feedback’, which now belongs to ‘practice integration’. This implies that the instrument does not address the collective creation of new knowledge and innovation, the importance of which is described by McKenney and Reeves (2018). Perhaps the indicators belonging to domain and value creation are more a part of the TLGs’ outcomes than of the configuration itself and should thus be operationalised in a different way, integrating the value creation cycles developed by Wenger, Trayner, and De Laat (2011). Value creation theory focuses on the individually perceived value of the participants, while a collective perspective is also highly appropriate for the assessment of the social configuration. Thus, adjustments to the original framework of Wenger, Trayner, and De Laat (2011) are needed to operationalise collective value creation. Therefore, in future research, we will reformulate the current ‘domain and value creation’ items and supplement them with value creation cycles from a collective perspective. In this way, TLGs can monitor not only their social configuration but also the product side of social learning, fitting the discussion about the quality of the designed products that has intensified in recent years.

The indicators ‘shared or unshared identity’ and ‘weak or strong ties’ each have only one item in the final factor ‘shared identity and equal relationships’. The remaining items, ‘feeling of belonging to the group’ and ‘reciprocal relationships between group members’, are core components of a shared identity (Van Meeuwen et al. 2020). Therefore, we expect that the eliminated items do not have to be reformulated. With regard to the indicators ‘task executors or knowledge workers’ and ‘local or global activities’, which were not included in the DSL-Q, the items could be reformulated more simply and straightforwardly and validated in future research.

Another missing element in the DSL-Q concerns social regulation skills (original indicator ‘directed or self-organised activities’). In line with self-regulated learning theories, it is important to look for a balance between control and social regulation in TLGs (Kirschner, Sweller, and Clark 2006). Without facilitation, it is difficult for TLGs to regulate their learning process. It is therefore essential for group facilitators to gradually diminish their support (scaffolding) during the process (De Laat, Vrieling-Teunter, and Van den Beemt 2016). By identifying and modelling the expected behaviours, facilitators can guide novices in developing sufficient social skills (Panadero 2017). The insights of Järvelä et al. (2016) concerning the interaction between self-regulation, co-regulation and socially shared regulation can support the further development of social regulation elements in the DSL-Q.

Overall, the results showed that the remaining items of the DSL-Q cover three dimensions with seven underlying indicators of the original framework, but it has not yet been possible to find valid factors that measure the full scope of the framework. However, the fact that only three factors and 13 items remain does not make the instrument less valuable. The dimensions found, including the separate items, can be used by TLGs as a self-assessment to reflect on the functionality of their own social configuration on the three dimensions. Based on this ‘snapshot’, participants can decide together whether the social configuration fits their goals or whether it should lead to adjustments in the way they work in the short and medium term. For example, a TLG may come to the conclusion that the group products are already being integrated in local practice, but not yet beyond it, while widespread implementation is among the goals. This can lead to an adjustment of the working method, including measuring its effects. Within teacher education, for example, the tool can provide more insight into how to support the specific role of students within TLGs. Also, due to the limited number of items, completion takes little time.

Finally, this validated instrument incorporates important characteristics of social configurations of TLGs, as found in both the literature and practice, and can be viewed as a first exploratory step in research on this phenomenon. It presents a promising, user-friendly alternative for monitoring social learning in TLGs, which is regarded as a stimulus for professional development.

Ethics declarations

The research was ethically approved by the cETO committee of the Open Universiteit under number U/2019/09081/MQF.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data supporting the findings of this study are available on request from the corresponding author.

References

  • Bornat, J. 2008. “Biographical Methods.” In The Sage Handbook of Social Research Methods, edited by P. Alasuutari, L. Bickman, and J. Brannen, 344–356. Los Angeles: Sage.
  • Costello, A. B., and J. Osborne. 2005. “Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most from Your Analysis.” Practical Assessment, Research, and Evaluation 10 (7): 1–9. doi:10.7275/jyj1-4868.
  • De Jong, L., J. Meirink, and W. Admiraal. 2021. “Teacher Learning in the Context of Teacher Collaboration: Connecting Teacher Dialogue to Teacher Learning.” Research Papers in Education 1–24. doi:10.1080/02671522.2021.1931950.
  • De Laat, M., E. Vrieling-Teunter, and A. Van den Beemt. 2016. “Facilitation of Social Learning in Teacher Education: The ‘Dimensions of Social Learning’ Framework.” In Communities of Practice: Facilitating Social Learning in Higher Education, edited by J. McDonald and A. Cater-Steel, 153–174. Berlin: Springer.
  • Doornbos, A., and M. De Laat. 2012. De Waarde van CoPs in Het Groene Onderwijs [The Value of CoPs in Green Education]. Heerlen: Open Universiteit.
  • Doppenberg, J., A. Bakx, and P. Den Brok. 2012. “Collaborative Teacher Learning in Different Primary School Settings.” Teachers and Teaching: Theory and Practice 18 (5): 547–566. doi:10.1080/13540602.2012.709731.
  • Elster, D., T. Barendziak, F. Haskamp, and L. Kastenholz. 2014. “Raising Standards Through Inquiry in Pre-Service Teacher Education.” Science Education International 15 (1): 29–39.
  • Field, A. 2018. Discovering Statistics Using IBM SPSS Statistics. Los Angeles: Sage.
  • Gatignon, H. 2010. Statistical Analysis of Management Data. New York: Springer.
  • Hair, J. F., W. C. Black, B. J. Babin, and R. E. Anderson. 2010. Multivariate Data Analysis. 7th ed. London: Pearson Education.
  • Hanraets, I., J. Hulsebosch, and M. De Laat. 2011. “Experiences of Pioneers Facilitating Teacher Networks for Professional Development.” Educational Media International 48 (2): 85–99. doi:10.1080/09523987.2011.576513.
  • Hargreaves, E., R. Berry, Y. C. Lai, P. Leung, D. Scott, and G. Stobart. 2013. “Teachers’ Experiences of Autonomy in Continuing Professional Development: Teacher Learning Communities in London and Hong Kong.” Teacher Development 17 (1): 19–34. doi:10.1080/13664530.2012.748686.
  • Hu, L. T., and P. M. Bentler. 1999. “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives.” Structural Equation Modeling: A Multidisciplinary Journal 6 (1): 1–55. doi:10.1080/10705519909540118.
  • Hutcheson, G. D., and N. Sofroniou. 1999. The Multivariate Social Scientist: Introductory Statistics Using Generalized Linear Models. Los Angeles: Sage.
  • Järvelä, S., H. Järvenoja, J. Malmberg, J. Isohätälä, and M. Sobocinski. 2016. “How Do Types of Interaction and Phases of Self-Regulated Learning Set a Stage for Collaborative Engagement?” Learning and Instruction 43: 39–51. doi:10.1016/j.learninstruc.2016.01.005.
  • Kirschner, P. A., J. Sweller, and R. E. Clark. 2006. “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching.” Educational Psychologist 41 (2): 75–86. doi:10.1207/s15326985ep4102_1.
  • Maas, C. J., and J. J. Hox. 2005. “Sufficient Sample Sizes for Multilevel Modeling.” Methodology 1 (3): 86–92. doi:10.1027/1614-2241.1.3.86.
  • Martin, A. J. 2007. “Examining a Multidimensional Model of Student Motivation and Engagement Using a Construct Validation Approach.” The British Journal of Educational Psychology 77 (2): 413–440. doi:10.1348/000709906X118036.
  • McDonald, R. P., and H. W. Marsh. 1990. “Choosing a Multivariate Model: Noncentrality and Goodness of Fit.” Psychological Bulletin 107 (2): 247–255. doi:10.1037/0033-2909.107.2.247.
  • McKenney, S., and T. C. Reeves. 2018. Conducting Educational Design Research. Abingdon: Routledge.
  • Panadero, E. 2017. “A Review of Self-Regulated Learning: Six Models and Four Directions for Research.” Frontiers in Psychology 8: 1–28. doi:10.3389/fpsyg.2017.00422.
  • Sleegers, P. 2013. “Toward Conceptual Clarity: A Multidimensional, Multilevel Model of Professional Learning Communities in Dutch Elementary Schools.” The Elementary School Journal 114 (1): 118–137. doi:10.1086/671063.
  • Van de Schoot, R., P. Lugtig, and J. Hox. 2012. “A Checklist for Testing Measurement Invariance.” The European Journal of Developmental Psychology 9 (4): 486–492. doi:10.1080/17405629.2012.686740.
  • Van Meeuwen, P., F. Huijboom, E. Rusman, M. Vermeulen, and J. Imants. 2020. “Towards a Comprehensive and Dynamic Conceptual Framework to Research and Enact Professional Learning Communities in the Context of Secondary Education.” European Journal of Teacher Education 43 (3): 405–427. doi:10.1080/02619768.2019.1693993.
  • Van Schaik, P., M. Volman, W. Admiraal, and W. Schenke. 2019. “Approaches to Co-Construction of Knowledge in Teacher Learning Groups.” Teaching and Teacher Education 84: 30–43. doi:10.1016/j.tate.2019.04.019.
  • Vangrieken, K., F. Dochy, E. Raes, and E. Kyndt. 2015. “Teacher Collaboration: A Systematic Review.” Educational Research Review 15: 17–40. doi:10.1016/j.edurev.2015.04.002.
  • Vrieling-Teunter, E., R. Hebing, and M. Vermeulen. 2021. “Design Principles to Support Student Learning in Teacher Learning Groups.” Teachers and Teaching: Theory and Practice. doi:10.1080/13540602.2021.1920909.
  • Vrieling-Teunter, E., A. Van den Beemt, and M. de Laat. 2016. “What’s in a Name: Dimensions of Social Learning in Teacher Groups.” Teachers and Teaching: Theory and Practice 22 (3): 273–292. doi:10.1080/13540602.2015.1058588.
  • Wenger, E., M. Trayner, and M. De Laat. 2011. Promoting and Assessing Value Creation in Communities and Networks: A Conceptual Framework. Heerlen: Open Universiteit.
  • Zwick, W. R., and W. F. Velicer. 1986. “Comparison of Five Rules for Determining the Number of Components to Retain.” Psychological Bulletin 99 (3): 432–442. doi:10.1037/0033-2909.99.3.432