
A Global Measure of Professional Learning Communities

Received 04 Aug 2021, Accepted 31 Mar 2022, Published online: 20 Apr 2022

ABSTRACT

The concept of professional learning communities (PLCs) has received considerable attention in research as well as in school practice since the late 1990s. PLCs have been positively associated with a variety of outcomes for both teachers and students, but differences in the way the concept is operationalised, and the fact that most of the research is related to specific national contexts, make it difficult to accumulate and compare research related to PLCs. In order to give PLC research an international scope, the aim of this paper is to develop a global measure of PLCs. The paper includes an overview of the literature on PLCs, as well as an examination of existing measures. Informed by these findings, multi-level confirmatory factor analysis is applied to develop a measure of PLCs in the Teaching and Learning International Survey 2018. This measure includes three distinct dimensions and an overall measure of PLCs at school level in 42 countries/economies, and is available for researchers interested in the concept of PLCs and international perspectives. The measure is applied in an example which investigates the relationship between PLCs and teacher job satisfaction in the Nordic countries. Strengths, limitations, international comparability and possibilities for future developments are discussed.

Introduction

The concept of the professional learning community (PLC), which can be defined briefly as ‘a group of people sharing and critically interrogating their practice in an ongoing, reflective, collaborative, inclusive, learning-oriented, growth-promoting way’ (Stoll et al. Citation2006, p. 223), has received considerable attention internationally since it emerged in the late 1990s. While many scholars have investigated the relationship between PLCs and various aspects within schools, most of the research is based on data from specific national samples. Most of this research is in the context of the United States (Bolam et al. Citation2005, Lomos et al. Citation2011a), while there is less research in other contexts (Sleegers et al. Citation2013, Bellibas et al. Citation2016, Zhang et al. Citation2020), and even fewer studies that have an international scope (Vieluf et al. Citation2012). Additionally, there are inconsistencies in the ways in which the concept is defined and operationalised, making it difficult to compare and accumulate research.

Arguably, the field of research could benefit from a global measure of PLCs that can facilitate research with an international perspective, as well as validating existing findings in other national contexts. This requires international studies that administer the same instruments across countries, thereby providing comparable information about different educational systems. The Teaching and Learning International Survey (TALIS) is particularly relevant for this purpose since it includes a great deal of information about teachers, principals and schools in 47 countries/economies.

Within this context, the aim of this paper is: 1) to outline the concept of PLCs by providing various theoretical perspectives and describing key elements; 2) to highlight inconsistencies and similarities between theory, practice and research related to PLCs; 3) to develop a global measure of PLCs at school level in TALIS 2018, drawing upon common conceptualisations, and 4) to demonstrate the utility of this measure with an example of the relationship between PLCs and teacher job satisfaction in the Nordic countries.

PLCs in theory and practice

Defining the concept of PLCs

PLCs are essentially concerned with improving instructional quality through an ongoing, practice-orientated collaboration of the teachers that provide it. For decades, the development of teacher capacity has been seen as an important aspect of school improvement, with the ultimate aim of supporting student learning. This endeavour has led to a substantial amount of literature on teacher professional learning or teacher professional development, including a wide range of activities. A broad definition of teacher professional development is ‘ … activities that develop an individual’s skills, knowledge, expertise and other characteristics as a teacher’ (OECD Citation2009). It is argued that this learning or development takes place in three overlapping and recursive systems: the individual teacher, the school, and the activity in question (Opfer and Pedder Citation2011). In this broad area of literature, PLCs stand out by emphasising that learning is situated, and that it should be developed in the context of the teacher’s actual practice. It is also emphasised that the development of this situated learning is a continuous process that involves collaborative work and knowledge sharing with colleagues, in the context where this knowledge is generated and has its validity (Putnam and Borko Citation2016, Stoll et al. Citation2016). The concept has similarities with earlier pedagogical concepts, such as action research, communities of practice and the learning organisation (Stenhouse Citation1975, Senge Citation1990, Lave Citation1991).

While the concept of PLCs has become popular during recent decades, it lacks a clear definition and is used ambiguously within research. Central aspects of the concept go back to school research in the 1960s, as described below, but ‘Professional Learning Communities: Communities of Continuous Inquiry and Improvement’ by Hord (Citation1997) is regarded as the origin of the concept and the first systematic description of the professional learning community (Hargreaves and Fullan Citation2012). Hord suggests:

… professional learning communities produce positive outcomes for both staff and students. For staff, being part of a professional learning community reduces teacher isolation, increases commitment to the mission and goals of the school, creates shared responsibility for the total development of students, creates powerful learning that defines good teaching and classroom practice … (Hord Citation1997, p. 1).

Since then, much has been written about the concept of PLCs. So much, in fact, that Richard DuFour wrote the article ‘What Is a Professional Learning Community?’ in 2004 as an attempt to define the key elements at a time when the concept, as described by DuFour, was at risk of losing all meaning because it was being used in very different contexts and with different meanings (Dufour Citation2004). A recent and more comprehensive definition of PLCs is provided by Hargreaves and Fullan (Citation2012), by breaking down the meaning of each of the three words (community, learning, and professional):

1. Communities, where educators work in continuing groups and relationships, where they are committed to and have collective responsibility for a common educational purpose, and where they are committed to improving their practice in relation to that purpose.

2. Learning communities, where improvement is driven by the commitment to improving students’ learning, well-being, and achievement, where the process of improvement is heavily informed by professional learning and inquiry into students’ learning and into effective principles of teaching and learning in general, and where any problems are addressed through organisational learning in which everyone in the organisation learns their way out of problems instead of jumping for off-the-shelf, quick-fix solutions.

3. Professional learning communities, where collaborative improvements and decisions are informed by but not dependent on scientific and statistical evidence, where they are guided by experienced collective judgement, and where they are pushed forward by grownup, challenging conversations about effective and ineffective practice (Hargreaves and Fullan Citation2012).

Having established the notion and context of PLCs, the next section will describe the most important elements of PLCs.

Key elements of PLCs

In the following, the main aspects of how PLCs are envisioned in practice will be described. The aim is to clarify how teachers and other staff are expected to act, and which organisational features are necessary for effective PLCs. While some of the elements described naturally overlap, the intention is to divide the concept of PLCs into smaller segments which, in sum, encompass PLCs at the school level.

Shared vision

In creating PLCs in schools, the development of a shared vision (or values or beliefs) is frequently mentioned (Albrechtsen Citation2013, Olivier and Huffman Citation2016). Shared visions are intended to guide the behaviour of individuals within an organisation, moving the school as a community closer to realising this vision. The teachers should be involved in the development of this vision, the aim of which is to reach a consensus by sharing and discussing knowledge. The idea is that the focus on this shared vision, in time, will result in changes in the attitude and behaviour of individuals, thereby changing the culture at the school. Although the staff should be involved in this development, it is essential that the overall goal of this vision is focused on the learning of the students, which of course imposes some clear boundaries (Dufour et al. Citation2016). This emphasis on student learning leads to the next element.

A strong focus on (individual) student learning

It is an assumption of PLCs that the core mission of schools is to ensure that every student learns, not just that the students are taught. This has implications for how teachers should view their task. It is not sufficient to make sure you have taught a subject, because what really matters is whether or not the students have actually learnt what you intended. This makes the continuous monitoring of student learning (formative evaluation/assessment) imperative, making it necessary to set goals that can be evaluated in terms of what the individual students have learnt. This raises the importance of using data in monitoring the learning of the individual student, and precludes the use of class averages in monitoring student learning (Dufour Citation2004). The formative evaluation of student learning is central to the concept of PLCs, and teachers must clarify exactly what the students must learn, continuously monitor the degree to which this learning takes place, and develop systematic interventions that support students who experience difficulties in learning. Practice must be evaluated not in terms of the intention behind it, but in terms of whether or not it facilitates the intended student learning. The development of practice with the aim of increasing student learning is a continuous process (Dufour et al. Citation2016).

Collaborating and reducing teacher isolation

What actually goes on in the classroom, which traditionally has been a private matter for the individual teacher, must become subject to discussion in a systematic process in which teachers collaboratively analyse and improve classroom practice. An important aspect of this ambition is the review of classroom practice by peers (peer supervision/observation), with the aim of improving the professional practice of the individual and the community. This requires respect, trust, and the willingness to debate, accept feedback and change (Hord Citation1997).

Teachers must recognise that this form of collaboration is necessary in order to enhance their professional practice, and thereby student learning. The willingness to disagree and discuss the evidence and improvements of practice, which in turn will benefit the students, is the heart and soul of a strong and sustainable PLC (Dufour Citation2004, Hargreaves Citation2007). By sharing and discussing concrete experiences and testing possible solutions and new ideas collaboratively, the learning of each student becomes a project for the group and not the individual teacher, which leads to the next key element.

Shared responsibility for student learning

When teachers collaborate and discuss their professional practice, the learning of the individual student is a collective responsibility. As described by Dufour (Citation2004), every professional in the building must engage with colleagues in the ongoing exploration of three crucial questions:

  • What do we want each student to learn?

  • How will we know when each student has learned it?

  • How will we respond when a student experiences difficulty in learning?

Notice the ‘we’ in the three questions, highlighting the cooperative aspects of PLCs instead of the individual teacher. DuFour highlights the third question as one that separates PLCs from traditionally organised schools: remedies and interventions for students struggling with learning are an issue for the entire community, and not just the individual teacher. The emphasis on collective responsibility for student learning is also emphasised by Hord (Citation1997) and Dufour et al. (Citation2016) in describing schools with successful PLCs.

Leadership and supportive conditions

The final key aspect of PLCs in practice is the need for leadership support. Without the support of the school management for the development of the community, and prioritising that the teachers continuously devote time and effort to the work done in their respective communities, the ideal of PLCs is unlikely to become a reality. As stated by Mclaughlin and Talbert (Citation2006, p. 56):

Administrators who use their authority to build a teacher community convey new expectations for teachers’ work in the school, and they ensure that teachers have the time, space, and knowledge resources needed for collaborative work.

Stoll et al. (Citation2006) also maintain that it is difficult to see how a PLC can develop without active support from the leadership, and that distributed leadership aligns well with the idea of PLCs because it can provide the opportunity for teachers to take the lead on the development of their practice.

Measurements of PLCs in research

The following section will present an overview of strategies applied in measuring PLCs in quantitative research, with a focus on differences in terms of which aspects are included, and how the different approaches align with the theoretical framework. As mentioned in the introduction, there is a considerable amount of research related to the concept of PLCs, and within this research there is general consensus that the concept lacks a clear definition or conceptualisation. This ambiguity in terms of definition and operationalisation makes it difficult to compare and accumulate research related to PLCs (Stoll and Louis Citation2007, Watson Citation2014, Kruse and Johnson Citation2016).

Although there is no clear definition, a great deal of research has defined PLCs using various combinations of the following elements: shared vision/values, a strong focus on (individual) student learning, collaborating and reducing teacher isolation, and shared responsibility for student learning and inquiry (Vescio et al. Citation2008, Lomos et al. Citation2011b, Lee and Kim Citation2016, Prenger et al. Citation2018).

Other studies use additional, fewer or different aspects than those explicitly described in the review above. For example, Hurley et al. (Citation2018) include democratic environment, teacher efficacy and confidence and the extent to which teachers feel valued in the measurement of PLCs; while Vanblaere and Devos (Citation2016) include reflective dialogue, and ‘The Professional Learning Community Assessment-Revised’ (PLCA-R) includes the dimensions shared and supportive leadership, supportive conditions – relationships and supportive conditions – structures (Olivier and Hipp Citation2010). While some researchers include job satisfaction and commitment as well as mutual trust, respect and support between colleagues in measuring PLCs (Bolam et al. Citation2005, Sigurðardóttir Citation2010, Chen et al. Citation2016), others view these as mediating aspects or outcomes of PLCs (Zheng et al. Citation2016, Doğan and Adams Citation2018).

One important distinction between many of the existing measures of PLCs is whether or not leadership support is included as an element of PLCs. One possible explanation for this difference is that the concept of PLCs in research has emerged from two different strains of theory. According to Zhang et al. (Citation2020), one of these strains is rooted in Senge’s work on the learning organisation, where PLCs are regarded as school-wide phenomena including cultural norms and values. This strain is closely related to the conceptualisation presented by Hord (Hord Citation1997), which includes shared and supportive leadership as a prerequisite of PLCs. The second strain of theoretical research is more closely related to Wenger’s concept of communities of practice, and within this perspective PLCs are perceived as communities that consist only of teachers who meet in teams, subgroups or subject departments. This strain of research is thereby aligned with the research that defines PLCs as having five elements but does not include leadership. In sum, the learning organisation perspective perceives leadership and supportive conditions as basic characteristics of PLCs, while the community of practice perspective considers leadership and school structure to be external to the concept of PLCs, and treats these elements as crucial factors influencing PLCs (Zhang et al. Citation2020).

It is clear from this overview that while there are many commonalities, there are also distinct differences between these ways of measuring PLCs, and we still do not have a clear, universal definition. These differences within this field of research make it very important to pay close attention to conceptualisation, possible limitations related to the theoretical descriptions, and the multidimensional nature of PLCs (Hairon et al. Citation2017, Doğan and Adams Citation2018). With these lessons in mind, the following section will present the development of a global measure of PLCs based on TALIS 2018 data, which aligns with the Senge/Hord approach, in which PLCs are perceived as a school-level phenomenon in which values, norms and leadership support are included.

Developing a global measure of PLCs in TALIS 2018

The ultimate aim of this paper is to develop a measure of PLCs that can be used in the international (pooled) data of TALIS 2018, as well as in a wide range of different national contexts, where PLCs are currently understudied, and where differences in data sources and measures applied make it difficult to compare results. The TALIS study is arguably the most suitable data source for developing such a measure, as its extensive teacher and principal questionnaire is administered to nationally representative samples in 47 countries/economies. Besides using the TALIS data for analyses at country level and for comparisons between countries, using the pooled TALIS data is a strategy applied in several recent studies, controlling for country differences to test hypotheses across this extensive international data source (Liu and Bellibas Citation2018, Bellibaş et al. Citation2021, Jerrim and Sims Citation2021).

Like the previous cycles, the 2018 cycle of TALIS includes a wide variety of questions related to teacher practice, collaboration and professional development, but no official measure of PLCs. The work most closely related to the ambition of this paper is an official OECD report, using data from the 2008 cycle to investigate teaching practice and pedagogical innovation, in which PLCs are measured using a relatively simplistic measure of six items covering five latent constructs (Vieluf et al. Citation2012). The same items are also used to measure PLCs in two additional articles using TALIS 2008 data, with minor changes in the first of these (Lee and Kim Citation2016, Doğan and Yurtseven Citation2018). To the author’s knowledge, no measure of PLCs has previously been created or validated to fit the combined TALIS data, and/or using data from the latest cycle in 2018.

Methods and procedure

PLCs are envisioned as school-wide communities, which is one of the aspects that distinguishes them from communities of practice, for instance. This indicates that PLCs should be measured at school level. At the same time, the core of any PLC is the teachers collaborating at the school in question, and consequently the measure should be created using responses from the teachers within the school. Recent methodological articles in multilevel modelling suggest that a ‘shared cluster construct’ is the most suitable method in pursuing this ambition. The following sections will describe the methods and procedures, the data and the final measure of PLCs in TALIS 2018.

A shared cluster construct using multilevel confirmatory factor analysis

When data is hierarchical, e.g. teachers nested in schools, and the researcher is interested in the school level, individual-level items can be used to measure characteristics of the cluster in a shared cluster construct using multilevel confirmatory factor analysis (ML-CFA). One prerequisite for such a measure is that the responses given by individuals within the same cluster are highly correlated, which can be assessed by the intraclass correlations ICC 1 and ICC 2, measuring the degree of clustering and the reliability at the cluster level, respectively (Bliese Citation2000). In such a model, any variance at the individual level is not of interest. Therefore, the construct of interest is specified at the between level, using the individual-level responses. Simultaneously, a saturated model of variances is constructed at the within level (Stapleton et al. Citation2016, Bonito and Keyton Citation2019). The data was prepared using the statistical software ‘R’ and the ‘MplusAutomation’ package, and CFA analysis was performed using Mplus version 8.5 (Muthén and Muthén Citation2017, Hallquist and Wiley Citation2018, R Core Team Citation2020).
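To make the role of the two intraclass correlations concrete, the following is a minimal Python sketch of the standard one-way ANOVA estimators of ICC 1 (degree of clustering) and ICC 2 (reliability of the cluster means). It is an illustration only, assuming roughly balanced clusters; it is not the exact computation applied to the TALIS data:

```python
import numpy as np

def icc(scores_by_cluster):
    """Estimate ICC 1 and ICC 2 via one-way random-effects ANOVA.

    scores_by_cluster: list of 1-D arrays, one array of item responses
    per cluster (e.g. teachers within a school). Uses the mean cluster
    size k, so it assumes roughly balanced clusters.
    """
    groups = [np.asarray(g, dtype=float) for g in scores_by_cluster]
    k = np.mean([len(g) for g in groups])            # average cluster size
    grand_mean = np.mean(np.concatenate(groups))
    n_clusters = len(groups)
    n_total = sum(len(g) for g in groups)

    # Between-cluster and within-cluster sums of squares and mean squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (n_clusters - 1)
    ms_within = ss_within / (n_total - n_clusters)

    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between     # reliability of cluster means
    return icc1, icc2
```

ICC 2 exceeds ICC 1 whenever there is more than one respondent per cluster, reflecting that aggregated cluster means are more reliable than single responses.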

Since the items included in the final model are all ordered categorical, weighted least square mean and variance adjusted (WLSMV) would theoretically be the ideal estimation technique (Brown Citation2015). However, because of a series of limitations related to using WLSMV, robust maximum likelihood (MLR) is applied instead. The major limitations of WLSMV in this case are that it does not allow exporting factor scores or performing multiple regression, which significantly limits the use of the PLC measure in secondary analysis. Moreover, OECD applied MLR for the official teacher scales, and another study has shown no substantial difference between the two estimators in TALIS 2018 data (OECD Citation2019, p. 209, Zakariya Citation2020). A series of comparisons between the two estimators has similarly shown no substantial difference in the case of the PLC measure developed in this article. The R script for processing the data and the Mplus syntax for the ML-CFA are available in the supplemental materials, along with an SPSS file including all original TALIS data and the PLC scales, allowing researchers to apply the measures using their preferred statistical software.

Procedure adopted

Based on the theoretical review and existing measures of PLCs in research, a very broad selection of theoretically relevant items was selected from the TALIS 2018 teacher questionnaire (OECD Citationn.d.). This initial selection of items was reviewed by two international scholars with experience in working with TALIS data and the concept of PLCs. Based on their feedback, multiple items were dropped mainly because of theoretical arguments, limitations in using binary items and to avoid differences in answer types within dimensions. The full preliminary selection of items is presented in Appendix 1.

As this structure of PLCs has not been validated previously, it would be sensible to perform exploratory factor analysis (EFA). However, the specific way in which a shared cluster construct is operationalised does not translate directly into the EFA framework. Therefore, EFA was only used to explore how the items loaded together at the individual level, in order to inform the changes to the ML-CFA model at the early stages. The procedure involved in developing the final measure was to test different competing models aligned with theory, assessing changes in model fit and factor loadings as the basis for restructuring and reducing dimensions and items. The final model was selected based on the best possible theoretical coverage as well as acceptable model fit statistics (Thompson and Daniel Citation1996, Brown Citation2015).

Data

TALIS 2018 includes 47 countries/economies in the publicly available data from the core survey of teachers and principals in the UNESCO International Standard Classification of Education (ISCED) level 2, which is the data used in this analysis. The sampling strategy involved randomly selecting 20 teachers from each of 200 schools per participant. Different stratification strategies have been applied across the participants (OECD Citation2019).

Results

The final model consists of 14 items that are used to measure three latent constructs: collaborative practice with focus on student learning (CPL), shared vision and responsibilities (SVR), and supportive conditions (SC). These three latent constructs are used to create the overall measure of PLCs at school level. The items that comprise the three latent constructs and the answer categories are presented in Table 1, and the final model is illustrated in Figure 1 along with the standardised factor loadings.

Figure 1. Final model with standardised factor loadings, pooled data.


Table 1. Factors and items in final model of PLCs.

Table 2 shows the ICC 1, ICC 2 and the amount of missing data for each item. The ICC 1 varies from 0.09 to 0.37, and the ICC 2 is within the range of 0.63 to 0.91. For the ICC 1, Lebreton and Senter (Citation2008) suggest that a value of 0.01 is a small effect, a value of 0.10 is a medium effect, and 0.25 is a large effect. Similarly, Fleiss (Citation1999) suggests that in assessing reliability, as measured by the ICC 2, values below 0.4 indicate poor reliability, values between 0.4 and 0.75 indicate fair to good reliability, and values above 0.75 represent excellent reliability. In sum, all the items in the final model demonstrate satisfactory levels of clustering and reliability, making them suitable for modelling a shared cluster construct.

Table 2. Item statistics for pooled data.

Figure 1 illustrates the final model and includes the standardised factor loadings. All the factor loadings are statistically significant (p ≤ 0.01). For the factors SVR and SC, the loadings range from 0.679 to 0.949. In the CPL factor, the items tt3g33a and tt3g33b have relatively low factor loadings of 0.282 and 0.355, but have been kept in the model owing to their theoretical importance, the fact that the loadings are statistically significant, and the fact that the model fit is good. The three latent constructs CPL, SVR and SC all demonstrate statistically significant loadings on the overall PLC construct at school level in the range from 0.494 to 0.744.

A range of fit statistics aligned with common recommendations for reporting CFA models are presented in Table 3. Hu and Bentler (Citation1999) recommend the following general cut points for assessing model fit: TLI and CFI close to 0.95, SRMR close to 0.08, and RMSEA close to 0.06. For TLI and CFI, higher values indicate a better fit, whereas the opposite is the case for RMSEA and SRMR. Others argue for less strict cut points, e.g. CFI and TLI ≥ 0.90, and RMSEA ≤ 0.08 (Brown Citation2015, Pendergast et al. Citation2017). For multilevel models, Mplus calculates an SRMR value separately for the within (SRMR-W) and the between level (SRMR-B) of the model. At the between level, the cut point of 0.08 is generally considered to be too strict (Asparouhov and Muthén Citation2018). Additionally, simulations have shown that SRMR-B is sensitive to large ICC 1 values (above 0.1) and small sample sizes even under optimal simulated conditions, and therefore this measure should be used with caution in applied ML-CFA (Ene Citation2020).

Table 3. Final model statistics.

In sum, the model shows good fit by these metrics. For the Chi-square test, the p-value is highly significant, which is not ideal. However, this is most likely due to the sensitivity of the Chi-square fit statistic to large samples. CFA analysis generally requires large samples, which makes Chi-square a poor measure of fit (Pendergast et al. Citation2017). In this case, with a sample size of 136,281 teachers in 8,128 schools, the significant Chi-square is disregarded.
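The cut points discussed above can be collected into a small helper for screening reported fit indices. The function below is a hypothetical illustration written for this paper, not part of Mplus or any cited software; it contrasts the strict Hu and Bentler (1999) thresholds with the more lenient alternatives:

```python
def assess_fit(cfi, tli, rmsea, srmr, strict=True):
    """Check CFA fit indices against common cut points.

    strict=True applies Hu & Bentler (1999): CFI/TLI >= .95,
    RMSEA <= .06, SRMR <= .08. strict=False applies the more lenient
    CFI/TLI >= .90 and RMSEA <= .08 thresholds mentioned in the text.
    Returns a dict indicating which indices meet their cut point.
    """
    cfi_cut, tli_cut, rmsea_cut = (0.95, 0.95, 0.06) if strict else (0.90, 0.90, 0.08)
    return {
        "CFI": cfi >= cfi_cut,
        "TLI": tli >= tli_cut,
        "RMSEA": rmsea <= rmsea_cut,
        "SRMR": srmr <= 0.08,
    }
```

Note that, as the text stresses, such cut points are heuristics: a model failing the strict thresholds may still pass the lenient ones, and the Chi-square test should be interpreted in light of sample size.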

Model fit for individual countries/economies and measurement invariance

Although the ambition in this article is to provide a global measure of PLCs in the pooled data, the model has also been tested separately on each of the participants in TALIS 2018. This step is taken to identify participants for whom the overall model cannot be identified, in order to exclude them from the factor analysis, inspired by the approach taken in creating the official TALIS scales (OECD Citation2019). Belgium, France, Hungary, Mexico and Saudi Arabia are excluded from the analysis on this basis. For France and Hungary, at least one of the questions was not administered; for the others, the model is not identified. The fit statistics for the 42 participants where the model is successfully identified are presented in Appendix 2. It should be noted that modelling the construct using only the data from specific participants in most cases results in warnings related to non-positive definite matrices, a common issue in factor analysis that can be caused by a number of things, for example inadequate sample size, very high correlations, non-normality or model misspecification (Chen et al. Citation2001, Brown Citation2015). Researchers with an interest in modelling the PLC measure for specific contexts, instead of using the pooled version, should assess the output and make adjustments if necessary. As an example, the warning of a non-positive definite matrix for Chile is due to two items having small negative residual variances. Fixing these residuals at zero solves the issue, and model fit does not change substantially. Similar adjustments were necessary for modelling the official TALIS scales at the participant level (OECD Citation2019, p. 217).

Measurement invariance

The extent to which constructs are comparable across different populations is often evaluated using the measurement invariance approach. Frequently, three stages of invariance are considered: configural (identical structure), metric (identical structure and equal factor loadings), and scalar (identical structure, factor loadings and intercepts). In the case of large-scale international surveys, Rutkowski and Svetina (Citation2014) recommend evaluating the invariance of constructs by changes in CFI and RMSEA, and suggest cut points of −0.020 and 0.030 for change in CFI and RMSEA, respectively.

It is not currently possible to test the invariance of ML-CFA models using MLR and/or second-order factors in Mplus. To provide some evidence of the invariance, the three constructs CPL, SVR and SC are tested simultaneously for measurement invariance in a single-level model across the 42 countries/economies included in the final model. The results of this analysis are presented in Appendix 3, showing that metric invariance is achieved (ΔCFI = −0.020, ΔRMSEA = 0.003). In comparison, eight of the official teacher scales in the data achieve configural invariance, 22 achieve metric invariance, and only one achieves scalar invariance. For cross-country comparisons, metric invariance implies that it is reasonable to compare strengths of associations between countries, but not scale means (OECD Citation2019, p. 213–215).
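The decision rule described above can be sketched in a few lines. This hypothetical helper (not part of any cited software) steps through the nested invariance models and reports the highest level retained under the Rutkowski and Svetina (2014) cut points:

```python
def invariance_level(fits):
    """Decide the highest level of measurement invariance achieved.

    fits: dict mapping 'configural', 'metric' and 'scalar' to dicts
    with 'CFI' and 'RMSEA'. A more constrained model is retained if
    CFI drops by no more than 0.020 and RMSEA rises by no more than
    0.030 relative to the previous model (Rutkowski & Svetina, 2014).
    """
    levels = ["configural", "metric", "scalar"]
    achieved = "configural"
    for prev, nxt in zip(levels, levels[1:]):
        # Round to the precision at which fit indices are reported,
        # to avoid floating-point artefacts at the exact cut point.
        d_cfi = round(fits[nxt]["CFI"] - fits[prev]["CFI"], 3)
        d_rmsea = round(fits[nxt]["RMSEA"] - fits[prev]["RMSEA"], 3)
        if d_cfi >= -0.020 and d_rmsea <= 0.030:
            achieved = nxt
        else:
            break
    return achieved
```

With the deltas reported in the text (ΔCFI = −0.020 at the metric step, a larger drop at the scalar step), such a rule would return "metric", matching the conclusion above.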

Example use: PLCs and teacher job satisfaction in the Nordic countries

The following section provides a brief example of how the developed measure of PLCs may be useful for research in different contexts, by investigating the relationship between PLCs and teacher job satisfaction at the school level in the Nordic countries participating in TALIS 2018. Engagement in PLCs has been associated with higher levels of teacher job satisfaction in theoretical and qualitative work (Louis Citation1994, Hord Citation1997), but few studies have investigated this relationship quantitatively, and both the results and the measures of PLCs and of job satisfaction have been inconsistent (Zhang et al. Citation2020).

The official TALIS data includes a composite measure of teacher job satisfaction in the school data (t3pjobsa), which is used as the dependent variable in this analysis. Previous research on teacher job satisfaction in TALIS has varied in its choice of control variables (Gil-Flores Citation2017, Liu et al. Citation2021). This example uses a stepwise model, where Model 1 is without controls and Model 2 includes controls for school location,Footnote1 school type (public/private) and the percentage of students coming from low socioeconomic homes. The school and replicate weights are applied in the analysis, using the EdSurvey package in R, to account for the complex survey design (Bailey et al. Citation2020).
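The replicate-weight logic that EdSurvey handles internally can be sketched in a few lines. This is a hedged illustration, not the EdSurvey implementation or the TALIS estimator: the point estimate uses the full sample weights, and its sampling variance is built from re-estimates under each set of replicate weights (here using a generic Fay-adjusted BRR formula; the function names, the Fay coefficient and all data values are hypothetical, and the TALIS 2018 technical report documents the survey's actual replication scheme).

```python
# Minimal sketch of replicate-weight variance estimation for a
# survey statistic (here a weighted mean). All values are hypothetical.

def weighted_mean(y, w):
    return sum(yi * wi for yi, wi in zip(y, w)) / sum(w)

def replicate_variance(y, full_w, replicate_ws, fay=0.5):
    """Fay's BRR variance: mean squared deviation of replicate
    estimates from the full-sample estimate, scaled by 1/(1 - fay)^2."""
    theta = weighted_mean(y, full_w)
    deviations = [(weighted_mean(y, rw) - theta) ** 2 for rw in replicate_ws]
    return sum(deviations) / (len(replicate_ws) * (1.0 - fay) ** 2)

# Hypothetical school-level job-satisfaction scores and weights.
y = [11.2, 12.5, 10.8, 13.1, 12.0]
full_w = [1.0, 1.2, 0.9, 1.1, 1.0]
replicate_ws = [[1.5, 0.6, 1.35, 0.55, 1.5],
                [0.5, 1.8, 0.45, 1.65, 0.5]]

estimate = weighted_mean(y, full_w)
variance = replicate_variance(y, full_w, replicate_ws)
print(round(estimate, 2), variance >= 0.0)
```

The same principle extends to regression coefficients: each replicate re-fit yields a coefficient vector, and the spread of those vectors around the full-sample fit gives design-consistent standard errors.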

Table 4. Stepwise regression on teacher job satisfaction.

Among these four Nordic countries, Model 1 shows a statistically significant relationship between PLCs and teacher job satisfaction only in Norway, while the relationship is positive but not significant in the three other countries. When controlling for school location, type and student composition, a statistically significant relationship also appears in Sweden. This analysis suggests that the strength of the association between these two latent constructs varies considerably even among relatively similar education systems. While this analysis has served as an example of the utility of the scale, the relationship between PLCs and teacher job satisfaction calls for further investigation, potentially using more advanced models such as hierarchical linear models or structural equation models. Various school- and/or country-level contextual factors (e.g. school climate, leadership, centralisation, school/teacher autonomy) might also be included in similar research as mediating and moderating variables, to clarify their roles in these relationships across countries.

Discussion

This section discusses the strengths and limitations of the developed measure of PLCs in terms of its alignment with theory and its use in secondary research. Additionally, the potential for further development of such a measure in TALIS is discussed.

First, each of the three constructs, collaborative practice with a focus on student learning (CPL), shared vision and responsibilities (SVR), and supportive conditions (SC), as well as the overall measure of PLCs, is evaluated against the theoretical concept of PLCs. Comparing the indicators in the CPL construct with the key elements of PLCs, it is apparent that the CPL construct overlaps with more than one of these elements. Specifically, it is theorised that a strong focus on individual student learning, and collaboration that reduces teacher isolation, can be measured by this construct, while shared responsibility for student learning is measured partly by CPL and partly by SVR. Initially, attempts were made to measure the focus on individual student learning as a separate construct, but this model did not fit the data as expected, and the dimensions were therefore merged. The CPL construct contains indicators of important aspects of each of the elements, such as peer observation and feedback, engaging in discussions about the learning of individual students, collaboration related to evaluating student learning, and involvement in collaborative professional learning. These items reflect important aspects of the practical behaviour that teachers who are part of a PLC should engage in, and the answer categories reveal how often such behaviour takes place, enabling the construct to describe the extent to which the individual school is functioning as a PLC, as reported by the teachers. While the CPL construct does include indicators of collaboration related to student assessment, it does not explicitly mention the use of data in assessment, which is a limitation in relation to the theoretical framework.

The SVR construct relates to the elements known as shared vision and shared responsibility for student learning. As described in the theory section, schools operating as PLCs should have a shared vision, and the teachers should be involved in defining it. Likewise, the responsibility for the learning development of individual students is supposed to be shared among the teachers. The indicators in the SVR construct measure the extent to which the teachers agree that they share common beliefs about teaching and learning, and whether the teachers have opportunities to participate in school decisions. The SVR construct also includes information about the existence of a collaborative culture with mutual support and shared responsibility for school issues. In the PLC literature, the responsibility for student learning is shared by the teachers, and it is a limitation that the item included is not directed at this issue more specifically. Arguably, the indicators in CPL and SVR together capture the notion that teachers collaborate to resolve difficulties in student learning, so this aspect is present in the overall measure of PLCs.

The final construct, SC, measures the conditions for (and priority of) professional development in the school, as perceived by the teachers. If the school principal and management do not support and dedicate time for the teachers to participate in professional development, as indicated by the items in this construct, PLCs are highly unlikely to be successful. The indicators in this construct do not address specific types of professional learning, but agreement with these indicators implies that professional development is prioritised by the school principal or management, thereby making engagement in PLCs, as measured by CPL, more likely.

Researchers interested in the concept of PLCs, and in testing or validating hypotheses across a wide variety of national contexts, may find this measure useful. The concept of PLCs is popular worldwide, and the theoretical assumptions and expectations of positive outcomes seem to be considered universal. This universal nature of PLCs has been difficult to assess, given that quantitative research so far has used widely different measures of PLCs and data. The example analysis of job satisfaction in this paper suggests that there is reason to be sceptical about universal and positive effects of PLCs across different contexts. The developed measure, along with the extensive teacher and school data available in TALIS, provides an opportunity to conduct comparable research across a wide range of national contexts.

When applying this measure, researchers should be explicit about its strengths, its limitations and the overlap between the individual constructs in relation to the PLC literature, on the basis of the discussion above. Some evidence of the invariance of the measure is provided, and while there are methodological limitations, the metric invariance of the items included supports the feasibility of comparing statistical associations across participants in TALIS 2018.

Given the significance of the concept of PLCs internationally, future cycles of TALIS could pursue the option of developing an official measure of PLCs in the data. Based on the development of this measure and the review of items included in the questionnaire, it can be argued that items measuring the use of data in student assessment, and the specific ways in which teachers collaborate to improve teaching practice, would be particularly useful for creating an even more comprehensive measure of PLCs at school level in TALIS.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed here

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This work was supported by The Carlsberg Foundation under grant CF19-0751.

Notes on contributors

Anders Astrup Christensen

Anders Astrup Christensen is currently a Ph.D. fellow at the department of Educational Sociology, Danish School of Education at Aarhus University, Denmark, and is primarily interested in school context, teacher practice, student outcomes and inequality.

Notes

1. The categories ‘City’ and ‘Large city’ have been collapsed into ‘City’ because ‘Large city’ was unused in Finland and Norway.

References

  • Albrechtsen, T.R.S., 2013. Professionelle læringsfællesskaber: teamsamarbejde og undervisningsudvikling. 1st ed. [Professional learning communities: teamwork and instructional development]. Frederikshavn, Denmark: Dafolo.
  • Asparouhov, T. and Muthén, B., 2018. SRMR in Mplus [online]. Semantic Scholar. Accessed 3 March 2022. Available from: https://www.semanticscholar.org/paper/SRMR-in-Mplus-Asparouhov-Muth%C3%A9n/7d1ade06295314ea4fce371d62577c9057096280
  • Bailey, P., et al., 2020. EdSurvey: analysis of NCES education survey and assessment data. [software]. Accessed 3 March 2022. Available from https://cran.r-project.org/package=EdSurvey
  • Bellibas, M.S., Bulut, O., and Gedik, S., 2016. Investigating professional learning communities in Turkish schools: the effects of contextual factors. Professional development in education, 43 (3), 353–374. doi:10.1080/19415257.2016.1182937
  • Bellibaş, M.Ş., Gümüş, S., and Liu, Y., 2021. Does school leadership matter for teachers’ classroom practice? The influence of instructional leadership and distributed leadership on instructional quality. School Effectiveness and School Improvement, 32 (3), 387–412. doi:10.1080/09243453.2020.1858119
  • Bliese, P.D., 2000. Within-group agreement, non-independence, and reliability: implications for data aggregation and analysis. In: K.J. Klein and S.W.J. Kozlowski, eds. Multilevel theory, research, and methods in organizations: foundations, extensions, and new directions. San Francisco: Jossey-Bass, 349–381.
  • Bolam, R., et al., 2005. Creating and sustaining effective professional learning communities. University of Bristol. Research Report No 637.
  • Bonito, J.A. and Keyton, J., 2019. Multilevel measurement models for group collective constructs. Group dynamics, 23 (1), 1–21. doi:10.1037/gdn0000096
  • Brown, T.A., 2015. Confirmatory factor analysis for applied research. 2nd ed. New York, New York: The Guilford Press.
  • Chen, F., et al., 2001. Improper solutions in structural equation models: causes, consequences, and strategies. Sociological methods & research, 29 (4), 468–508. doi:10.1177/0049124101029004003
  • Chen, P., et al., 2016. Factors that develop effective professional learning communities in Taiwan. Asia Pacific journal of education, 36 (2), 248–265. doi:10.1080/02188791.2016.1148853
  • Doğan, S. and Adams, A., 2018. Effect of professional learning communities on teachers and students: reporting updated results and raising questions about research design. School effectiveness and school improvement, 29 (4), 634–659. doi:10.1080/09243453.2018.1500921
  • Doğan, S. and Yurtseven, N., 2018. Professional learning as a predictor for instructional quality: a secondary analysis of TALIS. School effectiveness and school improvement, 29 (1), 64–90. doi:10.1080/09243453.2017.1383274
  • Dufour, R., 2004. What is a “professional learning community”? Educational leadership, 61 (8), 6–11.
  • Dufour, R., et al., 2016. Håndbog i professionelle læringsfællesskaber [Learning by Doing: a handbook for professional learning communities at work]. 1st ed. Trans. F.L. Christensen. Frederikshavn: Dafolo.
  • Ene, M.C., 2020. Investigating accuracy of model fit indices in multilevel confirmatory factor analysis. (Doctoral dissertation). Accessed 3 March 2022. Available from https://scholarcommons.sc.edu/etd/6024
  • Fleiss, J.L., 1999. The design and analysis of clinical experiments. Wiley classics library ed. New York: Wiley.
  • Gil-Flores, J., 2017. The role of personal characteristics and school characteristics in explaining teacher job satisfaction. Revista de Psicodidáctica (English ed.), 22 (1), 16–22. doi:10.1387/RevPsicodidact.15501
  • Hairon, S., et al., 2017. A research agenda for professional learning communities: moving forward. Professional development in education, 43 (1), 72–86. doi:10.1080/19415257.2015.1055861
  • Hallquist, M.N. and Wiley, J.F., 2018. Mplus automation: an R package for facilitating large-scale latent variable analyses in Mplus. Structural Equation Modeling, 25, 621–638. doi:10.1080/10705511.2017.1402334
  • Hargreaves, A., 2007. Sustainable professional learning communities. In: L. Stoll and K.S. Louis, eds. Professional learning communities: divergence, depth and dilemmas. Maidenhead: Open University Press, 181–195.
  • Hargreaves, A. and Fullan, M., 2012. Professional capital: transforming teaching in every school. London: Routledge.
  • Hord, S.M., 1997. Professional learning communities: communities of continuous inquiry and improvement. Austin: Southwest Educational Development Laboratory.
  • Hu, L.-T. and Bentler, P.M., 1999. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Structural equation modeling, 6 (1), 1–55. doi:10.1080/10705519909540118
  • Hurley, N., Seifert, T., and Sheppard, B., 2018. An investigation of the relationship between professional learning community practices and student achievement in an Eastern Canadian school board. Canadian journal of educational administration and policy, 4, 4–18.
  • Jerrim, J. and Sims, S., 2021. School accountability and teacher stress: international evidence from the OECD TALIS study. Educational assessment, evaluation and accountability. doi:10.1007/s11092-021-09360-0
  • Kruse, S.D. and Johnson, B.L., 2016. Tempering the normative demands of professional learning communities with the organizational realities of life in schools: exploring the cognitive dilemmas faced by educational leaders. Educational management, administration & leadership, 45 (4), 588–604. doi:10.1177/1741143216636111
  • Lave, J., 1991. Situating learning in communities of practice. In: L.B. Resnick, J.M. Levine, and S.D. Teasley, eds. Perspectives on socially shared cognition. Washington, DC: American Psychological Association, 63–84.
  • Lebreton, J.M. and Senter, J.L., 2008. Answers to 20 questions about interrater reliability and interrater agreement. Organizational research methods, 11 (4), 815–852. doi:10.1177/1094428106296642
  • Lee, M. and Kim, J., 2016. The emerging landscape of school-based professional learning communities in South Korean schools. Asia Pacific journal of education, 36 (2), 266–284. doi:10.1080/02188791.2016.1148854
  • Liu, Y. and Bellibas, M.S., 2018. School factors that are related to school principals’ job satisfaction and organizational commitment. International journal of educational research, 90, 1–19. doi:10.1016/j.ijer.2018.04.002
  • Liu, Y., Bellibaş, M.Ş., and Gümüş, S., 2021. The effect of instructional leadership and distributed leadership on teacher self-efficacy and job satisfaction: mediating roles of supportive school culture and teacher collaboration. Educational management, administration & leadership, 49 (3), 430–453. doi:10.1177/1741143220910438
  • Lomos, C., Hofman, R.H., and Bosker, R.J., 2011a. Professional communities and student achievement – a meta-analysis. School effectiveness and school improvement, 22 (2), 121–148. doi:10.1080/09243453.2010.550467
  • Lomos, C., Hofman, R.H., and Bosker, R.J., 2011b. The relationship between departments as professional communities and student achievement in secondary schools. Teaching and teacher education, 27 (4), 722–731. doi:10.1016/j.tate.2010.12.003
  • Louis, K.S., 1994. Professionalism and community: perspectives on reforming urban schools. Madison, WI: Center on Organization and Restructuring of Schools.
  • Mclaughlin, M.W. and Talbert, J.E., 2006. Building school-based teacher learning communities: professional strategies to improve student achievement. New York: Teachers College Press.
  • Muthén, L.K. and Muthén, B.O., 2017. Mplus user’s guide. 8th ed. Los Angeles, CA: Muthén & Muthén.
  • OECD, 2009. Creating effective teaching and learning environments: first results from TALIS. [online]. Accessed 3 March 2022. Available from https://www.oecd.org/education/school/43023606.pdf
  • OECD, 2019. TALIS 2018 technical report [online]. Paris: OECD. Accessed 3 March 2022. Available from: https://www.oecd.org/education/talis/TALIS_2018_Technical_Report.pdf
  • OECD. n.d. Teaching and Learning International Survey (TALIS) 2018 - teacher questionnaire. Accessed 3 March 2022. Available from https://www.oecd.org/edu/school/TALIS-2018-MS-Teacher-Questionnaire-ENG.pdf
  • Olivier, D.F. and Hipp, K.K., 2010. Assessing and analyzing schools as professional learning communities. In: K.K. Hipp and J.B. Huffman, eds. Demystifying professional learning communities: school leadership at its best. Lanham: The Rowman & Littlefield Publishing Group, 29–42.
  • Olivier, D.F. and Huffman, J.B., 2016. Professional learning community process in the United States: conceptualization of the process and district support for schools. Asia Pacific journal of education, 36 (2), 301–317. doi:10.1080/02188791.2016.1148856
  • Opfer, V.D. and Pedder, D., 2011. Conceptualizing teacher professional learning. Review of educational research, 81 (3), 376–407. doi:10.3102/0034654311413609
  • Pendergast, L.L., et al., 2017. Measurement equivalence: a non-technical primer on categorical multi-group confirmatory factor analysis in school psychology. Journal of school psychology, 60, 65–82. doi:10.1016/j.jsp.2016.11.002
  • Prenger, R., Poortman, C.L., and Handelzalts, A., 2018. The effects of networked professional learning communities. Journal of teacher education, 70 (5), 441–452. doi:10.1177/0022487117753574
  • Putnam, R.T. and Borko, H., 2016. What do new views of knowledge and thinking have to say about research on teacher learning? Educational researcher, 29 (1), 4–15. doi:10.3102/0013189X029001004
  • R Core Team, 2020. R: a language and environment for statistical computing [software]. Vienna, Austria: R Foundation for Statistical Computing. Accessed 3 March 2022. Available from: https://www.r-project.org/
  • Rutkowski, L. and Svetina, D., 2014. Assessing the hypothesis of measurement invariance in the context of large-scale international surveys. Educational and psychological measurement, 74 (1), 31–57. doi:10.1177/0013164413498257
  • Senge, P.M., 1990. The fifth discipline. New York: Currency Doubleday.
  • Sigurðardóttir, A.K., 2010. Professional learning community in relation to school effectiveness. Scandinavian journal of educational research, 54 (5), 395–412. doi:10.1080/00313831.2010.508904
  • Sleegers, P., et al., 2013. Toward conceptual clarity: a multidimensional, multilevel model of professional learning communities in Dutch elementary schools. The Elementary school journal, 114 (1), 118–137. doi:10.1086/671063
  • Stapleton, L.M., Yang, J.S., and Hancock, G.R., 2016. Construct meaning in multilevel settings. Journal of educational and behavioral statistics, 41 (5), 481–520. doi:10.3102/1076998616646200
  • Stenhouse, L., 1975. An introduction to curriculum research and development. London: Heinemann.
  • Stoll, L., et al., 2006. Professional Learning Communities: a Review of the Literature. Journal of Educational Change, 7 (4), 221–258. doi:10.1007/s10833-006-0001-8
  • Stoll, L., et al., 2016. Educational effectiveness and improvement research, and teachers and teaching. In: C. Chapman, et al., eds. The Routledge international handbook of educational effectiveness and improvement. New York: Routledge, 348–364.
  • Stoll, L. and Louis, K.S., 2007. Professional learning communities: elaborating new approaches. In: L. Stoll and K.S. Louis, eds. Professional learning communities: divergence, depth and dilemmas. Maidenhead: Open University Press, 1–14.
  • Thompson, B. and Daniel, L.G., 1996. Factor analytic evidence for the construct validity of scores: a historical overview and some guidelines. Educational and psychological measurement, 56 (2), 197–208. doi:10.1177/0013164496056002001
  • Vanblaere, B. and Devos, G., 2016. Exploring the link between experienced teachers’ learning outcomes and individual and professional learning community characteristics. School effectiveness and school improvement, 27 (2), 205–227. doi:10.1080/09243453.2015.1064455
  • Vescio, V., Ross, D., and Adams, A., 2008. A review of research on the impact of professional learning communities on teaching practice and student learning. Teaching and teacher education, 24 (1), 80–91. doi:10.1016/j.tate.2007.01.004
  • Vieluf, S., et al., 2012. Teaching practices and pedagogical innovation: evidence from TALIS. Paris: OECD. Accessed 3 March 2022. doi:10.1787/9789264123540-en
  • Watson, C., 2014. Effective professional learning communities? The possibilities for teachers as agents of change in schools. British educational research journal, 40 (1), 18–29. doi:10.1002/berj.3025
  • Zakariya, Y.F., 2020. Investigating some construct validity threats to TALIS 2018 teacher job satisfaction scale: implications for social science researchers and practitioners. Social sciences (Basel), 9 (4), 38. doi:10.3390/socsci9040038
  • Zhang, J., Yin, H., and Wang, T., 2020. Exploring the effects of professional learning communities on teacher’s self-efficacy and job satisfaction in Shanghai, China. Educational studies. doi:10.1080/03055698.2020.1834357
  • Zheng, X., et al., 2016. Effects of leadership practices on professional learning communities: the mediating role of trust in colleagues. Asia Pacific education review, 17 (3), 521–532. doi:10.1007/s12564-016-9438-5

Appendices

Table 1. Full list of variables considered in initial selection.


Table 2. Individual country/economy analysis of the final model.


Table 3. Results of measurement invariance test, single level model of the constructs CPL, SVR and SC.