Research Article

How to enhance teachers’ professional learning by stimulating the development of professional learning communities: operationalising a comprehensive PLC concept for assessing its development in everyday educational practice

Pages 751-769 | Received 07 Aug 2018, Accepted 28 May 2019, Published online: 26 Jun 2019

ABSTRACT

Fostering the development of Professional Learning Communities (PLCs) should be a priority for education because of their capacity to enhance teachers’ professional development. However, in everyday educational (school) contexts, studying and supporting the growth of PLCs is a complex endeavour. As a consequence, there is a lack of instruments for extensively investigating PLCs at different levels of development, as well as the factors that influence that development. This article describes the first steps in the construction of measuring instruments suitable for investigating PLCs in the complex school context: a study of the operationalisation of PLC characteristics and influencing factors into attitudinal and behavioural indicators. The operationalisation is founded both on relevant literature and on educational practice, the latter through focus groups consisting of educational practitioners. This study yielded indicators for the eleven characteristics of the PLC concept and two context factors, which were subsequently used for selecting and constructing instruments. It thereby contributes to the research methodology for investigating PLCs and to bridging the gap between educational research and educational practice, the latter by constructing instruments that educational practitioners consider relevant, recognisable and practical.

1. Introduction

Research shows that the chances of creating lasting change in education increase when teachers are involved in the early stages of its design (Handelzalts 2009). In such cases the role of the teacher shifts from that of individual actor to that of joint developer and ‘learner’ (Mitchell and Sackney 2010). Innovation then becomes a collective learning process that takes place within a school environment (Castelijns et al. 2013). Teachers should therefore have a mindset that is aimed not solely at their own lessons, but also at contributing, in collaboration with colleagues, to the common ambition of the organisation (Van Veen et al. 2010). Bergen and Van Veen (2004, p. 30) state that ‘A new culture is needed in schools, in which the teacher acting individually and autonomously is replaced by collegial collaboration in which working, learning, and innovating are integrated.’ One may speak of a culture of collective learning (Castelijns et al. 2009). Many teacher professional development activities are still traditional in nature, directed at the individual teacher and often not situated at the workplace, such as workshops, informative meetings, courses and training sessions (Reynders et al. 2015). Research on professional development has shown that professionalisation is more effective when teachers collaborate and when activities take place at the workplace and are integrated into daily practice (Van Veen et al. 2010). PLCs may provide an environment for working and learning in collaboration with colleagues. By participating in PLCs, teachers are actively engaged in their own professional learning and that of their colleagues, presumably resulting in the enhancement of their teaching practice, which may ultimately lead to improved student achievement. Professional Learning Communities in education function in complex and dynamic contexts (Sleegers et al. 2013, Hairon et al. 2015).
To learn more about them and to foster their development, more research is needed that takes these complex settings into account. Therefore, a comprehensive and multidimensional PLC concept was developed for research on PLCs: its characteristics, the different stages of development and the factors that may influence PLC development, further referred to as ‘steering factors’ (Van Meeuwen et al. submitted). Despite a fair number of existing research instruments, no complete set of instruments has so far been found in the literature for extensively investigating this comprehensive and multidimensional PLC concept. Existing instruments are inadequate because they are based on only a subset of the characteristics or on less comprehensive conceptualisations of PLCs. For instance, there are many questionnaires, but they do not cover all the characteristics of the PLC concept or do not go to the core of the various PLC characteristics (Hipp and Huffman 2003). There are instruments for discourse analysis of Computer Supported Collective Learning in online meetings (Gunawardena and Lowe 1997, Strijbos et al. 2006, Casanova and Alvarez 2012), for analysing collaborative learning in teacher teams (Sjoer and Meirink 2015), and for classroom talk between teacher and students (Mercer 2004), but these instruments do not focus specifically on every aspect and dimension of PLC development or on face-to-face contexts. Before measuring instruments can be developed or partly (re)used, an intermediate step is required: the operationalisation of the characteristics of the PLC concept into attitudinal and behavioural indicators. By grounding research instruments on the same set of indicators, links can be established between the PLC characteristics and the steering factors, measurement results can be compared, conclusions can be drawn and the complexity of the concept can be further clarified.

This study was conducted as part of a longitudinal research project, a multi-case study in which the development of seven PLCs in various Dutch secondary schools was investigated. The overall focus of this research project was to enhance teachers’ professional learning by means of PLCs, to gain insight into how PLCs can be fostered, and to construct various evidence-based measuring instruments that could be used by educational practitioners to foster and assess PLC development. The aim of the study at hand was the conversion of conceptual definitions of PLC characteristics into behavioural indicators, as a first stage in the development of the above-mentioned measuring instruments, suitable for monitoring and assessing PLC development. Developing and participating in a PLC may lead to improved teaching practice and student achievement. This study thus contributes to research into PLCs by using the attitudinal and behavioural indicators for constructing suitable measuring instruments. Furthermore, this intermediate step is important for teaching practice as well. Once valid instruments have been constructed, PLC development can be assessed by collecting information about the processes that play a role in the development of a PLC in ‘real’ school practice, and thus contribute to the support of these processes. This study also makes a more general contribution: bridging the gap between educational research and educational practice. The relationship between educational research and educational practice is a problematic and persistent one (Biesta 2007). Teachers consider research to be irrelevant, inaccessible and unreliable.
Teachers often find that researchers do not know what really goes on in the classroom, that the research language is often incomprehensible and that the research topics do not match their practical teaching needs or didactic problems (Broekkamp and Van Hout-Wolters 2007). Teachers complain that they have to fill out complicated questionnaires containing questions that do not match their practical experience or do not fit real classroom situations. As a result of these negative opinions, practitioners make little use of educational research and its results (Teurlings et al. 2011). That is regrettable, because educational research is meant, and able, to contribute to improving educational practice. By involving practitioners in constructing instruments, the chances that the instruments will be used increase. In this study, teachers and school leaders were engaged in the conversion of the PLC characteristics into indicators, which ensures that the wording, the practical situations and the examples reflect real school practice and will therefore be recognisable and accessible to teachers.

Thus, this article, based both on a literature review and on educational practice by means of focus group interviews with school leaders and teachers, describes and accounts for the conversion of the definitions of the PLC characteristics and two steering factors into indicators.

2. Theoretical framework

Studying the relevant literature shows that an agreed-on universal definition of the PLC is lacking (Stoll et al. 2006, Vescio et al. 2008) and that research on PLCs pays too little attention to its multidimensional character, its complexity and the possible mutual relations of the constituent characteristics and context factors (Lomos 2012, Sleegers et al. 2013, Hairon et al. 2015). In order to investigate PLCs in a complex school environment, a ‘comprehensive’ and ‘dynamic’ PLC concept was constructed, where comprehensive refers to the number and variety of characteristics and dynamic refers to the mutual interactions between characteristics and steering factors over time (for more details on how it was constructed, see Van Meeuwen et al. submitted). This comprehensive concept comprises eleven characteristics grouped into three clusters: Individual and collective learning, Group dynamic characteristics and Professional orientation (first and second column). The cluster called ‘Individual and collective learning’ is about (group) learning activities. The cluster denoted as ‘Group dynamic characteristics’ is about group processes that are thought to have a positive influence on the learning activities. The ‘Professional orientation’ cluster concerns the common attitude towards, and involvement in, the learning of students, and is expected to guide the learning activities. In constructing the conceptual framework, external factors of influence were limited to leadership and the amount of individual and collective autonomy for the members of the PLC. Leadership (Sleegers et al. 2013) and professional autonomy (Kessels 2012) are considered in the literature to be conducive to the development of PLCs.

Table 1. Overview of consulted sources.

Based on the relevant literature, conceptual definitions were phrased for each of the eleven characteristics (Appendix 1). Because the definitions are not formulated in ‘measurable’ terms, they have to be reformulated as visible behavioural and attitudinal phenomena: so-called indicators. These indicators form the basis for the construction of research instruments: questionnaires, semi-structured interviews and observations of PLC meetings. This article aims to describe how the PLC characteristics have been operationalised into attitudinal and behavioural indicators.

3. Method

In describing the method of obtaining indicators, a distinction is made between the methodology and the process of using literature and teaching practice.

3.1. The methodology

The methodology used to develop the instruments is partly based on Design-Based (educational) Research: a research method characterised by the development and implementation of solutions to practical educational problems, by conducting the research in a real educational context, and by explicitly involving educational practitioners. Furthermore, it contributes to theoretical understanding as well (Van Den Akker et al. 2006, McKenney and Reeves 2012). This study draws partly on this research method insofar as practitioners were involved and the component parts of the (first) analysis and exploration phase were applied: initial orientation, literature review and field-based investigation, among which was the creation of instruments (McKenney and Reeves 2012). By definition, Design-Based Research contributes to bridging the gap between theory and practice. As for the development of instruments, much information can be found in the literature on the various ‘types’ of qualitative instruments; information on how to develop them in collaboration with practitioners, and on which criteria should be used, is scarce. The transformation of definitions into indicators, the subject of this study, was an iterative process carried out in a cyclic way; it provides a five-stage methodology for transforming conceptual definitions into measurable indicators using theory and practice.

3.2. The process

The process of transforming the PLC characteristics and the steering factors into indicators was carried out along two lines: a review of relevant literature and the consultation of experts in the field of teaching practice.

3.2.1. Literature

The relevant literature was searched for operationalisations of both complete PLC concepts and the separate PLC characteristics. In this phase, existing questionnaires were consulted, as well as the sources upon which they are based. Table 1 provides an overview of the sources that have been consulted. From the literature, indicators were selected for each characteristic and steering factor that closely match the conceptual definitions.

3.2.2. Teaching practice

Because PLC characteristics manifest themselves in practice, involving practitioners (people in the field of education) in identifying indicators is important. Thus, in order to develop an instrument that is recognised as useful for educational practice, focus groups were used as the method for converting characteristics into indicators. In focus groups, knowledge, experience and perceptions are discussed, and individual and joint opinions are formed about the topics at hand. Interactions in focus groups may offer valuable data on the extent of consensus and diversity among the participants. Furthermore, discussions in focus groups are more than the sum of separate individual interviews (Morgan 1996, Rakow 2011). Due to their experience and practical knowledge, teachers and school leaders are the experts who can contribute to translating definitions into indicators that are especially suitable for an instrument constructed for use in practice. Due to their different roles in schools, they may have different perspectives on teaching practice, and their experiences and views may therefore supplement or confirm each other. As school leaders might play a dominant role in the focus groups, which could hamper the contribution and input of the teachers, it was decided to form homogeneous focus groups: one group consisting of teachers, the other consisting of school leaders. The two focus groups were tasked with converting the conceptual definitions into concrete behaviour related to educational practice. Two groups sufficed, because an extra focus group at this stage of the investigation would not have provided additional information for formulating indicators, since saturation occurred (Morgan 1996). In addition, an assessment group was formed and asked to assess the results of the two focus groups.

3.3. Focus groups

The participants of the focus groups were, for reasons of geographical accessibility, recruited from schools in the south of the Netherlands, mainly schools belonging to OMO, the largest school association in this part of the Netherlands, comprising 50 schools for secondary education. An overview of all schools (https://www.duo.nl/open_onderwijsdata/databestanden/vo/adressen/), provided by the Ministry of Education, Service Execution Education (DUO), was used. This ensures that the samples of the focus groups are an average reflection of the educational field under consideration.

3.3.1. Sampling school leaders

In selecting school leaders for the focus groups, a number of objectively distinguishable criteria related to their work situation were used: function, school type, school size and Academic Training School (AOS). Other criteria were: male/female, and interest in the topic at hand and in taking part in the focus group. These criteria increase the chances of obtaining a varied collection of indicators, assuming that they have an effect on the participants’ points of view. The school leaders were contacted by telephone and by e-mail and were explicitly asked about their interest in the subject and in taking part in the focus group. An appropriate size for a focus group is eight to ten participants (Morgan 1996); therefore, 12 school leaders were selected. Four school leaders appeared to be unable to attend on the scheduled date, resulting in eight school leaders participating in the focus group. Table 2 supplies information about the participating school leaders. Eight participants may well provide a wide range of indicators.

Table 2. Composition focus group school leaders (n=8).

3.3.2. Sampling teachers

For selecting teachers, the following criteria were used: their interest in the subject at hand and in taking part in the focus group. Other criteria were that the teachers were known to be experienced teachers with a wide interest in, knowledge of and experience in teaching and, possibly, additional experience as well (for instance as a coach, a PhD, or familiarity with specific educational concepts). To some extent they can be regarded as extended professionals (Hoyle 1975), having a wider perspective on and experience in their secondary roles within schools. This broad experience increases the chances of yielding a reliable collection of indicators. School principals were requested to propose suitable teachers. The teachers were contacted by telephone and by e-mail. Eventually 14 teachers were selected, four of whom, among them two female teachers, appeared to be unable to attend on the scheduled date. Table 3 shows information about the participating teachers. More male than female teachers attended the focus group, which is not a proper reflection of the male/female ratio in secondary schools (Centrum Arbeidsverhoudingen Overheidspersoneel [CAOP], 2016).

Table 3. Composition focus group teachers (n = 10).

3.3.3. Sampling assessment group

The assessment group consisted of the four school leaders who were initially selected but who were unable to attend the first session. Table 4 shows the composition of the assessment group. Because it was not a matter of phrasing indicators, influenced by opinions, experiences or attitudes, but of ‘assessing’ whether the indicators were reliable renderings of the definitions, the distinction between school leader and teacher is irrelevant here.

Table 4. Composition of the assessment group (n=4).

3.4. The procedure

The procedure for arriving at useful indicators consisted of five stages, described in Figure 1. In the first stage, school leaders and teachers were tasked with transforming the conceptual definitions into concrete behavioural indicators. In the second stage, the indicators were compared with the criteria (matching) by the researchers for their usefulness in developing instruments. In the third stage, the indicators were reduced to a limited number of categories for each characteristic. In the fourth stage, the assessment group assessed all the indicators once again for recognition and practical relevance. In the fifth stage, the researchers performed a last check. These five stages are described in detail below. The numbers in Figure 1 refer to the stages of the process; the squares refer to the results.

Figure 1. The process of transforming definitions into indicators.


3.4.1. First Stage: the Session

The two sessions of teachers and school leaders took place on separate days and lasted from two to two and a half hours. The same approach was used in both groups. In preparation, participants received an information package consisting of an explanation of the purpose of the session, a brief summary of the research, an overview of the definitions of the characteristics and the steering factors, and a brief illustrated example. The task for the participants was to transform the definitions into concrete behaviour related to educational practice. The session began with an explanation of the aim and method. Participants then individually noted down three indicators for each conceptual definition. These indicators were discussed by the participants and then added to an overview on a flip chart. In order to prevent repetition, any indicator already listed was not included in the overview again. As long as they had not previously been listed by the participants, indicators from the relevant literature (see Table 1) were included by the researchers and added to the overview. All the listed indicators were shared, elaborated on and discussed, and it was then debated which indicators would ultimately be included in the overview. Both sessions were audio-recorded. The researchers listened back to the recordings and compared them with remarks made and notes taken for purposes of verification. This did not lead to any adjustments. A member check was also carried out, which means that all participants were sent the complete overview of indicators by mail, to see whether they agreed with the record (Mortelmans 2009). All participants responded positively; no adjustments or additions were proposed.

3.4.2. Second stage: matching

In stage 2 the indicators were assessed by the researchers. For this purpose the following criteria were used:

  1. The indicator should have a bearing on the conceptual definition;

  2. The indicator should not be normative or be a condition;

  3. The indicator should be related to the functioning of the teacher in the PLC.

The indicators that did not match the criteria were removed and no longer used.

3.4.3. Third stage: reduction into categories and rephrasing indicators

The remaining indicators were abstracted into categories for each characteristic, based on their content: in other words, a common denominator was looked for. For example, the indicator ‘The teacher analyses diagnostic tests in order to ascertain how far the student has progressed.’ is actually another way of saying that this is a ‘means of collecting information’. The defining of categories was carried out by two researchers separately, after which the results were compared and agreement was reached on the categories to be used. After the categories had been decided on, the procedure was repeated in reverse order to see whether all indicators could be included in the categories and whether the categories ‘covered’ all the indicators. The categories were then converted into indicators (see Table 7).

3.4.4. Fourth Stage: checking

The assessment group was tasked with checking the indicators, formulated from the categories, against the three research questions (see section 3.5).

3.4.5. Fifth Stage: last Check

At the end of this process the researchers performed a last check for inconsistencies and possible textual improvements.

3.5. The session

In preparation for the session, participants received an information package consisting of an explanation of the aim of the session, a brief summary of the research and the overview of indicators of the characteristics and the steering factors: the results of stages 1 and 2.

The assessment group had to answer the following research questions:

  • Are the indicators a reliable rendering of the definitions and their appearance in practice?

  • Are any relevant indicators missing, or is the list simply too detailed?

  • Is the group in agreement on this?

All the indicators were discussed one by one during the session. The discussion of each indicator was rounded off with a conclusion agreed on by all participants. Adjustments were discussed and approved, and the text was adapted. The session was audio-recorded. The researchers listened back to the recording for verification and compared it with remarks made and notes taken. This did not lead to any adjustments.

4. Results

The sessions went as planned. Since participants in both focus groups referred to ‘attitudinal’ as well as ‘behavioural’ indicators, the research task was widened to include both attitudinal and behavioural indicators. Considering that various instruments (observation, interview, survey) will be used during the research, not all indicators need to be distinguishable as ‘behaviour’; information on emotional or attitudinal indicators can be gathered by means of an interview or a questionnaire. The intermediate result is a complete list of indicators for each PLC characteristic and the steering factors. Table 5 gives a quantitative overview of the indicators proposed by the focus groups and drawn from the literature, before reduction took place.

Table 5. Numbers of indicators constructed in the focus groups and from the literature (Stage 1).

By comparing the ‘content’ of the indicators proposed by the focus groups and those from the literature, it appeared that, despite differences in wording, many indicators overlapped and thus had roughly the same meaning. For instance, for the characteristic mutual trust and respect: ‘Teachers are not afraid to discuss problems or differences of opinion.’ (proposed by teachers) and ‘There is room for discussing different views, approaches and opinions.’ (proposed by school leaders). These were considered and treated as one and the same indicator. Because of this overlap, the numbers of indicators in Table 5 should not be added up.

The results of the five consecutive stages are illustrated by means of examples in Tables 6 and 7. For the sake of clarity, a distinction is made between ‘provisional’ indicators, the result of stage 1, and the reformulated indicators at the end of the process: the ‘final’ indicators, the result of stage 5. The school leaders and the teachers transformed the definitions into concrete behavioural and attitudinal indicators (stage 1). The researchers then compared the indicators with the criteria (second stage: matching). Table 6 shows examples of indicators that did and did not match the criteria (stage 2). The numbers in the columns refer to the criteria they do or do not match.

Table 6. Examples of indicators that match and do not match the criteria (Stage 2).

Table 7. Division of indicators into categories and rephrasing final indicators (Stage 3).

In the third stage the researchers looked for different ‘types’ of indicators for each characteristic: indicators that belonged together because they had a common denominator. These indicators were grouped into categories and rephrased in a more general way, so that the rephrased indicator comprised all the indicators concerned. The process of grouping indicators, forming categories and rephrasing them (see section 3, stage 3) was performed simultaneously but is presented separately for the sake of clarity and transparency. The intermediate result was a fresh list of indicators, presented in Table 7.

The assessment group checked this intermediate result against the three research questions (stage 4). The response to the research questions was unanimous among all the participants: the indicators were a faithful rendering of the definitions, and no relevant indicators were missing. The group did propose a number of adjustments.

  • The group proposed omitting words such as varied, regular and so forth, and formulating more accurately. For example, ‘taking account of differences … ’ was to be changed into ‘attuning to differences … ’. These proposals were adopted.

  • The group paid a great deal of attention to whether or not teachers’ ‘competences’ should be used as indicators. For instance, ‘being able to give feedback’ is a competence, whereas ‘giving feedback’ is an activity. Some participants proposed using competences as indicators. This suggestion was not adopted: a competence is supposed to be embedded in an activity.

  • The group proposed changing the order of the indicators of two characteristics for the sake of logic. This proposal was adopted.

  • The group proposed referring to teachers (plural) in the learning activities instead of teacher (singular), because this would emphasise the aspect of collaboration. This proposal was adopted.

In the fifth stage the researchers scrutinised the list of indicators again. This led to a number of textual adjustments:

  • More current terminology was used. For instance: ‘Teachers develop and try out new things’ became ‘Teachers develop new teaching methods and try them out’. ‘Teachers work at a good relationship with students’ became ‘Teachers work at a good pedagogical relationship with students’.

  • The indicators were consistently phrased from the point of view of the teacher, active in the PLC. Example: ‘The school leader ensures the development of a shared vision’ became ‘Teachers experience the fact that the school leader ensures the development of a shared vision.’

  • In a number of cases an indicator contained more than one concept. Where this was the case, the indicator was split.

  • Although informal leaders, such as teacher leaders, do play a part in the (development of) PLCs, here leadership refers to ‘formal’ leadership (on various levels). In order to prevent ambiguity, leadership was changed into ‘principal’.

The adjustments made no difference to the content of the indicators.

Although there was considerable overlap between the intention and content of the indicators from the literature and those from the focus groups, a substantive comparison reveals a number of striking differences. Because of the importance of combining the input from the ‘practitioners’ in the focus groups with the ‘theoretical’ input from the literature, all differing indicators were retained. There was a difference in the characteristic of ‘Shared focus on student learning’. For instance, data-driven teaching, i.e. teaching based on information about the way students learn, was not referred to by principals or teachers. A possible explanation may be that data-driven teaching has only gained prominence in educational practice in the last few years. With regard to the characteristic of ‘Shared focus on continuous teacher learning’, principals and teachers mainly referred to reflection and feedback. It is likely that traditional professionalisation activities, such as attending courses or reading subject literature, are taken for granted. The joint discussion of student results as an indicator of ‘Shared responsibility’ only occurs in the relevant literature, not among practitioners. An explanation may be that teachers still operate in isolation and only feel responsible for their own subject and students. As far as the steering factor of ‘Leadership’ is concerned, teachers and principals only distinguished the part principals play in initiating and developing a vision and in creating facilities (time and space). In the literature, personal consideration and support for individual teachers on the part of principals, and the stimulation of professional development, are listed as indicators. Accounting for the quality of teaching and for teaching results as an element of ‘Individual and collective autonomy’ is found only in the literature. This is understandable, because such accountability is not customary in current education (Mausethagen 2012).
The final result was an overview consisting of 42 indicators of the PLC characteristics and two steering factors (Appendix 1).

5. Conclusion and discussion

The aim of this study was the construction of indicators on which research instruments could be built in such a way that they are considered useful and understandable by educational practitioners. Teachers find that researchers often investigate issues that are not really relevant from a practitioner’s point of view (Biesta 2007). They often complain about research instruments that do not reflect real educational practice, do not align with their learning needs, or are written in research language that they do not easily understand. To ensure that the constructed instruments do not have the same shortcomings, educational practitioners were involved. This article describes the transformation of conceptual definitions into attitudinal and behavioural indicators that are discernible in teaching practice, with the aim of developing instruments for research into PLCs in a complex educational environment. Both the relevant literature and the input of principals and teachers in focus groups were needed to obtain a broad set of indicators and to develop relevant instruments for practice-based research on PLCs.

The process of constructing reliable indicators had to be systematic and transparent (McKenney and Reeves Citation2012). The process of creating indicators therefore comprised five stages, including checks, which are described extensively in this article. The aim of the focus groups was to reach consensus on a set of indicators for eleven characteristics and two steering factors in a relatively short period of time. This required discussion and dialogue between participants who did not know each other and who had different opinions and experiences regarding complicated issues such as collaboration and differentiated instruction; issues that teachers often disagree about. But that was precisely the reason to choose focus groups as a research method: the coming together of different educational views and experiences.

The task of the moderator was a difficult one (Morgan Citation1996). On the one hand, he had to give the participants room to think, rethink and discuss freely; on the other hand, he had to keep the discussion focussed on the various topics (definitions and indicators) and on reaching consensus. Furthermore, leading the discussions required experience. As the researchers had limited experience, the sessions might have been more effective with experienced moderators. In future research, experienced moderators should therefore be used for this purpose.

The assessment group consisted solely of school leaders. This was a practical choice, because these school leaders were available. It is possible, but not probable, that teachers would have made different observations.

In order to transform definitions into indicators, two focus groups were used. Although it is not impossible that a third focus group would have yielded supplementary indicators, this approach reached a saturation point, and we therefore refrained from using a third group. It was decided not to inform the participants of the criteria for indicators prior to the session, in order to give them optimal leeway for discussion. During the sessions it appeared that not all proposed indicators matched the criteria. We did not intervene, so as not to influence the discussion, but gave the participants ample room to think and discuss freely. Had the participants known the criteria beforehand, the sessions might have gone more effectively: fewer indicators would have been omitted for not matching the criteria, but the discussion would have developed less freely. The scheduled time for the focus groups was relatively short: within two hours, eleven characteristics and two steering factors had to be discussed. Owing to this time constraint, discussions had to be kept short. Two sessions per focus group would probably have been better, but also time-consuming for the participants; taking into account that one third of the selected focus group members was not able to attend the session, organising two sessions would probably have led to more cancellations. The discussion of some characteristics/definitions took more time than expected, because it was not immediately clear to the participants what exactly was meant by them. For instance, the definition of ‘Shared responsibility for student learning’ appeared to be difficult. Instead of only providing an information package, it might have been better to introduce each characteristic/definition with a short explanation of what exactly was meant by it, so that all participants were on the same wavelength on that topic from the beginning of the discussion.
Because of the cancellations, the male/female ratio did not match the real population in secondary schools (CAOP Citation2016). What effect this imbalance may have had on the evaluations is impossible to determine afterwards; this could be assessed in further research.

Educational research must be conducted in collaboration with (not for or on) practice (McKenney and Reeves Citation2012). As for the development of instruments, much information can be found in the literature on the various ‘types’ of qualitative instruments; information on how to develop them in collaboration with practice, and on which criteria should be used, is scarce. Future research should address the development of guidelines and criteria for the design of qualitative instruments in collaboration with educational practitioners. The final result is an overview in which indicators were formulated for each characteristic and each steering factor, which can be used to further develop instruments for research into PLCs (Appendix 1). Actual practice will have to show whether research instruments can be selected and constructed with the aid of these indicators. At a later stage, the instruments will have to be tested for reliability and validity in measuring the (possible) development of both the characteristics of a PLC and its steering factors.

This study contributes to the methodology for researching PLCs from a practice-based viewpoint. Phrasing the indicators is an intermediate step in developing new measuring instruments for research into PLCs that serve science and practice simultaneously. Once instruments have been developed, information can be collected on the processes that play a part in the development of a PLC in actual practice, and steps can be taken to foster this development. Future research will focus on developing suitable assessment instruments that capture both the development of a PLC and the steering factors, such as a questionnaire, a method for assessing semi-structured interviews, and a method for assessing PLC meetings.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Bergen, T. and van Veen, K., 2004. Het leren van leraren in een context van onderwijsvernieuwingen: waarom is het zo moeilijk? [The learning of teachers in a context of innovation of education: why is it so difficult?]. Velon, 25 (4), 29–39.
  • Biesta, G., 2007. Bridging the gap between educational research and educational practice: the need for critical distance. Educational Research and Evaluation, 13 (3), 295–301. doi:10.1080/13803610701640227
  • Bolam, R., et al. (2005). Creating and sustaining effective professional learning communities. Research Report 637. London: DfES and University of Bristol.
  • Bolhuis, S., 2009. Naar evidence based onderwijs? [Towards evidence based education?]. Vector, 9, 17–19.
  • Broekkamp, H. and Van Hout-Wolters, B., 2007. The Gap between educational research and practice: A literature review, symposium, and questionnaire. Educational research and evaluation, 13 (3). doi:10.1080/13803610701626127
  • Casanova, M. and Alvarez, I.M., 2012. Online cooperative learning and key interpsychological mechanisms: an exploratory study through the analysis of the discourse content. Creative education, 3 (8), 1345–1353. doi:10.4236/ce.2012.38197
  • Castelijns, J., Koster, B., and Vermeulen, M., 2009. Vitaliteit in processen van collectief leren: samen kennis creëren in basisscholen en lerarenopleidingen. [Vitality in processes of collective learning: creating knowledge together in primary schools and student training schools]. Antwerpen, Belgium: Garant.
  • Castelijns, J., Vermeulen, M., and Kools, Q., 2013. Collective learning in primary schools and teacher education institutes. Journal of Educational Change, 14 (3), 373–402. doi:10.1007/s10833-013-9209-6
  • Centrum Arbeidsverhoudingen Overheidspersoneel (CAOP), 2016. Onderwijsatlas Voortgezet Onderwijs. De onderwijsarbeidsmarkt in beeld. [Education Atlas Secondary Education. The education labour market in images]. Den Haag, The Netherlands: CAOP.
  • Clarà, M., 2015. What is reflection? Looking for clarity in an ambiguous notion. Journal of teacher education, 66 (3), 261–271. doi:10.1177/0022487114552028
  • Day, C., et al., 2010. Ten strong claims about successful school leadership. Nottingham, UK: University of Nottingham/National College for School Leadership.
  • Doppenberg, J.J., Den Brok, P.J., and Bakx, A.W.E.A., 2012. Collaborative teacher learning across foci of collaboration: perceived activities and outcomes. Teaching and teacher education, 28 (6), 899–910. doi:10.1016/j.tate.2012.04.007
  • DuFour, R., et al., 2010. Learning by doing: A handbook for professional learning communities at work. Bloomington: Solution Tree Press.
  • Evers, A.T. 2012. Teachers’ professional development at work and occupational outcomes: an organisational and task perspective. Thesis. (PhD). Open University, Heerlen, The Netherlands.
  • Gunawardena, C. and Lowe, C., 1997. Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of educational research, 17 (4), 397–431.
  • Hairon, S., et al., 2015. A research agenda for professional learning communities: moving forward. Professional development in education, 43 (1), 72–86. doi:10.1080/19415257.2015.1055861
  • Handelzalts, A. (2009). Collaborative curriculum development in teacher design teams. Thesis (PhD). Twente University, The Netherlands.
  • Hattie, J., 2009. Visible learning: a synthesis of over 800 meta-analyses relating to achievement. London: Taylor and Francis group.
  • Hattie, J.T. and Timperley, H., 2007. The power of feedback. Review of educational research, 77 (1), 81–112. doi:10.3102/003465430298487
  • Hipp, K.K. and Huffman, J.B. (2003). Professional learning communities: assessment development-effects. Paper presented at the International Congress for School Effectiveness and Improvement, Sydney, Australia.
  • Hord, S.M., 1997. Professional learning communities: communities of continuous inquiry and improvement. Leadership, 40 (1), 58–59.
  • Hoyle, E., 1975. Professionality, professionalism and control in teaching. In: V. Houghton, ed. Management in education: the management of organizations and individuals. London: Ward Lock Educational in association with Open University Press, 283–304.
  • Hulsbos, F., et al., 2012. Professionele ruimte en gespreid leiderschap. [Professional space and distributed leadership]. Look - Open Universiteit. 1–51. Heerlen, the Netherlands: Ruud de Moor Centrum, Open University.
  • Kelchtermans, G., 2012. De leraar als (on)eigentijds professional: reflecties over de ‘moderne professionaliteit’ van leerkrachten. [The teacher as a (non) contemporary professional: reflections on the modern professionality’ of teachers]. The Hague, The Netherlands: Dutch Educational Counsel.
  • Kessels, J., 2012. Leiderschapspraktijken in een professionele ruimte [Leadership practice in professional space]. Oration, Heerlen: Open University.
  • Korthagen, F. and Vasalos, A., 2005. Levels in reflection: core reflection as a means to enhance professional growth. Teachers and Teaching: Theory and Practice, 11 (1), 47–71. doi:10.1080/1354060042000337093
  • Kramer, R.M., 2010. Collective trust within organizations: conceptual foundations and empirical insights. Corporate Reputation Review, 13 (2), 82–97. doi:10.1057/crr.2010.9
  • Kruse, S.D., Louis, K.S., and Bryk, A.S., 1995. An emerging framework for analyzing school-based professional community. In: K.S. Louis and S. Kruse, Associates, eds. Professionalism and community: perspectives on reforming urban schools. Thousand Oaks, CA: Corwin, 23–44.
  • Kwakman, K. (1999). Leren van docenten tijdens de beroepsloopbaan. Studies naar professionaliteit op de werkplek in het voortgezet onderwijs. [Teacher learning during their career. Studies on professionality in the workplace in Secondary Education]. Thesis (PhD). Catholic University of Nijmegen, The Netherlands).
  • Levine, T.H. and Marcus, A.S., 2010. How the structure and focus of teachers’ collaborative activities facilitate and constrain teacher learning. Teaching and Teacher Education, 26 (3), 389–398. doi:10.1016/j.tate.2009.03.001
  • Little, J. W., 2006. Professional community and professional development in the Learning-Centered school. Best-practices working paper, National Education Association, Washington DC, United States of America.
  • Lockhorst, D. and van der Pol, J. (2008). A descriptive model of teacher communities. Proceedings of the 6th international conference on networked learning. Hallidiki, Greece: Lancaster University, 253–261.
  • Lomos, C. 2012. Professional community and student achievements. Thesis. (PhD). University of Groningen, The Netherlands.
  • Louis, K.S., 2006. Changing the culture of schools: professional community, organizational learning and trust. Journal of School Leadership, 16, 477–489. doi:10.1177/105268460601600502
  • Martens, R., 2009. RdMC onderzoeksprogramma 2009-2011. Succesvol leven lang leren op de werkplek: onderzoek naar de praktijk van docentprofessionalisering [Research program 2009-2011: lifelong learning successfully: research on the practice of teachers’ professionalization]. Heerlen, The Netherlands: Ruud de Moor Centrum, Open University.
  • Mausethagen, S., 2012. A research review of the impact of accountability policies on teachers’ workplace relations. Educational Review, 9, 6–13.
  • McKenney, S. and Reeves, T.C., 2012. Conducting educational design research: what it is, how we do it, and why. London: Routledge.
  • McLaughlin, M.W. and Talbert, J.E., 2001. Professional communities and the work of high school teaching. Chicago: University of Chicago Press.
  • Meijer, P.C., et al., 2012. Teacher research in secondary education: effects on teachers’ professional and school development, and issues of quality. International journal of educational research, 57, 39–50. doi:10.1016/j.ijer.2012.10.005
  • Meirink, J.A., et al., 2010. Teacher learning and collaboration in innovative teams. Cambridge journal of education, 40 (2), 161–181. doi:10.1080/0305764X.2010.481256
  • Mercer, N., 2004. Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of applied linguistics, 1 (2), 137–168. doi:10.1558/japl.2004.1.issue-2
  • Mitchell, C., and Sackney, L., 2010. Profound improvement: Building learning-community capacity on living-system principles. 2nd ed. New York: Routledge. doi:10.4324/9780203826027
  • Morgan, D.L., 1996. Focus groups. Annual review of sociology, 22, 129–152. doi:10.1146/annurev.soc.22.1.129
  • Mortelmans, D., 2009. Handboek kwalitatief onderzoek [Handbook qualitative research]. Den Haag, The Netherlands: Acco.
  • Oolbekkink-Marchand, H.W., et al., 2017. Teachers’ perceived professional space and their agency. Teaching and teacher education, 62, 37–46. doi:10.1016/j.tate.2016.11.005
  • Rakow, L.F., 2011. Interviews and focus groups as critical and cultural methods. Journalism and mass communication quarterly, 88, 416–428. doi:10.1177/107769901108800211
  • Reynders, L., Vermeulen, M., Kessels, J., and Kreijns, K., 2015. Stimulating teachers’ continuous professional development in the Netherlands. Malta review of educational research, 9 (1), 115–136.
  • Ros, A. (2009). Is ons onderwijs nog up-to-date? [Is our education still up-to-date?]. In: A. Ros, R. Timmermans, J. van der Hoeven, and M. Vermeulen, eds. Leren en laten leren. Ontwerpen van leeractiviteiten voor leerlingen en docenten [Learning and letting learn. Designing learning activities for students and teachers]. MESOfocus 75 ed., Vol. 75. Alphen aan den Rijn: Kluwer.
  • Runhaar, P., Sanders, K., and Yang, H., 2010. Stimulating teachers’ reflection and feedback asking: an interplay of self-efficacy, learning goal orientation and transformational leadership. Teaching and teacher education, 26, 1154–1161. doi:10.1016/j.tate.2010.02.011
  • Six, F., Nooteboom, B., and Hoogendoorn, A., 2010. Actions that build interpersonal trust: A relational signalling perspective. Review of social economy, 68 (3), 285–315. doi:10.1080/00346760902756487
  • Sjoer, E. and Meirink, J., 2015. Understanding the complexity of teacher interaction in a teacher professional learning community. European journal of teacher education, 39 (1), 110–125. doi:10.1080/02619768.2014.994058
  • Sleegers, P., et al., 2013. Toward conceptual clarity: A multidimensional, multilevel model of professional learning communities in Dutch elementary schools. The elementary school journal, 114 (1), 118–137. doi:10.1086/671063
  • Stevens, L., 2010. Zin in onderwijs. [Relishing education]. Antwerpen, Belgium: Garant.
  • Stoll, L., et al., 2006. Professional learning communities: A review of the literature. Journal of Educational Change, 7, 221–258. doi:10.1007/s10833-006-0001-8
  • Strijbos, J.W., et al., 2006. Content analysis: what are they talking about? Computers and education, 46 (1), 29–48. doi:10.1016/j.compedu.2005.04.002
  • Teurlings, C., et al. (2011). “Als ik er maar wat aan heb …” [If I get anything out of it?]. Eindrapportage ‘Onderwijsonderzoek: de praktijk aan het woord’ [Final report: Educational research: giving the floor to practice]. Heerlen, The Netherlands: Ruud de Moor Centrum/Open University.
  • Thoonen, E.E.J. (2012). Improving classroom practices: the impact of leadership, school organizational conditions and teacher factors. Thesis. (PhD). University of Amsterdam, The Netherlands.
  • Tomlinson, C.A., et al., 2003. Differentiating instruction in response to student readiness, interest, and learning profile in academically diverse classrooms: A review of literature. Journal for the education of the gifted, 27 (2–3), 119–145. doi:10.1177/016235320302700203
  • Tschannen-Moran, M., 2009. Fostering teacher professionalism in schools: the role of leadership orientation and trust. Educational administration quarterly, 45 (2), 217–247. doi:10.1177/0013161X08330501
  • Van Den Akker, J., et al., Eds., 2006. Educational design research. London: Routledge.
  • Van Den Bossche, P., et al., 2006. Social and cognitive factors driving teamwork in collaborative learning environments: team learning beliefs and behaviors. Small group research, 37 (5), 490–521.
  • Van Veen, K., et al., 2010. Professionele ontwikkeling van leraren: een reviewstudie naar effectieve kenmerken van professionaliseringsinterventies van leraren [Professional development of teachers: a review study on effective characteristics of teachers’ professionalizing interventions]. Leiden: ICLON, expertise centrum leren van docenten.
  • Van Wessum, L. 1997. De sectie als eenheid [The subject department as a unit]. Thesis (PhD). Utrecht University, The Netherlands.
  • Verbiest, E., ED., 2008. Scholen duurzaam ontwikkelen: bouwen aan een professionele leergemeenschap [Improving schools sustainably: building a professional learning community]. Apeldoorn, The Netherlands: Garant.
  • Vescio, V., Ross, D., and Adams, A., 2008. A review of research on the impact of professional learning communities on teaching practice and student learning. Teaching and teacher education, 24 (1), 80–91. doi:10.1016/j.tate.2007.01.004
  • Waslander, S., Dückers, M., and Dijk, G.V., 2012. Professionalisering van schoolleiders in het voortgezet onderwijs: een gedeeld referentiekader voor dialoog en verbetering [Professionalization of school leaders in Secondary Education: A shared framework for dialogue and improvement]. Utrecht: Leadership Development Centre TiasNimbas Business School, IVA beleidsonderzoek en advies & VO-Raad.
  • Zwart, R.C., et al., 2007. Experienced teacher learning within the context of reciprocal peer coaching. Teachers and teaching: theory and practice, 13, 165–187. doi:10.1080/13540600601152520

Appendix 1.

Overview of conceptual definitions, categories, indicators of PLC characteristic and steering factors