
The supportive character of teacher education triadic conferences: detailing the formative feedback conveyed

Pages 116-130 | Received 24 Jun 2018, Accepted 15 Nov 2018, Published online: 20 Nov 2018

ABSTRACT

This study explored the feedback conveyed in nine triadic conferences in teacher education practicum. The supportive character of the formative feedback was explored in detail by employing a framework that combines two conceptualisations of feedback. The study depicted feedback directed backwards, upwards or forward and focusing on performance, strategies, self-regulation or personal characteristics, inside a framework of self-regulated learning. Findings show the teacher student to be more active in feed up, feed back and feed forward than previous studies have shown. Furthermore, the conversations were found to be characterised by joint problem-solving in which all three parties focused on the student’s professional teacher-becoming. In conclusion, the findings indicate a feedback practice characterised by ‘sustainable’ feedback that scaffolds students’ self-assessing competence while fostering student self-reflexivity and self-regulation.

Introduction

There is compelling agreement regarding the essential role of practicum in teacher education (Al-Malki and Weir Citation2014; Chiang Citation2008; Darling-Hammond Citation2010; Farrell Citation2008; Grossman and McDonald Citation2008; Kaldi Citation2009; Smith and Lev-Ari Citation2005; White, Bloomfield, and Le Cornu Citation2010; Zeichner Citation2010). Scholars in the field point to the fact that the practicum part of teacher education serves the educational institution and the profession in terms of quality assurance by playing a gatekeeping role (Cheng and Tang Citation2008; Goodwin and Oyler Citation2008; Hegender Citation2010). Simultaneously, practicum provides teacher students with experiential competence in teaching practice to support their development towards becoming professional teachers (Cheng and Tang Citation2008; Chiang Citation2008; Farrell Citation2008; Kaldi Citation2009).

A common practice seems to be for the university supervisor to observe a lesson taught by the student, after which a conference aimed at assessing the student’s performance takes place (e.g. Aspden Citation2014; Haigh, Ell, and Mackisack Citation2013). In this conference, the student (S), the university supervisor (US) and the practicum mentor (PM) assess and reflect upon the teacher student’s performance and development to agree on the degree to which the intended learning outcomes (ILOs) pertaining to the practicum are met. In addition, they discuss possible measures that ought to be taken in order to accomplish them. Through the communication of constructive feedback and the establishment of a process of collaborative reflection regarding S performance, the triadic conference is meant to help the teacher students build a bridge between their theoretical knowledge base and their practical experience. Several scholars point out that these triadic conferences constitute an important component of teacher education practicum and are regarded as crucial for the teacher students’ professional development (Akcan and Tatar Citation2010; Tang and Chow Citation2007). However, the supportive character of triadic conferences is by no means self-evident. In a New Zealand study, for example, Zhang et al. (Citation2015) found a high degree of consistency among 10 Ss, 20 PMs and 5 USs who perceived the pre-conference observation to be invalid, obtrusive and biased. There was furthermore consensus that the triadic conference was threatening, assessing, dishonest, uncomfortable or stressful – a perception likely to induce anxiety and possibly anger in the S, thereby risking affecting S learning adversely (cf. Falchikov Citation2007). Both PMs and Ss in the Zhang et al. study wanted the USs to be more supportive and less focused on assessing. The more assessment-focused approach might be explained by the fact that the triadic conference is meant to (or is misunderstood to be meant to) assess the S performance summatively. Some studies report an expectation and preference for the US to take responsibility for the summative assessment and the PM for the formative assessment (Aspden Citation2014). Other studies, however, suggest that both PMs and USs perceive their roles to be primarily formative (Aspden Citation2014; Ortlipp Citation2009; Parker and Volante Citation2009). Either way, the combining of these two kinds of feedback in the same forum is deemed problematic (Aspden Citation2014; Bates Citation2004; Parker and Volante Citation2009). As several scholars point out, a practice aimed at simultaneously performing formative and summative assessment risks being contradictory, since formative assessment is performed within a learning paradigm, whilst summative assessment is performed as part of a certifying paradigm (Bates Citation2004; Black and Wiliam Citation2003; Boud Citation2007; Cheng and Tang Citation2008; Goodwin and Oyler Citation2008; Maclellan Citation2004; Parker and Volante Citation2009; Tillema, Smith, and Leshem Citation2011).

Thus, while the triadic conference is regarded as playing a crucial, supportive role in the S’s development, it can also be perceived as being quite the opposite. This makes the details and nuances of the feedback conveyed in these conferences interesting to explore (cf. Akcan and Tatar Citation2010). Which of the three parties, for example, dominates the conference? How does the S contribute to the feedback? In what ways do the US and the PM support the S’s development? Is the feedback conveyed directed backwards or forward? Does it focus on S performance, strategies, self-regulation or personal traits? Is it clarifying, correcting, facilitating or encouraging? This article reports on a study of nine triadic conferences in a Swedish upper secondary school, in which the feedback conveyed was explored in detail, aiming at identifying the supportive character of the formative feedback. What details, then, do earlier studies reveal about the formative feedback conveyed in triadic conferences in teacher education practicum?

What do we know about formative aspects of triadic conferences?

Studies report that triadic conferences in teacher education practicum typically last about 60 minutes (Aspden Citation2014; Blanton, Berenson, and Norwood Citation2001; Goodwin and Oyler Citation2008). According to Fernandez and Erbilgin (Citation2009), the USs, PMs and Ss each speak about one third of that time, while Tsui et al. (Citation2001) found the USs dominating overall, taking up 45% of the time, in comparison to the 29% and 28% by the PMs and the Ss respectively.

Aspden’s (Citation2014) study of four triadic conferences describes some essential differences in the quality of the feedback provided to the S, distinguishing prescriptive, informative, confronting, catalytic (encouraging self-discovery), cathartic (allowing for the discharge of emotions and feelings) and supporting feedback. The first conference studied was characterised almost exclusively by supporting feedback, from the US as well as the PM, and at no point involved confronting or catalytic feedback. The second, in contrast, was characterised by confronting feedback, although interspersed with supportive and informative feedback. The third case was characterised by supportive feedback with some confronting and catalytic feedback, while the fourth comprised mostly informative feedback, with clusters of supporting, catalytic and confronting feedback. Thus, the two most prevalent kinds of feedback in Aspden’s (Citation2014) study were supportive and confronting, followed by catalytic and informative feedback. The least common forms were prescriptive and cathartic.

Akcan and Tatar (Citation2010) found the USs in their study to be concerned with establishing a conversation in which the teacher students described, questioned and reflected on their own teaching. By encouraging the teacher students to talk about how they felt about the lesson and to reflect on how it went, before stating their own opinions, the USs strove to engage the teacher students in a joint reflective practice. The PMs in the same study, on the other hand, were found to be more situation-specific and to aim at building empathy with the teacher students. They tended to focus on specific incidents from the lesson, reporting their observations and giving concrete advice for better instruction. Thus, while the USs in the Akcan and Tatar (Citation2010) study encouraged reflection and helped the teacher students to evaluate their lessons critically during their post-lesson conferences, the PMs were more situation-specific and offered a kind of feedback that directed and prescribed what to do next time.

Johnsen-Høines (Citation2009, Citation2011) identified two different kinds of conversation in triadic conferences, characterising them as evaluative and subject-oriented reflective conversations, respectively. The author defines the latter as representing an investigative and probing kind of interaction that is subject-focused and reflective. It is described as a kind of dialogue that uses the practicum situation as a starting point but then detaches from it to develop subject knowledge and vocational competence based on subject and subject-didactical knowledge. It is characterised as referential, forward-oriented and continuing. The evaluative dialogue, on the other hand, is characterised primarily as retrospective in that it reviews what happened and evaluates what should have been done differently. According to Johnsen-Høines (Citation2009, Citation2011), these different types of dialogue might occasionally support one another. However, an evaluative dialogue might also inhibit or impede a subject-oriented reflective dialogue. It might also be difficult to establish a subject-oriented reflective dialogue while an evaluative discourse is dominating because, the author suggests, an evaluative conversation might promote statements that are perceived as accusations, reproaches or negative criticism, regardless of how they are meant. If the feedback is interpreted as a negative evaluation, it might well prevent constructive reflection.

Fernandez and Erbilgin (Citation2009) coded the types of communication used by USs leading triadic post-lesson conferences and by PMs leading dyadic conferences, in order to examine approaches to supervision. Noteworthy in their findings are the differences between the USs and the PMs as leaders of the conference with regard to the use of questioning and assessing communication. While questioning, defined as ‘leader asked questions to understand the teacher student’s thinking’, made up 45–50% of communication in conferences led by the US, it comprised only 9–13% of communication in conferences led by the PM. Conversely, assessing, defined as ‘leader provided positive or negative assessments of aspects (in a general way) of the teacher student’s work or ideas’, made up 46–52% of PM communication and 10–11% of that of the US. The other three categories were describing, i.e. the leader described specifics from direct observations of the teacher student’s work; suggesting, i.e. the leader made directive or nondirective suggestions related to the teacher student’s work; and explicating, i.e. the leader explained her own perspectives on issues based in theories or beliefs grounded in recent reforms or personal experience. These showed a more even distribution in the findings, ranging between 7% and 20% of the communication in both triadic conferences led by the US and dyadic conferences led by the PM. According to Fernandez and Erbilgin (Citation2009), the feedback practice of the USs aligned with an approach to supervision that they denoted as educative. Educative supervision aims, according to the authors, at supporting S learning by helping Ss to connect ideas from the teacher education program to practicum. It is described as a feedback practice characterised by a tendency to use open-ended questioning related to observed classroom experiences and to probe the teacher students’ thinking, particularly in relation to mathematics pedagogy and mathematics, i.e. to the subject and subject didactics. In detailing the findings, Fernandez and Erbilgin (Citation2009) report that the USs combined open-ended questions with descriptions of S undertakings and with suggestions, the latter also clarified by explications. The consequence of this approach, the authors claim, is that the US thereby signals that the suggestions are open for consideration and reflection. Furthermore, their study showed that the USs typically did not give assessments until after they had fostered analyses of the observed lessons by the Ss and their PMs, i.e. after long exchanges, or towards the end of the conference. The approach to supervision employed by the PMs was, on the other hand, denoted as evaluative. Fernandez and Erbilgin (Citation2009) describe the evaluative feedback practice in terms of primarily positive assessments and affirmations, occasionally combined with direct suggestions in areas where the PMs judged it plausible for the teacher students to be able to perform better. In detailing the PM-led conferences, the authors give an account of a feedback practice evidenced by the provision of mostly positive evaluations, affording suggestions along with assessing or describing comments, yet without posing the suggestions as ideas for discussion. In the case of the PMs, Fernandez and Erbilgin (Citation2009) found explicating communication to be used foremost to justify approval of S performance.

Tsui et al. (Citation2001), focusing on speech functions in triadic conferences, constructed three categories according to which they analysed their findings: eliciting, offering and managing interaction. Eliciting and offering were used to code the speaker – whether US, S or PM – eliciting or offering reflection/analysis, evaluation, suggestion/alternative, information, observation, explanation, expression of feelings and support/empathy. The category ‘managing interaction’ was created to code comprehension checking, asking for and giving confirmation and agreement, acknowledgement and meta-discourse. Tsui et al.’s (Citation2001) findings show offering to make up the bulk of the dialogue in the conferences, whereas eliciting accounted for the smallest portion. Looking at the contributions of each participant, S communication comprised 23% offering, followed by managing (5%) and eliciting (0.1%); US communication comprised 26% offering, 13% managing and 6% eliciting; while PM communication was made up of 23% offering, 4% managing and 2% eliciting.

Thus, studies show the division of time in triadic conferences tending to be either evenly distributed among the three participants or dominated by the US or the PM. As regards the detail and character of the feedback reported, the findings in these studies rely on self-developed conceptual frameworks, making it somewhat difficult to get a consistent picture of the state of the field. However, the studies identify and describe various kinds of feedback, such as informative/describing/explicating, suggesting, confronting, reflecting, prescriptive, analysing, evaluating/assessing, supportive, catalytic and cathartic (Akcan and Tatar Citation2010; Aspden Citation2014; Fernandez and Erbilgin Citation2009; Johnsen-Høines Citation2009, Citation2011; Tsui et al. Citation2001). Furthermore, there is a detectable pattern whereby the US tends to convey forward-oriented, eliciting and reflective feedback, while the PM engages in retrospective, reviewing and offering feedback (Akcan and Tatar Citation2010; Fernandez and Erbilgin Citation2009; Johnsen-Høines Citation2009, Citation2011). Additionally, not much is reported with regard to the S’s contribution. While these studies have a wider rationale that encompasses other aspects of teacher education practicum or triadic conferences, they do not provide the detail about the feedback that is necessary for identifying the supportive character of formative feedback. This study sets out to provide a detailed account of the feedback employed in triadic conferences in teacher education by exploring details and nuances in the feedback. It aims to identify the supportive character of formative feedback conveyed in triadic conferences.

Methodology

The study was conducted at an upper secondary school associated with one of the largest teacher education programs in Sweden and specifically assigned as a site for teacher student training. The practicum amounts to 30 ECTS credits, distributed across three full-time periods during semesters two, five and nine. Triadic conferences recur in each of the practicum periods, aiming to contribute to continuity and progression in the Ss’ professional development.

The study was initiated by the PMs at the school, and the triadic conferences included were self-selected. A total of nine conferences were recorded in the autumn term of 2016, of which three were carried out during Practicum I, one during Practicum II, and the remaining five during Practicum III. Four of the conferences were conducted with language teacher students, three with combined language and social science teacher students, one with a social science teacher student, and one with a science education teacher student.

Analysis

The recordings were analysed employing a deductive approach and qualitative content analysis focusing on the manifest content (Elo and Kyngäs Citation2008; Hsieh and Shannon Citation2005). The codes were deduced from a framework worked out by combining six of Nicol’s (Citation2009) ‘twelve principles for good assessment and feedback’ with Hattie and Timperley’s (Citation2007) conceptualisation of feedback as directed backwards, upwards or forward, and as focusing on performance, strategy, self-regulation or person. The categories used for coding are shown in Table 1.

Table 1. Categories of formative feedback used in the coding.

The analysis aimed to account for the extent to which these subcategories appeared in the material and for the extent to which the US, the PM and the S conveyed these different kinds of feedback. The material was specifically searched for aspects of these different subcategories.

The coding was done directly in the audio files using the Transana software, which allows for the coding of sequences. The program allows one to create sequences of different lengths that can be coded according to multiple categories and curated into ‘collections’. The findings reported here are part of a more comprehensive study focusing, in addition, on other aims of the conference conversations, the competencies addressed, and the references used. The coding procedure accordingly included codes besides those used for coding aspects of feedback and the participant conveying it.

In a first step, the focus was on testing the coding scheme and identifying aspects of the material that did not fit existing categories or subcategories. After three conferences had been listened to, this process led to the addition of five subcategories distributed across two categories. All nine conversations were then coded from start to finish in such a manner that each change of speaker, competence addressed, aim of speech or reference split the recording into ‘clips’. Each clip was then curated into a collection/subcategory subordinate to the collections/categories of ‘who’ (speaker), ‘what’ (competence addressed), ‘aim’ (of utterance) and ‘ref’ (reference utilised).

For example, a conversation sequence in which the S described a dialogue with her/his supervisor aimed at obtaining tips on the treatment of pupils was gathered into the collections WHO: S(tudent) + WHAT: Relationships + AIM: Feedback; Strategy. A clip in which the university supervisor talked about how she/he usually goes about creating a peaceful environment in the classroom was collected in WHO: US (university supervisor), WHAT: Classroom management, AIM: Clarification of good performance and REF: Own experience. When all the audio files had been coded, each collection was reviewed to check the coding of each clip; if deemed necessary, clips were moved to other subcategories and re-coded.
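
To make the coding and curating procedure concrete, the following is a minimal sketch of how such clips and collections might be represented programmatically. It is an illustration only: it assumes a simple in-memory structure rather than Transana’s own data model, and the names Clip and curate, the timings and the conference identifier are hypothetical, modelled on the two examples above.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Clip:
    """One coded sequence ('clip') from a conference recording."""
    conference: str   # hypothetical identifier, e.g. "A3" (conference letter + practicum period)
    start: float      # start time in seconds
    end: float        # end time in seconds
    who: str          # speaker: "S", "US" or "PM"
    what: str         # competence addressed, e.g. "Relationships"
    aim: str          # kind of feedback, e.g. "Feedback; Strategy"
    ref: str = ""     # reference utilised, e.g. "Own experience" (optional)

    @property
    def duration(self) -> float:
        return self.end - self.start

def curate(clips):
    """Group clips into collections keyed by (category, subcategory)."""
    collections = defaultdict(list)
    for clip in clips:
        collections[("WHO", clip.who)].append(clip)
        collections[("WHAT", clip.what)].append(clip)
        collections[("AIM", clip.aim)].append(clip)
        if clip.ref:
            collections[("REF", clip.ref)].append(clip)
    return collections

# The two examples from the text, with invented timings.
clips = [
    Clip("A3", 120.0, 195.0, "S", "Relationships", "Feedback; Strategy"),
    Clip("A3", 195.0, 260.0, "US", "Classroom management",
         "Clarification of good performance", "Own experience"),
]
collections = curate(clips)
print(sum(c.duration for c in collections[("WHO", "S")]))  # total coded time for the student
```

Tallying the durations of the clips gathered in each collection is what yields the time shares reported in the findings below.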

Findings

Division of conversational time

The triadic conferences lasted between 22 and 75 minutes (see Table 2). On average, the USs accounted for 40% of the conversational time, the Ss for 36% and the PMs for 19%. The US dominated in all but one of the conferences in the study, with a share varying between 20% and 68%. The S share of the conversational space varied from 10% to 65%, while the PM share varied from 11% to 26%, accounting for the smallest share of the conversational time in all conferences but one.
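
As an illustration of the kind of tally behind such percentages, the following sketch computes each participant’s share of the conversational time from coded clip durations. The durations used are invented for the example (they are not the study’s data) and the function name speaker_shares is hypothetical.

```python
from collections import defaultdict

def speaker_shares(clips):
    """clips: iterable of (speaker, duration_in_seconds) pairs.
    Returns each speaker's percentage of the total coded time."""
    seconds = defaultdict(float)
    for speaker, duration in clips:
        seconds[speaker] += duration
    total = sum(seconds.values())
    return {speaker: round(100 * sec / total, 1) for speaker, sec in seconds.items()}

# Invented durations for one 45-minute conference (not the study's data).
example = [("US", 1100), ("S", 990), ("PM", 520), ("meta/other", 90)]
print(speaker_shares(example))
# -> {'US': 40.7, 'S': 36.7, 'PM': 19.3, 'meta/other': 3.3}
```

Averaging such per-conference shares across the nine conferences gives overall figures of the kind reported above.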

Table 2. Duration of the conferences (letters represent the unique conference and numbers represent the practicum period in which the conference took place).

Conversational aims and kinds of feedback in the conferences

In distinguishing different conversational purposes in the nine triadic conferences studied, Figure 1 below shows the percentage distribution of time devoted to feedback, meta-conversing and non-relevant issues, respectively. On average, about 14% of the time was found to be spent on ‘meta-conversing’, i.e. speaking about the triadic conference, the placement and the S’s skills development in a more general sense. Five percent was spent on ‘other’, i.e. things not relevant to the conference.

Figure 1. Percentage distribution of different conversational purposes occurring in the triadic conferences.


The remaining part of the time – about 82% on average – was found to be devoted to various kinds of formative feedback. For 37% of the time, the feedback was directed backwards, towards the achievements and strategies used up to that point by the S, while 21% of the time was spent talking about how the S might improve her/his achievements and strategies going forward. In addition, one percent of the time in the average conversation was devoted to encouraging positive motivational beliefs and self-esteem in the S. Feedback focusing on the S as a person, as well as feedback focusing on self-regulation, occurred only to a very small extent in the conferences. Feedback focusing on S strategies to achieve the competences defined in the ILOs made up 19% of the average conference, and feedback focusing on the actual performance 36% of the time.

Aspects of formative feedback in the conferences

The most common type of feedback was that which clarified the nature of ‘a good performance’. This kind of feedback occurred on average around 23% of the total time, ranging from less than 10% to about 40%. The clarification consisted in problematising and qualifying the way a performance, its criteria and its level of requirement expressed themselves practically and theoretically. As shown in Figure 2, the US accounted for the largest proportion of such clarification (43 percentage points) and the PM for the smallest (22 percentage points). The S share amounted to slightly more than one third (35 percentage points). Occasionally, one of the participants developed such clarifying reasoning on her/his own, while on other occasions two or three of them cooperated in what might be characterised as a joint problem-solving dialogue or trialogue.

Figure 2. Total time for different formative feedback (occurring 1 minute or more) conveyed in the conferences.


The clarification of ‘a good performance’ often took its departure point in the lesson preceding the conference (which the US had observed), giving the performance in question a contextual setting regarding the activities, groups and pupils influencing it. The conversation often started with a unique event and proceeded to a more general perspective, in which questions about didactic choices and consequences were raised. The reference point shifted from the detailed, concrete and unique to a more general perspective and then back, while relating to the question ‘how can this improve the performance in the teaching practice?’. Most often, failures and challenging experiences were utilised as resources for problematising and qualifying, with good examples used less often. Two things are noteworthy. First, while the Ss seemed to be more competent in, or at least more interested in, digital skills than the USs and the PMs, it was the USs who problematised and qualified what good digital competence means from a didactic perspective. Second, although the PMs clearly had comprehensive and recent teaching experience of their own, which should have helped them clarify good performance, they sometimes had difficulty putting it into words.

Feed back related to S performance occupied almost as much time in the average conference as clarifying ‘good performance’, varying between 13% and 48%. The S accounted for 60% of this feedback, while the PM contribution extended to approximately 22 percentage points and the US to about 18%, as shown in Figure 2.

When the Ss conveyed their own feed back on successful or less successful achievements, it was communicated objectively rather than in words of appraisal. Further, it tended to be related to the implications of the performance for the teaching itself, or for the pupils. The Ss justified their didactic choices and actions by referring to the knowledge levels of individual (or groups of) pupils, discipline and dedication, the purpose of the teaching, and their own learning goals. Achievements were related both to personal strengths and weaknesses and to the conditions under which the performance took place, such as class discipline. Likewise, the USs and the PMs related their feed back to the motivation, interest and discipline of the pupil group. The PMs, having followed the S through at least one and perhaps up to three practicum periods, also set the S’s performance in relation to former performances/achievements.

The feed back conveyed was found to be based on the performance of the S during the lesson preceding the triadic conference, which was observed by the US and the PM. Based on what took place therein, feed back was expressed in elaborate reasoning. The S, the US and the PM specified and developed the ways in which the performance in question was successful or less successful, and what consequences it led to. The USs and the PMs were often more evaluative in this process than the Ss, especially when the feed back was positive. The reasoning typically moved from the specific situation to the S’s performance during the placement more generally, and alternative approaches were proposed along with motivations related to consequences. The US and the PM shared their observations of the S’s achievements during the observed lesson by developing in detail the actions performed and their consequences for the pupils and the teaching. Most often, this was in the form of positive observations, especially after a S had criticised her/his own achievements. In these cases, the US and the PM often committed to explaining and defending the S’s choices and actions based on contextual conditions and their own experience of similar situations.

Feed forward on S performance occupied approximately 14% of the time on average, varying from 4% to 40%. As displayed in Figure 2, this kind of feedback was dominated by the US, who accounted for 68% of it, while the PM accounted for 18 percentage points and the S for 14.

The forward-looking feedback occurring in the conferences aimed at taking the S’s present achievements and skills as a departure point in order to adapt advice and suggestions to, on the one hand, the goals of teacher education with regard to the S’s professional development, and, on the other hand, the personality, challenges and strengths of the S. All three parties, but mostly the USs and the PMs, shared experiences of successful as well as unsuccessful performances and their consequences from teaching practice. In the conversations, these examples served as resources for reflection for all parties, and for the S’s learning.

Feed back that focused on the strategies used by the S to improve her/his performance amounted to between 2% and 40% of the time, with an average of about 12%. This kind of feedback was conveyed by the S 68% of the time, while the PM contributed less than 25% and the US approximately 7% (see Figure 2).

Occasionally, Ss broached aspects of personal singularities that were sensitive to reveal, yet important for the strategies employed and therefore for presenting the challenges of becoming a teacher. When the Ss problematised these personal singularities in light of being a professional teacher – such as looking very young, having a weak voice or a poor tone – the US and the PM followed the S’s lead. This turned the triadic conversation into a kind of joint problem-solving dialogue or trialogue, in which the US and the PM offered explanations and proposed solutions in an objective and constructive manner.

Feed forward focusing on helping the S to find strategies to improve performance occurred between 4% and 14% of the time in the conferences studied. On average, it occurred 7% of the time, with the US accounting for 64%, the S for 28% and the PM for 8% (see Figure 2). USs and PMs typically advised Ss to identify personal challenges and to engage in ameliorating these over the course of the placement periods with the aid of their PM. The USs and the PMs emphasised the necessity of commitment and courage. They encouraged Ss to experiment, and to practice and evaluate different didactic solutions, perhaps with the help of their pupils. Furthermore, they suggested that the Ss draw on a plurality of sources and reflect (either on their own or with their PM) on didactic choices, actions and consequences: their own achievements, successful or not; examples from the PM’s teaching practice; teacher manuals; and theoretical knowledge from the university program. USs and PMs emphasised, however, that becoming a teacher is also about gaining experience, which, in other words, comes with time. They also pointed out that teacher-becoming must be based on personality, and that it therefore behooves the S to adapt inspiration gained from the practicum and from theory to her/his personal conditions. Teacher-becoming is, they maintained, a matter of building on personality. Both USs and PMs advised the Ss to be receptive to and make use of their PM’s feedback, yet not to be too self-critical in their self-assessment.

The category denoted ‘facilitating self-reflection and self-assessment’ involves asking the S to reflect on and/or evaluate either an accomplished or an upcoming performance or strategy. This kind of feedback occurred from zero to 18% of the time in the conversations, with an average of about 3%. As shown in Figure 2, it was the US who accounted for the largest proportion of such requests. The requests were initiated by the US or the PM describing, or sometimes problematising, performed or planned S performances, strategies, formulated learning goals, intentions and plans, either for a lesson or for the S’s development throughout the entire placement period. Subsequently, the US asked the S how it had gone in terms of difficulties or challenges. In other cases, the S was asked to develop thoughts on a certain performance in theoretical and/or practical terms: what, according to the S, would ‘a good performance’ mean, what strategy did the S use in order to achieve the ILO in question, and how would she/he plan to ameliorate it further on? This category also entailed the US asking the S to give examples of different ILOs occurring in the syllabus by asking her/him to elaborate on them theoretically and practically.

Of all the significant categories, encouragement of positive motivational beliefs and self-esteem took up the smallest amount of time quantitatively. More specifically, it amounted to, on average, three and a half minutes from the USs and one minute from the PMs (see Figure 2). This feedback took the form of positive expressions, directed backwards or forwards in time, regarding S performance, strategies and development. It was concrete, elaborate and linked to consequences. In addition, USs and PMs instilled motivation and self-esteem in the Ss (drawing on their professional experience as teachers and teacher trainers) by affirming the S’s ability to achieve the learning objectives for the placement and to succeed in becoming a teacher.

Feedback intended to feed forward by focusing on the person and encouraging the application of ‘time and effort’ to challenging learning tasks appeared for about 30 seconds altogether in the material and is not presented further.

Based on Figure 2, it can be inferred that the S and the US accounted for approximately the same amount of the total feedback time occurring in the nine conversations, 41% and 40% respectively, while the PM accounted for the lowest percentage, about 19%. However, it should be noted that in the triadic conferences the PMs on several occasions said ‘we have talked about this during the tutorial’, choosing not to repeat themselves and limiting their feedback to new issues.

Concluding discussion

This study has identified details of feedback in triadic conferences.

Combining Hattie and Timperley’s (Citation2007) and Nicol’s (Citation2009) conceptualisations of feedback allowed for identifying details of the formative nature of the feedback employed. The categories included in the framework made it possible to discriminate between feed back, feed up and feed forward. Furthermore, the occurrence of feedback focused on S performance, strategy, self-regulation or person could be mapped. In addition, and in contrast to earlier studies in the field, the S contribution to the feedback employed was illuminated in this study.

The decision to combine Hattie and Timperley’s (Citation2007) conceptualisation of feedback with Nicol’s (Citation2009) was made in concert with an abductive approach (see Tavory and Timmermans Citation2014). The analysis set out to use those of the categories of good feedback for self-regulated learning suggested by Nicol (Citation2009) that could be deemed relevant to triadic conversations. It soon became clear, though, that crucial aspects of the feedback employed could not be captured by the chosen categories. Looking for other theoretical concepts to fit the empirical material – including those suggested in earlier studies of triadic conferences – eventually led to the choice of Hattie and Timperley’s (Citation2007) conceptualisation. The categories of the latter were easily integrated into the framework of the former. One important contribution of this study is the framework suggested for analysing conversational feedback. Two things should be noted, though. First, it was difficult to discriminate between the two categories ‘clarifying of a good performance’ and ‘feed forward regarding performance’ when coding the recordings. Empirically, feedback that included suggesting to the S how to perform better always included clarification of what ‘a good performance’ was. Merging these two categories is therefore suggested should the framework be utilised in future studies. Second, although the categories ‘facilitating self-reflection and self-assessment’ and ‘encouraging self-motivation and self-esteem’ occupied very little time in the conferences, the importance of these categories as part of formative feedback should not be underestimated.

No covariation was found when analysing deviant findings among the parameters. For example, while the length of the nine triadic conferences varied – between 22 and 75 minutes – these variations did not coincide with any of the parameters accounted for. The fact that the two shortest conferences lasted only 22 and 29 minutes respectively, while the longest lasted 75 minutes, poses questions about the quality of the triadic conferences, which are supposed to take an hour.

The analysis shows that the S and the US account for an approximately equal amount of the conversational time, and the PM for about half as much. This is not in line with previous studies, which found feedback time to be either distributed equally between the three conference participants or dominated by the US or the PM. This might be interpreted as a feedback practice that aims primarily to accommodate needs identified by the S.

This interpretation is reinforced by the fact that Ss were found to dominate the backwards-directed feedback. Also, the feed up, i.e. the clarification of what a ‘good performance’ is, was found to be equally conveyed by Ss and USs. This is also not in line with findings from previous studies, where S accounts of the feedback have been less visible and where PMs were reported to dominate the feedback directed backwards. In addition, the S feedback focusing on performance in this study was found to account for an equal amount of time to that of the US, and S feedback focusing on strategy accounted for twice as much time as the same feedback from the US and PM. Both these findings reinforce the interpretation that the feedback practice is designed to accommodate the S. The finding that feed forward is dominated by the US, however, is in line with earlier findings.

The most common foci of feedback in the study were performance (36%) and strategies to ameliorate performance (19%), both of which have a relatively significant bearing on S learning, as argued by Hattie and Timperley (Citation2007). Feedback focusing on the S as a person, the focus with the least influence on future skills development according to Hattie and Timperley (Citation2007), occurred only to a very small extent in the material. Likewise, feedback focusing on self-regulation was barely evident in the conferences studied, even though such a focus, according to Hattie and Timperley (Citation2007), has the greatest effect on skills development. Thus, while showing that the feedback in triadic conferences clarifies and confirms S performance and strategies, this study finds that there is room for improvement in facilitating S self-regulation.

Looking at the three parties’ contributions to feedback, the study shows US feedback to be dominated by clarification of performance, feed forward on performance and feed back on performance; PM feedback to be dominated by clarification of performance, feed back on performance and feed back on strategy; and S feedback to be dominated by feed back on performance and strategy, and clarification of performance. Again, this could be interpreted as a feedback practice intended to accommodate needs identified by Ss regarding where they are, where they are going, and what is needed in order to get ‘there’.

This interpretation is furthermore reinforced by nuances emerging from the material. The analysis provides an account of the conversations as characterised by joint problem-solving focused on the S’s professional teacher-becoming. To help the S plan a professional growth pathway, examples of US and PM experiences from teaching practice and S failures and successes are related to program ILOs and S’s personal learning goals, personal attributes, strengths and challenges.

Based on unique teaching situations which contextually define performance, the conversation progresses to a more general level, relating practical skills to didactical choices and consequences. This is done by connecting to theory and using it to motivate clarifications, explanations, suggestions and solutions for the unique situation. The conversations are clearly characterised by all three parties contributing to elaborations, critique, problematising, explicating, suggesting and defending in an objective, that is, not evaluative, tone. The findings suggest room for improvement in facilitating the development of self-assessment and reflection, and in the encouragement of positive motivational beliefs and self-esteem. It should be noted, though, that this kind of feedback requires less time than the other kinds of feedback discussed here.

In conclusion, the feedback employed in the triadic conferences studied indicates a feedback practice characterised by support for S self-regulation (cf. Nicol Citation2009). The findings give a detailed and nuanced account of ‘sustainable’ feedback that scaffolds Ss’ self-assessing competence and fosters self-reflexivity and self-regulation (cf. Hounsell Citation2007; Boud Citation2007). Whether conveyed by the US, the PM or the S, this implies a feedback practice that supports the S in ‘becoming an accomplished and effective professional’ (Boud and Falchikov Citation2007, 184).

The contribution that this study makes to the field of feedback practice in teacher education can be summarised as follows: detailing the formative nature of feed back, feed up and feed forward; discriminating between feedback concerning S performance, strategy, self-regulation and person; and illuminating the S contribution to the feedback employed. Together, this has the potential to clarify the formative dimensions of an assessment practice which accommodates a learning paradigm in teacher education practicum, especially the potential of the Ss’ contributions to their own professional development. Also, it tells us as teacher educators that there is room for improvement when it comes to feedback on self-regulation, which, according to Hattie and Timperley (Citation2007), has the greatest effect on skills development.

Additional information

Notes on contributors

Lotta Jons

Lotta Jons is an educational developer, with a PhD in Education and a university diploma in health care education.

References

  • Akcan, S., and S. Tatar. 2010. “An Investigation of the Nature of Feedback given to Pre‐service English Teachers during Their Practice Teaching Experience.” Teacher Development 14 (2): 153–172. doi:10.1080/13664530.2010.494495.
  • Al-Malki, M. A., and K. Weir. 2014. “A Comparative Analysis between the Assessment Criteria Used to Assess Graduating Teachers at Rustaq College (Oman) and Griffith University (Australia) during the Teaching Practicum.” Australian Journal of Teacher Education 39 (12). doi:10.14221/ajte.2014v39n12.3.
  • Aspden, K. M. 2014. “Illuminating the Assessment of Practicum in New Zealand Early Childhood Initial Teacher Education.” PhD thesis, Massey University, Manawatū, New Zealand. https://mro.massey.ac.nz/handle/10179/6473.
  • Bates, R. 2004. “Regulation and Autonomy in Teacher Education: Government, Community or Democracy?” Journal of Education for Teaching 30 (2): 117–130. doi:10.1080/0260747042000229744.
  • Black, P., and D. Wiliam. 2003. “‘In Praise of Educational Research’: Formative Assessment.” British Educational Research Journal 29 (5): 623–637. doi:10.1080/0141192032000133721.
  • Blanton, M. L., S. B. Berenson, and K. S. Norwood. 2001. “Exploring a Pedagogy for the Supervision of Prospective Mathematics Teachers.” Journal of Mathematics Teacher Education 4 (3): 177–204. doi:10.1023/A:1011411221421.
  • Boud, D. 2007. “Reframing Assessment as If Learning Were Important.” In Rethinking Assessment in Higher Education. Learning for the Longer Term, edited by D. Boud and N. Falchikov, 14–25. New York, NY: Routledge.
  • Boud, D., and N. Falchikov. 2007. “Developing Assessment for Informing Judgement.” In Rethinking Assessment in Higher Education. Learning for the Longer Term, edited by D. Boud and N. Falchikov, 181–197. New York, NY: Routledge.
  • Cheng, M. M.-H., and S. Y.-F. Tang. 2008. “The Dilemma of Field Experience Assessment: Enhancing Professional Development or Fulfilling a Gate‐keeping Function?” Teacher Development 12 (3): 223–236. doi:10.1080/13664530802259289.
  • Chiang, M.-H. 2008. “Effects of Fieldwork Experience on Empowering Prospective Foreign Language Teachers.” Teaching and Teacher Education 24 (5): 1270–1287. doi:10.1016/j.tate.2007.05.004.
  • Darling-Hammond, L. 2010. “Teacher Education and the American Future.” Journal of Teacher Education 61 (1–2): 35–47. doi:10.1177/0022487109348024.
  • Elo, S., and H. Kyngäs. 2008. “The Qualitative Content Analysis Process.” Journal of Advanced Nursing 62 (1): 107–115. doi:10.1111/j.1365-2648.2007.04569.x.
  • Falchikov, N. 2007. “Assessment and Emotion: The Impact of Being Assessed.” In Rethinking Assessment in Higher Education. Learning for the Longer Term, edited by D. Boud and N. Falchikov, 144–155. New York, NY: Routledge.
  • Farrell, T. S. C. 2008. “‘Here’s the Book, Go Teach the Class’: ELT Practicum Support.” RELC Journal 39 (2): 226–241. doi:10.1177/0033688208092186.
  • Fernandez, M. L., and E. Erbilgin. 2009. “Examining the Supervision of Mathematics Student Teachers through Analysis of Conference Communications.” Educational Studies in Mathematics 72 (1): 93–110. doi:10.1007/s10649-009-9185-1.
  • Goodwin, L. A., and C. Oyler. 2008. “Teacher Educators as Gatekeepers: Deciding Who Is Ready to Teach.” In Handbook of Research on Teacher Education: Enduring Questions in Changing Contexts, edited by I. M. Cochran-Smith, S. Feiman-Nemser, D. J. McIntyre, and K. E. Demers, 468–489. New York, NY: Routledge.
  • Grossman, P., and M. McDonald. 2008. “Back to the Future: Directions for Research in Teaching and Teacher Education.” American Educational Research Journal 45 (1): 184–205. doi:10.3102/0002831207312906.
  • Haigh, M., F. Ell, and V. Mackisack. 2013. “Judging Teacher Candidates’ Readiness to Teach.” Teaching and Teacher Education 34 (August): 1–11. doi:10.1016/j.tate.2013.03.002.
  • Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1): 81–112. doi:10.3102/003465430298487.
  • Hegender, H. 2010. “The Assessment of Student Teachers’ Academic and Professional Knowledge in School-Based Teacher Education.” Scandinavian Journal of Educational Research 54 (2): 151–171. doi:10.1080/00313831003637931.
  • Hounsell, D. 2007. “Towards More Sustainable Feedback to Students.” In Rethinking Assessment in Higher Education. Learning for the Longer Term, edited by D. Boud and N. Falchikov, 101–113. New York, NY: Routledge.
  • Hsieh, H.-F., and S. E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15 (9): 1277–1288. doi:10.1177/1049732305276687.
  • Johnsen-Høines, M. 2009. “Dialogical Inquiry in Practice Teaching.” NOMAD 14 (1). http://arkiv.ncm.gu.se/node/3615.
  • Johnsen-Høines, M. 2011. “Praksissamtalens sårbarhet/The fragility of communication in the context of practice teaching.” Tidsskriftet FoU i praksis 5 (1): 47–65. http://tapir.pdc.no/pdf/FOU/2011/2011-01-4.pdf.
  • Kaldi, S. 2009. “Student Teachers’ Perceptions of Self‐competence in and Emotions/Stress about Teaching in Initial Teacher Education.” Educational Studies 35 (3): 349–360. doi:10.1080/03055690802648259.
  • Maclellan, E. 2004. “How Convincing Is Alternative Assessment for Use in Higher Education?” Assessment & Evaluation in Higher Education 29 (3): 311–321. doi:10.1080/0260293042000188267.
  • Nicol, D. 2009. Quality Enhancement Themes: The First Year Experience: Transforming Assessment and Feedback: Enhancing Integration and Empowerment in the First Year. Scotland: The Quality Assurance Agency for Higher Education. http://dera.ioe.ac.uk/11605/1/First_Year_Transforming_Assess.pdf.
  • Ortlipp, M. 2009. “Shaping Conduct and Bridling Passions: Governing Practicum Supervisors’ Practice of Assessment.” Contemporary Issues in Early Childhood 10 (2): 156–167. doi:10.2304/ciec.2009.10.2.156.
  • Parker, D. C., and L. Volante. 2009. “Responding to the Challenges Posed by Summative Teacher Candidate Evaluation: A Collaborative Self-Study of Practicum Supervision by Faculty.” Studying Teacher Education 5 (1): 33–44. doi:10.1080/17425960902830385.
  • Smith, K., and L. Lev‐Ari. 2005. “The Place of the Practicum in Pre‐service Teacher Education: The Voice of the Students.” Asia-Pacific Journal of Teacher Education 33 (3): 289–302. doi:10.1080/13598660500286333.
  • Tang, S. Y. F., and A. W. K. Chow. 2007. “Communicating Feedback in Teaching Practice Supervision in a Learning-Oriented Field Experience Assessment Framework.” Teaching and Teacher Education 23 (7): 1066–1085. doi:10.1016/j.tate.2006.07.013.
  • Tavory, I., and S. Timmermans. 2014. Abductive Analysis. Theorizing Qualitative Research. Chicago: University of Chicago Press.
  • Tillema, H. H., K. Smith, and S. Leshem. 2011. “Dual Roles – Conflicting Purposes: A Comparative Study on Perceptions on Assessment in Mentoring Relations during Practicum.” European Journal of Teacher Education 34 (2): 139–159. doi:10.1080/02619768.2010.543672.
  • Tsui, A. B. M., F. Lopez-Real, Y. K. Law, R. Tang, and M. S. K. Shum. 2001. “Roles and Relationships in Tripartite Supervisory Conferencing Processes.” Journal of Curriculum & Supervision 16 (4): 322. http://web.edu.hku.hk/f/acadstaff/399/Roles_and_Relationships_in_Tripartite_Supervisory_Conferecing_Processes.pdf.
  • White, S., D. Bloomfield, and R. Le Cornu. 2010. “Professional Experience in New Times: Issues and Responses to a Changing Education Landscape.” Asia-Pacific Journal of Teacher Education 38 (3): 181–193. doi:10.1080/1359866X.2010.493297.
  • Zeichner, K. 2010. “Rethinking the Connections Between Campus Courses and Field Experiences in College- and University-Based Teacher Education.” Journal of Teacher Education 61 (1–2): 89–99. doi:10.1177/0022487109347671.
  • Zhang, Q., P. Cown, J. Hayes, S. Werry, R. Barnes, L. France, and T.-G. Rawhia. 2015. “Scrutinising the Final Judging Role in Assessment of Practicum in Early Childhood Initial Teacher Education in New Zealand.” Australian Journal of Teacher Education 40 (10). doi:10.14221/ajte.2015v40n10.9.