Research Article

Unravelling workplace educators’ judgment processes when assessing students’ performance at the workplace

Pages 517-536 | Received 12 Nov 2020, Accepted 09 Feb 2022, Published online: 21 Feb 2022

ABSTRACT

The assessment of workplace learning by educators at the workplace is a complex and inherently social process, as the workplace is a participatory learning environment. We therefore propose seeing assessment as a process of judgment embedded in a community of practice and to this purpose use the philosophy of inferentialism to unravel the judgment process of workplace educators by seeing it as an interrelated system of judgments, actions and reasons. Focussing on the unfolding of a process, we applied a longitudinal holistic case study design. Results show that educators are engaged in a constant judgment process during which they use multiple and adaptive frames of reference when forming their judgment about students. They construct an overarching image of students that develops throughout the placement, and their judgments about students go hand in hand with their actions relating to fostering independent practice.

Introduction

Assessment as a social and situated practice

Workplace learning is a distinctly social activity that requires interaction and dialogue and can therefore not be considered separate from its social circumstances (Tynjälä Citation2008; Virtanen, Tynjälä, and Eteläpelto Citation2014). Social interaction and legitimate participation in work by students are crucial to learning a vocation and the workplace as a learning environment can thus be characterised as participatory (Billett Citation2004, Citation1994). If the workplace is viewed as a participatory learning environment, assessment can only be participatory as well. An instrumental approach to assessment exclusively focused on fairness and objectiveness is not suitable as it does not allow for the inherent complexity of workplace learning (Trede and Smith Citation2014). Learning in an authentic context like the workplace is complex as it involves developing tacit knowledge which is embodied in skills, attitudes and interactions. Therefore workplace learning does not match an approach of assessment as a measuring act but rather requires judgment-based assessment (Johnson and Lewis Citation2013). Assessment of workplace learning thus needs to be seen as an inherently social and relational judgment-based practice that is situated in a community of practice (Rømer Citation2002). In this article we propose viewing assessment as a process of judgment during which an educator construes an image of a student’s capability and competence based on their participation at the workplace. In order to approach assessment as a process of judgment we apply the philosophy of inferentialism in our theoretical framework and present a longitudinal multiple case study aimed at uncovering what the assessment process comprises during a prolonged period of workplace learning.

Theoretical framework

Assessment as a process of judgment

Our previous study indicated that educators are engaged in a process of judgment, which starts at the first encounter between student and educator and ends when the period of workplace learning is finished (De Vos et al. Citation2019). Berg et al. similarly describe how workplace supervisors provide ongoing assessment in work-based education (Berg et al. Citation2013). Hauer et al. further indicate that judgments are formed within minutes of interacting with a student and that these judgments over time help sustain the appraisal of the student’s level and abilities throughout the placement (Hauer et al. Citation2013). It thus seems likely that educators at the workplace are engaged in a process of judgment. However, how this comprehensive judgment process takes its course from the initial meeting throughout the period of workplace learning and what sustains the educator’s judgment is as yet unclear.

The philosophy of inferentialism as a perspective on assessment

To do justice to assessment as a social practice while also accommodating an educator’s judgment process, we introduce the philosophy of inferentialism as a perspective on assessment. Inferentialism, coined by the contemporary philosopher Robert Brandom, characterises understanding as the capacity to give reasons or ask for reasons in social interaction. It thus proposes we can only understand something when we can reason (use reasons) with it. For example, we can understand what a cat is when we understand what role a cat plays in propositions about cats, such as ‘cats spend most of their day sleeping’ or ‘cats seem to enjoy petting’. The conclusions we draw about cats (perhaps: ‘cats are lazy animals’) based on the connections between these reasons are called inferences (Brandom Citation2000; Guile Citation2006). These inferential connections between reasons are what enable humans to form judgment, which is a part of a coherent system of reasons, actions and judgments (Bakker and Derry Citation2011). The philosophy of inferentialism has previously been applied in the domain of vocational education to explore how students’ understanding of statistics is connected to their actions at the workplace through how they build inferential connections during their statistical reasoning (Bakker, Ben-Zvi, and Makar Citation2017) and further to conceptualise how students develop vocational knowledge at the workplace (Heusdens et al. Citation2016). We propose that the philosophy of inferentialism can provide a lens for assessment of workplace learning as a social process, since it assumes that social interaction in the form of reasoning is central to human understanding (Bakker and Derry Citation2011). Seeing assessment as judgment through the lens of inferentialism means that attempting to unravel the judgment process involved in the assessment of students’ performance in the vocational community of practice entails trying to understand the interrelated system of judgments, actions and reasons.

In order to study how judgment processes take their course, we conducted a longitudinal multiple case study combining observations with interviews, enabling us to examine how workplace educators reach judgment in situ. The research question we aim to answer is:

How does the judgment process of an educator at the workplace take its course when student performance is assessed during a prolonged period of workplace learning?

Method

To match the emphasis on the process of judgment over a prolonged period of time, we applied a longitudinal holistic multiple case study design (Yin Citation2018). The case study includes three cases which lasted between 22 and 45 weeks and combines different data sources and data collection methods.

National context

This study took place in Dutch vocational education, which consists of senior secondary vocational education (ISCED level 3–4) and higher professional education at universities of applied sciences (ISCED level 5). Dutch vocational education is expected to qualify its students for work, social participation in society and for further learning (De Bruijn and Onstenk Citation2017). This study focuses on learning environments designed on the basis of alignment, and its context is limited to the workplace within this design (Bouw, Zitter, and De Bruijn Citation2019).

Participants: case selection

We aimed for a variety of sites in occupational domains and levels of vocational education. Case recruitment took place via the network of the research team: we approached different affiliated groups to inform them of the aim and design of the study and asked whether they knew of any suitable workplace educators. Leads regarding potential participants were followed up by the primary researcher (MDV). Each potential site was screened by means of a conversation using the following criteria:

  • The workplace is authentic and students go there for workplace learning for a prolonged period of at least 10 weeks.

  • Multiple people are employed at the workplace (minimum 5), since in very small communities observation research could potentially be too intrusive.

  • The workplace educator provides students with day-to-day guidance and is involved in their assessment.

Participants, their students and the organisations gave their informed consent after receiving an information letter from the primary researcher (MDV). In this article the participants are referred to using pseudonyms. They did not receive remuneration. Full ethical approval was obtained from the ethical committee of the Open University The Netherlands, covering the research methodology and recruitment procedures and confirming that the present study complies fully with the GDPR.

Participants: case descriptions

Case A: communication department in large governmental organisation

The educator, Alex, is a 46-year-old male. He has worked as a communication advisor for 20 years and has been a workplace educator for nine years, during which he has been involved in the workplace learning of 20 students. He is responsible for the communication department of a large governmental organisation that makes knowledge available for both internal and external parties and functions as the primary communication outlet for the organisation. The student, Anna, is Alex’s full-time intern for 18 weeks. She is in the final year of her bachelor’s degree in communication (ISCED level 5) and needs to complete a research project based on communication advice for the organisation. The internship focuses on Anna’s research.

Case B: social work in small neighbourhood organisation

The educator, Bryan, is a 40-year-old male. He has worked in the social domain for 14 years and has been active as a workplace educator throughout that period. He has been involved in educating approximately 30 students. He works for a small organisation based on a partnership between a vocational school for healthcare and social work and different neighbourhood organisations, which provides support for local residents via students who do their on-the-job training. The organisation provides individual support at home and group activities for youths. Bryan is the only professional in the organisation. He is supported by two social work students from professional higher education (ISCED level 5) and five to six students from senior secondary vocational education (ISCED level 4). Together they provide guidance to approximately 20 students at ISCED level 3. Bernadet, a level 4 student, does her internship with Bryan two days a week for a year. She is in her final year at a school for senior secondary vocational education and is training to become a social worker. The data collection covers Bernadet’s internship up to the first formal assessment with Bryan after 45 weeks.

Case C: graphic design in small studio within school

Educator Clint is a 51-year-old male. He has worked in the domain of graphic design for 30 years and has been a workplace educator for twelve years. He works at a small graphic design and desktop publishing studio located in a vocational school for media and design. The studio is set up to be a workplace for students and aims to provide a safe learning environment for students with specific needs. Clint is employed full-time as an educator and graphic designer. The studio employs six students from different media and design disciplines (ISCED level 4) and takes assignments from both external and internal clients. Cassie is placed at the studio via her school mentor and is a level 4 graphic design student in the final year of her senior secondary vocational education programme. She is supposed to be a full-time intern for six months, but soon after the onset of her internship Cassie starts missing days due to illness. She works at the studio on and off for 21 weeks and her final assessment is deferred to the following year.

Data collection methods

In order to study the judgment process of workplace educators for the duration of an internship we applied multiple data collection methods. We conducted stimulated recall interviews and observations, and collected all relevant documents.

Stimulated recall interviews

We conducted a stimulated recall interview every week with each participant, lasting between 5 and 30 minutes. Interviews were conducted face to face at the participant’s workplace. If this was not possible, they were conducted via telephone. All interviews were audio recorded, transcribed verbatim and pseudonymised prior to analysis. The stimulated recall interviews were intended to elicit the educators’ expertise as assessors through introspection, rather than retrospectively reconstructing their thoughts and actions (Lyle Citation2003). Rather than video recordings, the interviews used prompts gathered via email in the week preceding the interview, using a method based on ecological momentary assessment. Ecological momentary assessment is aimed at collecting ‘current or recent states, sampled repeatedly over time, in their natural environments’ (Shiffman, Stone, and Hufford Citation2008, 4). Using a design based on ecological momentary assessment, participants were sent an email twice per week asking them to respond to a repeated set of four questions. Three open questions focused on when the educators had communicated with or seen the student, what they had communicated about, and how the student was performing. One closed question asked educators how certain they felt about their judgment of the student’s performance (numerical response on a scale of 1 to 5). The emails were sent on the same days and times each week, fitted to the participants’ schedules and availability, and the participants’ answers served as prompts for the weekly interview. Participants received instructions on how to respond to the prompts prior to data collection. The methodological combination of stimulated recall and ecological momentary assessment was piloted for three weeks.
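To make the prompt design concrete, the sketch below represents the twice-weekly prompt set as a small data structure. It is an illustration only, not part of the original study materials: the question wording is paraphrased from the description above and the participant data are invented.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the twice-weekly ecological-momentary-assessment
# prompt set described above: three open questions and one closed certainty
# rating on a 1-5 scale. Question wording is paraphrased; names are invented.

OPEN_QUESTIONS = [
    "When did you communicate with or see the student?",
    "What did you communicate about?",
    "How is the student performing?",
]

CLOSED_QUESTION = (
    "How certain do you feel about your judgment of the student's "
    "performance? (1 = very uncertain, 5 = very certain)"
)


@dataclass
class PromptResponse:
    """One educator's emailed answers, later used as interview prompts."""
    participant: str         # pseudonym, e.g. "Alex"
    week: int                # week of the placement
    open_answers: List[str]  # free-text answers to the three open questions
    certainty: int           # closed question, numerical response 1-5

    def __post_init__(self) -> None:
        if not 1 <= self.certainty <= 5:
            raise ValueError("certainty must be on the 1-5 scale")
        if len(self.open_answers) != len(OPEN_QUESTIONS):
            raise ValueError("expected one answer per open question")


# Example response feeding into the weekly stimulated recall interview:
example = PromptResponse(
    participant="Alex",
    week=3,
    open_answers=[
        "Saw the student on Tuesday and Thursday",
        "Discussed the planning of her research project",
        "She is progressing well and works independently",
    ],
    certainty=4,
)
```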

Non-participant observation

During the period of workplace learning we conducted two types of non-participant observation. Each participant was observed at the beginning of the placement, halfway through and at the end; during each of these observations the primary researcher shadowed the educator for four hours (Cuyvers Citation2018; Cuyvers, Donche, and Van den Bossche Citation2020). Each observation was conducted at a different time and on a different weekday. Specific times differed per site and suitable observation moments were determined together with the participants. The observation focused on the workplace educator and, using an observation protocol, the observer noted any behaviour or interaction pertaining to the judgment of the student (Lofland et al. Citation2004; Simpson and Tuson Citation2003). Besides shadowing the participants, we also observed specific moments relevant to the judgment process, such as the initial meeting between educator and student. These moments were observed using an observation protocol and audio recorded. A short interview was conducted with the educator immediately following each specific moment, focusing on the educator’s underlying reasoning and motivation for their judgments. All interviews were audio recorded, transcribed verbatim and pseudonymised prior to analysis.

Documents

During the case study, any documents that the participants considered relevant to their process of judgment or that were used during the process were collected and documented in an annotated bibliography intended for corroboration of the other data sources (Yin Citation2018). Table 1 provides an overview of the complete tally of collected data.

Table 1. Complete tally of collected data.

Analysis

We applied inferentialism as a perspective for our analysis by starting with reasons, actions and judgments, since understanding assessment requires us to try and understand the coherent system of reasons, actions and judgments that forms the basis of human understanding according to inferentialism (Bakker and Derry Citation2011). We formulated five analysis questions based on the main research question to scrutinise the data in the first step of analysis (see Figure 1). These questions were aimed at selecting data that could shed light on the educator’s actions (How does the educator gather information? What actions are taken?), reasons (What frames of reference are used?) and judgments (What is the educator’s judgment? How certain is the educator of his judgment?).

Figure 1. Schematic representation of analysis process.

Fragments pertaining to these questions were selected using coding and condensed into a memo per case per week. These memos were subsequently transposed to a matrix per case (see Figure 2), allowing the data condensation to be organised across time and per question (Miles, Michael Huberman, and Saldaña Citation2014).

Figure 2. Matrix format per case.
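As a minimal illustration of how such a matrix could be organised, the sketch below represents the condensation as a questions-by-weeks table. This is not the authors’ actual analysis tooling: the layout (analysis questions as rows, placement weeks as columns) is an assumption, and all memo fragments shown are invented.

```python
import pandas as pd

# The five analysis questions from the Analysis section, grouped here by the
# inferentialist categories they address (actions, reasons, judgments).
ANALYSIS_QUESTIONS = [
    "How does the educator gather information?",    # actions
    "What actions are taken?",                       # actions
    "What frames of reference are used?",            # reasons
    "What is the educator's judgment?",              # judgments
    "How certain is the educator of his judgment?",  # judgments
]


def build_case_matrix(weekly_memos: dict) -> pd.DataFrame:
    """Arrange condensed weekly memos into a questions-by-weeks matrix.

    weekly_memos maps a placement week (int) to a dict from analysis question
    to the condensed memo text for that question in that week.
    """
    df = pd.DataFrame.from_dict(weekly_memos, orient="index").T
    df = df.reindex(index=ANALYSIS_QUESTIONS).fillna("")
    df.columns.name = "week"
    return df


# Hypothetical fragments for one case:
matrix = build_case_matrix({
    1: {"What frames of reference are used?": "compares the student to previous interns",
        "What is the educator's judgment?": "smart, respectful, easy talker"},
    2: {"What actions are taken?": "lets the student attend a meeting alone"},
})

# Reading a row shows development over time for one analysis question;
# reading a column shows how actions, reasons and judgments connect within
# a single week of the placement.
print(matrix.loc["What is the educator's judgment?"])  # one question across weeks
print(matrix[1])                                       # all questions for week 1
```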

To ensure the quality and transparency of data condensation and analysis we took two quality measures. Firstly, a second member of the research team (LB) double coded the first interview for each case and subsequently discussed coding choices with the primary researcher (MDV). Secondly, the coding and memos for one randomly selected week of data for each case were scrutinised by the same member of the research group in order to establish that the data condensation and analysis were transparent before compiling the matrices. Once completed, each matrix was analysed by two members of the research team (MDV and LB) who worked together to uncover any patterns relevant to the educator’s judgment process. Each matrix was first read horizontally to identify developments over time for each analysis question. When significant moments were identified by either MDV or LB, the matrix was scrutinised vertically to consider how actions, reasons and judgments were connected. The ensuing discussion of each matrix was recorded and reported in individual case reports, and led to the identification of multiple recurring themes across our cases. Together with the complete research team these themes were discussed to evaluate their suitability (did they match the research question and chosen perspective?) and feasibility (did they follow logically from the analysis?). Our discussion resulted in three final themes which are presented below in the results paragraph.

Reflexivity

Besides the ethical approval obtained, we want to give insight into the background of the individual team members to further strengthen the quality of our study (Reid et al. Citation2018). Three members are part of a research group on vocational education and learning (MDV, LB and EDB) with expertise on workplace learning, professional identity development, and assessment. One member has extensive expertise and experience with assessment research (CVDV). Our combined perspectives have enabled us to approach assessment of workplace learning as a situated and inherently social practice.

Results

The themes we identified across our three cases should be approached as interconnected and reciprocal, since workplace learning is a complex process and an educator’s judgment process is equally complex. First, we found that educators use multiple frames of reference that adapt to the development of the student or the situation at hand to reach judgment about the student’s performance throughout the placement (theme 1). Second, we found that they strive to see a complete picture of the student and are constantly attempting to supplement their existing image (theme 2). Third, the educators’ continuous process of judgment consists of mini-judgments that go hand in hand with actions related to independent practice (theme 3).

Theme 1: multiple and adaptive frames of reference

Analysis of our data across cases showed that during a prolonged period of work placement, educators use multiple and adaptive frames of reference when forming their judgment about students. These frames can be described as lenses through which educators see a student’s performance and together these lenses form a kaleidoscopic framework of overlapping and changing frames of reference that the educator uses to reach judgment. Our results indicate that educators use three stable frames of reference: (1) a vocational frame based on the vocational community of which the educator is a member, (2) a comparison frame based on the students’ peers, and (3) an educational frame based on the standards defined by the educational institution.

The first frame of reference, the vocational frame, is based on the standards of the vocational community the student is entering and it entails the educator’s understanding of what it means to participate successfully in that community. The results show that each educator has a clear idea of what their vocation entails and what they can expect of students. In case B educator Bryan often refers to what it means to be a good social worker when discussing his judgment of the student, which includes aspects such as establishing contact with the target group and activating a client’s social environment. When evaluating the student’s actions Bryan reflects:

… the social worker does not take over the question and answers it, but activates the social environment to help answer the question and mainly have the client themselves answer the question. And that is what I saw just then, very small, that [Bernadet] does it almost automatically.

Case C demonstrates that educator Clint applies very specific standards from the community of graphic design when he compares what the student, Cassie, did to what he would have done. When discussing Cassie’s performance during a visit with a client when the battery of her camera ran out, Clint reflects on how he would have prepared for the visit, based on his preferences for what constitutes adequate behaviour in the vocational community:

The preparation wasn’t great, because that is not supposed to happen. You should have, when you do something like that, you should have a spare battery. That’s what I always do, I always carry two full batteries.

Educators also take into consideration how quickly the student knows their way in the organisation or how well they fit in at the workplace. In case A educator Alex uses the corporate culture within the communication department and the organisation as a whole as a frame of reference when discussing the student’s suitability.

She [fits in] with [department] that we want to be. But at the moment there is a little friction as well, but that is …, compared to the entire organisation, I think she does fit in with our communication club.

Similarly, Bryan repeatedly evaluates Bernadet’s suitability by focussing on how she picks up her role and whether she can handle the responsibility.

The change in the situation, but also that she now actually has a very important role within [organisation], because I don’t have any higher education students, but I do have her at level 4. I think she can handle that role, actually I think that she is simply a very good and solid young lady for the social domain.

The second frame of reference, comparing a student to their peers and other novices at the workplace, also becomes apparent in each case. Students are compared to other students who are fulfilling their placement simultaneously, or educators use their previous experiences as a frame. After observing his student Anna during a meeting, Alex evaluates her performance in comparison to other students:

And she had a good presence there, was listening well and shared her opinion. I liked that. Not all students do that, usually they are a little timid or not as brave so soon, but she quickly had, she felt at liberty …

Often these comparisons are made to other students in general or the educator’s body of experience. Educators also compare students to specific other students, as Clint demonstrates when he discusses Cassie’s attitude in the final weeks of her placement:

Well, that she, she doesn’t complain or sigh and you can see with another student that he was pretty done the past week, and then you get a lot of *sigh* ‘Yes, ok … ’ and then ‘Done!’ (…) And she is more positive, more relaxed.

As a third frame of reference, the results indicate that the standards for formal assessment as defined by school also play a role when educators form their judgment. These standards are often represented in instruments with assessment criteria and learning objectives based on competencies, provided by schools for evaluation purposes. Two of our cases showed that educators have an understanding of the student’s degree level, which frames their judgment. For both Bryan and Clint the student’s level formed an often-applied frame of reference, especially when evaluating their capacity to practice independently. In case A Alex notes the absence of standards defined by the educational institute, which makes it difficult for him to pinpoint Anna’s development. A significant part of the placement is dedicated to Anna’s research project; however, Alex separates her research from her general performance. For him the research project belongs to school and does not affect his judgment, even though it was the student’s main task and dedicated to a research question that originated in the organisation.

No, so that research project she really does for school. And of course she likes that is it useful for us, so we have arranged a presentation with the other advisors, so that we’ll actually do something with it. But it is really her school thing. And of course I was intensively involved in formulating the question at the beginning. So the assignment was pretty clear I think, we agree about that. But what the result is, I don’t know.

Despite appearing in varying and changing configurations, the three frames of reference discussed above were constantly present throughout our three cases. Besides these stable frames, our results also showed that frames can be adaptive in the sense that they adjust to the situation at hand and can emerge or disappear. These frames relate to unexpected situations and the educator’s emerging picture of the student. Case B demonstrates how unexpected situations can lead to adapting frames of reference. Bernadet struggled with her health and her personal situation, but this was not initially apparent. After nine weeks of placement she started calling in sick regularly, and after a conversation with Bryan about her absence in week twelve it became clear that her health influenced her performance significantly. This insight leads Bryan to use an additional frame of reference, the student’s well-being: ‘She just has a weak constitution. So I can now anticipate that much better for the future, by being a little more aware of her medical situation.’ In week 20 Bernadet is absent for a week and upon her return reveals a difficult personal situation that further influences her performance. Bryan reflects on what this means to him:

It puts some pieces of the puzzle in their place, and it’s my role as a workplace educator to say, ok, how are we going to make sure that you can at least pass your work placement. But these types of factors and actors, I do weigh them when I look at how she functions.

The results across our three cases also show that as the educator learns more about the student and their idea of the student’s capabilities becomes more rounded and robust, the students themselves become frames of reference for their subsequent performance. In case C we see an interesting combination of personal circumstance and performance at the workplace that leads to adjusted frames of reference. Cassie has health issues and Clint has also established that she works too slowly, which leads him to evaluate what she has achieved up to this point and how she needs to progress.

So if I can get her to make a choice, that she makes the right one, or that she at least makes a choice, that could help her develop. Now she obviously also has her health issue running simultaneously, so that might not directly lead to the desired effect, but it is where she needs to go. And with that, she’ll be able to make a concept much more quickly, because she’ll be able to make a choice more quickly.

Cassie’s performance up to this point forms a frame for Clint when he tries to establish what she is capable of.

Alex, Bryan and Clint all use multiple and adaptive frames of reference to be able to judge how their student is performing, and as they are engaged in this process of judgment, an increasingly complex picture of the student develops.

Theme 2: seeing a complete picture

Results show that early in the placement educators quickly form a complete and holistic image of the student based on judgments about their performance. These judgments can comprise different aspects such as technical skills, social interaction, learning curve and personality. Clint describes his image of Cassie after the first week of placement:

In the short time that I have observed her and in the time that I’ve discussed assignments with her, I have a positive image. Yeah. And then I’m not talking about her level, but about learnability. […] I do have, looking at how she presents herself and that she is picking up on things, I trust that we can work towards that level.

Similarly, Alex describes Anna as ‘analytically ok, smart, respectful, self-willed and an easy talker’ after the first week of placement. Bryan’s first-week impression of Bernadet characterises her as someone who is open, positive, involved, pro-active, and willing to learn. He adds that she dares to be critical, can signal interaction patterns in a group and is capable of asking for, receiving and applying feedback. These comprehensive initial images demonstrate how quickly educators form a broad impression of the students’ capabilities. This impression is not static, however, and the results indicate that their image develops throughout the placement.

Our analysis revealed that the educator’s initial holistic image functions as an overarching image throughout the placement. The educator further develops and augments this image by continuously considering how it relates to their day-to-day impressions of the student. Table 2 shows how Bryan relates his day-to-day impressions of Bernadet, based on activities or interactions, to his overarching image.

Table 2. Relating day-to-day impressions to overarching image.

The day-to-day impressions consist of information about the student that educators gather themselves. However, results showed that educators also rely on input from others to supplement their image of the student’s performance. Alex describes how information reaches him informally, from which he deduces that Anna must be doing well.

Then I received the request from a head of department for a job and he said maybe Anna can help, because she’s so enthusiastic. So that is a nice compliment for her of course. That she is mentioned and asked.

Clint relates an instance of a chance meeting with a colleague during which the colleague’s actions lead him to have patience with Cassie.

So I was here with her tutor and she does the exposition as well, so she was working on that. And I said, yeah, I still have Cassie’s photos here. Oh. She phoned her immediately: the photos are here, do I need to bring them for you, otherwise it’s not going to happen. Eventually she took them. So she saved her. […] So from all directions she is being protected. Here as well, because I’m not very tough on here either.

A photograph is an apt analogy to describe how the overarching image develops: the educator quickly sees the complete image, but he cannot see all the pixels yet. His day-to-day impressions and information from others add pixels, but also depth, contrast and focus to the image that was there from the start. The image develops and becomes richer and more complex. We found that not only does the image of the student develop, but the degree to which the educators feel certain they are right about the student is also subject to change. Alex becomes very certain of his image of Anna very quickly and overall remains very certain throughout. However, halfway through the placement Anna and Alex do not see each other much for several weeks in a row. Alex feels that he is losing touch and starts questioning his judgment of Anna.

In general I’m confident and positive, but when indeed two unclear things … or that she’s working from home but I can’t reach her, and she doesn’t return my call, then it becomes a little, well, uncertain, I’m not sure what she’s really doing.

In both cases B and C we found that lack of contact due to student absence also influences how certain educators are about their judgment of the students. Their initial judgments are not set in stone and the overarching image of a student develops during the placement. It is this overarching image that takes centre stage when the educator has to assess the student’s performance at the end of their placement.

Theme 3: letting students go or not?

The developing image of the student as described above also forms the basis for educators’ actions. Throughout the placement all three educators are engaged in a continuous judgment process during which they form mini-judgments about the student’s performance, which become apparent in the choices educators make in their guidance of the students. This ongoing process seemingly presents all three educators with a constant dilemma. The students should be capable of independent practice at the end of their placement, which means that the educators want to assess whether the students are indeed capable of independent practice. To be able to see a student’s capacity, educators need to first give them the opportunity to practice independently. The results in all three cases indicate that the educators on the one hand want to let their student go to allow for this independent practice, while at the same time their judgments can lead them to choose closer guidance and thus restrict independent practice. The educators seem to strive for a balance between close guidance (for example taking the student along to meetings, or giving detailed instructions) and more distance, letting students go and allowing them to practice more independently.

The process of letting go progresses differently in each case. Alex decides to let Anna go very quickly and almost completely, and retrospectively worries that he has let go too much. In a similar vein, Bryan lets go of a specific task, but then shifts between guiding more closely through deadlines and appointments before letting Bernadet go again when he judges she is ready. He worries retrospectively that he has not given her enough space to develop. In case C we see a different outcome altogether: Clint wants to let Cassie go, but is unsuccessful. He tries to nudge her towards more independent practice by helping her on her way and then letting her go again, but this strategy does not have the desired effect. Eventually Clint judges that Cassie is not capable of independent practice and chooses to guide her very strictly. In week 18, towards the end of the placement, he describes how he takes over the design process.

I adjusted the sign, because she was messing around with it, with the space and the font size. It’s not a very big sign. So I said, just print it, so you can see if its legible. […] If you do it like this, then you can make it a little bigger, and it’ll leave more space […] So, when she had finally done that, it looked a lot better. But I feel that I’m being terribly directive.

Within this process of mini-judgments and the resulting actions, we can identify several distinctive actions. All three cases show that the educators fairly early on give students a more complex task with more independence, because they judge that the student is ready for it. Table 3 shows an overview of the more complex tasks the educators gave the students in each case.

Table 3. More complex tasks early in placement.

Besides trusting students to execute a more complex task successfully, we also found in cases A and B, where letting go of the student towards more independent practice was more successful than in case C, that both educators take the decision to let things fail on purpose or choose to refrain from taking action. In case A, unlike during the beginning of the placement, Alex no longer supplies Anna with a back-up educator when he goes on holiday, because his judgment is that she can handle herself. In case B we find a typical example when Bryan describes how Bernadet forgets to pick up a girl who should have been part of the activity she has organised. Bryan is aware that the girl has been forgotten, but decides to refrain from solving the issue: ‘And then I leave it with her a little as well, I mean, it’s part of the whole deal, so you figure out how you, how you are going to solve that.’ He considers it Bernadet’s responsibility and wants her to learn to stay in control during the activity, therefore he does not intervene. In both cases the educator’s action is to not act, because they feel it is more appropriate for the student’s development. Overall the educators worked on striking a balance between letting their students go to encourage independent practice and staying closer to the students to provide more guidance.

Conclusion & discussion

In untangling the judgment process of three workplace educators we have shown that they quickly developed a comprehensive overarching image of the student, using multiple and adaptive frames of reference that adjusted to the situation when necessary. The overarching image grew richer and more complex during the prolonged period of work placement and led the educators to form continuous mini-judgments, which in turn led them to foster independent practice by trying to let their students go. We have also attempted to show that inferentialism offers a promising perspective on assessment practices at the workplace when we consider the workplace as a participatory learning environment.

This study included three cases with educators who were not only willing to share their judgments, reasons and actions, but also very capable of making their ideas explicit. They cannot be considered representative of all educators at the workplace and we do not wish to generalise based on three cases. However, choosing a case study design allowed us to reach a deep understanding of our cases as they were studied intensively, and the wealth of data collected enabled profound insights into the judgment processes presented here (Yin Citation2018). Unlike our data analysis, our case study design was not informed by inferentialism as a philosophical point of view. If it had been, inferentialism would have affected how we questioned our participants during interviews, with a stronger focus on how actions, judgments and reasons interconnect. However, it would also have made observation questionable as a data collection method, as observation allows for very little insight into the judgments and reasons of workplace educators. Our observations showed us behaviour and interactions that we deemed valuable for unravelling the judgment process of workplace educators.

Even though inferentialism was not the point of departure for our study, we feel that its tenets have potential as a perspective on the assessment of workplace learning. In our theoretical framework we stated that understanding assessment of students’ performance in a vocational community of practice involves understanding an interrelated system of judgments, actions and reasons. This in itself is a novel approach to assessment and also a novel application of inferentialism, since previous work on inferentialism in VET has focused on vocational knowledge (Heusdens et al. Citation2016; Heusdens, Baartman, and De Bruijn Citation2018), teaching mathematical reasoning (Noorloos et al. Citation2014; Bakker and Derry Citation2011; Nilsson Citation2020) and has been employed to position rationality within a sociocultural approach to VET (Bakker Citation2014). Inferentialism offers us the philosophical concept ‘web of reasons’ that can help conceptualise the interrelated system of reasons, judgments and actions at play in assessment, as it allows contemplation of the ‘complex of interconnected reasons, premises and implications, causes and effect, motives for action, and utility of tools for particular purposes’ (Bakker and Derry Citation2011, 10). As presented in the results, our three educators used adaptive and multiple frames of reference when forming their judgments. Every time an educator explicitly or implicitly used a frame of reference, they were reasoning towards their judgment. The philosopher Robert Brandom describes this reasoning process as a game of giving and asking for reasons, which is governed by rules. A web of reasons sets the rules for this game. For example, reasoning that our lazy cat is red means that you must also be able to reason that the cat is coloured and that it is not a tabby (Brandom Citation2000). Our understanding of what a cat is, what colours cats can have and what this means for our reasoning is all part of a web of reasons. Similarly, educators employ a web of reasons at the workplace when they form their judgment of students. This web of reasons is socially and historically constructed, and it determines the norms that are used in a particular practice and thus shapes what we consider to be good reasoning or acceptable judgment (Bakker and Derry Citation2011). With regard to workplace learning, this means that the web of reasons in which educators make relevant observations and judgments and take actions before arriving at a conclusion is shaped by the vocational community as manifested in their workplace, and this determines what is valid or adequate. The concept ‘web of reasons’ offers an interesting approach towards assessment of workplace learning, as it enables a more comprehensive view of what assessment entails and allows us to continue exploring it as an inherently social and relational judgment-based practice.

Our analysis further showed that assessment as a process of judgment is intricately tied up with guidance. Each guidance action seems based on an educator’s judgment and in turn can lead to another judgment. For example, an educator thinks the student is ready for a more complex task (judgment) and decides to give them such a task (action). They then evaluate the student’s performance (judgment) and give them feedback (action). Seeing assessment as an interrelated system of judgments, actions and reasons enables us to acknowledge how guidance (actions) and assessment (judgments) are connected. This is not surprising, as other research already indicates that guidance at the workplace is closely connected to assessment. Recent research into the guidance practices of workplace educators describes how educators at the workplace estimate to what extent a student can perform independently (judgment) in order to base their guidance choices on this judgment (action) (Ceelen, Khaled, and De Bruijn Citation2019). A study into students’ perceptions of assessment practices at the workplace shows that judgments and subsequent actions from workplace educators are perceived by students to be part of everyday work (Sandal, Smith, and Wangensteen Citation2014). Judgments and actions go hand in hand during workplace learning, and inferentialism as a new approach to assessment might offer an opportunity to reconsider guidance and assessment at the workplace as interconnected.

Our results also showed that educators are engaged in a continuous process of forming judgments that leads them to foster independent practice by letting their students go, which is reminiscent of literature about entrustment from the medical education domain. Here we find a distinction between entrustment and trust, where trust is the educator’s judgment of a student that develops over time, similar to the overarching image presented in our results, whereas entrustment consists of decisions to let students perform a task, similar to our actions that foster independent practice (Ten Cate et al. Citation2016). Ten Cate et al. distinguish between different levels of trust that educators develop during a period of workplace learning (presumptive, initial, and grounded) and different types of entrustment decisions (ad hoc or summative). The summative entrustment decisions are considered the most valuable, and levels of supervision increasingly aimed at independent practice are formulated to describe how entrustment develops. Other research sheds further light on how entrustment is informed and demonstrates the complexity of its development during a prolonged period of workplace learning (Sagasser et al. Citation2017). In contrast with these studies, our results also showed that educators purposefully refrained from intervening and even set their students up for failure. This leads us to believe that entrustment is a more comprehensive concept and that the repertoire of educators reaches beyond increased or reduced supervision. They experiment with entrustment by orchestrating situations to test whether entrustment is possible and are willing to take the concomitant risks. Our cases further indicated that trust is also built collectively through input from colleagues and clients, which could further expand the concept of entrustment and how it is informed, and thus advance how performance at the workplace is assessed.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

References

  • Bakker, Arthur. 2014. “Characterising and Developing Vocational Mathematical Knowledge.” Educational Studies in Mathematics 86 (2): 151–156. doi:10.1007/s10649-014-9560-4.
  • Bakker, Arthur, Dani Ben-Zvi, and Katie Makar. 2017. “An Inferentialist Perspective on the Coordination of Actions and Reasons Involved in Making a Statistical Inference.” Mathematics Education Research Journal 29 (4): 455–470. doi:10.1007/s13394-016-0187-x.
  • Bakker, Arthur, and Jan Derry. 2011. “Lessons from Inferentialism for Statistics Education.” Mathematical Thinking and Learning 13 (1–2): 5–26. doi:10.1080/10986065.2011.538293.
  • Berg, Derek H, Jennifer Taylor, Nancy L Hutchinson, Hugh Munby, Joan Versnel, and Peter Chin. 2013. “Student assessment in exemplary work‐based education programs.” Journal of Workplace Learning 19 (4): 209–221. doi:10.1108/13665620710747906.
  • Billett, Stephen. 1994. “Situating Learning in the Workplace – Having Another Look at Apprenticeships.” Industrial and Commercial Training 26 (11): 9–16. doi:10.1108/00197859410073745.
  • Billett, Stephen. 2004. “Workplace participatory practices: conceptualising workplaces as learning environments.” Journal of Workplace Learning 16 (6): 312–324. doi:10.1108/13665620410550295.
  • Bouw, Erica, Ilya Zitter, and Elly De Bruijn. 2019. “Characteristics of learning environments at the boundary between school and work – A Literature Review.” Educational Research Review 26: 1–15. doi:10.1016/j.edurev.2018.12.002.
  • Brandom, Robert B. 2000. Articulating Reasons: an Introduction to Inferentialism. Cambridge, MA: Harvard University Press.
  • Ten Cate, Olle, Danielle Hart, Felix Ankel, Jamiu Busari, Robert Englander, Nicholas Glasgow, Eric Holmboe, et al. 2016. “Entrustment decision making in clinical training.” Academic Medicine 91 (2): 191–198. doi:10.1097/ACM.0000000000001044.
  • Ceelen, Lieke, Anne Khaled, and Elly De Bruijn. 2019. “Begeleiden van Studenten op de Werkplek.” Onderwijs En Gezondheidszorg 43 (5): 12–15.
  • Cuyvers, Katrien. 2018. “Measuring Self-Regulated Learning Processes during Job-Performance of Medical Specialists: an Integrated Research Perspective.” In 9th International conference of the EARLI SIG 14 Learning and Professional Development: Interaction, Learning and Professional Development, Geneva, Switzerland, 12–14 September 2018, 164–165. Geneva: Université de Genève.
  • Cuyvers, Katrien, Vincent Donche, and Piet van den Bossche. 2021. “Unravelling the process of self-regulated learning of medical specialists in the clinical environment.” Journal of Workplace Learning, 33 (5): 375–400. doi:10.1108/JWL-09-2020-0151.
  • De Bruijn, Elly, Stephen Billett, and Jeroen Onstenk. 2017. “Vocational Education in the Netherlands.” In Enhancing Teaching and Learning in the Dutch Vocational Education System: Reforms Enacted, edited by E. De Bruijn, S. Billett, and J. Onstenk, 1–34. Cham, Switzerland: Springer.
  • Guile, David. 2006. “Learning across contexts.” Educational Philosophy and Theory 38 (3): 251–268. doi:10.1111/j.1469-5812.2006.00193.x.
  • Hauer, Karen E., Olle Ten Cate, Christy Boscardin, David M. Irby, William Iobst, and Patricia S. O’Sullivan. 2013. “Understanding trust as an essential element of trainee supervision and learning in the workplace.” Advances in Health Sciences Education 19 (3): 435–456. doi:10.1007/s10459-013-9474-4.
  • Heusdens, Wenja T., Liesbeth K.J. Baartman, and E. Elly de Bruijn. 2018. “Knowing everything from soup to dessert: an exploratory study to describe what characterises students’ Vocational knowledge.” Journal of Vocational Education & Training 70 (3): 435–454. http://doi.org/10.1080/13636820.2018.1437065.
  • Heusdens, Wenja T., Arthur Bakker, Liesbeth K.J. Baartman, and Elly De Bruijn. 2016. “Contextualising vocational knowledge: a theoretical framework and illustrations from culinary education.” Vocations and Learning 9 (2): 151–165. doi:10.1007/s12186-015-9145-0.
  • Johnson, Martin, and Carenza Lewis. 2013. “‘Can You Dig It?’ developing an approach to validly assessing diverse skills in an archeological context.” Journal of Vocational Education and Training 65 (2): 177–192. doi:10.1080/13636820.2012.755212.
  • Lofland, John, David Snow, Leon Anderson, and Lyn H. Lofland. 2014. Analyzing Social Settings: A Guide to Qualitative Observation and Analysis. 4th ed. Wadsworth: Thomson.
  • Lyle, John. 2003. “Stimulated recall: a report on its use in naturalistic research.” British Educational Research Journal 29 (6): 861–878. doi:10.1080/0141192032000137349.
  • Miles, Matthew B., A. Michael Huberman, and Johnny Saldaña. 2014. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: SAGE Publications.
  • Nilsson, Per. 2020. “A framework for investigating qualities of procedural and conceptual knowledge in mathematics—An inferentialist perspective.” Journal for Research in Mathematics Education 51 (5): 574–599. doi:10.5951/jresematheduc-2020-0167.
  • Noorloos, Ruben, Sam Taylor, Arthur Bakker, and Jan Derry. 2014. “An inferentialist alternative to constructivism in mathematics education.” In Proceedings of the Joint Meeting of PME 38 and PME-NA 36, edited by P. Liljedahl, S. Oesterle, C. Nicol, and D. Allan, Vol. 4, 321–328. Vancouver, Canada: PME.
  • Reid, Anne-Marie, Jeremy M. Brown, Julie M. Smith, Alexandra C. Cope, and Susan Jamieson. 2018. “Ethical dilemmas and reflexivity in qualitative research.” Perspectives on Medical Education 7 (2): 69–75. doi:10.1007/s40037-018-0412-2.
  • Rømer, Thomas A. 2002. “Situated Learning and Assessment.” Assessment and Evaluation in Higher Education 27 (3): 233–241. doi:10.1080/0260293022013859.
  • Sagasser, Margaretha H, Cornelia R. M. G. Fluit, Chris Van Weel, P. M Cees, Van der Vleuten, and Anneke W. M. Kramer. 2017. “How entrustment is informed by holistic judgments across time in a family medicine residency program : an ethnographic nonparticipant observational study.” Academic Medicine 92 (6): 792–799. doi:10.1097/ACM.0000000000001464.
  • Sandal, Ann Karin, Kari Smith, and Ragne Wangensteen. 2014. “Vocational students’ experiences with assessment in workplace learning.” Vocations and Learning 7 (2): 241–261. doi:10.1007/s12186-014-9114-z.
  • Shiffman, Saul, Arthur A. Stone, and Michael R. Hufford. 2008. “Ecological momentary assessment.” Annual Review of Clinical Psychology 4 (1): 1–32. doi:10.1146/annurev.clinpsy.3.022806.091415.
  • Simpson, Mary, and Jennifer Tuson. 2003. Using observations in small-scale research a beginner’s guide. Edinburgh: SCRE Centre, University of Glasgow.
  • Trede, Franziska, and Megan Smith. 2014. “Workplace educators’ interpretations of their assessment practices: a view through a critical practice lens.” Assessment and Evaluation in Higher Education 39 (2): 154–167. doi:10.1080/02602938.2013.803026.
  • Tynjälä, Päivi. 2008. “Perspectives into Learning at the Workplace.” Educational Research Review 3 (2): 130–154. doi:10.1016/j.edurev.2007.12.001.
  • Virtanen, Anne, Päivi Tynjälä, and Anneli Eteläpelto. 2014. “Factors promoting vocational students’ learning at work: study on student experiences.” Journal of Education and Work 27 (1): 43–70. doi:10.1080/13639080.2012.718748.
  • De Vos, Marlies E., Liesbeth K. J. Baartman, Cees P. M. Van der Vleuten, and Elly De Bruijn. 2019. “Exploring how educators at the workplace inform their judgement of students’ professional performance.” Journal of Education and Work 32 (8): 693–706. doi:10.1080/13639080.2019.1696953.
  • Yin, Robert K. 2018. Case Study Research and Applications. 6th ed. Los Angeles: Sage.