
Characteristics of productive feedback encounters in online learning

Received 18 Nov 2022, Accepted 05 May 2023, Published online: 25 May 2023

ABSTRACT

Understanding how students engage with feedback is often reduced to a study of feedback messages that sheds little light on effects. Using the emerging notion of feedback encounters as an analytical lens, this study examines what characterizes productive feedback encounters when learning online. Drawing on a cross-national digital ethnographic dataset, a qualitative analysis categorized the feedback encounters within it: while most encounters led to instrumental impacts without any significant reflection, students also engaged in encounters with more substantive impacts on learning. The latter took place under two conditions. First, the encounter must challenge the student’s assumptions about their work, and the student must be able and willing to accept this challenge. Second, the encounter must take place at a time that is appropriate in relation to whichever task the student is currently working on. These findings highlight design considerations, such as the importance of social interactions and the instrumental enactments of self-generated feedback.


Introduction

Feedback is an important but often problematic part of teaching and learning (Shute 2008; Winstone and Carless 2019). Feedback information is considered costly to produce by staff and insufficiently helpful by students (Li and De Luca 2014). Furthermore, many feedback practices are primarily focussed on optimizing what teachers do and pay little attention to how students use the information provided (Dawson et al. 2019). The focus on teachers’ comments has been criticized by researchers arguing for a new feedback paradigm (Winstone and Carless 2019). This paradigm conceptualizes feedback as a social and contextual process that is not just the transfer of performance information but also includes students’ meaning making and subsequent behavior. By moving the focus from information transfer to student activity, it becomes clear that the key aspect of productive feedback processes is how students make sense of and use performance information to inform future work (Jonsson 2013).

When learning online, where many student feedback practices do not directly involve a teacher, a focus on student meaning making and behavior is both relevant and significant. The often remote and solitary nature of online learning means that despite access to learning analytics, we have limited understanding of when, how, and why students engage in feedback processes, and how this engagement contributes to their learning.

This paper addresses this gap by empirically exploring how students experience and engage in feedback processes in the context of online and blended courses. Because we are interested in feedback as it occurs in uncontrolled naturalistic settings, the study adopts a digital ethnographic approach (Pink et al. 2016). This allows us to generate and use multiple types of rich unstructured data to create detailed descriptions of online student experiences not available to other forms of online research (Jensen et al. 2022; Angelone 2019).

Productive feedback encounters

To ensure that feedback practices have an impact on student learning, researchers have primarily focussed on improving the content, timing, and delivery of performance information (Shute 2008). However, efforts to improve the teacher’s message to the student often have limited impact on learning, because the student does not engage with the information provided (Jonsson 2013). Seeing feedback as a process of student meaning making and activity can explain why feedback so often underdelivers (Winstone et al. 2022).

Changing the focus from feedback information to feedback processes brings new challenges to researchers and practitioners. In a large mixed-methods study, Henderson, Phillips, et al. (2019) found that the effects of feedback processes depend on certain social and contextual conditions. These include: the feedback capacities of students and teachers; the feedback design of the task or course; and the feedback culture of the institution or study program. The inclusion of such social and contextual conditions makes feedback processes a more complex phenomenon than what is entailed in the traditional view of feedback as information transfer.

The notion of the feedback encounter has been proposed as a way to operationalize and analyze feedback processes within a sociocultural worldview (Esterhazy 2019; Jensen, Bearman, and Boud 2023). A feedback encounter is an interaction with teachers, peers, materials, technologies, or any other person or artefact inside or outside the course that addresses the student’s understanding of task criteria and quality, their own level, or what would be a good next step. Esterhazy (2018, 1303) argues that feedback encounters can only be considered productive if ‘students both make meaning of and act upon knowledge about the quality of their performance and how to improve it’. In this paper, we employ this definition with its emphasis on meaning making, actions, and improvement. Examples include making changes to an assignment, changing strategy, eliciting a new feedback encounter, or deliberately continuing as before, but with a better understanding of the direction taken.

For a feedback encounter to be productive, it must of course have a positive impact on learning. In theory, such an impact is observable in the future when the student does better on a similar task. However, as Henderson, Ajjawi, et al. (2019) point out, observing the impact of feedback is often not possible, because truly similar tasks are rarely undertaken, and when they are, the new context makes it hard to know whether the improvement was caused by the initial encounter. In higher education, where learning is a matter of developing advanced understandings and complex skills, impact may be particularly hard to measure. In this study, we consider feedback encounters to be productive if the student reports any positive impact, regardless of whether we observed their improved understanding on a subsequent task.

Feedback in online higher education

The adoption of online technologies across higher education has led to an expansion of digitally mediated feedback practices. A recent review suggests that feedback in online environments has a specific set of conceptualisations associated with digital mediation (Jensen, Bearman, and Boud 2021). This aligns with Dawson and Henderson’s (2017) argument that many educational technologies can be seen as efforts to scale up the provision of formative feedback. Online quizzes and intelligent tutoring systems have automated the provision of performance information (e.g. Paassen, Mokbel, and Hammer 2016), and peer review technology provides alternatives to teacher comments (Van Popta et al. 2017). As these examples illustrate, feedback processes in online learning are not exclusively between teachers and students but include a mix of interactions with humans, technologies, and resources (Dawson et al. 2018).

Despite these many new feedback opportunities, the main challenges remain, and the use of learning management systems may even decrease student engagement in feedback processes (Winstone et al. 2021). At the same time, the increased flexibility often lauded as an advantage of digital learning may mean that online course designs privilege self-directed and asynchronous learning over student collaboration and live classes. This means that students in online courses may have few interactions with each other and their teachers (Bearman, Lambert, and O’Donnell 2021). Consequently, they may miss many informal feedback opportunities that otherwise occur on campus.

To better understand the diverse and largely invisible feedback processes that online students engage in, we need to observe feedback processes in their natural settings. This paper draws on data from a digital ethnographic study that explored how students seek out, make sense of, and use feedback processes to support their learning in online courses. Adopting a feedback encounters perspective, which acknowledges the sociocultural nature of education, this paper uses this comprehensive dataset to address a focussed research question: What characterizes productive feedback encounters when learning online?

Methods

The digital ethnographic fieldwork generated a dataset containing rich and illustrative accounts of student experiences with a variety of feedback processes – both those that are a formal part of the course design and those that students seek out on their own or come across incidentally. In alignment with the digital ethnographic approach, and in recognition that meaning is inherently social, we adopt a constructivist epistemology. Under this paradigm, the quality of a study is a reflection of the rigor of the analysis and the trustworthiness of the results (Lincoln 1995). To that end, our analysis employs several well-established digital ethnography strategies to enhance quality. Researcher and data triangulation were used to generate further perspectives and gain a more detailed understanding, and explication of the theoretical underpinnings highlighted implicit understandings of key concepts.

A key part of an ethnographic approach is to consider the role of the researcher within the research, through a process of reflexivity. Reflexivity involves a recognition of the inherent subjectivity of the approach and a highlighting of the many ways that the researchers’ assumptions, experiences and beliefs influence all stages of the research project (Hine 2017).

Fieldwork and dataset

The dataset hails from digital ethnographic fieldwork at an Australian and a Danish university. It contains a combination of online observations and elicited data from 18 students (13 female, 5 male) enrolled in either a humanitarian studies or a psychology program. The authors were unaffiliated with the courses. Observations took place in the online learning management systems in an unobtrusive, non-participatory manner, i.e. without interactions in online course rooms. Observational data included text-based interactions in online discussion forums, course websites, announcements from course staff, key course documents, as well as quiz results and other learning analytics. Elicited data included 33 longitudinal audio diaries (LADs) and 27 qualitative semi-structured interviews. In LAD research, study participants are asked to make short reflective audio recordings in which they reflect on a question from the researcher (Worth 2009). In our case, LADs were elicited by individualized text prompts sent to study participants. Most LAD recordings were 1–5 min long. The semi-structured interviews were conducted remotely via audio or video call, except for one that took place face-to-face. Each participant was interviewed once or twice. Most interviews were 30–50 min long.

Our evolving views about feedback influenced both the fieldwork and the dataset. In interviews and LADs, we prioritized themes that are considered important within current student-centered feedback perspectives, such as student meaning making and student use of feedback, and gave less attention to the role of the teacher and the precise formulation of feedback comments.

Access to online course rooms was granted by the course leader of each course. Informed consent from the participants was collected digitally. No data was collected about non-participating students enrolled in the observed courses. The fieldwork was done by the first author. The project was approved by the ethical review board of the university in Australia (Deakin University HAE 19-017). Study participants received a gift voucher for the value of 15–50 EUR, depending on the extent of their participation in the study. To maintain anonymity, study participants are referred to by pseudonyms, and no direct quotes from discussion boards are used. Quotes from interviews and LADs are presented with minor edits for readability.

Analytical approach

The analysis presented in this paper draws on all the observational and elicited types of data mentioned above. The development of the notion of the feedback encounter into an analytical tool formed the first phase of analysis. This process is presented in Jensen, Bearman, and Boud (2023). It involved an initial thematic analysis that yielded 73 codes representing various feedback-related phenomena, such as sources, impacts, technologies, interactions, tasks, roles, and course materials. This led us to the notion of the feedback encounter as a meaningful unit of analysis that can link these phenomena together in small, detailed narratives of student experiences with feedback. This first phase culminated in the development of an analytical frame for analyzing feedback encounters, including the identification of three main categories of encounters, namely:

Elicited feedback encounters are those that a student actively seeks out, for instance when asking for help or showing their draft to a peer.

Formal feedback encounters are those that are part of the course design, such as when a teacher comments on submitted work.

Incidental feedback encounters are neither planned by teachers nor elicited by students, rather they happen by chance, for instance when an informal conversation with peers prompts the student to reflect on their own work or approach.

This approach – which is presented in detail in Jensen, Bearman, and Boud (2023) – allows for an integrative analysis centered on how students experience, make sense of, and use complex feedback processes in interactions with artefacts and humans.

The present paper uses this analytic frame to identify characteristics of productive encounters. To be able to code each encounter independently, we organized the initial 73 open codes into a coding framework in which each code could be used to label an individual feedback encounter. This was an iterative process of inductively categorizing the open codes into more robust themes and sub-themes, sensitized by key concepts from contemporary feedback literature. The resulting coding framework was then applied to all identified feedback encounters. During this process, the coding framework was adjusted to account for our evolving perspectives.

The analysis in this paper uses this comprehensive coding of all encounters to explore links between encounter characteristics and encounter impact. The impact types and modifying factors presented below were derived in the following way: first, the encounters were grouped according to the extent of their reported impact on the student’s understanding, approach, or work – e.g. a strategy change or the correction of typos. Second, we examined all the codes associated with each group of feedback encounters (e.g. student’s intention, timing, emotional impact, type of feedback encounter). This approach made it possible to identify characteristics of feedback encounters that may influence if and how they have an impact on learning. Microsoft Word and Excel were used throughout this analysis for organizing the data and handling the coding process.
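To make the two-step procedure concrete, the sketch below illustrates the underlying logic of grouping coded encounters by reported impact and then examining which codes co-occur within each group. It is written in Python purely for illustration – the study itself used Word and Excel – and the encounter records, code labels, and impact levels shown are hypothetical examples, not data from the study.

```python
from collections import Counter

# Hypothetical coded feedback encounters: each record carries the reported
# impact level (the grouping variable) and the framework codes applied to it.
encounters = [
    {"impact": "instrumental", "codes": {"formal", "teacher_comments", "clear_directions"}},
    {"impact": "instrumental", "codes": {"elicited", "exemplar", "double_checking"}},
    {"impact": "substantive",  "codes": {"formal", "challenge", "strategy_change"}},
    {"impact": "substantive",  "codes": {"elicited", "challenge", "appropriate_timing"}},
    {"impact": "none",         "codes": {"formal", "challenge", "vague", "late_timing"}},
]

# Step 1: group encounters by the extent of their reported impact.
groups: dict[str, list[set[str]]] = {}
for enc in encounters:
    groups.setdefault(enc["impact"], []).append(enc["codes"])

# Step 2: within each impact group, count how often each code occurs,
# surfacing characteristics that may distinguish the groups.
for impact, code_sets in groups.items():
    counts = Counter(code for codes in code_sets for code in codes)
    print(impact, counts.most_common())
```

On this invented data, the cross-tabulation would show, for instance, that the "challenge" code appears in both the substantive and the no-impact groups – mirroring the paper’s point that a challenge alone does not determine impact.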

Setting and study participants

Fieldwork took place in 2019 in six online or blended courses. Each was designed for the online modality and was thus not an example of the emergency remote teaching common during the COVID-19 pandemic. The courses were within the disciplines of psychology and humanitarian studies and varied greatly in size, from ∼10 to ∼900 enrolled students.

The study participants had very diverse backgrounds, both in terms of nationality – Australia, Germany, India, Italy, Japan, and the UK – demography, and socioeconomic status. Most were full-time students, and many had chosen the online modality because it gave them the flexibility to combine their studies with jobs or primary carer roles.

Aside from readings and lectures, the courses included substantial and varied online activities, typically in the form of discussions, assignments, and quizzes. Participation by the course staff varied considerably, from very engaged to nearly absent. In the larger courses the discussion forums were very active, with thousands of posts by students and course staff. Especially close to assignment deadlines, the sub-forums about assignments were very busy.

Aside from text-based discussions, the courses had few collaborative activities. Some students organized to meet up and study together. There were a few instances of live-streamed classes, but most of the coursework was done individually in an asynchronous manner. The courses were built around a structure of weekly modules or topics, so all students would have a similar progression through the course materials and activities.

The Danish courses had a final exam accounting for 100% of the grade, whereas the Australian ones marked assignments and quizzes throughout to create a composite grade. With a few exceptions, teacher comments on student work were given together with a summative assessment. All courses employed an assessment design where comments on each task, whether graded or not, were intended to be useful for subsequent tasks.

Most of the formal encounters involved teacher comments on submitted work, and most incidental encounters happened while reading the discussion forums. The elicited encounters were the most diverse, including self-assessment with rubrics or exemplars and seeking help from university staff, peers, colleagues, or family members. Despite the online modality, participants also reported many offline feedback encounters, for instance when showing their assignment to their partner or engaging with downloaded course materials.

There were several instances when study participants pointed out that our questions had made them look at feedback differently, expect more from it, or become more intentional participants in feedback interactions. Thus, the study also prompted students to develop different feedback behaviors.

Results

In reporting the analysis of the feedback encounters, we start by describing the different levels of impact that an encounter may have, and then explore factors that may influence these impacts.

Types of impact

Instrumental learning

According to our definition, a productive feedback encounter includes performance information, the student’s meaning making, and any actions the student may be prompted to take. Most encounters in our dataset are characterized by a very straightforward meaning making process, after which it is clear to the student what to do next.

Many elicited encounters were explicitly sought by students simply to double-check that their work fulfilled the task criteria. Kate, for instance, did a thorough comparison of her own assignment draft and an exemplar, but it did not prompt any significant changes to her work: ‘I think it just really confirmed my understanding that I had it right.’

Formal feedback encounters, typically involving teacher comments on student work, most often led to superficial edits, in which the student simply followed any explicit directions contained in the comments. Magda explained the changes she made to her draft after receiving comments from her teacher: ‘it was just changes in the wording. So, I had to rephrase the sentences from being kind of wish-list to something which is actually planned.’ This is also the case when the encounter concerns finished work, and the student’s meaning making involves deciding what the encounter means for their work on subsequent tasks. An example of this is Tessa’s description of how teacher comments were useful for her next essay: ‘I made a few errors with the APA style. So, I learned from that what I needed to do to fix up the next essay.’

These are examples of productive feedback encounters that support students in their work. However, it is also apparent that their impact is instrumental, by which we mean that they do not require or prompt any significant reflection or deeper thinking on the part of the student. Errors and areas for improvement are revealed in a way that is experienced as aligned with the student’s overall approach and understanding. Although such changes may sometimes be time-consuming to implement, an important characteristic of these feedback encounters is that it is clear to the student what should be done to improve performance and how to do better on a future similar task. The resulting adjustments or corrections make sense within the student’s current frame of understanding, and therefore these encounters have only a minor impact on student understanding and approach.

Substantive learning

Less commonly, feedback encounters have more substantive impacts. This is the case when the encounter prompts the student to reflect critically on their own assumptions and leads to a new level of understanding or quality of performance.

Often substantive learning was the consequence of the student experiencing a challenge to their current understanding. This was the case for Mila. The comments she received on her first assignment contradicted not only her own opinion about her work but also her experience of doing well on similar assignments. This formal encounter challenged her beliefs about herself and her work by alerting her to substantial weaknesses that could not be addressed with simple adjustments. Mila’s conclusion was that she had a blind spot, and she decided to change her strategy going forward: ‘I have to try to remove myself and I should have a second person […] just look over this and read it and see if this makes sense.’ She then employed this strategy on her second assignment, having a colleague read and comment on her work before submitting it.

Chakresh also had a formal encounter that prompted him to change his strategy. After almost failing an assignment in which he had misunderstood the criteria, he decided to spend more time on understanding the task criteria for his next assignment: ‘I took a printout of second assignment information document and rubric and studied it thoroughly. I marked all the sections where I was a little confused’. Throughout his work on this second assignment, he elicited several feedback encounters with colleagues and university study mentors to discuss the criteria and ensure that his work matched them closely.

A substantive impact changes how the student understands the task or their own work or approach. Depending on course design, it may be associated with observable changes to the student’s work or performance, or with the adoption of new approaches or strategies. For Mila and Chakresh, the impact manifested as changes to how future tasks were approached.

Making sense of a challenge

At first sight, it may seem that the difference between instrumental and substantive impact on learning is simply whether the performance information suggests small or big changes. Our analysis shows that this would be missing the point. Very instrumental feedback encounters that provide clear directions for superficial changes to work only rarely led to any deeper reflection or learning. But the opposite does not hold either: only a few of the more challenging encounters that invited deeper reflection ended up having substantive impact.

In our study, participants often experienced such challenging encounters as lacking directions and specificity. Sandaya provides an illustrative example:

The personalized feedback that I received, I thought it was a bit vague. I think it would have helped me better had they specifically pointed out the mistakes that I did rather than vaguely talking about ‘maybe you can think of this in a different way’.

In many cases we see an aversion to such thinking in a different way. As a consequence, encounters with teacher comments that contain both simple corrections and suggestions for a different approach are interpreted selectively: the simple corrections are used, while the more challenging elements are dismissed as less useful. James described one of his formal feedback encounters as containing corrections, praise, and questions. The questions challenged the structure of his arguments and how he presented evidence, but instead of engaging with them he chose to focus on the immediate usefulness of the corrections and praise. The questions, he said, ‘were not that useful. They felt a bit academic’y rather than very practical’. James’ contrast between academic and practical also illustrates how students may react to encounters that challenge their assumptions by discrediting the source of the feedback information.

James was not the only student who prioritized the more instrumental elements of comprehensive feedback. Claire received comments on one assignment both as rubric scores and as an audio recording. She disliked the audio comments because they ‘did not really explain where I went wrong and […] how I could improve’. The rubric feedback was ‘really helpful for me in seeing where I lost marks and where I did really well’, and she was ‘planning to use the feedback from the rubric to improve my next journal.’

All three examples illustrate that it is not sufficient that the encounter is experienced as challenging, i.e. at odds with the student’s beliefs or assumptions. For a feedback encounter to have substantive impact, the student must take up the challenge and seek to make sense of it, even when this requires substantive, and perhaps uncomfortable, changes to understanding and work.

Initially, neither Sandaya, James, nor Claire engaged with the challenging elements. In James’ case, however, later encounters raised the same issue again, and eventually he appreciated that it would be valuable for him to engage with the challenge.

Some students were quick to dismiss challenging encounters as unhelpful, while others appreciated that there was something potentially valuable in the challenge but struggled to comprehend it and translate it into actions. This was a frequent reason for eliciting further feedback encounters. If the subsequent eliciting paid off and the student managed to make sense of an initially vague encounter, this prolonged process of meaning making could indeed result in substantive learning. In situations where the vagueness, despite the effort, could not be resolved through subsequent encounters, frustration persisted.

The students who did not take up the challenge often highlighted mismatches between the encounter and their immediate feedback needs, e.g. clearing up specific doubts or determining what would be a good next step. In Sandaya’s course, the information returned from multiple-choice quizzes did not reveal which questions the student got wrong, but rather pointed out topics that the student ought to revisit. She did not find that helpful:

I do not know exactly what I am supposed to work more hard on. It would have been more helpful if they would have just given the question and said this was the right answer. […] I think we do deserve to know what wrong answer we chose, and what the right answer was. […] I still do not know what went wrong.

In many cases, students’ needs were relatively instrumental, and consequently vague or challenging encounters were experienced as unhelpful or distracting. In some formal encounters, students accepted that the vagueness was intended to foster reflection, but they nonetheless found it irritating. The preference for instrumental feedback also meant that feedback encounters in which the student self-generates the performance information only rarely contained any challenges to their perspective and therefore did not lead to substantive learning.

The role of timing

The second factor that influences impact is timing. In the feedback encounters we analyzed, timing was not primarily a matter of the amount of time that passed between doing a task and receiving comments on it. Rather, the timing of a feedback encounter was primarily experienced in relation to the feedback needs associated with whichever subsequent task the student was anticipating or currently working on. In other words, when students lamented that a formal feedback encounter came too late, it was not because a long time had passed since they submitted work, but because the encounter came too late to be useful for subsequent tasks. This is not just a matter of feedback encounters taking place before or after subsequent work is submitted, but also of the more specific timing throughout work on the task.

Feedback encounters that may look similar – e.g. several instances of studying an exemplar – can be experienced and used very differently if they occur at different times. This was the case for Kate, who used an exemplar before, midway through, and at the end of working on an assignment. Initially, the exemplar was consulted to gain a better understanding of the task criteria and to determine what direction to take the assignment. When she used it midway, the encounter led her to update her draft: ‘I adjusted just a couple of things, how I wrote my arguments, and then my conclusion’. Just before submitting, a third feedback encounter with the exemplar served as a final check. This example illustrates a decreasing impact, not just in terms of the concrete changes to work that an encounter causes, but also in the way Kate controlled the possible impact of the encounters – starting out with an open approach that might well challenge her assumptions, and ending with a very tight and focussed encounter meant to approve the work for submission.

This movement from open or inspirational feedback encounters, towards solving specific challenges, and finally towards checking the quality of work, is not just seen when students engage with rubrics and exemplars. The ways students engage in most of their feedback encounters follow the same pattern. This means that challenging feedback encounters only lead to reflection and substantive learning if they take place at an appropriate time.

An example of inappropriate timing is James’ incidental feedback encounter, which happened when a guest lecturer spent an entire online class presenting something very similar to his already finished, but not yet submitted, assignment:

I was kind of a bit dismayed to see that the presentation that the woman went through was actually very close to my argument. […] Like all the key points that they raised and all the quotes […] were basically following the same order of what I have written.

This made him doubt the quality and originality of his own work; however, ‘in the end I did not change it […] I just did not have the time or the energy to do it.’ This incidental encounter could have had substantial impact, but because it took place at a time when he was not open to it, its only impact was that he started doubting his own work.

Timing relative to subsequent tasks is also important for formal feedback encounters. Despite course designs in which teacher comments on finished work were intended to be useful for later tasks, a recurring challenge was that comments arrived so late that the students were already far into their work on the subsequent assignment. In the words of James: ‘I felt like also the timing of the feedback, coming in the last week, was a bit late. I felt like it would be good if they could give the feedback before you start writing the next bit.’

The study participants frequently brought up their intention to use teacher comments related to previous work to improve their work on subsequent tasks, and they lamented that the comments were not available earlier in the process. Tessa postponed starting on an essay until she had received comments on the previous one:

I sort of found it a bit hard to get going because we had not had the feedback from our first essay yet […] I did not really start writing the body of the essay until I got the feedback back.

Others, like Kate, started without waiting: ‘I would rather start now and then tweak, than not start.’ Her use of the word tweak reveals that she was aware the comments would then only have a minor impact. This shows that the wrong timing of a formal feedback encounter can essentially undermine its impact on student learning, not because the student does not engage, but because the nature of their engagement changes.

Discussion

In the sections above, we observed that while most feedback encounters lead only to instrumental changes to work or understanding, some encounters lead to what we refer to as substantive impact on learning. We do not consider instrumental and substantive to be in opposition. Rather, both have their role and value to students, and it is worth exploring what each type of encounter can be productive for. Whether an encounter has instrumental or substantive impact is not primarily an effect of the performance information suggesting small or big changes, but rather depends on factors related to student meaning making and context. We identified two such factors: the feedback encounter must be experienced as having an element of challenge that the student is willing and able to make sense of, and the encounter must take place at an appropriate time in relation to whichever task the student is currently working on.

The value of feedback encounters that are experienced as challenging is that they can lead to improved understanding and learning, because they prompt students to reflect on their assumptions about what constitutes quality work. Such productive encounters bear some consideration in relation to Hattie and Timperley’s (2007) discussion of the focus of feedback inputs. They found that feedback impact is a function of the focus of the performance information and argued that comments at the task level are less impactful than those addressing the self-regulatory and process levels. The conundrum is that many feedback encounters that include a challenge, and thus have the potential for substantive impact, only lead to instrumental learning because students do not take up the challenge they are offered. One reason for this is that students often experience such encounters as vague and insufficiently directional. In their paper on assessment criteria metaphors, Bearman and Ajjawi (2021) introduce the notion of a productive space, which has enough room for students to bring their own thinking, yet is bounded enough for them not to get lost. We observed a similar balancing act in feedback encounters: if they are too directional, they prompt no student reflection; if they are too abstract, students disregard them as vague and unhelpful.

Timing was identified as an important factor influencing how students engage in feedback encounters. Our description of the role of timing differs from the substantial body of research into the timing of instructional feedback, which examines the benefits of offering immediate versus delayed performance information (Attali and van der Kleij 2017). In that tradition, only formal feedback encounters (taking place after performance) are examined, and timing is understood in relation to the already finished task (Kulik and Kulik 1988). Our study provides empirical support for the view presented by Boud and Molloy (2013) that, in formal feedback encounters about a finished task, the importance of timing lies not in relation to the task of the past, but in relation to the current task the student is working on. This view also aligns well with designs proposed in the literature on assessment for learning, namely that feedback should take place during work on a task, instead of being combined with end-of-course summative assessment, when students have very little use for it (Sadler 1989; Wiliam 2011). The influence of timing that we observed was seen across all three types of feedback encounters, suggesting that changes to students’ feedback needs and feedback behavior follow a predictable pattern, in which certain times are more favorable for challenging feedback encounters. This adds a temporal dimension to the productive space metaphor.

Future research could benefit from broadening the understanding of context to also include the material conditions of the spaces in which students learn. A sociomaterial perspective would highlight how feedback practices are ‘entangled with social, material, spatial and temporal actors’ (Gravett 2022, 269). By considering the ways in which technology, power dynamics, and cultural norms interact with feedback practices, this perspective provides a more nuanced understanding of the role context plays in how students engage in feedback processes.

Implications for practice

An advantage of the feedback encounters perspective is that it enables an analysis that includes the many diverse formal, elicited, and incidental feedback processes that students engage in. The inclusion of elicited and incidental feedback encounters is particularly relevant for understanding the feedback processes of students who study online, because the availability and nature of such encounters can vary considerably between online and onsite education. However, the dichotomy between online and onsite is rapidly losing relevance (Fawns 2019). Many feedback practices in onsite courses are technologically mediated, just as many of the feedback encounters that online students engage in take place in face-to-face settings. This means that some implications of this work are not isolated to online education but are broadly relevant across different higher education settings.

Our findings sit well with the understanding of teaching and learning found in Activity-Centred Analysis and Design (ACAD) by Goodyear, Carvalho, and Yeoman (2021). ACAD recognizes that most of the time students spend on learning is only lightly supervised. This means that teaching can be seen as a form of design, i.e. the planning of productive situations in which students can learn unsupervised. At first glance, the perspective of teachers-as-designers may seem to apply only to formal feedback encounters. However, according to ACAD, designing student activity is never just a matter of developing tasks (epistemic design); it also includes selecting the tools and materials (physical/set design) and deciding on the ways students can interact and collaborate (social design). This comprehensive perspective gives us a framework for thinking of teachers as designers of the physical and social aspects of student activity that are crucial for creating opportunities for incidental and elicited feedback encounters.

Simply being on campus may create opportunities for incidental feedback encounters because students have informal conversations with peers or may overhear interactions between peers and teachers. In online learning, there may be an even more urgent need for social designs that facilitate such informal interactions, e.g. by requiring students to collaborate in pairs or groups, exposing them to each other’s work, and giving students access to formal and elicited feedback encounters between peers and teachers.

Our analysis suggests that elicited encounters with rubrics and exemplars – a practice often recommended for online courses – may be unlikely to have substantive impact, because the self-generated feedback information rarely challenges the student’s beliefs and assumptions. Including student self-assessment in course designs is considered to have many positive impacts, not least the strengthening of student self-regulation and self-efficacy (Panadero, Brown, and Strijbos 2016). However, our study suggests that the feedback processes associated with self-assessment may be more instrumental in their impact and should be seen as supplemental to the more challenging encounters that are essential for learning.

Strengths and limitations

A main strength of this study is that it is based on a cross-national digital ethnography spanning six online courses at two universities. The combination of comprehensive online observations, interviews, and longitudinal audio diaries has offered an unusually detailed look into the feedback experiences of online students. This strength simultaneously represents an important limitation. Our approach highlights local and contextual factors – particularly surrounding curriculum and course design – and consequently an otherwise similar study set in a different discipline or institution could surface other phenomena. Another limitation comes from the study’s reliance on students’ self-reports of impact. This approach is appropriate for our research paradigm, but the subjectivity introduces some uncertainty into the analysis, because students may over- or underestimate the impact of individual encounters.

Conclusion

This paper provides an analysis of elicited, incidental, and formal feedback encounters to explore what characterizes productive feedback encounters in online learning. First, it identifies and describes the different levels of impact on learning that an encounter may have, from the purely instrumental, which requires little reflection on the part of the student, to the substantive learning that comes when an encounter prompts the student to reach a new understanding or adopt a new approach. Second, it distinguishes two factors that may influence whether a feedback encounter leads to instrumental or substantive learning: a challenge that the student must be willing and able to make sense of, and the timing of the encounter in relation to whichever task the student is currently working on. It points to implications for teaching and learning in both online and onsite courses, such as the importance of social designs and the potential limitations of encounters associated with student self-assessment.

This study serves as an example of how complex feedback processes can be explored in uncontrolled settings. Most research on feedback in online courses focuses only on formal feedback encounters, and the role of incidental and elicited feedback processes remains under-researched. Future work could further explore these types of feedback and their interconnections with formal feedback processes. The digitalization of feedback practices across higher education leads us to believe that future studies could benefit from an inclusive approach that does not reproduce the online-onsite dichotomy but explores the full feedback experience of students across online, onsite, and hybrid spaces.

Acknowledgements

The authors would like to thank the study participants.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Angelone, L. 2019. Virtual ethnography: The post possibilities of not being there. Mid-Western Educational Researcher 31, no. 3: 275–95.
  • Attali, Y., and F. van der Kleij. 2017. Effects of feedback elaboration and feedback timing during computer-based practice in mathematics problem solving. Computers & Education 110: 154–69. doi:10.1016/j.compedu.2017.03.012.
  • Bearman, M., and R. Ajjawi. 2021. Can a rubric do more than be transparent? Invitation as a new metaphor for assessment criteria. Studies in Higher Education 46: 359–368. doi:10.1080/03075079.2019.1637842.
  • Bearman, M., S. Lambert, and M. O’Donnell. 2021. How a centralised approach to learning design influences students: A mixed methods study. Higher Education Research & Development 40: 692–705. doi:10.1080/07294360.2020.1792849.
  • Boud, D., and E. Molloy. 2013. Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education 38, no. 6: 698–712. doi:10.1080/02602938.2012.691462.
  • Dawson, P., and M. Henderson. 2017. How does technology enable scaling up assessment for learning? In Scaling up assessment for learning in higher education, edited by David Carless, Susan M. Bridges, Cecilia Ka Yuk Chan, and Rick Glofcheski, 209–22. Singapore: Springer.
  • Dawson, P., M. Henderson, P. Mahoney, M. Phillips, T. Ryan, D. Boud, and E. Molloy. 2019. What makes for effective feedback: Staff and student perspectives. Assessment & Evaluation in Higher Education 44, no. 1: 25–36. doi:10.1080/02602938.2018.1467877.
  • Dawson, P., M. Henderson, T. Ryan, P. Mahoney, D. Boud, M. Phillips, and E. Molloy. 2018. Technology and feedback design. In Learning, design, and technology: An international compendium of theory, research, practice, and policy, edited by Michael J. Spector, Barbara B. Lockee, and Marcus D. Childress, 1–45. Cham: Springer International Publishing.
  • Esterhazy, R. 2018. What matters for productive feedback? Disciplinary practices and their relational dynamics. Assessment & Evaluation in Higher Education 43, no. 8: 1302–14. doi:10.1080/02602938.2018.1463353.
  • Esterhazy, R. 2019. Re-conceptualizing feedback through a sociocultural lens. In The impact of feedback in higher education, edited by Michael Henderson, Rola Ajjawi, David Boud, and Elizabeth Molloy, 67–82. Cham: Springer International Publishing.
  • Fawns, T. 2019. Postdigital education in design and practice. Postdigital Science and Education 1, no. 1: 132–45. doi:10.1007/s42438-018-0021-8.
  • Goodyear, P., L. Carvalho, and P. Yeoman. 2021. Activity-centred analysis and design (ACAD): Core purposes, distinctive qualities and current developments. Educational Technology Research and Development 69, no. 2: 445–64. doi:10.1007/s11423-020-09926-7.
  • Gravett, K. 2022. Feedback literacies as sociomaterial practice. Critical Studies in Education 63, no. 2: 261–74. doi:10.1080/17508487.2020.1747099.
  • Hattie, J., and H. Timperley. 2007. The power of feedback. Review of Educational Research 77, no. 1: 81–112. doi:10.3102/003465430298487.
  • Henderson, M., R. Ajjawi, D. Boud, and E. Molloy. 2019. The impact of feedback in higher education: Improving assessment outcomes for learners. Cham: Springer Nature.
  • Henderson, M., M. Phillips, T. Ryan, D. Boud, P. Dawson, E. Molloy, and P. Mahoney. 2019. Conditions that enable effective feedback. Higher Education Research & Development 38, no. 7: 1401–16. doi:10.1080/07294360.2019.1657807.
  • Hine, C. 2017. From virtual ethnography to the embedded, embodied, everyday internet. In The Routledge companion to digital ethnography, 47–54. New York: Routledge.
  • Jensen, L.X., M. Bearman, and D. Boud. 2021. Understanding feedback in online learning – a critical review and metaphor analysis. Computers & Education 173: 104271. doi:10.1016/j.compedu.2021.104271.
  • Jensen, L.X., M. Bearman, and D. Boud. 2023. Feedback encounters: Towards a framework for analysing and understanding feedback processes. Assessment & Evaluation in Higher Education 48: 121–134. doi:10.1080/02602938.2022.2059446.
  • Jensen, L.X., M. Bearman, D. Boud, and F. Konradsen. 2022. Digital ethnography in higher education teaching and learning—a methodological review. Higher Education 84: 1143–1162. doi:10.1007/s10734-022-00838-4.
  • Jonsson, A. 2013. Facilitating productive use of feedback in higher education. Active Learning in Higher Education 14, no. 1: 63–76. doi:10.1177/1469787412467125.
  • Kulik, J.A., and C.-L.C. Kulik. 1988. Timing of feedback and verbal learning. Review of Educational Research 58, no. 1: 79–97. doi:10.3102/00346543058001079.
  • Li, J., and R. De Luca. 2014. Review of assessment feedback. Studies in Higher Education 39, no. 2: 378–93. doi:10.1080/03075079.2012.709494.
  • Lincoln, Y.S. 1995. Emerging criteria for quality in qualitative and interpretive research. Qualitative Inquiry 1, no. 3: 275–89. doi:10.1177/107780049500100301.
  • Paassen, B., B. Mokbel, and B. Hammer. 2016. Adaptive structure metrics for automated feedback provision in intelligent tutoring systems. Neurocomputing 192: 3–13. doi:10.1016/j.neucom.2015.12.108.
  • Panadero, E., G.T.L. Brown, and J.-W. Strijbos. 2016. The future of student self-assessment: A review of known unknowns and potential directions. Educational Psychology Review 28, no. 4: 803–30. doi:10.1007/s10648-015-9350-2.
  • Pink, S., H. Horst, J. Postill, L. Hjorth, T. Lewis, and J. Tacchi. 2016. Digital ethnography: Principles and practice. London, UK: SAGE.
  • Sadler, D.R. 1989. Formative assessment and the design of instructional systems. Instructional Science 18, no. 2: 119–44. doi:10.1007/bf00117714.
  • Shute, V.J. 2008. Focus on formative feedback. Review of Educational Research 78, no. 1: 153–89. doi:10.3102/0034654307313795.
  • Van Popta, E., M. Kral, G. Camp, R.L. Martens, and P.R.-J. Simons. 2017. Exploring the value of peer feedback in online learning for the provider. Educational Research Review 20: 24–34. doi:10.1016/j.edurev.2016.10.003.
  • Wiliam, D. 2011. What is assessment for learning? Studies in Educational Evaluation 37, no. 1: 3–14. doi:10.1016/j.stueduc.2011.03.001.
  • Winstone, N.E., D. Boud, P. Dawson, and M. Heron. 2022. From feedback-as-information to feedback-as-process: A linguistic analysis of the feedback literature. Assessment & Evaluation in Higher Education 47: 213–230. doi:10.1080/02602938.2021.1902467.
  • Winstone, N.E., J. Bourne, E. Medland, I. Niculescu, and R. Rees. 2021. “Check the grade, log out”: Students’ engagement with feedback in learning management systems. Assessment & Evaluation in Higher Education 46: 631–643. doi:10.1080/02602938.2020.1787331.
  • Winstone, N.E., and D. Carless. 2019. Designing effective feedback processes in higher education: A learning-focused approach. London: Routledge.
  • Worth, N. 2009. Making use of audio diaries in research with young people: Examining narrative, participation and audience. Sociological Research Online 14, no. 4: 77–87. doi:10.5153/sro.1967.