
Audio Feedback – Better Feedback?


Abstract

National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one, audio feedback was given to half of the UG class, whereas the other half received written feedback for a low-stakes (10%) exam-style essay. In case study two, only audio feedback was given to the PG class for a purely formative assignment which formed the basis for a final, summative piece of work. In both studies, audio feedback was received favourably as students found it clear, detailed and personal. A comparison between audio and written feedback (case study one) indicated that audio feedback left students more satisfied and provided more explanatory as well as motivational comments than written feedback. Generating audio feedback proved to be significantly more time-consuming (by five minutes per script) than written feedback, but was more efficient in the sense that it produced nearly 10 times as much feedback, of higher quality, per unit of time. However, neither form of feedback led to an increase in students' subsequent performance in a final, summative exam essay. We conclude that audio feedback is better feedback in terms of student experience, but it is still unclear whether audio feedback can lead to higher attainment than other forms of feedback.

Introduction

Formative assessment in combination with high quality feedback can have a powerful impact on student learning (Gibbs & Simpson 2004, Hattie & Timperley 2007). In order to be effective, feedback must be detailed enough to provide sufficient information to the students about their learning, and it should help to close the gap between current and desired performance (Nicol & Macfarlane-Dick 2006). The importance of feedback is recognized by practitioners in higher education who spend a great deal of time and effort producing feedback for assignments. It seems, however, that a lot of this effort is wasted: on the one hand, student surveys such as the National Student Survey (NSS) suggest that students are not satisfied with the feedback they get, with particularly low satisfaction in the category “Feedback helped me clarify things I did not understand” (Fielding et al. 2010). On the other hand, many students do not collect their feedback (Jollands et al. 2009) or, if they do, they fail to read or to act upon it (Buswell & Matthews 2004). Indeed, it has been suggested that some students are primarily interested in their marks (Winter & Dye 2004).

It seems that there is a mismatch between teachers' intentions and students' perceptions of the value of feedback (Carless 2006). There are several possible explanations for this. Firstly, a lot of feedback is produced for summative assessments, feedback which students – rightly or wrongly – see as not relevant for future assignments (Gibbs & Simpson 2004). Secondly, written comments on students' work are often not perceived as being helpful by the students as they are too brief, too general, do not provide guidance for future work and/or use jargon that might be obscure to them (Glover & Brown 2006, Weaver 2006). Several studies have attempted to address at least some of these problems by introducing audio feedback. The results indicate that audio feedback in general is well received by students, who find it more detailed, informative and personal than written feedback (Merry & Orsmond 2008, Lunt & Curran 2010, Rhind et al. 2013). Teachers who trialled audio feedback also found this a positive experience, partly mirroring the students' perception regarding the personal nature of audio feedback, but also acknowledging its ease of use (King et al. 2008, Dixon 2009).

There are, however, still a number of open questions regarding the use of audio feedback. Firstly, it is not yet clear whether or not audio feedback is efficient in terms of staff time. Some studies report a decrease in the time needed to provide audio feedback as compared to written feedback (Lunt & Curran 2010), whereas others found the opposite to be true (McFarlane & Wakeman 2011, Rodway-Dyer et al. 2011). Secondly, there is no clear evidence so far as to whether or not audio feedback better supports students' learning. Some studies indicate that students who received audio feedback found this to have a positive effect on their learning (Merry & Orsmond 2008, Gould & Day 2012). It is impossible to tell, however, whether this particular mode of feedback led to improved student performance in these cases. Macgregor et al. (2011) found that learning gains in students who had received audio feedback for a formative task were not significantly different from those of the recipients of written feedback.

There seems to be little doubt that in most cases the provision of audio feedback improves students' feedback satisfaction. In the light of the important open questions outlined earlier, however, no clear recommendations for the widespread introduction of audio feedback can yet be made. A clear indication of improved student performance would perhaps justify a change from the traditional written feedback to audio feedback even if it led to a higher workload for staff, but at the moment there is little if any evidence for that. The situation is complicated by the fact that many of the studies on audio feedback so far work with relatively small numbers of volunteers (King et al. 2008, Merry & Orsmond 2008) and/or do not directly compare written and audio feedback for the same assessment. Very little information so far is available on the use of audio feedback in postgraduate (PG) teaching or in classes where a high percentage of students have English as their second language.

Here we present two case studies in the biological sciences where we introduced audio feedback for formative assignments. One case concerns an undergraduate (UG) class, the other a PG class. In the UG setting (case study one) we directly compare written and audio feedback within the same assessment in order to find out whether audio feedback leads to improvements in (a) student satisfaction, (b) quality of feedback, and (c) staff workload. We also investigated whether audio feedback leads to higher student performance than written feedback, although we did not expect a big impact of the mode of feedback in this study. In case study two audio feedback was used in a mixed PG class to provide feedback for a non-assessed piece of work, as preparation for a subsequent assessed assignment.

Methods

For both case studies, audio feedback was recorded using mobile digital voice recorders. Audio files were saved as mp3 files and sent to individual students by email.
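In this study the recordings were sent manually by email. Purely as an illustrative sketch of how such a distribution step could be scripted, the snippet below attaches one student's mp3 file to an email message. The helper function, addresses and SMTP host are all hypothetical placeholders, not part of the authors' actual workflow.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def send_feedback(mp3_path: str, student_email: str) -> None:
    """Email a single mp3 feedback file as an attachment (illustrative only)."""
    msg = EmailMessage()
    msg["Subject"] = "Audio feedback on your essay"
    msg["From"] = "lecturer@example.ac.uk"      # placeholder sender address
    msg["To"] = student_email
    msg.set_content("Please find your audio feedback attached.")
    # mp3 files have MIME type audio/mpeg; attach the raw bytes with the filename
    data = Path(mp3_path).read_bytes()
    msg.add_attachment(data, maintype="audio", subtype="mpeg",
                       filename=Path(mp3_path).name)
    with smtplib.SMTP("smtp.example.ac.uk") as server:  # placeholder SMTP host
        server.send_message(msg)

# Hypothetical usage:
# send_feedback("feedback_student01.mp3", "student01@example.ac.uk")
```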

At the beginning of the study each student received a participant information sheet explaining the background and purpose of the study and what participation involved. It was stressed that participation was voluntary and that answers to questionnaires and interviews were anonymous. Interviewees were asked for permission to record the interviews. Ethics approval from the University of Liverpool's ethics committee was sought and obtained before the start of the study. More details regarding ethical considerations in case study one are provided later.

Case study one

Methodology

This case study was undertaken in a year 3 (level 6) undergraduate theory module ‘Comparative Animal Physiology’ (class size 62) in the academic year 2011/2012. All but one student in the class had English as their first language. The module was assessed by two essay-type assignments, both handwritten under exam conditions. The first assessment took place halfway through the module and consisted of one unseen essay which was worth 10% of the total module mark. The topic of this essay (although not the question) was known in advance. The final exam consisted of three unseen essays, worth 90% of the module mark and included topics from the whole module including the topic that was examined in the first essay, although this topic made up only a small proportion of the final exam. The purpose of the first assessment was to give students an opportunity to practise their essay writing skills under exam conditions, and to provide feedback. This feedback addressed generic issues such as essay writing skills, but also content related issues. In the past, feedback had been given in the form of written comments.

For this study, the class was split randomly into two halves. One half of the students (31) received written feedback as before; the other half received audio feedback. All feedback in this assignment was provided by one of the authors (SV). This slightly unusual experimental approach was carefully considered beforehand by the University of Liverpool's ethics committee. The project was approved by the ethics committee on the basis that neither of the two groups (written and audio feedback group) would be disadvantaged in case of a more beneficial effect of one mode of feedback over the other. To that end, the module was earmarked for particular scrutiny by the Module Review Board. If either of the two groups had gained higher marks in their final exam than the other, marks moderation would have ensured that no student was disadvantaged by the procedure.

During a feedback lecture which took place two weeks after the assessment a model answer was discussed and common errors were explained. At the beginning of this lecture all students received a copy of their essay with their marks and comments. Written comments for both groups (written and audio group) consisted of a brief explanation of the mark, referring to the previously published marking criteria. In both groups numbers on the margins of each essay were used to signpost further, specific comments. In the written feedback group, these numbers were repeated at the end of the essay and detailed comments on the relevant parts of the essay were given (e.g. correcting a misconception, pointing out an error, etc.). In addition, the written feedback group received a summary at the end of their script of what the good points of the essay were, and what they needed to do to improve their marks.

The audio feedback group received an audio recording by email on the same day, straight after the feedback lecture. For each student in this group the recording began with a short greeting addressing the student by name. The student was then advised to look at the essay while listening to the feedback. The lecturer then started each comment with the number in the text it referred to, going through the essay from start to end. Finally, a summary was given in which the lecturer emphasized what the student had done well, and what he or she needed to do in the future to improve.

For both groups, written and audio feedback, the time spent producing the feedback was measured from the moment each script was opened to the moment it was closed again. It therefore included the time needed for reading the script, writing or speaking any comments and, in the case of the audio group, saving the recording. It did not, however, include the additional time needed to transfer the saved mp3 files to a computer and email them to the students.

Evaluation

Two weeks after receiving the audio or written feedback, students were invited to fill in electronic questionnaires on the University's virtual learning environment (Blackboard). The questionnaires consisted of a series of closed questions and also provided the opportunity to give further comments. Some students also provided comments about feedback in the free text component of an end-of-year module evaluation questionnaire which is routinely administered to all modules within the School. In addition, nine volunteers (five from the audio group and four from the written group) were asked about their experiences in semi-structured interviews. The interviews were conducted by the author of this study who was not involved in this module (LM). The interviews were recorded and later transcribed.

Data analysis

To compare audio and written feedback, 15 scripts each were randomly selected from the audio and from the written feedback group. Written comments on scripts from the written feedback group and transcripts of the recordings from the audio feedback group were analysed for quantity, and for type and depth of feedback. Type of feedback comprised the categories content (errors, omissions and clarification), writing (spelling, grammar, expression and structure), motivational comments (e.g. praise) and feed forward (generic advice for future assignments). Depth of feedback on content-related issues was split into three categories: category one (issue acknowledged), two (correction provided) and three (explanation given) (Glover & Brown 2006).
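Purely as an illustration of how coded interventions under this scheme could be tallied, the following Python sketch counts interventions by type and, for content-related items, by depth. The data structure and function are hypothetical; only the category labels follow the scheme described above.

```python
from collections import Counter

# Categories from the coding scheme described above
FEEDBACK_TYPES = ("content", "writing", "motivational", "feed_forward")
DEPTH_LABELS = {1: "issue acknowledged", 2: "correction provided", 3: "explanation given"}

def tally(interventions):
    """Tally coded feedback interventions.

    `interventions` is a hypothetical list of (type, depth) pairs;
    depth is None for interventions that are not content-related.
    """
    by_type = Counter(ftype for ftype, _ in interventions)
    by_depth = Counter(depth for ftype, depth in interventions if ftype == "content")
    return by_type, by_depth

# Hypothetical coding of one script's feedback
script = [("content", 3), ("content", 2), ("writing", None), ("motivational", None)]
by_type, by_depth = tally(script)
print(by_type)   # Counter({'content': 2, 'writing': 1, 'motivational': 1})
print(by_depth)  # Counter({3: 1, 2: 1})
```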

To assess any possible differences in the impact on learning of the written and audio feedback, average student performance in both feedback groups was compared. Differences between mean exam marks, and between the times needed to produce audio and written feedback, were analysed using t-tests, and significance was accepted at the p < 0.05 level. Interview transcripts and free comments from the questionnaires were analysed using thematic analysis (Braun & Clarke 2006).
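As a minimal sketch of this statistical comparison, assuming an independent-samples t-test (the exact test variant is not specified in the text) and using made-up marks rather than the study's data:

```python
from scipy import stats

# Hypothetical exam marks for the two feedback groups (placeholder data)
audio_marks = [62, 58, 71, 65, 69, 55, 60, 66]
written_marks = [61, 57, 70, 64, 68, 59, 63, 62]

# Independent-samples t-test; significance threshold as used in the study
t_stat, p_value = stats.ttest_ind(audio_marks, written_marks)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```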

Case study two

In case study two, audio feedback was given to students on a PG module ‘Genome Bioinformatics’ (class size 15) in the second semester of the academic year 2011/2012. Five students had English as their second language. The module was assessed by three continuous assessments, worth 35% in total, and a final exam. In the first continuous assessment students were asked to produce a piece of coursework, working in groups of four, discussing the methodologies and results related to a practical activity in class. To prepare the students for participation in the group work, each student was first asked to individually generate a brief outline of the essay to be produced by the group. According to the instructions, this individual piece of work should demonstrate the student's understanding of the practical class activity and highlight important points to be discussed in the first group meeting. Thus, the assessment was divided into two parts, and audio feedback was provided only for the individual piece of work. All feedback in this assignment was provided by one of the authors (LM). All students had received written feedback in a previous first semester module delivered by the same lecturer. The evaluation of the audio feedback in this case study was therefore based on students' perceptions of its personal nature and quality, and no comparison of group marks was performed as in case study one.

The audio feedback recordings were sent to students by email a week before the first group meeting. The feedback was also discussed with each group in a 20 minute session immediately prior to the first group meeting. The same recording procedures that were used for the UG module were adopted here. In the audio recording, students were addressed by name and were advised to look at the essay while listening to the feedback. Comments were linked to numbers in the text, and a summary emphasizing good points as well as suggestions for improvement was provided.

Evaluation

A week after submission of the group work assessment, students were invited to another group feedback session at which they discussed and self-evaluated their final work according to the marking criteria. At the end of this session, students were invited to fill in a paper questionnaire evaluating the audio feedback they had received. The questionnaire consisted of a series of closed questions and one open question. As in case study one, some students provided further comments about feedback in the free text component of an end-of-year module evaluation questionnaire which is routinely administered to all modules within the School.

Results

Case study one

Student evaluation results

Forty-four per cent of the students filled in the dedicated online questionnaire; of these respondents, 52% stated that they had received audio feedback and 48% written feedback. All respondents had English as their first language. All respondents from the audio feedback group said that they had no technical problems accessing or listening to the audio files, and 86% had no problems understanding the teacher (the rest had ‘little problem’). In the written feedback group, only 62% had no problem with the legibility of the comments, 23% found the comments ‘mostly ok to read’, but two students said they found the comments ‘difficult to read’. Asked what they had done with their feedback so far, 78% of the audio group said they had listened once (the rest had listened more than once), whereas in the written feedback group 32% had read their feedback only once (68% had read it more than once). In both groups, about 30% said they would prefer written feedback in the future, 50% would prefer audio feedback, and the rest were unsure.

More than 80% of the respondents agreed that they understood their mark after reading or listening to the feedback, and there was no difference between the written and audio feedback groups (Figure 1). The majority also agreed that they understood what they needed to do to improve their marks in the future, although slightly more students in the audio feedback group agreed than in the written feedback group (Figure 2). Asked about their view on the amount of feedback they got, about 80% of the audio group found that the amount was just right, whereas only just over half of the written group were satisfied with the amount of feedback (Figure 3). Eighty-five per cent of the audio group found that the feedback was detailed enough, but fewer than half of the written group agreed (Figure 4). All of the audio group found their feedback clear, whereas the written feedback group was less satisfied with the clarity of the feedback (Figure 5). The majority of the recipients of audio feedback also said they liked the experience (85%), felt that audio feedback is more personal than written feedback (70%), and that audio feedback should be provided in more assessments (65%) (Figure 6). Half of them also said that they got more out of audio feedback than out of written feedback. Nobody disliked listening to the teacher's voice, and only one person said that he or she would not want audio feedback again.

Figure 1 Percentage of respondents agreeing/disagreeing with the statement ‘After reading/listening to my feedback, I fully understand my mark’ (case study one: N = 16 (audio) and N = 15 (written); case study two: N = 13).

Figure 2 Percentage of respondents agreeing/disagreeing with the statement ‘I understand what I need to do to get a better mark in the future’ (case study one: N = 16 (audio) and N = 15 (written); case study two: N = 13).

Figure 3 Respondents' perception on the amount of feedback they received (case study one: N = 16 (audio) and N = 15 (written); case study two: N = 13).

Figure 4 Percentage of respondents agreeing/disagreeing with the statement ‘The feedback I got was detailed enough’ (case study one: N = 16 (audio) and N = 15 (written); case study two: N = 13).

Figure 5 Percentage of respondents agreeing/disagreeing with the statement ‘My feedback was clear’ (case study one: N = 16 (audio) and N = 15 (written); case study two: N = 13).

Figure 6 Responses of participants to the question ‘Which of the following statements do you agree with?’ Case study one: only those who received audio feedback were asked to answer this question (N = 16). Case study two: all participants (N = 13). Participants could tick as many answers as they liked.

Interview transcripts and comments from the electronic questionnaires confirmed the opinions reflected in Figures 1–6. Typical comments from the audio feedback group include ‘This was so much more explanatory [than written feedback], it helped me understand it a lot more than just having a short sentence about it’. However, some additional issues emerged. Firstly, many students who had received audio feedback said that it was more personal than written feedback (‘It felt quite personal so it was like meeting with her’). This was mostly perceived as positive, and included other aspects such as ‘you get a lot more of an honest opinion about it’, and ‘I think it was more engaging because it made me focus on it more … When I went home I sat down and listened to it and took it more in I think than just having written feedback’. However, one person said ‘I did feel that [what] was perhaps a negative of the audio was that because they're all points to be really improved on it felt like I was being told off, because it was a rapid fire [of] criticisms’. A number of students said that it was easier to revisit written feedback, e.g. ‘written feedback means that you have it there just to look at all the time and it's a bit harder to go back and look at the audio feedback’.

Comments from the written feedback group indicated that they would have liked to get some indication of what they had done well, e.g. ‘The written [feedback] was all the negatives in the essay. I think it's easier to understand what you're getting right or wrong if you have both [positives and negatives]’. Some referred to what they had heard from friends who got audio feedback, e.g. '… because my friends had audio feedback and [the teacher] actually referred to sentences they'd written and said how they could have written it better’. One issue that was raised by both groups was that they would have liked to receive detailed information about how they lost marks, e.g. ‘I wasn’t sure how many marks were lost because of certain things’.

Feedback analysis

Detailed analysis of 15 randomly selected recordings from the audio feedback group and 15 randomly selected scripts from the written feedback group showed that the number of feedback interventions did not differ significantly between the two groups (Table 1). However, the average number of words in each feedback piece was 12 times higher in the audio group than in the written group. In both written and audio feedback, the majority of the interventions addressed content-related issues (Figure 7). However, while in the audio group the second most common feedback type was motivational, very few written feedback comments fell into this category. The second most frequent written feedback type addressed writing-related issues. Comparing the depth of the content-related feedback (Figure 8), it becomes clear that about half of the feedback in the written group provided corrections and very few comments offered explanations. In the audio group, however, more than 70% of the interventions were explanatory.

Table 1 Feedback analysis of 15 randomly selected recordings from the audio feedback group and 15 randomly selected scripts from the written feedback group. Case study one.

Figure 7 Type of feedback provided through audio feedback and written feedback, respectively (case study one).

Figure 8 Depth of feedback within the content category provided through audio feedback and written feedback, respectively: 1, acknowledges; 2, offers alternative; 3, provides explanation (case study one).

Impact on staff workload

To evaluate the potential effect on staff workload, the time that was needed to read the exam paper and to provide the feedback was recorded for each exam script. The average time spent preparing the recordings for the audio feedback was significantly higher (by five minutes on average) than the time needed for the provision of written feedback (Table 2). The time that was needed to email the recordings to the students was not included in the calculation, but amounted to roughly one hour in total.

Table 2 Average time needed for the preparation of feedback, measured from the time a script was opened to when it was closed again.

Impact on student performance

To see whether the feedback mode (written or audio) had any impact on student performance, the average exam marks of both groups were compared. There was no significant difference between the average marks of both groups in either the formative assessment (for which they had received either written or audio feedback) or the final, summative exam (Figure 9). The results also show that there was no difference between the average final exam marks and the marks in the first assignment.

Figure 9 Average marks of those students who received written feedback and those students who received audio feedback, respectively, in their formative assessment and in their final exam (N = 31 each) (case study one).

Case study two

Student evaluation results

Thirteen out of 15 students (87%) filled in the dedicated questionnaire; of these, 62% had English as their first language. Seventy-five per cent of the native English speakers and 60% of the students with English as their second language had no problems understanding the teacher (the rest had ‘little problem’). All students said that they had no technical problems accessing or listening to the audio files. Asked what they had done with their feedback so far, 62% said they had listened to the feedback more than once, 31% had listened once, and one student had only listened to it partially. Regarding the type of feedback they would like to receive in the future, 62% would prefer audio feedback, 23% written feedback, and 15% declared themselves unsure.

Ninety-three per cent of the students agreed that they understood their mark after listening to the feedback (Figure 1), while all students claimed they understood what they needed to do to improve their marks in the future (Figure 2). Asked about their view on the amount of feedback they got, 93% of the students were satisfied with it (Figure 3). Again, 93% found it sufficiently detailed (Figure 4) and clear (Figure 5). The majority of the respondents liked the experience of audio feedback (85%), and thought that audio feedback should be provided in more modules (62%). Again, a majority found it more personal than written feedback (54%), but only 31% claimed to get more out of audio feedback (Figure 6). Written comments on the questionnaires reflected the outcomes shown in Figures 1–6 and did not raise any additional issues.

Discussion

In the two case studies described we set out to explore the use and success of audio feedback for formative assessment in one UG and one PG setting. In both case studies the recipients of audio feedback were very satisfied with their feedback in terms of clarity, the amount and the level of detail. They were also satisfied that the feedback helped them understand their mark and what they needed to do in the future in order to improve their mark. These student reactions to audio feedback reflect similar findings in earlier studies (see for example Dixon 2009, Gould & Day 2012, Macgregor et al. 2011, Rodway-Dyer et al. 2011).

In case study one, where a comparison between the recipients of written and audio feedback for the same piece of work was possible, it became clear that audio feedback led to a higher degree of student satisfaction, in particular regarding the amount and detail of the feedback. This is consistent with the fact that audio feedback was considerably longer than the written feedback, containing about 12 times as many words. Student comments, however, made it clear that it was not just the amount but also the quality of the audio feedback they liked. Students found that the audio feedback provided more explanations and suggestions for improvements compared to written feedback. This is confirmed by the feedback analysis, which found that the majority of the audio feedback interventions addressing the content of the assessment were explanatory in nature, whereas written feedback mainly corrected errors without offering any further explanation. Analysis of written feedback undertaken by Glover and Brown (2006) also found that a relatively small proportion of tutor interventions consisted of explanations, and students did not perceive tutors' indications of errors as particularly helpful. It is not surprising, therefore, that – as shown in this study – students prefer a more detailed form of feedback which not only indicates errors or omissions but also provides clarification and advice on how to avoid similar mistakes in the future (see also Weaver 2006). Indeed, it has been pointed out by Nicol (2010) that good quality feedback should be forward-looking, making suggestions on how the student could improve their work for future assignments.

Students from both groups, written and audio, in case study one said that they would like more acknowledgement of what they had done well, although this point was raised more often within the written feedback group. This relates to the result of the analysis of the types of feedback given. In the audio group the percentage of motivational comments (i.e. praise) was relatively low (just over 20%), but in the written feedback group the figure was even lower. This is not unusual. For example, Glover and Brown (2006) found that only a small minority of feedback comments were of a motivating nature.

In this context it is important to acknowledge the emotional side of feedback (Carless 2006). This is addressed in students' perception of audio feedback being more personal than written feedback, which reflects earlier findings (King et al. 2008, Merry & Orsmond 2008, Gould & Day 2012). In the majority of cases this was seen as positive (‘engaging’, getting a ‘more honest opinion’). Sometimes, however, students felt that this had a negative side, too (‘like being told off’). It is important, therefore, that this emotional side to feedback should not be neglected. Acknowledging parts of the work that students did well could contribute not only to student learning but also to feedback being seen as encouraging (Lizzio & Wilson 2008).

Students in both case studies who had audio feedback reported no problems accessing and listening to the audio files. In case study one the vast majority also had no problem understanding the recorded feedback. Similarly, in case study two those students whose first language is English mostly had no problem understanding the teacher. It is noteworthy, though, that among those students whose first language is not English, two out of five said that they had a little problem understanding the teacher while the other three had no problem at all. Up to now little if anything is known about audio feedback in relation to international students, and it may well be worth looking into this further. By contrast, in the written feedback group in case study one about 40% said that they had some difficulty deciphering the teacher's comments on their scripts. Similar results were found in previous studies where a relatively high percentage of students found written comments difficult to read (Weaver 2006). Clearly, any feedback can only be useful if students are able to read and understand it, and in that sense audio feedback appears to be superior to written feedback.

Looking at what students did with their feedback, it becomes clear that in case study one students in the written feedback group had returned to their feedback more often than the audio feedback group had (68% as opposed to 22%). This may be related to the fact that students found it slightly easier to re-read the written comments than to listen to the recording again. Some said that when studying in the library it would be easier to go through written notes than to access the audio files. However, this survey was performed two weeks after the provision of feedback, and the final exam was still more than two months away. Therefore, the usage pattern might have changed later during revision time.

In case study two, most students claimed to have listened to the audio files more than once. The difference from case study one is that here the assignment was purely formative and the students knew they had to produce a credit-bearing final piece of work. They could therefore expect the feedback for the formative piece to help them improve their summative piece and to gain higher marks if they implemented the feedback. Gibbs (2010) suggested that if assignments are designed in two stages, where the first, formative assignment provides feedback but no marks and the second, summative assignment is marked, students pay more attention to feedback and produce better quality work in their second assignment. Brearley and Cullen (2012), however, found that when students were offered the chance to submit a draft version of a summative assignment only about half of the class took the chance to receive feedback. In our study students were also expected to take their understanding of their practical class activity to the group meeting. It is plausible that the feedback helped them in the discussion where they needed to demonstrate their understanding to their peers.

Overall it was clear that the students enjoyed the experience of getting audio feedback and many agreed that this feedback format should be used more often. The majority would prefer to receive audio feedback rather than written feedback in the future, although in both studies a sizeable group (30% and 23% in case studies one and two, respectively) would prefer written feedback. Again, these results tie in with findings in previous studies where the majority but not all of the students would prefer audio feedback for future assignments (Lunt & Curran 2010, Gould & Day 2012, Rhind et al. 2013). It is interesting to note that in case study one 50% from the written feedback group (who had no experience with audio feedback) said they would prefer audio feedback in the future, possibly indicating that their peers who received audio feedback praised the experience.

Despite the high student satisfaction, this study does not support the idea that students who receive audio feedback learn better than students who receive written feedback. Data from case study one show that there was no difference in student performance in the final exam between the two feedback groups. This result does not come as a surprise. One of the few previous studies that also directly compared performance after audio versus written feedback found similar results: there were no statistically significant differences in learning gains across the two feedback groups (Macgregor et al. 2011). Several factors have to be considered in the context of the present study. Firstly, in case study one feedback was provided in a two-tiered format where both groups (written and audio) first received a feedback lecture in which a model answer was presented and common mistakes were discussed. Then all students were handed a copy of their essay with either written or audio comments. Therefore, both groups first received the same detailed generic feedback in addition to their specific individual written or audio feedback, which might have watered down any potential differences between the two. Secondly, the question might be asked whether one instance of feedback can be expected to result in a measurable improvement in performance in something as complex as an essay assignment, in particular if the feedback is given not for a draft version of the final marked assignment but for a separate (though similar in format) assignment. Bell (2011) pointed out that scientific writing is a skill that has to be developed and practised over a long time. Thirdly, the low-stakes assignment considered in this study consisted of only one essay and the general topic was known to the students beforehand. By contrast, students had to write three essays in the final exam, covering all topics taught in the module rather than only one. The final exam, therefore, was a much tougher task, and without the practice and the feedback from the low-stakes assignment students might well have performed worse in the final exam than they did.

There has been contrasting evidence on the time efficiency of audio feedback as compared to written feedback, with some claiming that audio saves time (Macgregor et al. 2011), whereas others found the opposite to be true (King et al. 2008). In this study, case study one enabled a comparison of the time needed to provide written and audio feedback within the same module, for the same assignment and given by the same teacher (SV). The results clearly show that in this case giving audio feedback increased the time needed per script, even without considering the additional time needed to email the recorded files to the students. This result is probably not surprising, given that, as stated earlier, the amount of information provided to students via audio feedback was much higher than in the written feedback. Clearly, the audio feedback was not just an oral version of the written feedback comments, an observation that has also been made in previous studies (King et al. 2008). The mode of feedback appears to have changed the way in which feedback was given. Other authors have found that not just students but staff also found audio feedback more personal than written feedback (Dixon 2009). King et al. (2008), for example, discuss the more spontaneous, less guarded reactions from markers, whereas written feedback is much more formal and concise. One could argue that this difference is the real strength of audio feedback as it much more closely resembles the tone of a face-to-face conversation and is therefore more approachable and more likely to ‘make sense’ to the student.

Evaluating only the time needed to provide feedback would lead to the conclusion that audio feedback is less efficient than written feedback. However, feedback should not only be considered in the context of how long it takes but also in terms of its quality and quantity. A rough estimate of the efficiency of the two different types of feedback shows that in this sense audio feedback was much more efficient than written feedback: on average 34 words per minute were produced as spoken feedback whereas only four words per minute were written in the form of comments. In addition, there is evidence from student views and feedback analysis that audio feedback in this study was not just ‘more feedback’ but that it was also of higher quality than written feedback. Therefore, it might well be worth spending the additional time necessary to provide high quality feedback. Staff training and increasing experience may also shorten the time needed to produce audio feedback.
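As a worked version of this rough estimate, using the per-minute rates reported above:

\[ \text{relative efficiency} \approx \frac{34\ \text{words/min (audio)}}{4\ \text{words/min (written)}} \approx 8.5 \]

which is consistent with the ‘nearly 10 times as much feedback per unit of time’ figure quoted in the abstract.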

In conclusion, in case study one feedback quality and student satisfaction were clearly much higher for audio feedback than for written feedback, even though audio feedback did not lead to higher marks in a subsequent exam. This is particularly relevant in relation to the low NSS student satisfaction ratings for the category ‘Feedback helped me clarify things I did not understand’, as audio feedback, at least in this study, provided many more explanations rather than just pointing out errors or omissions. Although in this study audio feedback did not save staff time, it can be argued that audio feedback was more efficient as it produced more, and higher quality, feedback per unit of time.

Case study two shows that audio feedback worked very well and was highly popular in a PG setting, which may well justify wider adoption. However, care needs to be taken in classes with a high percentage of non-native English speakers, as they may find it more difficult to understand the spoken feedback. Unfortunately, the low number of international students in this study did not allow us to reach a definite conclusion.

Acknowledgements

The project was funded by the eLearning Steering Group, Research and Development Sub-Group, University of Liverpool.

References

  • Bell, R. (2011) Negotiating and nurturing: challenging staff and student perspectives of academic writing. In Learning Development in Higher Education ( eds. P. Hartley, J. Hilsdon, C. Keenan, S. Sinfield and M. Verity). Basingstoke: Palgrave Macmillan.
  • Braun, V. and Clarke, V. (2006) Using thematic analysis in psychology. Qualitative Research in Psychology 3, 77–102.
  • Brearley, F.Q. and Cullen, W.R. (2012) Providing students with formative audio feedback. Bioscience Education 20, 22–36. Available at http://journals.heacademy.ac.uk/doi/pdf/10.11120/beej.2012.20000022 (accessed 8 October 2013).
  • Buswell, J. and Matthews, N. (2004) Feedback on feedback! Encouraging students to read feedback: A University of Gloucestershire case study. Journal of Hospitality, Leisure, Sport & Tourism Education 3 (1). Available at www.host.ltsn.ac.uk/johlste.
  • Carless, D. (2006) Differing perceptions in the feedback process. Studies in Higher Education 31 (2), 219–233.
  • Dixon, S. (2009) Now I'm a person: Feedback by audio and text annotation. In Proceedings of A Word in Your Ear 2009 audio feedback conference, Sheffield Hallam University conference, Sheffield. Available at http://research.shu.ac.uk/lti/awordinyourear2009/docs/Dixon-Now-Im-a-person_.pdf (accessed 7 October 2013).
  • Fielding, A., Dunleavy, P.J. and Langan, A.M. (2010) Interpreting context to the UK's National Student (Satisfaction) Survey data for science subjects. Journal of Further and Higher Education 34 (3), 347–368.
  • Gibbs, G. (2010) Using Assessment to Support Student Learning. Leeds: Leeds Metropolitan University Press. Available at https://workspace.imperial.ac.uk/edudev/Public/Additional_Feedback_Reading_Mobile.pdf (accessed 1 December 2013).
  • Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education 1, 3–32.
  • Glover, C. and Brown, E. (2006) Written feedback for students: Too much, too detailed or too incomprehensible to be effective? Bioscience Education 7. Available at http://journals.heacademy.ac.uk/doi/pdfplus/10.3108/beej.2006.07000004 (accessed 7 October 2013).
  • Gould, J. and Day, P. (2012) Hearing you loud and clear: Student perspectives of audio feedback in higher education. Assessment & Evaluation in Higher Education 1–13. Available at http://dx.doi.org/10.1080/02602938.2012.660131 (accessed 7 October 2013).
  • Hattie, J. and Timperley, H. (2007) The power of feedback. Review of Educational Research 77 (1), 81–112.
  • Jollands, M., McCallum, N. and Bondy, J. (2009) If students want feedback why don't they collect their assignments? In Proceedings of the 20th Australasian Association for Engineering Education Conference 2009, University of Adelaide, pp. 735–740.
  • King, D., McGugan, S. and Bunyan, N. (2008) Does it make a difference? Replacing text with audio feedback. Practice and Evidence of Scholarship of Teaching and Learning in Higher Education 3 (2), 145–163.
  • Lizzio, A. and Wilson, K. (2008) Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education 33 (3), 263–275.
  • Lunt, T. and Curran, J. (2010) ‘Are you listening please?’ The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education 35 (7), 759–769.
  • Macgregor, G., Spiers, A. and Taylor, C. (2011) Exploratory evaluation of audio email technology in formative assessment feedback. Research in Learning Technology 19 (1), 39–59.
  • McFarlane, K. and Wakeman, C. (2011) Using audio feedback for summative purposes. Innovative Practice in Higher Education 1 (1), 1–20.
  • Merry, S. and Orsmond, P. (2008) Students' attitudes to and usage of academic feedback provided via audio files. Bioscience Education 11 (3). Available at http://journals.heacademy.ac.uk/doi/full/10.3108/beej.11.3 (accessed 7 October 2013).
  • Nicol, D. (2010) From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education 35 (5), 501–517.
  • Nicol, D.J. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education 31 (2), 199–218.
  • Rhind, S.M., Pettigrew, G.W., Spiller, J. and Pearson, G.T. (2013) Experiences with audio feedback in a veterinary curriculum. Journal of Veterinary Medical Education 40 (1), 12–18.
  • Rodway-Dyer, S., Knight, J. and Dunne, E. (2011) A case study on audio feedback with geography undergraduates. Journal of Geography in Higher Education 35 (2), 217–231.
  • Weaver, M.R. (2006) Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education 31 (3), 379–394.
  • Winter, C. and Dye, V.L. (2004) An investigation into the reasons why students do not collect marked assignments and the accompanying feedback. In Learning and Teaching Projects 2003/2004, University of Wolverhampton. Available at http://wlv.openrepository.com/wlv/bitstream/2436/3780/1/An%20investigation%20pgs%20133-141.pdf (accessed 7 October 2013).
