Original Article

Exploratory evaluation of audio email technology in formative assessment feedback


Abstract

Formative assessment generates feedback on students’ performance, thereby accelerating and improving student learning. A number of evaluations, drawing largely on anecdotal evidence, have hypothesised that audio feedback may be capable of enhancing student learning more than other approaches. In this paper we report on the preliminary findings of a quasi-experimental study, employing qualitative techniques for triangulation, conducted to evaluate the efficacy of formative audio feedback on student learning. We focus on the delivery of ‘voice emails’ to undergraduate students (n = 24) and evaluate the efficacy of such feedback in formative assessment, and hence in students’ learning, as well as seeking a better understanding of students’ feedback behaviour post-delivery. The results indicate that audio feedback better conforms to existing models of ‘quality’ formative feedback, can enhance the student learning experience and can be more efficient in feedback delivery. Despite this, and despite high levels of feedback re-use by student participants, the audio treatment group underperformed in learning tasks when compared with the control group. Differences between the groups were not statistically significant, and analyses of individual and mean learning gains across the treatment group provide little indication of improvements in learning.

Introduction

Formative assessment has been shown to be effective in most educational scenarios (Black and Wiliam 1998) and its importance continues to be well recognised within pedagogical communities (Nicol and Macfarlane-Dick 2006). Formative assessment refers to assessment that is intended to generate feedback on student learning performance so as to modify learner thinking or behaviour, accelerate achievement and ultimately effect improvements in student learning (Sadler 1998; Shute 2008). The literature on formative assessment suggests that it performs an important function in fostering ‘deep learning’, thereby promoting conceptual understanding and learning at higher cognitive levels, and thus averting the ‘surface’ approaches often associated with summative assessment (Rushton 2005).

Despite the clear pedagogical merits of formative assessment and feedback, few formative assessment opportunities are made available to students in higher education (Bone 2008). The reasons for this are complex but can generally be attributed to the limited time lecturers have within semester-based systems to provide and deliver formative feedback, a problem exacerbated by staff research responsibilities and the increasingly large student cohorts that can dominate particular academic courses or modules (Yorke 2004). For ‘formative learning’ to occur and the benefits of formative assessment to be realised, feedback has to be timely, relevant and delivered to students prior to summative assessments.

Recent advances in audio and Web 2.0 technologies present opportunities for providing a variety of audio-based learning materials to support and enhance student learning, most notably podcasts (Tynan and Colbran 2006; Kervin and Mantei 2008; Sutton-Brady et al. 2009; Middleton 2009). The use of audio technologies to deliver feedback of all types has also attracted attention from the learning technology community (Rotheram 2009; Bird and Spiers 2009). The potential for time efficiencies in feedback delivery is often cited as a possible benefit and a potential solution to the lack of formative assessment in higher education, particularly as most exploratory research appears to indicate that students tend to be favourably disposed to receiving audio feedback (Sipple 2007; Merry and Orsmond 2008; Rotheram 2009). A number of studies have also hypothesised that audio feedback may actually be capable of enhancing student learning, thereby reinforcing its use (Ice et al. 2007; Sipple 2007; Bird and Spiers 2009). However, research to date has predominantly focused on student satisfaction or attitudes, and our current understanding of audio feedback efficacy in student learning therefore remains limited.

In this paper we report on the preliminary findings of a quasi-experimental study employing qualitative techniques for triangulation. The research is conducted as part of a wider project funded to formally evaluate the use of audio technologies in delivering formative feedback. It focuses on the use of audio feedback technology to deliver ‘voice emails’ to undergraduate students studying topics within the areas of business information management and web technologies. In particular, we attempt to evaluate the efficacy of audio email feedback in formative assessment, and hence in students’ learning, as well as to achieve a better understanding of students’ feedback behaviour post-delivery.

Background

The use of audio to provide feedback is not new. Cryer and Kaikumba (1987) conducted an interpretive study to investigate the use of audio-cassette tapes as a means of providing feedback on written work. Although a small study population prevented robust conclusions from being drawn, they found audio feedback to be well received by study participants. This was largely based on students’ perception that written feedback was often “too cryptic” to be followed or easily interpreted in subsequent learning tasks. The increased level of personalisation possible in audio, coupled with variations in voice intonation, assisted in fostering student motivation and interest in their learning, although some students found the lack of a written record for later reference to be problematic. Additional staff benefits, such as time savings and avoiding the ‘stress’ of drafting and structuring written arguments, were also reported.

Subsequent research, which has sought to exploit recent advances in audio technology, has in general reported findings not dissimilar to those of Cryer and Kaikumba (1987). Merry and Orsmond (2008) conducted a small study explicitly investigating student perceptions and use of formative audio feedback delivered as MP3 files via email. Students were found to respond positively to audio feedback principally because it was easier to interpret than written feedback, was more personal and was more detailed. In a wide-ranging qualitative study led by Rotheram (2009), students were “overwhelmingly positive” about audio feedback and appreciated its personal nature. Such results have been further corroborated in the literature. Ice et al. (2007) found audio feedback to invoke perceptions that tutors “cared”, a quality attributable to the personalised nature of the feedback and the way in which this invoked improved student engagement with their learning. Similarly, other studies have found that students consider audio feedback to shorten the “social distance” between them and lecturers (Morra and Asis 2009). It is noteworthy that although the personal nature of audio feedback is generally found to be a positive aspect, recent exploratory research conducted by Fell (2009) noted that it was ineffective in enhancing the learning of postgraduate students owing to the lack of student–tutor dialogue. Fell’s findings appear to be atypical and may be attributable to the small number of participants used (n = 6). Nevertheless, the need for a student–tutor dialogue is a finding that warrants further exploration and is one that – for the purposes of this study – we attempt to control.

Using audio approaches to capture greater feedback detail and to turn feedback around more quickly has been identified by many as a potential advantage of the approach. With the exception of Ice et al. (2007), who found that feedback time reductions of almost 75% were possible in particular circumstances, suggestions of time efficiencies have been mixed and based largely on anecdotal evidence (for example, Cryer and Kaikumba 1987; Bird and Spiers 2009; Rotheram 2009).

The generally positive student perception of audio feedback in much of the literature appears to be attributable to the fact that it is better placed to meet the criteria of ‘good feedback’. Gibbs and Simpson (2004) propose a series of conditions that assessment feedback should meet in order to maximise student learning. These conditions are varied but include several that are directly relevant to formative assessment, such as the need for feedback to be easily understood by the student, to be sufficiently detailed, to be timely and to be received by the student “when it still matters”. If audio feedback can better meet the criteria of ‘good’ formative feedback, then it can be hypothesised that it offers better opportunities for improvements in – or increased – student learning. Few researchers have attempted to measure student learning or to better understand audio feedback efficacy. Sipple (2007) conducted a qualitative study focusing particularly on student attitudes and perceptions of receiving audio feedback. Audio feedback was found to positively influence student motivation and revision behaviour, self-confidence and student–tutor relationships. Sipple further concluded that audio feedback improved overall student learning, although this conclusion was speculative, resting on observed improvements in revision behaviour and the benefits these might be expected to stimulate in student learning. Improving our understanding of this aspect motivates the current research study.

Methodology

Aims

Recall that our current understanding of audio feedback efficacy in student learning remains limited. The aim of this study is therefore to formally evaluate the efficacy of audio feedback technology in formative assessment, and hence in students’ learning, as well as to achieve a better understanding of students’ feedback behaviour post-delivery. Since this is an under-researched area, the research is largely exploratory in nature. Three hypotheses were formulated:

  • H1: Audio feedback can better meet the conditions of ‘good’ formative assessment feedback, thereby constituting better feedback.

  • H2: Audio feedback can be more effective than written feedback in producing improvements in student learning and assessment scores.

  • H3: Audio feedback technology can better facilitate the efficient creation and delivery of feedback to students.

A significant exploratory goal is an improved understanding of students’ use of audio feedback – for example, their listening habits, the role of portability, and whether students re-use audio feedback more than written feedback – so as to inform our conclusions and future research.

Participants

The study participants were drawn from a first-year cohort studying the BA (Hons) Business Management and Information degree course. This business-oriented degree provides students with typical business skills, but is distinctive in its emphasis on technology and information in business, particularly in areas pertaining to web technologies, e-business, information systems and management. The study was conducted during semester one of the academic year 2009/10, for a module on web technologies.

Twenty-four students agreed to participate in the study (Table 1). Although the available population was small (n = 26), the sample (n = 24) represented 92% of the entire cohort. Participants included male (n = 14) and female (n = 10) students, predominantly within the 18–24 age bracket (n = 22). The majority were registered for full-time study at the institution (n = 23). Excluding international students without such points, the mean UCAS point score of the sample upon beginning the degree course was 256 (standard deviation [SD] = 37), indicating a sample of broadly similar academic characteristics. It can be added that the participants began the degree course with a general aptitude for the business and information technology (IT) disciplines.

Table 1. Demographic details of study participants.

‘Voice emails’

To deliver audio email feedback, Wimba Voice™ 6.0 (Wimba 2009) was installed within the virtual learning environment. Wimba Voice™ is a web-based tool capable of being bolted onto a variety of virtual learning environments and provides a series of audio tools, such as a podcaster and voice-enabled discussion fora. It also enables the creation and delivery of ‘voice emails’. These are essentially voice messages that can be recorded and communicated to students using a familiar email/tape-recorder interface, all within a Java-enabled web browser (as in Figure 1). It is also possible to send written emails with audio annotations.

Figure 1. Creating a ‘voice email’ using Wimba Voice™.

Using voice emails for this study was advantageous for several reasons. Firstly, the audio file is not attached to the email, thus obviating the MP3 file size issues normally associated with email delivery (for example, Merry and Orsmond 2008; Rotheram 2009). Audio is instead saved to a local server. The recipient of a voice email is provided with a hyperlink to follow and is then offered the opportunity of streaming the audio within a web browser or downloading the message as an audio file. Recipients can listen to or download the audio as many times as they wish.

Secondly, the voice email enables students to reply with their own voice emails in much the same way that a reply might be sent to a conventional email. The importance ascribed to fostering a student–tutor dialogue in formative feedback has been well noted in theoretical work (Nicol and Macfarlane-Dick 2006) and corroborated more recently in audio feedback research (Fell 2009). The use of voice emails therefore contributes to a student–tutor dialogue in a way that other audio feedback delivery methods do not, and its use here is an attempt to control for a known limitation of audio feedback. The majority of IT laboratories used by the cohort are equipped with headset microphones as standard.

Procedure

The research was conducted with the student participants during the second half of semester one. The summative assessment for the course module required the submission of an XHTML report. The module design was modified to incorporate a formative assessment point mid-way through semester one. This entailed the submission of an XHTML report plan, thus providing tutors with feedback on student learning progress and understanding.

To control for varying levels of student information and communications technology (ICT) efficacy, a pre-test orientation session with Wimba Voice™ was delivered to all students during the week preceding formative feedback delivery. This session covered how to access voice emails, download them and reply to them. A demonstration video was also created and posted on the relevant module section of the virtual learning environment. After submitting their formative assessment, students were randomly streamed into two feedback groups: a written group (control) (n = 12) and a voice (email) group (treatment) (n = 12). Details are provided in Table 2.

Table 2. Details of participants in streamed feedback groups.

To minimise variability in the marking of the formative submission, module tutors agreed marking criteria and, where possible, attempted to incorporate aspects of Nicol and Macfarlane-Dick’s (2006) seven principles of good formative feedback. In line with formative feedback practice, no marks were attached to students’ formative assessment submissions; however, for the purposes of the current research, a mark was recorded by tutors based on the agreed marking criteria. This mark remained undisclosed to students.

Formative feedback was delivered to students within a week of submission. Students streamed into the treatment group received voice email feedback; students streamed into the control group received their feedback as an MS Word file email attachment. The time taken to generate and deliver feedback was recorded by tutors. This was measured from the moment the tutor began perusing the submission to the end of the feedback creation process (i.e. delivery to the student), so as to capture the total time that a tutor might invest in providing formative feedback.

In the final week of the semester, students were required to submit their summative assessment (XHTML report). Summative assessment submissions were marked and written feedback delivered to all students. Student performance in the summative assessment was recorded for subsequent analysis.

Research instruments

Student participants in both the control and treatment groups received a web-based survey instrument designed to elicit data pertaining to feedback attitudes, initial use, reception and effect on learning. The survey was distributed to students one week after formative feedback was delivered and was administered during an IT laboratory session. The web-based survey consisted of three distinct sections. Section one was designed to capture simple demographic data (Table 1), while section three captured descriptive data on the extent of formative feedback use, student ICT access and device ownership and use, and feedback preferences.

To determine how well formative feedback achieved its purpose and to detect the effect of formative feedback on student learning, the design of section two of the web-based survey instrument was informed by Nicol and Macfarlane-Dick’s (2006) feedback model and the feedback conditions proposed by Gibbs and Simpson (2004). Students were required to indicate their responses to a series of statements using a five-point Likert scale, ranging from ‘strongly agree’ (five) to ‘strongly disagree’ (one) (see Table 3). These statements mapped to the above-noted models.

Table 3. Measures of central tendency and Mann–Whitney U tests between groups for section two responses.

Semi-structured interviews were conducted with a sample of the student participants (n = 10) in the final week of the semester. These interviews were designed to gather rich data on audio feedback use and perceptions, and to better understand the role of formative audio feedback in student learning. Interviews were administered by a member of the research team not involved in module teaching. Interviews were sound recorded, transcribed and then uploaded into QSR NVivo 8 for content analysis, coding and subsequent analysis. Coding was undertaken using Holsti’s (1969) methodologies for content analysis and category creation.

Results

Survey instrument

Table 3 sets out the results from section two of the survey instrument. With such ordinal data it is conventional to consider median results, although mean results have been included for completeness. The largely favourable nature of responses indicates that both groups were generally satisfied with their feedback, whether it was in written or voice form. Notable median differences in group responses can be observed for statements I, J and K, indicating that students in the voice email group found their formative feedback to better meet good formative feedback criteria in terms of detail and being understandable. A notable difference can also be found for statement L, revealing that the voice email feedback failed to inspire and motivate students to the same degree as written feedback. A Mann–Whitney U test was conducted to detect significant differences between group responses for all questions (Table 3). Differences in results for questions J, K and L were noted as being statistically significant (p < 0.05), further corroborating some of the above-noted differences. It is worth noting that positive mean responses for the voice email group can be observed for many of the question statements included in Table 3.
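
For readers wishing to replicate this style of analysis, the short Python sketch below illustrates how per-statement Likert responses from two groups of 12 can be compared with a two-sided Mann–Whitney U test using SciPy. The response values are illustrative placeholders, not the study data, and the sketch is not the authors’ actual analysis procedure.

    # Sketch of the per-statement group comparison. The Likert responses
    # below are hypothetical placeholders, not the study's data. Responses
    # are coded 5 = 'strongly agree' down to 1 = 'strongly disagree'.
    import numpy as np
    from scipy.stats import mannwhitneyu

    voice = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5]    # treatment group (n = 12)
    written = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3, 4, 3]  # control group (n = 12)

    # Medians are the conventional measure of central tendency for ordinal
    # data; means are included only for completeness, as in Table 3.
    print("median (voice):", np.median(voice), "median (written):", np.median(written))
    print("mean (voice):", round(np.mean(voice), 2), "mean (written):", round(np.mean(written), 2))

    # Two-sided Mann-Whitney U test for a difference between the groups.
    u_stat, p_value = mannwhitneyu(voice, written, alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.3f}")  # difference significant if p < 0.05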

Recall that section three of the survey instrument captured descriptive data on the extent of formative feedback use, student device ownership and use, and feedback preferences. No member of the voice group reported using the reply functionality of their voice emails to engage in a tutor–student dialogue, with 75% (n = 9) indicating that they found the feedback to be sufficiently clear and no clarification was required. The remaining 25% (n = 3) reported a preference for seeking clarification from a tutor in person, a result matched in the written group; however, only 25% (n = 3) of the written group reported finding the feedback to be sufficiently clear. Indeed, 17% (n = 2) also replied in an email to ask further questions of their written feedback. One student failed to recall whether they had responded to their written feedback or not.

Only 50% of the voice group (n = 6) reported saving their audio feedback to a device for subsequent listening or re-using it after delivery. Data captured by the survey instrument (Table 4) indicate a greater proclivity to re-use written feedback, rather than audio. It should, however, be noted that the survey instrument was administered shortly after formative feedback was delivered and therefore failed to capture re-use behaviour in the week preceding summative assessment submission.

Table 4. Re-use of formative feedback after delivery.

Student participants reported wide ownership of a variety of mobile technologies capable of audio-file playback (Table 5), perhaps attributable to the ICT nature of the degree course they were studying. A quarter (25%) reported owning a smartphone (e.g. Blackberry™, iPhone™); far more owned a mobile phone capable of MP3 playback (79%), an MP3 player (79%) (e.g. iPod™, Creative ZEN™) or a laptop (92%).

Table 5. Summary of mobile device ownership characteristics of student participants.

Academic performance data

To evaluate students’ academic performance and measure the potential influence of formative voice email feedback on student learning, the academic performance of both groups was analysed. Students’ performance in the formative and summative assessments is set out in Table 6 and Figures 2 and 3.

Figure 2. Student performance in formative and summative assessment (written).

Figure 3. Student performance in formative and summative assessment (voice).

Table 6. Student performance in formative and summative assessments of written and voice groups.

Student performance in the formative assessment was generally poor, although coincidentally both groups had an identical mean performance (Table 6). Performance in the summative assessment (after experimental treatment) was better, with the written group performing better, although an unpaired two-tailed t-test at p ≤ 0.05 revealed no significant difference between group performances (t(22) = 0.43, p = 0.67). It is nevertheless noteworthy that student performance in the voice group was less dispersed around the mean (SD = 9.5; R = 30).

Whilst the graph profiles provided in Figures 2 and 3 suggest that students’ performance in the formative assessment was replicated in the summative assessment, it is possible to observe learning gains in the performance of voice group participants in Figure 3 that are not observable in the written group. However, this observation does not appear to be borne out by the mean percentage learning gains achieved by students (Mvoice = 24.34; Mwritten = 26.34) and was not corroborated by an unpaired two-tailed t-test at p ≤ 0.05 on the individual learning gains achieved by student participants (t(22) = 0.53, p = 0.6).
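
To make the performance analysis concrete, the following sketch shows how the unpaired two-tailed t-tests and learning gains reported above could be computed in Python. The marks are hypothetical placeholders (the study’s raw scores are not published), and ‘learning gain’ is taken here as the simple difference between summative and formative percentage marks, which the paper does not define precisely.

    # Sketch of the between-group comparison and learning-gain analysis.
    # All marks below are hypothetical placeholders, not the study data.
    import numpy as np
    from scipy.stats import ttest_ind

    formative_voice = np.array([35, 40, 28, 45, 38, 42, 30, 36, 44, 39, 33, 41])
    summative_voice = np.array([58, 62, 55, 68, 60, 65, 52, 59, 70, 63, 56, 66])
    formative_written = np.array([36, 41, 27, 46, 37, 43, 29, 35, 45, 40, 32, 40])
    summative_written = np.array([60, 70, 50, 75, 58, 72, 48, 62, 78, 66, 54, 68])

    # Unpaired two-tailed t-test on summative marks; with n = 12 per group
    # the degrees of freedom are 12 + 12 - 2 = 22, matching the reported t(22).
    t_stat, p_value = ttest_ind(summative_voice, summative_written)
    print(f"summative: t(22) = {t_stat:.2f}, p = {p_value:.2f}")

    # Individual learning gain: summative minus formative mark per student.
    gain_voice = summative_voice - formative_voice
    gain_written = summative_written - formative_written
    print("mean gain (voice):", gain_voice.mean(), "(written):", gain_written.mean())

    # Unpaired two-tailed t-test on the individual learning gains.
    t_gain, p_gain = ttest_ind(gain_voice, gain_written)
    print(f"gains: t(22) = {t_gain:.2f}, p = {p_gain:.2f}")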

Interview data

Iterative analyses of the interview data in QSR NVivo derived a hierarchical coding taxonomy (Table 7). This denotes the principal themes identified from the data. The taxonomy includes two super-ordinate categories (voice and written), each including a series of subordinate classes. The scope of these classes is delineated in Table 7, along with indicative supporting quotes from the data, the number of interview sources in which each theme code was discussed, and the total references to each theme code across all interviews. Further discussion of these data is incorporated into the discussion section.

Table 7. Coding taxonomy derived from interview data.
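
As an illustration of how the ‘sources’ and ‘references’ columns of such a taxonomy are tallied, the sketch below counts, for each theme code, the number of distinct interviews in which it appears and the total coded passages across all interviews. The coded segments are invented examples, not the study’s interview data.

    # Sketch of tallying a coding taxonomy's 'sources' and 'references'
    # columns (as in Table 7) from coded interview segments. The segments
    # below are invented examples, not the study's data.
    from collections import defaultdict

    # (interview id, theme code) for each coded passage.
    coded_segments = [
        ("student3", "A.5.1.1"), ("student3", "A.5.1.1"), ("student5", "A.5.1.1"),
        ("student6", "A.5.1.1"), ("student3", "A.5.1.6"), ("student6", "A.5.1.6"),
    ]

    sources = defaultdict(set)     # interviews in which each code appears
    references = defaultdict(int)  # total coded passages per code

    for interview, code in coded_segments:
        sources[code].add(interview)
        references[code] += 1

    for code in sorted(references):
        print(f"{code}: sources = {len(sources[code])}, references = {references[code]}")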

Comparison of time requirements

To minimise the influence of varying voice characteristics and intonation on students’ perception of voice email feedback, all audio feedback was delivered by a single member of the teaching team. To enable comparisons to be made between the time taken to deliver audio and written feedback created by different tutors, a sample of submissions (n = 12) was taken from another cohort to provide ‘dummy’ feedback for benchmarking purposes (Table 8). Data are provided in decimal time and minutes/seconds. The time taken for the two tutors to complete the dummy feedback for the same submissions was generally similar and did not differ significantly for either voice emails (t(10) = 1.52, p = 0.16) or written feedback (t(10) = −0.61, p = 0.56).

Table 8. Benchmark timings for delivery of ‘dummy’ audio and written feedback.
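
For completeness, the sketch below shows one way of handling such timing data: converting decimal minutes to a minutes/seconds display and testing for a difference between the two tutors’ per-submission times. The timings are invented placeholders, and the paper does not state whether the benchmark test was paired; a paired test is shown here because both tutors marked the same submissions, but this is only one plausible reading.

    # Sketch of the benchmark timing analysis. The per-submission marking
    # times (decimal minutes) are invented placeholders, not the study data.
    from scipy.stats import ttest_rel

    def decimal_to_min_sec(t):
        """Convert decimal minutes (e.g. 7.75) to a 'mm:ss' string."""
        total_seconds = round(t * 60)
        minutes, seconds = divmod(total_seconds, 60)
        return f"{minutes}:{seconds:02d}"

    tutor_a = [6.5, 7.25, 5.75, 8.0, 6.0, 7.5, 6.25, 7.0, 5.5, 6.75, 7.25, 6.0]
    tutor_b = [7.0, 6.75, 6.25, 7.5, 6.5, 7.0, 6.0, 7.25, 6.0, 7.0, 6.5, 6.25]

    print(decimal_to_min_sec(7.75))  # -> 7:45

    # Paired two-tailed t-test on the same submissions marked by both tutors.
    t_stat, p_value = ttest_rel(tutor_a, tutor_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")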

The time requirements for generating voice email and written feedback for the experimental sample are provided in Table 9. The time requirements for voice email feedback were significantly smaller, revealing that audio was almost twice as fast as written feedback. Using audio also appears to promote less variability in the amount of marking time spent per student submission, as indicated by the reduced data dispersion (i.e. SD and R).

Table 9. Time requirements for delivering voice and written feedback.

Discussion

This study aimed to investigate and evaluate the efficacy of audio technologies in delivering formative feedback and its ability to meet recognised formative feedback models, and to explore its influence on student learning. Despite the groups performing similarly in the formative assessment, and despite indications from the survey instrument and interview data that voice email better achieved certain conditions of ‘good’ formative feedback (supporting our first hypothesis), the written group actually performed better in the summative assessment, although this difference was not significant. Interview participants noted an increased desire to re-use audio as opposed to written feedback and revealed a clear preference for audio owing to the fact that it was easier to understand, more detailed and more personal. The results from our analysis of students’ summative assessment performance were therefore not intuitively anticipated and were disappointing. Whilst the generally high level of data variability and small participant numbers may have contributed to this finding, we nevertheless have to reject our second hypothesis in this instance. However, the large time efficiencies recorded in our study verify our third hypothesis and help to clarify conflicting evidence on the time requirements for generating audio feedback. This is an encouraging finding, as it supports the view that audio feedback provides improved opportunities for adhering to good pedagogical practice by better enabling formative assessment to be embedded within curriculum design.

The survey instrument elicited generally positive data in favour of audio feedback, some of which were statistically significant and much of which was corroborated by interview data (A.5 and subclasses). In particular, students found the audio feedback to be clearer and easier to understand and interpret. The literature on assessment feedback notes that it can often be difficult for students to interpret and decode feedback, owing to the use of language and jargon (e.g. “This is insufficiently critical”). It is suggested that this ‘information transmission’ approach to feedback delivery causes confusion in learners, as they are consequently unsure how best to correct their learning behaviour (Nicol and Macfarlane-Dick 2006). It is encouraging that audio was more effective in providing clear and understandable feedback, and this provides an indication that audio feedback could positively enhance the student learning experience. Interview data (coded at A.5.1.1 – Clarity) tend to support this observation and reveal the increased detail of audio feedback as being the facilitator of such clarity:

I thought [the tutor] put the message through very clearly and concisely. He spoke slowly enough to be able to understand it easily. But also not so it came across as too slow and jokey, if you know what I mean? There was more emphasis on certain words which you don’t get from the written feedback, so you know … And also probably the order of the feedback may have been in order of relevance for you to look at … I think as well. (Student 3)

I thought it was better than getting written feedback because sometimes you can’t read people’s writing either, whereas with the listening one you can understand it a bit more. I thought it was quite detailed. Because he said that I should put in some other bits because there was too many parts and stuff, and that I should put in other bits. And other bits that I should think about writing and stuff. I found it dead good […] It’s more modern isn’t it. Because sometimes people have got dead scribbly handwriting and then you’ve got to go back to them, and ask “what does this say? I can’t read it”. Whereas, with the listening one it’s easier to understand unless they have a really strong accent, but I could listen to mine perfectly. (Student 5)

A lot of the time with written feedback I can get confused with what they actually mean by it, whereas with audio I think they come across more understanding. I think maybe by saying it they can explain it better than writing it sometimes. I find that useful to gather the feedback. A lot more detailed than written feedback. I think when they say it they can understand what they have just said may not have made sense, and they can expand on it more. (Student 6)

Nevertheless, statistically significant differences in group responses for statement L in the survey instrument (“The feedback helped to increase my interest in the module I am studying”) also revealed that the potential of audio feedback to better engender interest and motivation among students in their subject of study may have been overstated by previous research (Cryer and Kaikumba 1987; Ice et al. 2007). The results from this study suggest that written feedback better inspired motivational beliefs in student participants. Similarly, inspiring motivation and interest in the topic of study were not themes that emerged strongly from the interview data. Students often reported positively on the personal nature of the feedback (A.5.1.6) and its ability to emulate face-to-face interactions (A.5.1.5), but few associated this with improvements in enthusiasm. Some also reported feelings of engagement (A.5.1.4) and “excitement”, but this appears to have been attributable more to the novelty of receiving audio feedback than to an inherent ability of recorded speech to be engaging.

As noted above, the data indicate an overall preference for audio feedback, which reflects the findings of existing research and was anticipated. It is nevertheless noteworthy that although the survey and interview data are largely supportive of this view, data from the latter uncovered several students who expressed a preference for written feedback (B.1). Of the students who explicitly expressed a preference for audio feedback, four also articulated a desire to receive their feedback in both audio and written formats in future (A.3). Interrogation of the qualitative data indicates that two students identified issues relating to “referring back” (A.4.1) as motivating this view. Referring back to specific passages of feedback, or refreshing on its key points, is certainly simpler with written feedback, whether by scanning or by searching the text. This view links with the findings of Cryer and Kaikumba (1987) and was further corroborated by Students 4 and 8, who stated:

[O]ne thing that I prefer about the written feedback is where I can make a list and tick stuff off … improve my work as I go along and check what I’ve done.

[Written feedback can be useful] because it’s there in front of me when I’m doing my work. Whereas the audio you have to sit at the computer and listen to it.

Even students favourably disposed to audio feedback remarked on the lack of flexibility sometimes afforded by audio approaches:

I was quite impressed with it […] I found it quite useful that you actually listen to someone and they sort of get their point across, rather than it being written down. Although, I did find that […] when you’re in the library you can’t go and listen to it again, whereas you could have read it again. That’s, like, the only problem I’ve found. But other than that, it’s really good. (Student 7)

Interestingly, many students stated a preference for not downloading audio feedback to another device, preferring instead to stream the audio – an unusual finding given the wide ownership of mobile devices by student participants (Table 5) and the increased study flexibility that downloading would have afforded. Instead, numerous students noted that they preferred to leave the voice email within their email software and revisit it when necessary. Students 4 and 6 are indicative:

No, I didn’t download the file. […] Not sure if I would have. I just felt I could rewind it if I wanted to and could go back to my emails if I needed to.

I have not put it [the audio file] on my MP3 player or anything like that. I’ve just listened to it straight off the emails.

Even those students that saved the file did so to their laptop or to the university networked drive, and not to a mobile device. As Students 3 and 2 noted:

I initially listened to it through streaming because I wasn’t at my own computer at the time. So, I didn’t have anywhere to save it to. Since then I’ve saved it to my laptop so I can use it for easier access.

I put it on a USB stick and listened to it at home. When I plug it in it goes straight to Windows Media Player and then it’s easy […] With written [feedback] every time I have to [find] the papers to read but with the audio it’s like it’s straight on the computer … you just log on and listen to it. So I don’t have to carry anything around or misplace it.

Although viewed positively by students and enabling a level of portability (A.2), such ‘file saving behaviour’ immediately places restrictions on the level of access possible within particular learning contexts, as Student 7 notes above (e.g. at the library or IT laboratory), and for this reason remains much the same as relying on the streaming of audio via email. Promoting m-learning was a motivation behind using the audio feedback format, and the expectation was that students would in most cases save the file to a mobile device, thus enabling quick and flexible access to their feedback. This finding may therefore appear to constitute strange personal information management (PIM) behaviour on the part of our student participants.

Email has been found to be an important PIM tool in the personal archiving of documents and in task management (Whittaker, Bellotti, and Gwizdka 2006), and studies within information science note that an increasingly popular strategy in PIM is actually “to do nothing” (Bruce, Jones, and Dumais 2004). This is normally because saving and filing information can prove cognitively onerous and, when required to re-find personal information, most users are often successful anyway. It could therefore be suggested that our student participants chose in many cases not to save the file for fear of not being able to re-find it at a later date and instead preferred to do nothing, since the voice email could always be re-found (via advanced searching or file browsing) for future study tasks. A potential contributory factor explaining this PIM behaviour may be related to the clarity and detail of the audio feedback, which may have been such that it reduced – at least initially – students’ perceived need to revisit it at a future date. ‘Doing nothing’ may therefore have been considered by many participants to be the most appropriate form of behaviour. Feedback clarity also appears to have obviated students’ need to enter into a student–tutor dialogue about their feedback.

Although only a small number reported re-using their audio feedback in the survey instrument, interview data indicated that more students re-used their audio feedback as the summative assessment deadline approached, as the comments of Student 3 illustrate:

I’ve received feedback three times now from different modules, and with written feedback I tend to access it once when I first get the email, and then normally just once more when I am doing the actual report or assignment. So far, with the audio feedback I’ve listened to it a few times already and I even haven’t started the report.

This increased level of audio re-use is consistent with the findings of Sipple (2007) and Ice et al. (2007) who, in different ways, suggest that increased feedback use will result in greater feedback application in learning tasks that, in turn, will stimulate learning gains among students. Our findings on academic performance and group learning gains do not appear to corroborate their assumptions in this instance. It is worth noting that Ice et al. (2007) also attempted to examine the degree to which feedback was applied by students in future work.

Conclusions

This exploratory study aimed to investigate and evaluate the efficacy of audio technologies (voice email) in delivering formative feedback, and its influence on student learning. It was motivated by the potential for time efficiencies in feedback delivery, by the potential to address the lack of formative learning in higher education and, in particular, by work hypothesising that audio feedback could be capable of improving student learning. Our results tend to support the view that audio feedback can be more efficient and better meets existing models of ‘quality’ or ‘good’ formative feedback, as posited by the literature, thus enhancing the student learning experience and better informing strategic policies with respect to assessment practice in higher education. Qualitative data provided useful insights into students’ perceptions of audio feedback and have improved our understanding of how it is used and re-used by students. However, despite better conforming to formative feedback models and high levels of feedback re-use by student participants, we found no significant differences between groups in either the formative or summative assessment tasks.

Since the study is exploratory in nature, only a small number of participants were used. This is a clear limitation of the study design, and the resultant findings are therefore intended to be indicative rather than generalisable. Neither can we discount the possibility that students failed to engage with the assessment task at the level required, instead adopting a ‘surface learning’ approach. Future research should therefore seek to better understand the level of student engagement prior to formative assessment submission and study the learning behaviour of students in the weeks preceding summative assessment submission. Our interview data provide a useful basis for further qualitative work, providing a taxonomy of factors that influence audio feedback effectiveness. Future research should employ further qualitative techniques to better understand the varying nature of the feedback content delivered using audio and written methods and monitor resultant student learning. This would inform the development of a theoretical model to better guide assessment practice and maximise the effectiveness of audio feedback by practitioners. Replicating aspects of the study using larger cohorts is recommended to corroborate the quantitative findings derived from the survey instrument and from observing academic performance.

Acknowledgements

This research was conducted as part of the ExAEF Project and funded by the Higher Education Academy Subject Centre for Information and Computer Science General Development Fund (2009).

References

  • Bird A., Spiers A. Implementing Wimba: A few words in your ear. Paper presented at Wimba Connect 2009: Adventures in Collaboration, April 5–8, 2009, Scottsdale, AZ.
  • Black P., Wiliam D. Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice 1998; 5(1): 7–74.
  • Bone A. Designing student learning by promoting formative assessment. Paper presented at LILAC 2008: (Dis)integration … Designs on the Law Curriculum, January 3–4, 2008, University of Warwick, UK.
  • Bruce H., Jones W., Dumais S. Information behaviour that keeps found things found. Information Research 2004; 10(1): http://informationr.net/ir/10-1/paper207.html.
  • Cryer P., Kaikumba N. Audio-cassette tape as a means of giving feedback on written work. Assessment and Evaluation in Higher Education 1987; 12(2): 148–53.
  • Fell P. Sounding out audio feedback: Does a more personalised approach tune students in or switch them off? Paper presented at Audio Feedback: A Word In Your Ear 2009 Conference, December 18, 2009, Sheffield Hallam University, UK.
  • Gibbs G., Simpson C. Conditions under which assessment supports learning. Learning and Teaching in Higher Education 2004; 1(1): 3–31.
  • Holsti O.R. Content analysis for the social sciences and humanities. London: Addison-Wesley, 1969.
  • Ice P., Reagan C., Perry P., Wells J. Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks 2007; 11(2): 3–25.
  • Kervin L., Mantei J. Taking iPods into the field to capture and share teacher wisdom stories. Paper presented at the Australasian Society for Computers in Learning in Tertiary Education conference, November 30–December 3, 2008, Deakin University, Australia.
  • Merry S., Orsmond P. Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education 2008; 11(3): http://www.bioscience.heacademy.ac.uk/journal/vol11/beej-11-3.aspx.
  • Middleton A. Beyond podcasting: Creative approaches to designing educational audio. ALT-J, Research in Learning Technology 2009; 17(2): 143–55.
  • Morra A.M., Asis M.I. The effect of audio and written teacher responses on EFL student revision. Journal of College Reading and Learning 2009; 39(2): 68–82.
  • Nicol D.J., Macfarlane-Dick D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education 2006; 31(2): 199–218.
  • Rotheram B. Sounds good – final report. Leeds: Leeds Metropolitan University, 2009. http://www.jisc.ac.uk/media/documents/programmes/usersandinnovation/sounds%20good%20final%20report.doc
  • Rushton A. Formative assessment: A key to deep learning? Medical Teacher 2005; 27(6): 509–13.
  • Sadler D.R. Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy & Practice 1998; 5(1): 77–84.
  • Shute V.J. Focus on formative feedback. Review of Educational Research 2008; 78(1): 153–89.
  • Sipple S. Ideas in practice: Developmental writers’ attitudes toward audio and written feedback. Journal of Developmental Education 2007; 30(3): 22–31.
  • Sutton-Brady C., Scott K.M., Taylor L., Carabetta G., Clark S. The value of using short-format podcasts to enhance learning and teaching. ALT-J, Research in Learning Technology 2009; 17(3): 219–32.
  • Tynan B., Colbran S. Podcasting, student learning and expectations. Paper presented at the annual conference of the Australasian Society for Computers in Learning in Tertiary Education, December 3–6, 2006, Sydney, Australia.
  • Whittaker S., Bellotti V., Gwizdka J. Email in personal information management. Communications of the ACM 2006; 49(1): 68–73.
  • Wimba. Wimba Voice. 2009. http://www.wimba.com/products/wimba_voice/.
  • Yorke M. Formative assessment and student success. Paper presented at Improving Feedback to Students (Link between Formative and Summative Assessment), June 4, 2004, University of Glasgow, UK.