Making Movies: The Next Big Thing in Feedback?

Pages 1-14 | Received 12 Sep 2011, Accepted 23 Nov 2011, Published online: 14 Dec 2015

Abstract

Good quality, timely feedback is a key factor in helping students achieve their full potential. Increased class sizes have put significant strain on tutors’ ability to return work promptly without compromising feedback quality. In the current study, two screencasting technologies were used to produce audiovisual feedback. For essays, Jing was used, alongside a comparison sample marked with typed comments in GradeMark. For longer lab reports, Camtasia Studio 7 software was used, allowing editing and the inclusion of additional material. Screencasting essay feedback reduced overall marking time by up to 50% compared to the typed comments, whilst the edited screencasts took about the same time as manual marking, although the editing time could probably be reduced. The students’ responses to the audiovisual feedback were extremely positive, with evidence of deeper engagement with the feedback and greater understanding of the tutor’s comments. Screencasting should thus be seen as a potentially timesaving and useful format for delivering quality feedback.

Introduction

It is established wisdom that quality feedback, when used properly, will enhance a student’s performance (Higgins et al., 2002; Brown and Glover, 2006; Hattie and Timperley, 2007; Poulos and Mahony, 2008). As evidenced by recent UK National Student Survey (NSS) results (Unistats, 2011), however, the quality and timeliness of feedback are areas where many universities are failing to meet students’ expectations. Whilst some onus must fall on the students (feedback is hardly likely to be effective if the student does not bother to read it or take the advice on board; Ding, 1998; Lea and Street, 1998), we, as education providers, have an obligation to provide meaningful feedback within a reasonable timeframe (Gibbs and Simpson, 2004; Nicol, 2010).

Feedback quality is important, and there is often a gap between what the tutor and the student perceive as good feedback (Holmes and Papageorgiou, 2009; Lizzio and Wilson, 2008; Orsmond and Merry, 2011). Glover and Brown (2006) analysed the amount and type of feedback (e.g. content-related comments, such as identifying omissions and errors, and more general issues such as praise and the correction of grammatical errors) and judged the depth of comments given by multiple tutors at two different institutions. They concluded that much of the feedback given is of little use to the student because it is too topic specific and given after the completion of the topic or module. Students interviewed in the same study claimed (contrary to other studies; Ding, 1998; Lea and Street, 1998) that they did read the feedback, but often did not act upon it, either because of conflicting advice from different tutors or because the tutor comments did not explain in enough depth what was wrong and how it could be improved.

In response to the NSS results, a review of the feedback processes used across the Schools in the author’s institution was carried out as part of a JISC-funded project (STAF), whose other aims were to determine how technology could help tutors to improve feedback (Bostock and Street, 2011). In the author’s School, the current minimum requirements for feedback are comments written on the script accompanied by a generic summary tick-box sheet with space for general comments on strengths, weaknesses and how to improve. Staff are encouraged, but not required, to give typed rather than handwritten feedback to address issues of legibility. However useful a tutor’s comments might be, if they are illegible then the tutor’s time has been wasted and the comments are of no use to the student. Whilst it may not be practical to type on a hard copy of a student’s work, at least the summary sheet can be typed and returned (hard copy or electronically) to the student. When student work is submitted electronically, a logical extension is to type comments directly onto the student’s work, either using the editing features of word-processing packages or, if work has been submitted via TurnitinUK, by using the GradeMark facility to add comments to the work online. There are several advantages to using GradeMark. Firstly, comments can be typed in expandable speech bubbles, which is a clear improvement on writing comments all over a script. Secondly, the tutor can build up a custom comments bank, ideal for dealing with common problems specific to the piece of work being assessed. Additionally, there are standard comments addressing grammatical errors (which, unlike some instructors’ comments, actually explain the grammatical rule) and common stylistic issues. Whilst some tutors do not feel it is necessary to correct grammar, there is evidence that correction is useful, particularly for students who are writing in a second language (Evans et al., 2011). Finally, there is no need to download the files, and the students can see the assessed work online as soon as it has been released.

The author has been using GradeMark for several years and, without the physical constraints of having to write all over a script, the quantity and quality of the feedback provided have improved, as evidenced by student and peer comments on feedback evaluation questionnaires. The author’s GradeMark feedback was originally accompanied by a separate generic feedback sheet that was returned to students electronically via the module KLE page (Keele’s virtual learning environment). The intention was that students should read the GradeMark comments in conjunction with the feedback sheet. However, after several students queried their marks, it became clear that they had only read the feedback sheet and not the more extensive GradeMark comments. Indeed, the tracking tool on the module KLE page revealed that a significant number of the students had not accessed both forms of feedback. This issue was addressed in the current study by dispensing with the additional feedback form and typing the summary comments in GradeMark.

Anyone with a heavy marking load will know that providing extensive written or typed feedback is time-consuming. Whilst typing comments certainly addresses legibility issues, there is little (if any) timesaving involved. The aim in the author’s institution is to return feedback within three working weeks, although the timing of assessment submission towards the end of a module often means that work is marked and returned after the module has finished. Increasing student numbers make it harder for instructors to maintain the amount and quality of feedback and still meet the three-week deadline. For example, in the past few years the number of students on one of the modules described in the current study has doubled from 45 to 90, and this has led to the challenge of decreasing the marking time per script without compromising the quality of the feedback.

The spoken word conveys much more information than the written word in a shorter space of time, and vocal intonation can be used to stress important points. There are reports in the literature of tutors trialling audio feedback, using Audacity software to record commentaries that were converted to mp3 files and emailed to students. For example, Merry and Orsmond (2008) interviewed students who had received written and audio feedback on the same piece of work and found that the students perceived the audio feedback to be more useful than the written feedback. Some students annotated their work whilst listening to the recording, showing a deeper engagement with the feedback. Lunt and Curran (2010) report the results of a similar survey in which both students and staff were interviewed. The tutors noted that recording commentaries was significantly quicker than writing the comments. Moreover, the students were generally very impressed with the feedback, finding it more detailed, more personal and more helpful than other forms of feedback. It was also reported that students were more likely to listen to the audio feedback than read written feedback. The positive response of students to audio feedback is mirrored by McGarvey’s study (2010), in which chemistry students were given mp3 files as feedback on posters and lab diaries. In this instance, there were a few negative comments, with students saying that they felt intimidated by a voice coming out of their computer. Some students admitted to not listening to the feedback because they were scared that hearing their mistakes being corrected by a person would be more difficult to cope with emotionally than just reading the corrections (McGarvey, 2010).

An audio commentary may be a quicker and potentially more effective way of providing feedback, but it loses efficacy if the student does not have the work in front of them whilst listening to it. Screencasting could be the ideal solution to this problem. Essentially, screencasting software allows the user to record a real-time audio commentary over whatever is on the computer screen. This medium is beginning to enter mainstream education, although currently its main uses are for short instructional videos (Peterson, 2007), as an alternative to podcasts of lectures, or for supplementary teaching materials (Cann, 2007). Its use as a feedback tool is less well documented, with only a few appearances in the recent literature, including the pilot study that led to the current project (Hope, 2010; Fish and Lumadue, 2010). Screencast packages are available from a number of software developers, including TechSmith (Jing, Camtasia Studio, Snagit), Adobe (Creative Suite), DebugMode (Wink), Blueberry Software (BB Flashback) and CamStudio.org (CamStudio). They range in sophistication from simple screencast/screenshot products with limited output formats to fully editable packages that allow the user to create professional-quality movies in multiple formats. Likewise, they range from free to beyond the budgets of most education providers. Of the free packages that the author has tried (Jing, Wink and CamStudio), Jing was judged the most user-friendly and simple to use and, along with the reasonably priced Camtasia Studio, is one of the packages evaluated in the current study.

The main aim of the current study was to determine whether screencasting could be used to deliver high quality feedback for essays and lab reports, whilst reducing marking time compared to GradeMark feedback. This included an evaluation of two different screencasting technologies, Jing and Camtasia Studio.

Methods

Study cohorts

For the GradeMark vs. Jing study, 90 level-2 human genetics essays (1500–2000 words) were marked. Students were informed that a sample of the essays would receive audiovisual feedback instead of the standard GradeMark feedback, and were given the opportunity to opt out of either form of feedback. Sixty essays were marked using GradeMark and 30 were marked using Jing, with scripts allocated randomly to each group. It takes time to build up the common comments bank in GradeMark, meaning that the first few scripts take longer than later ones. The 2:1 GradeMark:Jing marking split was chosen to smooth out this front-loaded time bias in GradeMark, whilst still giving a large enough Jing-marked sample for comparison. Average time per script was calculated by counting the number of scripts marked per eight-hour marking day.
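For concreteness, a minimal sketch of this calculation is given below in Python; the helper name is illustrative rather than part of the study, and the example figures are those reported in the Results section.

```python
# Illustrative helper for the average-marking-time calculation described above.
def average_minutes_per_script(scripts_marked: int, marking_days: float,
                               hours_per_day: float = 8.0) -> float:
    """Average minutes per script, given whole marking days of fixed length."""
    total_minutes = marking_days * hours_per_day * 60
    return total_minutes / scripts_marked

# 60 GradeMark essays over four 8-hour days gives 32 minutes per script.
print(average_minutes_per_script(60, 4))  # 32.0
```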

For the Camtasia Studio study, level-1 Forensic Science genetics lab proformas (n=55) were marked. The proformas included questions on the methodologies used, the presentation and interpretation of the results, and a series of problems based on the background material. The students were given the opportunity to receive standard typed feedback if they preferred.

GradeMark

Essays were marked online using the GradeMark facility of TurnitinUK. As shown in Figure 1, the highlighter tool was used to select areas requiring a specific typed comment, which the student then accesses by clicking on the speech bubble. The number of typed comments per essay ranged from 6 to 15, averaging about 10. Instead of completing a separate generic feedback sheet, the final appraisal of the work was typed at the end of the work.

Figure 1 Screenshot of GradeMark Feedback. A screenshot (captured with Jing) showing the typed summary appraisal in blue typeface at the end of a student’s essay (1) and an example of a speech bubble (2) that refers to the yellow highlighted text (3). To access the comment, the student clicks on the speech bubble

Screencasting

Work was downloaded (with the students’ prior knowledge) to the author’s computer from TurnitinUK to enable offline marking. To circumvent the need to take notes as memory jogs, each piece of work was read and highlighted, and the audio commentary was recorded immediately in either Jing or Camtasia Studio (CS) whilst scrolling through the work in real time. The internal laptop microphone was used for the audio recording. In CS, if a mistake was made whilst recording, a silent pause of several seconds was introduced before repeating the part.

Jing: Software was downloaded from the TechSmith website, http://www.techsmith.com/download/jing/. Screencasts were saved in swf format and uploaded onto the module KLE page, with access restricted to the owner of the work. Students then viewed the movies online using the built-in Flash viewer (see Figure 2).

Figure 2 Screenshot of the swf viewer of a Jing-generated screencast. The screenshot (captured with Jing) of a Jing screencast shows areas of a student’s work that have been highlighted in yellow (1) to indicate the section of the work to which the commentary refers; (2) indicates the Flash viewer scroll bar, which shows the current position in the recording and can be moved manually by the viewer

Camtasia Studio: Software (Camtasia Studio 7) was provided by the STAF project. Each piece of feedback was recorded as a new camrec file, which was then edited as a Camtasia project (Figure 3). Two short recordings were saved in the Camtasia Studio library to allow sharing between projects: an introductory message and, in anticipation of students incompletely answering a particular question, a clip with three narrated PowerPoint slides (see Figure 3). After recording the screencast, the project was edited by inserting the introduction and, if necessary, the PowerPoint slides. Any mistakes were edited out, using the silent pause introduced at the recording stage as a tag in the audio track. The project was then rendered for web format (a suite comprising an mp4 file, an swf file and four accessory files) and uploaded onto the KLE. Only the mp4 and swf files were made visible to the students, giving them the choice of viewing the file directly online or downloading it for archiving.
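The step of picking out only the mp4 and swf files from the six-file render suite lends itself to a small script. The sketch below is hypothetical (the folder layout and function name are assumptions for illustration, not part of the study workflow):

```python
# Hypothetical sketch: select only the .mp4 and .swf files from a Camtasia
# web-format render folder, since only those two of the six output files
# were made visible to students.
from pathlib import Path

def files_to_upload(render_dir: str) -> list[Path]:
    """Return the .mp4 and .swf files from a rendered project folder."""
    keep = {".mp4", ".swf"}
    return sorted(p for p in Path(render_dir).iterdir() if p.suffix.lower() in keep)

# Example usage with a hypothetical folder name:
# for f in files_to_upload("rendered/project_001"):
#     print(f.name)
```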

Collection of data

For the essays, the feedback was returned a week after the completion of teaching, so students were emailed and asked to complete a questionnaire gathering both qualitative (freeform text) and quantitative (yes/no) responses. For the lab reports, the work was returned during the semester, so the questionnaire was handed out during a class test. Copies of both questionnaires are provided as supplementary files. Freeform responses were collated into a single Word document that was uploaded into Wordle (http://www.wordle.net/) to produce a word cloud.
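Wordle handles the counting itself, but the underlying step is a simple word-frequency tally. A rough Python equivalent (a sketch, not the tool used in the study; the sample text is a placeholder) would be:

```python
# Sketch of the word-frequency count that underlies a word cloud.
# Wordle (http://www.wordle.net/) was the tool actually used in the study.
import re
from collections import Counter

def top_words(text: str, n: int = 120) -> list[tuple[str, int]]:
    """Return the n most frequent words, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

# Example with placeholder responses:
print(top_words("yes very useful yes more personal yes", 3))
# [('yes', 3), ('very', 1), ('useful', 1)]
```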

Figure 3 Screenshot of the Camtasia Studio user interface. The screenshot (captured with Jing) shows a typical Camtasia Studio project in the process of editing. The student’s name has been blocked out to retain anonymity; (1) the clip bin contains the camrec file that is currently being edited; (2) the clip bin tab; (3) the library tab: selecting this tab opens the library of pre-recorded material that can be dragged into the video timeline (see 7 and 9 as examples); (4) highlighted words served as cues for verbal comments; (5) if less than full marks were awarded for a question, the mark was typed in red; (6) if full marks were awarded for a question, the mark was highlighted in yellow; (7) an example of where the feedback intro (from the library) was inserted into the timeline; (8) the cursor head shows the position in the timeline currently shown in the preview panel. The red and green tabs can be moved to mark the start and end positions of parts of the track that are to be deleted; (9) the insertion point of the PowerPoint slides; (10) the control bar can be used to move through the video in the preview panel

Results

GradeMark vs. Jing

Is audiovisual feedback quicker?

The 60 GradeMark essays were marked over four 8-hour days, giving an average marking time of 32 minutes per script (range 25–40 minutes). Previous experience had shown that the students did not look at both the GradeMark comments and the generic feedback form, so in this instance the summary comments were added at the end of the student’s work in GradeMark, as shown in Figure 1. This represented a minor time saving of about 2 minutes compared to previous years using GradeMark, when the separate form had to be uploaded to the KLE page.

For the Jing feedback, all 30 scripts were annotated and recorded in a single working day (8.5 hours), giving an average marking time of 16 minutes (range 9–20 minutes). The mark awarded was given at the end of the recording. In most instances, it was possible to complete the recording in a single take, with recording times ranging from 3 to 5 minutes. Several recordings had to be redone, either because they over-ran the 5-minute maximum recording time or because of noisy interruptions such as the doorbell or phone ringing. Occasionally a mistake would have taken too long to correct, and so the commentary was re-recorded. Upon completion of the audio recording, the files were saved in swf format for later upload onto the module KLE page. The files generated were quite large (3–6MB), and upload via a home broadband connection took several minutes per script, compared to a few seconds using the University’s high-speed network. Overall, using Jing to provide feedback for essays represented a timesaving of up to 50% compared to GradeMark.
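The difference in upload time is easily accounted for by upstream bandwidth. As a back-of-the-envelope check (the connection speeds below are illustrative assumptions, not measurements from the study):

```python
# Rough upload-time estimates for a 6 MB swf file. The connection speeds
# are illustrative assumptions, not figures measured in the study.
def upload_seconds(size_mb: float, upstream_mbps: float) -> float:
    """Seconds to upload size_mb megabytes at upstream_mbps megabits per second."""
    return size_mb * 8 / upstream_mbps

print(upload_seconds(6, 0.5))  # 96.0 s on a slow home upstream of the period
print(upload_seconds(6, 100))  # 0.48 s on a fast campus network
```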

What did the students think of the essay feedback?

As the marking was completed one week after the teaching finished, the students were contacted by email and asked to complete the questionnaire. There were very few (<10%) responses from the students who had received the GradeMark feedback; the general consensus was that GradeMark feedback was more accessible (being available online), detailed and more legible than handwritten feedback, and that the separate generic feedback sheet was not needed. For example, in response to the question “How would you rate typed feedback versus handwritten comments you have received in other modules?” one student wrote:

“I would rate it highly. Sometimes, handwritten comments are hard to read (which is understandable because the markers have a lot of papers to get through and need to do so quickly) but if we have made mistakes and cannot read how to correct them, this can cause problems. Sometimes it can also be difficult to tell which comments relate to which part of the work; however GradeMark eliminated all these problems.”

More of the students who had received audiovisual feedback responded to the survey (35%), perhaps a reflection of the ‘novelty value’. There was unanimous approval of this type of feedback, with students feeling that it was much more useful and detailed than other feedback they had received previously. Comments included:

“showed that the lecturer had read my work in detail.”

“it is more personal and you get a sense of where the marker is coming from.”

“… the lecturer can explain things easier, and show you exactly what aspects needed improving. There is not much space on the paper for an in-depth description of where you go wrong, but with Audiovisual feedback your attention can be brought to parts of your work and the lecturer can give a good, long detailed description of why it could have been better.”

Additionally, there was evidence that the students were using vocal cues to engage more deeply with the feedback.

“Audiovisual feedback is a lot better than written comments, as trying to decipher the context in which text is written can be difficult… AV allows you to put the comments into context, due to voice pitch, tone, emphasis on certain words etc.”

“the fact that you could hear someone talking to you and going through the essay with you … helps the feedback to be remembered.”

“more can be inferred from a comment when you can hear the particular way a marker is saying a comment in the audio.”

Some students had received audio feedback in other modules and, when asked if the screenshot was helpful, commented:

“It’s much more helpful having the essay in front of you being highlighted rather than having to guess from an mp3 which section is being referred to.”

“If it was just the audio file on its own I wouldn’t have found it so useful as it is harder to concentrate on just listening.”

The mark was given at the end of the recording, but no students commented that they would have liked the mark at the beginning.

Overall, the use of Jing to provide feedback for essays was a great success, saving time for the tutor and being judged by the students as better quality than handwritten feedback.

Camtasia Studio 7

Does Camtasia Studio save time?

Although Jing proved very successful for essays, there is a major drawback precluding its use for more substantial pieces of work: a five-minute maximum recording limit. Camtasia Studio 7 (CS) allows the user to make and edit longer screencasts and was used to provide feedback on 55 level-1 forensic genetics lab reports. Recordings were 6–9 minutes long and, after editing and insertion of the introduction and slides (see Methods), the completed projects ranged from 7 to 13 minutes. The total time per script from download to completion of editing was approximately 30 minutes, excluding rendering time. In previous years, the lab proformas took up to 35 minutes to mark, so the time saving here appears less impressive than with Jing. After editing, each project was rendered for web format and uploaded onto the KLE page. Rendering took up to 10 minutes per project, but it was possible to work on the next project whilst the previous one rendered. The mp4 files were up to 16MB in size.
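This overlap matters for the overall marking time. A simple model (a sketch assuming each roughly 10-minute render always finishes within the roughly 30 minutes spent marking the next script) illustrates why rendering added almost nothing to the total:

```python
# Sketch of the effect of overlapping rendering with marking. Assumes each
# render (10 min) completes within the time spent marking the next script
# (30 min), so only the final render adds waiting time.
def total_hours(scripts: int, mark_min: float, render_min: float,
                overlapped: bool) -> float:
    if overlapped:
        return (scripts * mark_min + render_min) / 60
    return scripts * (mark_min + render_min) / 60

print(round(total_hours(55, 30, 10, overlapped=False), 1))  # 36.7 hours sequential
print(round(total_hours(55, 30, 10, overlapped=True), 1))   # 27.7 hours overlapped
```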

What did the students think of the Camtasia Studio screencasts?

The work was returned during the semester and questionnaires were distributed during a class test. A total of 43 responses (78%) were collected. Students were asked a series of yes/no questions about their experiences with the audiovisual feedback and their responses are shown in Figure 4.

Figure 4 Student responses to yes/no options on the feedback questionnaire. The students (n=43) were asked whether they had received audiovisual feedback in other modules, if they had listened to it more than once, if they preferred audiovisual feedback to written and if the narrated screenshot was preferable to a simple audio track

Some students had received audiovisual feedback before, although the freeform responses led the author to suspect that some of them were confusing it with pure audio feedback. Almost half of the students said that they had listened to the feedback more than once, and all but three said that they preferred audiovisual feedback to handwritten. All the students who responded to the question about whether the screenshot was necessary, rather than just an audio commentary, said that it was.

When polled on which file format they had viewed, 25 students had used the swf file, 10 the mp4 file and eight had accessed both. Reasons for file choice included difficulty viewing one or other of the file types, wanting to have a permanent copy, and one viewing of the swf file being sufficient.

There were several freeform-answer questions, and a word cloud of the responses, created in Wordle, is shown in Figure 5. The predominant word is YES, affirming the positive responses to the question “did you find the feedback useful; did it help you understand where you had gained or lost marks?” Other commonly occurring words and phrases were “better/more understandable than handwritten”, “useful”, “helpful”, “easier”, “understandable” and “personal”.

Figure 5 Word cloud of freeform responses. A word cloud is a graphical representation of the number of times a word appears in a text, with the largest word representing the most common. The freeform responses were collated in a single Word document and a word cloud of the top 120 words was generated using the online Wordle tool (http://www.wordle.net/)

The freeform comments were similar in type to those for the level-2 essays, essentially picking up on the points that the feedback was more personal and helped the students understand their marks better. For example:

“my errors were better resolved and understood.”

“much more informative and personal …it felt like I had a personal tutor.”

“I feel confident about my grade and where it came from.”

A number of students made specific mention of the PowerPoint slides that had been included, for example:

“Liked how the PowerPoint slides were inserted too.”

“Slides were useful as it gave extra explanation.”

There were a few negative comments, mainly from the students who said they preferred handwritten comments. One student commented that they did not like to spend a lot of time on feedback, and another said that they were just happy to have got a good mark and that the type of feedback was irrelevant. One student claimed to have “found it a bit unsettling to hear someone talk about work when I was not face to face”. Unlike with the Jing screencasts, a number of students indicated that they would have liked to know the mark at the start.

Discussion

Screencasting can save you time

One of the primary aims of the current study was to determine whether screencasting software could be used to provide high quality feedback more quickly than written feedback. The starting point was a sample of essays marked using GradeMark to provide typed online comments, which also incorporated the elements of the generic feedback sheet that would usually be given separately. On average it took 32 minutes to mark each essay. The remaining essays were marked using Jing to record live screencasts, taking on average 16 minutes from download to completion. The subsequent upload time was only a few seconds per script when using the high-speed network. This represented a major time saving.

A different software package was used to record screencasts of a more substantial piece of work and when compared with the time taken to mark manually, the time to produce the fully edited screencasts was roughly the same. However, there is potential to reduce the editing time as discussed below.

Evaluation of the screencasting software

Two different screencasting technologies were used to provide audiovisual feedback. As a standalone package, Jing was easy to use, free, and ideal for short pieces of work. The output was a single swf file, which was simple to upload. Its main drawback was the five-minute recording limit, precluding its use for more substantial pieces of work such as lab reports or project write-ups. One solution would be to make several shorter recordings covering individual sections or chapters of the work. For something like a master’s or doctoral thesis this could even be an advantage, as the student could deal with the feedback in small chunks. Another disadvantage of Jing is that recordings cannot be edited directly, meaning that mistakes either have to be corrected by apologising and repeating the comment or by re-recording the commentary.

Camtasia Studio 7 presented a steeper learning curve for the author, although good online tutorials were provided via a direct link to the Camtasia Studio Learning Centre (http://www.techsmith.com/learn/camtasia/7). Once mastered, CS is a much more versatile tool, as one can edit out mistakes and insert other media such as PowerPoint slides, videos or webcam movies. Rather like the common comments bank in GradeMark, media can be pre-recorded and saved to the Camtasia library for use in multiple projects. This feature was exploited in the current study, and the students were very appreciative of the inserted PowerPoint slides. A disadvantage is the temptation to go overboard with the editing, for example by removing any vocal stumbles like “erms” and “uhms” (something the author was guilty of during this project!). Whilst this clearly gives a more polished monologue and may help reduce the length of the recording (and file size), it is very time-consuming, and the editing time could be reduced by leaving minor stumbles in. Camtasia projects can be rendered for many different output formats, and rendering may take up to ten minutes. The web format used in the current study comprised a suite of six files, all of which had to be uploaded. This was more complex and time-consuming (approximately three minutes per project) than the single-file upload for Jing-created screencasts. The file size (up to 16MB) may be an issue if the files are to be emailed to students, but was not a problem in the current study as the files were posted on the module KLE page.

Students see screencast feedback as high quality

The response from the student body was overwhelmingly positive, with many students indicating that they would like to receive other feedback in this way, suggesting that they feel that it is better quality feedback. The freeform responses indicated that the students were engaging with the feedback and processing the verbal intonations to gain a deeper level of understanding. Those who had previously received audio feedback thought that having the screenshot was much more beneficial.

There is evidence in the literature that students’ engagement with and perception of written feedback can depend on the tutor giving the feedback (Orsmond et al., 2005) and, as one student in the current study pointed out, this issue is likely to be the same with audio feedback.

“In the same way as some lecturers have poor handwriting, some may produce poor quality recordings. Yours were very clear, but others may not be, which effectively makes the feedback useless.”

The quality issue is two-pronged: first, the nature of the spoken comments and, secondly, the sound quality. The depth of the feedback will depend on the individual, but there are steps that one can take to ensure sound and vocal quality. Although the current study was completed using the internal microphone on the author’s laptop and only two students made any comments on the sound quality, the use of a good quality external microphone is recommended; indeed, the author has switched to using an external microphone in subsequent screencasts. To ensure the richness of vocal intonation (something the students clearly pick up on when listening to the recording), the tutor should try to speak as naturally as possible, as if talking to the student face to face. Scripted commentaries are not recommended, as they may sound rather stilted. Given how perceptive students are to tone of voice, they may sense anger or frustration in your voice; for example, one student in the McGarvey (2010) study was quoted as saying the tutor “… sounded constantly angry.” So, try to remain calm whilst recording, however frustrated you may feel.

What improvements can be made?

There were only a few negative comments from the students in the current study, one of which seems to be common to audio-only feedback (McGarvey, 2010): students can be a bit intimidated by a voice “coming out of the computer” at them. One solution may be to record the introductory message using a webcam, so that the student can see that there is a person behind the voice. It should be possible to record a generic introduction that could be saved to the library and used for multiple different types of work.

In both the Jing and CS screencasts the mark was given at the end of the recording. Although this was not an issue raised by students who received the essay feedback, several students receiving the lab report feedback said they would have preferred to hear their mark at the beginning. This may reflect the fact that the CS movies were generally much longer than the Jing ones. The response will be to inform students in the introductory message that they can fast-forward to the end to get their mark.

Does one size fit all?

In the Biosciences, as in other disciplines, assessed work comes in many different formats. For example, across the modules that the author delivers the assessments include reflective portfolios, essays, lab reports, literature reviews, project reports, short-answer tests, case study reports and critical reviews of papers. Anything that can be submitted in an electronic format could potentially be suitable for screencast marking using one or both of the technologies evaluated here. So, if audiovisual feedback is as good as the student surveys suggest, should we be trying to incorporate this type of feedback into our programmes? In many cases, audiovisual feedback could benefit both staff (in terms of timesaving) and students (in terms of more thorough feedback), but there are some instances where alternative forms of feedback will still be more appropriate either for the student receiving it or the staff member giving it.

First, there is the issue of students (and staff) with learning or physical disabilities. Audiovisual feedback, like audio feedback (Trimingham and Simmons, 2009), is likely to be particularly useful for dyslexic and visually impaired students (e.g. the text in the student’s work can be enlarged or its colour changed, and there are zoom-in features). However, it is less likely to be useful for hearing-impaired students, who may appreciate detailed written feedback instead. Likewise, a tutor with a speech impediment might feel much more comfortable providing written feedback than being compelled to record an audio commentary. Even able-bodied tutors might find the prospect of recording their feedback orally very stressful.

One must also be aware of one’s audience. If the majority of the students in the group are international students, one should avoid colloquialisms and speaking too quickly, and take care to enunciate the words clearly. Native speakers may find this rather exaggerated, but as one international student commented:

“Voice needs to be loud and clear especially for international student whose first language is not English.”

It is not only the student population that has international representation; teaching staff are often drawn from many different backgrounds. For a tutor with a poor command of spoken English (or whichever language is used in their institution), audio or audiovisual feedback may not be the most appropriate format, as the students may struggle to understand what is being said. However, one should not let an accent put one off trying audiovisual feedback: the author has a strong regional accent and this has never been an issue for either teaching or feedback purposes.

Conclusion

In conclusion, the author has shown that screencasting software can be used to produce high quality audiovisual feedback that is well received by students and that can substantially reduce overall marking time. Whether a quick commentary or a more sophisticated edited project is required, there are suitable screencasting technologies available, each with their own advantages and disadvantages. For the majority of students and tutors, audiovisual feedback would be a worthwhile exercise; however, there is a minority of students and staff for whom this type of feedback would not be ideal.

Acknowledgements

Thanks to the students on the 2010/11 modules LSC-20050 and CHE-10042 for participating in this study. Thanks to the JISC STAF project for providing the software licence for Camtasia Studio 7.

