
Use of Feed-forward Mechanisms in a Novel Research-led Module

Abstract

I describe a novel research-led module that combines reduced academic marking loads with increased feedback to students, and allows students to reflect on and improve attainment prior to summative assessment. The module is based around eight seminar-style presentations (one per week), on which the students write 500-word ‘news & views’ style articles (short pieces highlighting new results to a scientific audience). Students receive individual written feedback (annotated electronically on the work), plus an indicative mark, on their first submitted report. For subsequent reports, only a subset is marked each week, such that each student receives feedback on two further submissions. Simultaneously, they have access to written feedback on their peers’ reports (a total of two reports per student enrolled on the module). Students are encouraged to read and apply the general and specific messages from all the feedback to their own subsequent work (using it as feed-forward). At the end of the module, students self-assess their eight submissions and select the two they believe are their best pieces to put forward for summative assessment. Combining data from three cohorts, student attainment increased throughout the module, with higher marks for the two chosen reports than for the two marked reports or their first report. Students selecting previously unmarked reports also showed a greater increase in their mark for the module than students selecting reports that had previously received a mark. Module evaluation forms revealed that the students found access to feedback on others’ work helpful in shaping their own assignments.

Introduction

Feedback plays a key role in supporting student learning (Hattie et al. 1996, Black & Wiliam 1998), and there is a wealth of guidance identifying good practice in providing feedback to students, stressing particularly factors such as timeliness, utility and the provision of ‘feed-forward’ that allows students to improve on subsequent work (e.g. Higgins et al. 2001, Gibbs & Simpson 2004, Nicol & Macfarlane-Dick 2004, Orsmond et al. 2005, Duncan 2007). The Quality Assurance Agency’s (QAA’s) Code of Practice on Assessment of Students states ‘Institutions provide appropriate and timely feedback to students on assessed work in a way that promotes learning and facilitates improvement …’ (QAA 2006).

However, students do not always engage effectively with feedback (Glover & Brown 2006). While the majority of students claim to read and think about feedback (Bevan et al. 2008, McCann & Saunders 2009), fewer actively use it when preparing the next assignment (e.g. Orsmond et al. 2005, Glover & Brown 2006, Scott et al. 2011), perhaps because they do not appreciate feedback as feed-forward (Chanock 2000). For students to learn, they must act on the information provided in feedback (Nicol 2010), and thus we need to encourage students to interact with feedback comments in a more active way.

For staff, the provision of timely, good-quality feedback is time-consuming, and increasing student numbers make it more difficult for instructors to maintain the amount and quality of feedback and to meet internal deadlines (Gibbs & Simpson 2004, Hope 2011). Consequently, there has been a reduction in the frequency of assignments and in the quality, quantity and timeliness of feedback (Gibbs & Simpson 2004). Tutors often believe that students take little or no notice of feedback (Glover & Brown 2006, Crisp 2007), which contrasts sharply with student perceptions (Bevan et al. 2008, McCann & Saunders 2009). To reduce marking loads, coursework assessment is generally used as both formative and summative assessment (Brown et al. 1997, Weaver 2006), although some evidence suggests formative feedback becomes less valuable when combined with summative assessment (Atkins 1995, Brown et al. 1997). Formative assessment tasks give students an opportunity to practise assessment types, which (combined with effective feedback) can have large positive effects on learning (Black & Wiliam 1998), particularly for some groups of students (Gibbs & Simpson 2004).

Here, I describe a module with an assessment strategy that (a) provides effective feedback to students that can be immediately applied to subsequent assignments (i.e. acts as feed-forward) and (b) reduces academic marking workloads by reducing the number of assignments that receive feedback each week (detailed in the description of the module later). I assess the effectiveness of the assessment strategy, and demonstrate that the students can use the information provided in the feedback to improve on their own work.

Description of module and assessment strategy

‘Topics in Biodiversity and Evolution’ is a final-year undergraduate module (catering for approximately 32 students) designed to give students insight into the research process, and to link their knowledge and understanding of concepts developed during their degree programme to current research. Students are often unaware of the link between the research being carried out at their institution and the content of their courses (Jenkins et al. 1998, Brew 2006); this module highlights those links.

The module teaching strategy takes the form of a series of eight seminars presented by active researchers, including postdoctoral researchers (providing them with potentially valuable teaching experience; Åkerlind 2005). The seminars are based on their research, including two publications that are made available to the students approximately one week prior to the session. The choice of the research is up to the speaker, providing it has been published and is available to the students, but speakers generally present research published in the last five years. Each speaker is allocated approximately an hour to present their research, in a format and at a level that can be understood by the audience, and which may include some discussion with the students. Following the presentation, a further hour is available for a question-and-answer session with the speaker. To encourage participation in this session, 5% of the module marks are associated with asking questions and answering those posed by the speaker.

Speakers are asked to chair the question-and-answer sessions and ensure that all students have the opportunity to ask questions, but are free to choose the method used to do this. During the sessions, the module coordinator notes the number of questions asked by each student (the small number of students enrolled makes this possible). Marks are awarded based on the number of sessions in which the student asked a question (50% of the available marks) and the total number of questions asked, to a maximum of two per session (50% of the available marks); marks are rounded up to the nearest whole number. Thus, a student asking an average of two (or more) questions per seminar would be awarded the full five marks. To encourage students to ask questions, no judgement is made as to their quality.
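The participation-mark scheme above can be sketched in a few lines of Python. This is an illustrative reading, not the module's actual marking script: the text specifies only the 50/50 split, the cap of two questions per session, and rounding up, so the function name and the exact weighting below are assumptions.

```python
import math

def participation_mark(questions_per_session, max_mark=5, n_sessions=8, cap=2):
    """Sketch of the 5% participation mark described in the text.

    questions_per_session: one question count per seminar (length 8).
    Half the marks reward breadth (sessions with at least one question),
    half reward volume (total questions, capped at `cap` per session),
    and the result is rounded up to a whole number.
    """
    counts = [min(q, cap) for q in questions_per_session]
    breadth = sum(1 for q in counts if q > 0) / n_sessions
    volume = sum(counts) / (cap * n_sessions)
    return math.ceil((max_mark / 2) * breadth + (max_mark / 2) * volume)

# A student averaging two questions per seminar earns the full five marks:
print(participation_mark([2] * 8))  # 5
```

Under this reading, a student asking one question in half the sessions would score 2.5 × 0.5 + 2.5 × 0.25 = 1.875, rounded up to 2 marks.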

After each seminar, the students write a 500-word ‘news and views’ style article, aimed at a general scientific audience. Each piece is submitted two weeks after the seminar, such that each student submits a total of eight pieces of work. The first session of the module introduces the students to the key features of ‘news and views’ articles, and they are given a number of examples, together with the original paper on which they are based, to provide guidance on the coursework they are expected to produce. ‘News and views’ type articles feature in a number of scientific journals that the students should be familiar with, including Nature, Trends in Ecology and Evolution, and Journal of Experimental Biology.

The key feature of the module assessment strategy is that not all pieces of work are marked and given feedback. However, for their first submission, each student receives individual feedback annotated on to a paper or electronic copy of their work, together with a completed feedback grid (Table 1) and an indicative mark to allow them to more easily judge the quality of their work. This is returned within one week to allow them to apply the feedback to the next submission. For all subsequent submissions, I give feedback on only a subset of the work (the remainder is assessed on a pass/fail basis only; see later). I annotate a selection of reports electronically with feedback, using the comment function in Word, add the completed feedback grid (Table 1), award an indicative mark, and post the reports to the Virtual Learning Environment for all students to see. Prior to posting, reports are checked carefully to ensure anonymity, and any details that could potentially identify a student are removed. Students may also request feedback on a specific aspect of their report by asking a question identifying an area with which they would like specific help (McKeachie 2002). Written feedback comments focus on the strengths and weaknesses of the report, with specific suggestions for improvement, with the aim of enabling learning rather than judging achievement (Maclellan 2001). Again, this feedback is provided before the next submission is due. Students are expected to apply the feedback to subsequent reports, not amend previous ones.

Table 1 Feedback grid accompanying written comments. The grid is accompanied by a statement explaining the marking as follows: ‘The table below should give you an idea of the areas in which more work is needed and those that are done well in this report. Each statement below is given a score of 0 to 3 where 3 = excellent and 0 = not done at all. Note that not all of these may be relevant to all pieces of work, and the mark you receive for the work is not based entirely on these scores – the idea is to give you an indication of what is done well and where the work needs improvement.’

Reports to be marked are selected at random, with the condition that each student receives feedback on one of reports 2–4 and on one of reports 5–8. Thus, each student receives individual feedback on three reports (the first report, and two randomly selected reports), but has access to feedback on two further pieces of work by every student on the module. Over the course of the module, students therefore see examples of both high- and low-quality work. It is a condition of passing the module that seven of the eight submissions are of passing grade, so all submissions are assessed on a pass/fail basis and students are notified if they have submitted a piece of work that would fall below passing grade (this has not yet happened, as the first session provides extensive detail on producing the coursework). Thus, all reports are read, but each student receives detailed feedback on only three of their own pieces of work. The total marking time associated with this component of the module is reduced by approximately two-thirds (1500 words per student rather than 4000), as only one-third of the reports receive feedback each week, and the assessment of the remainder on a pass/fail basis can be done very rapidly.
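The marking-load arithmetic above can be restated in a few lines (a trivial sketch; the 500-word figure is the assignment length given in the text):

```python
# Per-student marking load: full marking vs the scheme described above.
words = 500                              # length of each report
full_marking = 8 * words                 # feedback on every report: 4000 words
this_scheme = 3 * words                  # first report + two random ones: 1500 words
reduction = 1 - this_scheme / full_marking
print(full_marking, this_scheme, reduction)  # 4000 1500 0.625
```

The exact reduction is 62.5%, which the text rounds to "approximately two-thirds".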

At the end of the module, students self-assess all their submissions and select the two that they think are of the highest quality for summative assessment (worth 60% of the module mark), using the assessment criteria and the feedback on their own and other marked examples to guide them. The chosen reports may or may not include work that has previously been marked, but reports cannot be amended from the previous submission, as the student is expected to use and apply the feedback as feed-forward and improve their work in this way. All summative submissions (selected reports) are marked without reference to any previous assessments, and a subset is check-marked in accordance with University guidelines. To accompany the two chosen news and views articles, students are asked to write a 300-word reflection outlining their reasons for selecting these articles (10% of the module mark). This should focus on the features of the articles that make them high quality, demonstrating how the student has applied the feedback to their work, but may also touch on other areas (such as the student’s understanding of the subject area).

The remaining 25% of the module marks come from a defended poster, presented in pairs. The poster is based on a published paper from the recent literature (from the year in which the module is taking place, and the previous year). Students are asked to produce a scientific poster, as might be presented by the authors of the paper at a conference, and are directed towards online resources on poster design, and the numerous scientific posters around the department. Posters are presented in a poster session at the end of the module, and are assessed by 2–3 independent markers, using standardised marking criteria.

Analysis of ‘news and views’ marks

Is there evidence that students are using feedback to improve the quality of their ‘news and views’ submissions and select their best quality work?

I wished to determine whether the ‘news and views’ component of the assessment strategy was an effective way of allowing students to produce and select their best work. With this aim in mind, student marks were collated from the first three years of the module (2010/2011, 2011/2012 and 2012/2013 academic years). Table 2 outlines the number of students enrolled on the module and the number of reports submitted in each category.

Table 2 Numbers of students enrolled on the module and first, randomly selected and chosen reports assessed, for the three cohorts.

If students were able to use the feedback on their own and other students’ work effectively, then I would expect:

  1. An increase in marks awarded between the first and randomly selected reports as student work improves in quality.

  2. For the marks awarded to the chosen reports to be higher than both the randomly selected and first reports.

All analyses were carried out in R 2.13.0 (R Development Core Team 2011). To assess whether there were differences between the marks awarded to students’ first, randomly selected and chosen reports, I used linear mixed effects models to investigate the effects of report type (first, random or chosen), cohort, and their interaction on the awarded mark. Percentage marks were divided by 100 to give a value between zero and one, and arcsine square-root transformed to meet the assumption of normality of model residuals. Student identity (anonymised) was added as a random factor to account for the repeated-measures nature of the data. The non-significant interaction (report type × cohort; F4,328 = 1.161, p = 0.328) was removed following Crawley (2007), and only the model with main effects is presented here. Subsequently, each pairwise comparison between report categories was made using paired t-tests.
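The transformation and pairwise comparisons can be sketched as follows. The original analysis was run in R (including the linear mixed effects model, which is omitted here); the Python below is only an illustration of the arcsine square-root transform and the paired t statistic, and the marks are invented for demonstration.

```python
import math

def arcsine_sqrt(pct):
    """Arcsine square-root transform of a percentage mark (0-100)."""
    return math.asin(math.sqrt(pct / 100))

def paired_t(x, y):
    """Paired t statistic for two equal-length samples of matched marks."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of diffs
    return mean / math.sqrt(var / n)

# Illustrative (not real) marks for first vs chosen reports:
first = [55, 58, 60, 62, 57, 64]
chosen = [62, 66, 65, 70, 63, 71]
t = paired_t([arcsine_sqrt(m) for m in chosen],
             [arcsine_sqrt(m) for m in first])
```

A positive t here indicates higher transformed marks for the chosen reports; in practice one would obtain the p-value from the t distribution with n − 1 degrees of freedom (e.g. via `t.test(..., paired = TRUE)` in R).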

Is there evidence that students are self-assessing unmarked work to improve on module marks?

Next, I was interested in whether there was any evidence that students were self-assessing unmarked work and using it to improve on their overall attainment in the module. To investigate this, I looked firstly at the number of previously marked reports selected by the student for final submission, and found that while 16 students selected from among their three previously marked reports (2 marked reports), 49 chose one report that had not previously been marked (1 marked report), and 21 students chose to submit two reports for which they had not previously received a mark (0 marked reports). If students are able to independently select their best work, then I would predict that those students choosing 1 or 0 marked reports should show greater improvement in marks between their first and randomly selected reports, and their final chosen reports.

To assess whether the number of previously marked reports chosen by a student for their final submission had an effect on their overall attainment, I first calculated the average mark for the two chosen reports, and the average mark for the first and randomly selected reports. I then calculated the difference between these values. I then assessed whether this mark difference differed between students choosing 0, 1 and 2 previously marked reports using analysis of variance (ANOVA) on log(mark difference + 10), followed by Tukey HSD post hoc tests. A constant was added to the mark difference value to ensure all numbers were positive (one student chose two previously marked reports that did not include their best mark), and data were log-transformed to meet the assumptions of normality.
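The transformation and omnibus test above can be sketched as follows. The study used R (e.g. `aov` followed by `TukeyHSD`); the mark differences below are hypothetical, and the one-way ANOVA F statistic is computed by hand purely for transparency.

```python
import math

def log_shift(diff, c=10):
    """log(mark difference + constant); the shift keeps all values positive."""
    return math.log(diff + c)

def one_way_anova_F(groups):
    """One-way ANOVA F statistic across groups (lists of transformed values)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Hypothetical mark differences for students choosing 2, 1 and 0 previously
# marked reports (invented values, not the study data):
raw = ([1, 2, 0, -1], [5, 6, 4, 7], [5, 4, 6, 5])
groups = [[log_shift(d) for d in g] for g in raw]
F = one_way_anova_F(groups)
```

Identical groups give F = 0; well-separated groups, as in the hypothetical data above, give a large F, which would then be followed up with Tukey HSD post hoc comparisons.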

Results

Is there evidence that students are using feedback to improve the quality of their submissions and select their best quality work?

There was a significant effect of report type on the mark awarded (F1,332 = 52.192, p < 0.001), but no effect of cohort (F1,83 = 0.350, p = 0.706). Consequently, data from the three cohorts were pooled, and post hoc tests were carried out on the pooled data. There were significant differences in the marks awarded between first and randomly selected reports (F1,161 = 7.974, p = 0.005), between first and chosen reports (F1,170 = 87.995, p < 0.001) and between randomly selected and chosen reports (F1,248 = 59.336, p < 0.001). Students were awarded significantly higher marks for their chosen reports than for their first or randomly selected submissions, and marks for randomly selected submissions were significantly higher than for first reports (Figure 1).

Figure 1 Mean (± 1 standard error) mark (%) awarded to students’ first, randomly selected and chosen reports. Stars and dashed lines indicate where significant differences lie.

Is there evidence that students are self-assessing their work to improve on module marks?

There was a significant effect of the number of previously marked reports chosen on the calculated difference in marks (F2,83 = 5.516, p = 0.005). Students selecting 1 or 0 previously marked reports had a significantly higher difference in marks than students selecting 2 previously marked reports (Tukey HSD, adjusted p-values; 1 marked report: p = 0.004, 0 marked reports: p = 0.024). Students selecting 0 marked reports did not have a significantly greater mark difference than students selecting 1 marked report (p = 0.985; Figure 2).

Figure 2 Mean (± 1 standard error) difference in the average marks awarded for the two chosen reports and the average marks awarded for the first and two randomly selected reports, for students selecting 0, 1 and 2 reports that had previously been given a mark. Stars and dashed lines indicate where significant differences lie.

What are the student impressions of the module?

Feedback on the module (collected via standard departmental end-of-module evaluation forms) was generally positive. The structure and questions asked on these forms changed between cohorts, but some general themes remained. Of the students completing the questionnaire, 11/14 (2009/2010) and 16/20 (2010/2011) responded to the statement ‘the feedback on course work I received was useful’ with a positive answer. In 2012/2013 the module evaluation form changed and this question was removed. Those that did not find the feedback useful expressed (in the free-text section of the forms) a lack of confidence in their ability to select their own reports for summative assessment, and a perception that the unmarked work was a ‘waste of time’.

Of those that did find the feedback and assessment strategy useful, comments included:

I enjoyed this module, especially the general feedback and feedback on others work. It enabled me to refine my own work well. It has also given me skills which are transferable to other modules. (comment from 2011/2012)

Reading other people’s feedback has been helpful. (comment from 2010/2011)

Students also enjoyed the research-led aspect of the module, and the variety of topics covered. For example:

The module was highly informative and I liked how the coursework material was based on different research directions as I think it has allowed me to explore areas of biology I normally wouldn’t have read. (comment from 2011/2012)

In 2012/2013, the students were specifically asked which aspects of the module they found most enjoyable. Here, a number of students included comments on the research-led angle:

The module was highly informative and I liked how the coursework material was based on different research directions as I think it has allowed me to explore areas of biology.

Having a different lecturer each week talking about their most recent research was very interesting and kept each week fascinating.

… diversity of presentations covering different areas of genetics and ecology.

Students in 2012/2013 also provided positive feedback on the assessment strategy, with some individuals commenting on the benefits of small weekly assignments, being able to select assignments, and the amount of feedback. One student commented:

This has been the most valuable module I have taken in my degree. It has improved the way I read papers by changing the way I find the most important information in articles. I have therefore been able to apply this to other modules. The amount of feedback has been excellent and the class participation has improved the way I listen in lectures.

When asked how the module could be improved, this cohort (2012/2013) continued to express some discomfort in choosing their own assignments, and requested more guidance on how to do so, and more ‘general’ feedback.

Discussion

There was a significant increase in students’ marks between the first and the randomly selected reports, suggesting that the students’ ability to write the reports in line with the assessment criteria increased over the duration of the course. The final chosen reports gained significantly higher marks than both the first and randomly selected reports, suggesting that the students were able to use the feedback provided to select better work for summative assessment than that selected at random. These results suggest that the feedback supported their learning (Hattie et al. 1996, Black & Wiliam 1998) and that they actively used it to prepare subsequent reports (Orsmond et al. 2005, Glover & Brown 2006, Nicol 2010, Scott et al. 2011). A focus on multiple low-stakes assessment tasks can enhance student motivation to learn, compared to high-stakes tasks (Nicol & Macfarlane-Dick 2004), which may have been an influence here.

Students who chose two previously marked reports (i.e. selected two of the three reports that had been marked during the course) showed a small positive mark difference relative to the average mark for all three, suggesting that they were able to select the best two of the three, although it is highly likely that this decision was made solely on the basis of the indicative marks awarded. Students selecting one or more previously unmarked reports, however, showed a significantly greater mark difference, suggesting (1) that some students chose to apply the information they had learned from the feedback during the course to their own work (Chanock 2000, Nicol 2010), and (2) that those students were able to use this information effectively and select work that increased their final mark for the course, suggesting that the feedback also facilitated the development of self-assessment skills (Nicol & Macfarlane-Dick 2004).

Feedback comments are usually directed towards the student who produced the assignment, but viewing comments directed at other students’ work can provide a wealth of information to all students (Nicol 2010). Comments on other students’ work are often shared through a collated list, which may derive from previous years’ assignments (Nicol & Macfarlane-Dick 2004). In the module described here, students are able not only to build a database of feedback comments, but also to study the specific aspects of the reports on which those comments are based. Feedback is also regular and detailed (Gibbs & Simpson 2004), and can be applied directly to subsequent assignments within the module, closing the ‘feedback loop’ (Sadler 1989, Boud 2000). Together, these assessment and feedback strategies allow students to perform well on the module.

Comments on the end-of-module evaluations reflected some negative impressions of the assessment strategy, and thus there are improvements that can be made. In the first year, I did not do enough to manage student expectations (which are linked to student satisfaction; Appleton-Knapp & Krentler 2006), and so I have subsequently spent more time explaining the assessment strategy and its benefits to the students. For students used to summative work only, as is the norm for the majority of the modules they take, receiving only formative feedback on their work is unexpected and unusual, and many students do not fully appreciate the benefits of being able to ‘practice’ assignments in a formal way before the summative submission. The sector-wide decline in formative assessment (Atkins 1995, Higher Education Academy 2012) suggests that this could be a common problem.

Many students expressed a lack of confidence in their ability to choose their own reports, although the data suggest that the majority of students are able to do this. I suggest that incorporating peer review of reports (Orsmond 2011) prior to final submission may give students more confidence in selecting their own reports, by allowing them to draw on the guidance of their peers. Evidence suggests that peer review is an effective means of assessment, and that (given clear marking criteria) peer marking closely reflects that of academic staff (Stefani 1994, Orsmond et al. 1997, Falchikov & Goldfinch 2000). Furthermore, combining self-, peer- and co-assessment can encourage students to become more responsible and reflective learners (Dochy et al. 1999), suggesting benefits beyond report selection.

For the more general structure of the module, a concern may be that staff are unwilling to discuss their own research with students (Brew 2006), yet I did not find it a problem to recruit the required number of volunteers, and informal discussions with staff suggested that they enjoyed teaching on the module, particularly the question-and-answer session at the end of each seminar. The fact that staff were able to use already-prepared conference or seminar material, and that the marking load was low (staff were asked only to contribute to the poster marking), may have contributed to this willingness.

Although I attribute the increase in student marks to the module feedback and assessment strategy, it is possible that other aspects of their experience are the underlying cause of the improvement. However, there is no other module that is taken by all students enrolled on Topics in Biodiversity and Evolution, and the students themselves have linked their improvements in other modules to this module, rather than vice versa. The small weekly assessments are unusual within the degree programmes, as other modules, including those taken simultaneously, generally have final summative coursework and/or examinations, rather than feedback provided within the module.

The assessment strategy I describe for this module provides a mechanism for generating rapid feedback that students can effectively use as feed-forward to improve their learning and attainment in the module. The module structure, and particularly the feedback strategy, could be applied across subject areas, and is not restricted to research-led teaching in ecology and evolutionary biology. However, the feedback strategy is novel to the students, and runs counter to the experiences of assessment and feedback they have previously had during their degree programme. The feedback from students who did not value the format suggests that student expectations need to be carefully managed (Appleton-Knapp & Krentler 2006) and the value of the approach within the context in which it is used clearly explained.

Acknowledgements

The author thanks the students and staff who participated in the module over the past three years, and two anonymous referees whose comments helped improve the manuscript.

References

  • Åkerlind, G.S. (2005) Postdoctoral researchers: Roles, functions and career prospects. Higher Education Research & Development 24, 21–40.
  • Appleton-Knapp, S.L. and Krentler, K.A. (2006) Measuring student expectations and their effects on satisfaction: The importance of managing student expectations. Journal of Marketing Education 28, 254–264.
  • Atkins, M. (1995) What should we be assessing? In Assessment for Learning in Higher Education (ed. P. Knight). London: Kogan Page.
  • Bevan, R., Badge, J., Cann, A., Willmot, C. and Scott, J. (2008) Seeing eye-to-eye? Staff and student views on feedback. Bioscience Education 12, doi: 10.3108/beej.12.1.
  • Black, P. and Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice 5, 7–74.
  • Boud, D. (2000) Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education 22, 151–167.
  • Brew, A. (2006) Research and Teaching: Beyond the divide (eds. N. Entwistle and R. King). Basingstoke: Palgrave Macmillan.
  • Brown, G., Bull, J. and Pendlebury, M. (1997) Assessing Student Learning in Higher Education. London: Routledge.
  • Chanock, K. (2000) Comments on essays: Do students understand what tutors write? Learning and Teaching in Higher Education 5, 95–105.
  • Crawley, M.J. (2007) The R Book. Chichester: John Wiley & Sons.
  • Crisp, B.R. (2007) Is it worth the effort? How feedback influences students’ subsequent submission of assessable work. Assessment & Evaluation in Higher Education 32, 571–581.
  • Dochy, F., Segers, M. and Sluijsmans, D. (1999) The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education 24, 331–350.
  • Duncan, N. (2007) ‘Feed-forward’: Improving students’ use of tutors’ comments. Assessment & Evaluation in Higher Education 32, 271–283.
  • Falchikov, N. and Goldfinch, J. (2000) Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research 70, 287–322.
  • Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education 1, 3–31.
  • Glover, C. and Brown, E. (2006) Written feedback for students: Too much, too detailed or too incomprehensible to be effective? Bioscience Education 7, doi: 10.3108/beej.2006.07000004.
  • Hattie, J., Biggs, J. and Purdie, N. (1996) Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research 66, 99–136.
  • Higgins, R., Hartley, P. and Skelton, A. (2001) Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education 6, 269–274.
  • Higher Education Academy (2012) A Marked Improvement: Transforming assessment in higher education. York: The Higher Education Academy.
  • Hope, S.A. (2011) Making movies: The next big thing in feedback? Bioscience Education 18, doi: 10.3108/beej.18.2SE.
  • Jenkins, A., Blackman, T., Lindsay, R. and Paton-Saltzberg, R. (1998) Teaching and research: Student perspectives and policy implications. Studies in Higher Education 23, 127–141.
  • Maclellan, E. (2001) Assessment for learning: The differing perceptions of tutors and students. Assessment & Evaluation in Higher Education 26, 307–318.
  • McCann, L. and Saunders, G. (2009) Exploring Student Perceptions of Assessment Feedback. Project Report. Southampton: The Higher Education Academy Subject Centre for Social Policy and Social Work (SWAP).
  • McKeachie, W.J. (2002) Teaching Tips. Boston, MA: Houghton Mifflin.
  • Nicol, D. (2010) From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education 35, 501–517.
  • Nicol, D. and Macfarlane-Dick, D. (2004) Rethinking formative assessment in HE: A theoretical model and seven principles of good feedback practice. In Enhancing Student Learning Through Effective Formative Feedback (eds. C. Juwah, D. Macfarlane-Dick, B. Matthew, D. Nicol and B. Smith). York: The Higher Education Academy.
  • Orsmond, P. (2011) Self- and Peer-assessment: Guidance on Practice in the Biosciences. Leeds: UK Centre for Bioscience, Higher Education Academy.
  • Orsmond, P., Merry, S. and Reiling, K. (1997) A study in self-assessment: Tutor and students’ perceptions of performance criteria. Assessment & Evaluation in Higher Education 22 (4), 357–369.
  • Orsmond, P., Merry, S. and Reiling, K. (2005) Biology students’ utilization of tutors’ formative feedback: A qualitative interview study. Assessment & Evaluation in Higher Education 30, 369–386.
  • Quality Assurance Agency (QAA) (2006) Code of Practice for the Assurance of Academic Quality and Standards in Higher Education. Section 6: Assessment of Students. Gloucester: QAA.
  • R Development Core Team (2011) R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing. http://www.R-project.org/.
  • Sadler, D.R. (1989) Formative assessment and the design of instructional systems. Instructional Science 18, 119–144.
  • Scott, J., Shields, C., Gardner, J. and Hancock, A. (2011) Student engagement with feedback. Bioscience Education 18, 9pp.
  • Stefani, L. (1994) Peer, self and tutor assessment: Relative reliabilities. Studies in Higher Education 19, 69–75.
  • Weaver, M.R. (2006) Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education 31, 379–394.
