
Using seminars to assess computing students

Pages 44-57 | Published online: 15 Dec 2015

Abstract

This paper uses five years' worth of assessment and feedback data to analyse the effectiveness of student-led seminars as an assessment strategy on a final year undergraduate e-commerce module. The strategy, which involves offering students opportunities to improve their grade by using multiple assessment points, is found to be effective but to suffer from inefficiencies for both student and teacher. A literature review provides ideas for improvements which retain the benefits of the strategy while reducing the workload for both students and teachers.

1. Introduction

For the past six years I have taught a final year undergraduate module on e-commerce to information systems and computing students at the University of Huddersfield. The module is core on some courses and optional on others, but is offered to most final year students studying in the Department of Informatics. The assessment strategy requires a single coursework assignment split into two parts: the first part should cover business-focused aspects of e-commerce, and the second the implementation of an e-commerce website. The module had previously been taught by a colleague, with the first part of the assignment being to write a business plan and the second to implement an e-commerce website appropriate to realising that business plan. After using this assignment for one year, two problems were identified. First, business planning is not an area in which I have much expertise, or interest, so marking the business plans was time consuming and a little dull. Second, the close link between the business plan and the e-commerce website disadvantaged those students whose business plan limited the scope for exploring e-commerce technology.

After discussions with colleagues it was agreed that the two assignments would be decoupled for the following year. The second assignment would remain an e-commerce website implementation: a standard retail website, with students allowed to choose which product to sell. For the first part of the assignment students would research and write a seminar paper on a topic of current interest to e-commerce researchers and present their findings in a short seminar to a small group of students. Students would receive feedback on their seminar paper prior to presenting the seminar itself. This would allow them to address any identified weaknesses. Their contributions to other students' seminars would also be assessed. In this way, students had two opportunities during the seminar presentations to improve their overall mark for the seminar assignment.

The use of student-led seminars for teaching and assessment at undergraduate level has been discussed by teachers in a range of disciplines. Peter Daniel (1991) and Mick Healey (1991) present interesting reports on their experience of using seminars to assess geography students. Both report that assessing the seminar presentation itself is crucial in ensuring that students devote sufficient time to preparing an effective presentation. Without this the quality of the seminar presentations can be low, making it difficult for other students to learn anything from the seminar presentation. Clarke and Lane (2005), studying a module on an Early Childhood Studies course, note that small seminar groups also allow educators to cater to different learning styles among their students. All three of these papers assert that students value the opportunity to discuss their learning with their peers in a seminar setting. Finally, I had previous experience of using small group seminars on a final year module covering current issues in computing, delivered with a more experienced colleague when I was teaching at Leeds Metropolitan University. This personal experience of student-led seminars suggested that not only were students more motivated to engage with the topics discussed, but that they engaged more deeply with the process of academic investigation, and that this had knock-on benefits, for example helping improve investigative research skills for their final year project.

After five years of using student-led seminars on the e-commerce module the impression is that, on the whole, students enjoy the seminars, and that some students do make effective use of the feedback on their seminar paper to improve their grades during the seminar presentation. Evidence from the module evaluations is that over 80% of students are satisfied with the assessment and quality of feedback (rating it 4 or 5 on a Likert scale running from 0 to 5) (Note 1). However, organising the student-led seminars to fit within the constraints of the timetabled classes, and collating the evidence from the seminar paper, seminar presentation and contributions to other students' seminars, is time-consuming. This year, the Department of Informatics adopted a workload balance model that allocates only 20 minutes of staff time per student, outside of scheduled classes, for all assessment activity; this is much less than is currently spent organising and marking the e-commerce module's seminars. At the same time, the University has adopted an assessment and feedback strategy (University of Huddersfield, 2010) aimed at improving the quality of feedback provided to students, measuring this through the National Student Survey and internal course and module evaluations.

Given this requirement to increase the efficiency of assessment while improving the effectiveness of feedback it seemed sensible to evaluate the current assessment strategy. The next section outlines the methodology used, justifying the choice in the light of the particularities of delivering this module over the past five years. The third section explores the available data about the seminar assessments, and the fourth looks to the literature for ideas on how to retain what works while meeting the department’s priority to reduce the time spent managing assessment activity. This evaluative focus of the study is why, unusually, the full literature review is presented after the data analysis.

2. Methodology

The purpose of this study is to evaluate the use of student-led seminars within the assessment strategy of the final year e-commerce module of the Computing and Information Systems courses at the University of Huddersfield. The seminars have run since the 2006–07 academic year and have worked similarly in all five years. Students suggest a topic for their seminar paper, carry out a preliminary literature review to ensure that there are suitably authoritative sources available, agree the title with their teacher and write the seminar paper. They can choose the date when they will present their seminar paper to a small group of their peers. They also attend all seminar presentations by members of their group. Each seminar presentation lasts approximately 15 minutes. Students are assessed on their seminar paper, the presentation and on their contributions to seminar presentations by other members of the group. The study will help to decide whether to continue with this assessment strategy given the University and Departmental goals. This makes evaluation research (Sarantakos, 2005) a sensible choice of methodology. However, since this is a retrospective study, looking back on past practice, it will be informal and provide a formative, rather than a summative, evaluation. Based on the outline of the assessment strategy given above, two research questions spring to mind:

Q1: Do the seminar presentations allow students to improve on the grade given for their seminar paper?

Q2: Do students make effective use of the written feedback on their seminar paper to improve their performance in the seminar presentation?

Note that the answer to Q1 may be yes even if students do not improve on the criteria assessed by the paper, since they may do very well in presentation skills and participation: hence the need for Q2.

The study will be quantitative, drawing on records of assessment and feedback on the module from the past five years. It is important to understand the nature of this data. The grades for the seminar paper and the overall grade for the seminar component are available for all five years. The overall grade for the seminar was based mainly on the seminar paper itself, which formed a baseline for the assessment: students were guaranteed not to get a lower grade than was indicated by their seminar paper feedback. The seminar presentation was used to assess whether weaknesses in the paper had been addressed; in essence, it allowed them to plug any holes identified in the paper. In the first year participation in seminars was assessed by the teacher grading each contribution on a three-point scale: questions seeking clarification; probing questions; and critical analysis. This proved cumbersome, and from the second year onwards participation was assessed using a scatterplot (Figure 1) for each student, with their contributions graded along two axes: relevance and sophistication. The student's name and identifier were written above the scatterplot and each contribution was recorded with a small "x" on the scatterplot: the more sophisticated the contribution, the further to the right; the more relevant, the higher up. The same scatterplot was used to record contributions to all seminars in which the student was a participant (though not the one they presented). Once all seminars had been completed the scatterplot provided an overall assessment of the student's contributions to seminars. The participation mark was used mainly to deal with borderline cases: a strong performance (most marks in the top-right) would ensure the student was moved above the borderline grade. This holistic approach relied on a measure of trust between teacher and student.

Figure 1 The scatterplot used to assess participation in seminars

The actual feedback given to all students is available for three of the five years. For 2006–07 and 2008–09 only the moderation samples survive; for the purposes of this study these are not fully representative of their cohorts, so have been excluded. Seminar papers and presentations were marked using a standard rubric based on the assessment grid devised by Margaret Price and Chris Rust (2004). Assessment criteria on understanding, analysis, synthesis and evaluation are common to both the seminar paper and presentation. Additional criteria cover written communication, spoken communication and quality of referencing. The common assessment criteria between paper and presentation make it feasible to analyse whether the student has made effective use of the feedback on their seminar paper to improve their work in time for the seminar presentation. All information about the marking criteria (the rubrics and scatterplot) was shown and explained to students at the start of the module.

Using data that was generated as part of the normal delivery of the module ensures that no students were disadvantaged by this study. The data was anonymized before analysis to comply with data protection requirements. A small number of students did not present their seminar, so their overall grade is necessarily the same as their seminar paper grade. For this reason they have been excluded from the analysis, since they do not provide any information on the value of the seminar presentations.

3. Data and Analysis

The data existed in a number of formats: Excel and OpenOffice spreadsheets, Word documents, PDF documents and paper copies of feedback. In addition, the University grading scheme changed twice during the study period. In 2006–07 the University distinguished between Refer (resubmission allowed) and Fail (no resubmission) grades. From 2007–08 onwards all failed work (other than non-submissions) was automatically referred. In 2010–11 the new assessment and feedback strategy required a move to decile-based feedback. Figure 2 summarizes these schemes.

Figure 2 Grading schemes for the seminar assessment

To allow comparisons between years the grading schemes for 2006–07 and 2010–11 were mapped onto the "A, B, C, D, Fail" scheme: "Refer" was mapped to "Fail"; "100–90", "89–80" and "79–70" were mapped to "A"; "69–60" to "B"; "59–50" to "C"; "49–40" to "D"; the remaining deciles were mapped to "Fail".

The University grading scheme was used on all the feedback sheets. For example, for the criterion "Level of understanding of the material" an "A" grade meant "Excellent understanding, no significant weaknesses", whereas a "B" grade meant "Good understanding. Weaknesses in one area compensated by good understanding elsewhere". In arriving at a grade for the seminar paper and the overall seminar grade it was possible to have a feedback profile where, for example, half the criteria were "A" grade and half "B" grade. In this case, the grade recorded was "A/B", indicating that the performance was on the borderline between the two grades. Thus, for the seminar paper and overall seminar grades there is a nine-point scale: A, A/B, B, B/C, C, C/D, D, D/F and F. Typically, a "B" equated to 65% and a "B/C" to 60%, and so on, when calculating the overall mark for the module. However, the data for 2007–08 recorded the overall seminar grade as a percentage. Again, to allow comparisons between years, this percentage was re-coded to the nine-point scale, with, for example, 51–57% mapped to "C" and 58–60% mapped to "B/C". All available data were then transcribed into a Microsoft SQL Server Express database, allowing it to be analyzed using SQL statements.
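To make this re-coding concrete, the following is a minimal sketch of how the mappings might be expressed in T-SQL; the table and column names are illustrative assumptions based on the description above, not the schema actually used on the module.

-- Illustrative table holding one row per student per year (names are assumptions).
CREATE TABLE SeminarGrades (
    StudentId    INT        NOT NULL,
    AcademicYear CHAR(7)    NOT NULL,  -- e.g. '2007-08'
    PaperGrade   VARCHAR(4) NULL,      -- nine-point scale: A, A/B, B, ..., D/F, F
    OverallGrade VARCHAR(4) NULL
);

-- Mapping the 2010-11 decile marks onto the "A, B, C, D, Fail" scheme,
-- following the bands described above (Grades2011 is an assumed staging table).
SELECT StudentId,
       CASE
           WHEN DecileMark >= 70 THEN 'A'
           WHEN DecileMark >= 60 THEN 'B'
           WHEN DecileMark >= 50 THEN 'C'
           WHEN DecileMark >= 40 THEN 'D'
           ELSE 'Fail'
       END AS MappedGrade
FROM   Grades2011;

-- The 2007-08 percentages were re-coded onto the nine-point scale in the same
-- way, e.g. WHEN OverallPercent BETWEEN 58 AND 60 THEN 'B/C'.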

3.1 Grades

Q1 was investigated using the data on the seminar paper and overall seminar grades only. The following sub-questions were addressed:

Q1.1: What proportion of students improved their grade between the seminar paper and overall grading?

Q1.2: Did the grade for the paper indicate whether a student was likely to improve their performance?

Q1.2 allows us to investigate whether the opportunity to improve their grade through the seminar presentations and participation was of particular benefit to any specific group of students.

Figure 3 shows the number of students on the module and the number whose overall seminar grade was better than the grade for their seminar paper. Given that "A" grade students could not improve their grade, figures excluding "A" grade students are also included.
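A sketch of the kind of query used to produce these counts, assuming the illustrative schema above plus a lookup table that assigns each point on the nine-point scale an ordinal position so that grades can be compared:

-- Assumed lookup: F = 1, D/F = 2, ..., A/B = 8, A = 9.
CREATE TABLE GradeRank (
    Grade      VARCHAR(4) PRIMARY KEY,
    GradeOrder TINYINT    NOT NULL
);

-- Q1.1: for each year, how many students could improve and how many did.
SELECT g.AcademicYear,
       COUNT(*) AS CouldImprove,
       SUM(CASE WHEN ov.GradeOrder > pa.GradeOrder THEN 1 ELSE 0 END) AS Improved
FROM   SeminarGrades g
       JOIN GradeRank pa ON pa.Grade = g.PaperGrade
       JOIN GradeRank ov ON ov.Grade = g.OverallGrade
WHERE  g.PaperGrade <> 'A'   -- 'A' grade papers could not improve
GROUP BY g.AcademicYear
ORDER BY g.AcademicYear;

Grouping by the seminar paper grade instead of the academic year gives the corresponding breakdown for Q1.2 below.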

Figure 3 Students improving their grade, grouped by year

Across all years, 39% of students who could improve their grade did so. In the first year, 2006–07, fewer students improved their grade than in subsequent years: I do remember being disappointed that some students had not made effective use of the seminar presentations to improve their performance, so in subsequent years the opportunity offered by the seminar presentations was emphasized during classes. The data indicates that this has been successful, and that the seminar presentations are allowing a significant minority of students to improve their overall seminar grade: a positive answer to Q1. Figure 4 presents the information as a bar chart.

Figure 4 Percentage of students improving their grade, grouped by year

Q1.2 asks a subtler question: exactly who is benefitting from the second chance afforded by the seminar presentations? Figure 5 groups students from all years by the grade awarded for their seminar paper then counts how many of these students had a better overall grade. Half of all students with a C, C/D or D on their seminar paper improved on this after the seminar presentations. This is better than students with an A/B, B or B/C, only 31% of whom improved their grade. 35% of students with D/F or F improved, although as the F band covers a much wider range of performance it could hide some improvement. The data provides some evidence that the seminar presentations allow students performing in the middle of the attainment range to raise their game. One possible interpretation of the relatively weaker improvement at the higher end of the attainment scale is that many students are aiming for a 2:1 overall on their degree. So, having attained a grade on the assignment that meets this goal, they chose to focus their efforts elsewhere (recall that students were guaranteed not to get a lower grade than was indicated by their seminar paper feedback). The law of diminishing returns may apply to repeated assessment opportunities.

Figure 5 Students improving their grade, grouped by seminar paper grade

Figure 6 Percentage of students improving their grade, grouped by seminar paper grade

Figure 6 presents the same data graphically. Note that the 100% improvement at the D/F grade boundary is a good example of why graphs are not always the best way to present data: it represents a large percentage improvement over a trivial base; only one student was in this group, and they improved.

One final point of interest is that students on the borderline are more likely to improve their grade than students given a straight grade. A borderline grade indicates real potential to move up, so may motivate students to improve. Another interpretation is that I had placed the student on the borderline for their seminar paper because I was uncertain about the level of attainment. With additional evidence, it could be argued that I was more likely to increase the grade. However, the seminar presentations were assessed without reference to the feedback on the seminar paper, so the grades recorded in the seminar presentation were largely independent of the seminar paper; there was no way for me to make unwarranted favourable judgements during the presentations, other than subconsciously.

3.2 Feedback

Q2 was investigated by analysing the feedback for the seminar paper and seminar presentation. Participation was not used. There were four assessment criteria common to the paper and presentation: understanding, analysis, synthesis and evaluation. To assess whether students made effective use of the feedback, I considered the following sub-questions:

Q2.1 For each assessment criterion, what proportion of students improved their grade?

Q2.2 What proportion of students improved their grade on all, some or none of the assessment criteria where they could improve?

Q2.3 Did any students do worse?

Figure 7 shows, for each of the four assessment criteria, the number of students who improved their performance on that criterion (the table is split into two parts to fit it on the page). Again, we exclude students with an "A" for the criterion under consideration, since they could not improve their performance. Figure 8 presents this data as a bar chart. This answers Q2.1: across all years, and all criteria, around two fifths of students who could improve their grade for an assessment criterion did improve it.
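The per-criterion counts can be produced with a similar query, here sketched against an assumed long-format feedback table with one row per student, per assessment criterion, per piece of work (the table and column names are again illustrative):

-- Assumed table: CriterionGrades(StudentId, Piece, Criterion, Grade),
-- where Piece is 'Paper' or 'Presentation' and Grade uses the letter scale.
SELECT p.Criterion,
       COUNT(*) AS CouldImprove,
       SUM(CASE WHEN pres.GradeOrder > pap.GradeOrder THEN 1 ELSE 0 END) AS Improved
FROM   CriterionGrades p
       JOIN CriterionGrades s
            ON  s.StudentId = p.StudentId
            AND s.Criterion = p.Criterion
            AND s.Piece     = 'Presentation'
       JOIN GradeRank pap  ON pap.Grade  = p.Grade
       JOIN GradeRank pres ON pres.Grade = s.Grade
WHERE  p.Piece = 'Paper'
  AND  p.Grade <> 'A'        -- criteria already at 'A' could not improve
GROUP BY p.Criterion;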

Figure 7 Improvement in individual assessment criteria

Figure 8 Percentage improvement in individual assessment criteria

This shows that students can use the feedback from the seminar paper to improve their performance in the seminar presentation on any one of the assessment criteria. The improvement in overall grade identified in section 3.1 above is not entirely down to their brilliant presentation skills, or the incisiveness of their contributions to other students' seminars. Thus the data provides some support for the hypothesis that the feedback on the seminar paper helps students to improve.

Q2.2 examines whether students improve uniformly. Producing data to answer Q2.2 is tricky in SQL, since it involves multiple conditions. However, inspection of the data showed that of the 116 students who did not gain an "A" for understanding on their seminar paper, only one student gained any "A" grades (for evaluation). Consequently the data could be partitioned into two sets: 115 students who could improve on all criteria, so were easy to analyze using SQL; and 45 students who had some "A" grades which they could not improve, who were analyzed manually. Seven students had straight "A" grades, so could not improve and were excluded from the analysis. The results are presented in Figures 9 and 10.
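For Q2.2, one way to handle the multiple conditions is to compare paper and presentation grades criterion by criterion and then classify each student. The sketch below (using the same illustrative tables as above) sidesteps the manual partitioning described here by simply excluding the criteria a student could not improve on:

WITH Compared AS (
    SELECT p.StudentId,
           COUNT(*) AS CriteriaCompared,
           SUM(CASE WHEN pres.GradeOrder > pap.GradeOrder THEN 1 ELSE 0 END) AS CriteriaImproved
    FROM   CriterionGrades p
           JOIN CriterionGrades s
                ON  s.StudentId = p.StudentId
                AND s.Criterion = p.Criterion
                AND s.Piece     = 'Presentation'
           JOIN GradeRank pap  ON pap.Grade  = p.Grade
           JOIN GradeRank pres ON pres.Grade = s.Grade
    WHERE  p.Piece = 'Paper'
      AND  p.Grade <> 'A'
    GROUP BY p.StudentId
)
SELECT Outcome, COUNT(*) AS Students
FROM (
    SELECT CASE
               WHEN CriteriaImproved = CriteriaCompared THEN 'Improved on all criteria'
               WHEN CriteriaImproved > 0                THEN 'Improved on some criteria'
               ELSE                                          'Improved on none'
           END AS Outcome
    FROM Compared
) AS Classified
GROUP BY Outcome;

Reversing the comparison (a presentation grade lower than the paper grade) gives the corresponding counts for Q2.3.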

Figure 9 Is improvement uniform or patchy?

Figure 10 Is improvement uniform or patchy?

It is clear that improvement is rather patchy, with only around a fifth of students improving in all criteria. There is also a large minority of students who do not improve at all. Inspection of the data shows that of the 54 students who did not improve at all, 41 had already achieved a B/C grade or better. This adds weight to the suggestion above that many students are aiming for a 2:1 overall and, having attained a "B/C" grade or better on the assignment, chose to focus their efforts elsewhere.

Q2.3 asks whether the performance of any students was poorer in the seminar presentation than in the paper. We would expect this to be true; some students are very nervous in presentations. However, presenting the results of an investigation to a small group is a key skill for IT practitioners, so it is important that we allow students to practice their presentation skills. To help students deal with nerves they were allowed to choose the format of the presentation: some produced slides, others spoke from cue cards or short scripts; some stood at the front of the class, others sat with the group gathered in a circle. Even so, some students still performed poorly in the seminar presentation compared to the seminar paper. Figures 11 and 12 provide the relevant data. Again, inspection of the data identified that those students who received an "F" for any criterion also received an "F" for evaluation, except for one student who received a single "F" for synthesis. Again the data could be partitioned into two sets: 147 students who could get worse on all criteria, so were easy to analyze using SQL; and 13 students who had some "F" grades, who were analyzed manually. Two students had straight "F" grades, so could not get worse and were excluded from the analysis.

Figure 11 Do some students perform less well in the seminar presentation?

Figure 12 Percentage of students performing less well in the seminar presentation

Even recognizing that some students get nervous in presentations, it is surprising that two fifths of students performed less well in the presentation, on at least one criterion, than they did in their seminar paper. In fact, my impression was that the majority of students had done at least as well, if not better, on the seminar presentations than in their seminar paper. Further analysis of the data showed that of the 67 students who had performed less well on at least one criterion in the presentation, the majority were students who had achieved an "A" or "B" grade for that criterion in the seminar paper (see Figures 13 and 14). This provides yet more support for the idea that, having achieved their 2:1, students did not put much further effort into the seminars.

Figure 13 Whose performance gets worse?

Figure 14 Percentage of “A” and “B” grade students whose presentations scored lower than their paper on the indicated criteria

4. Literature Review and Discussion

Seminars are widely used in higher education, particularly in the social and medical sciences (see Figure 15). The choice of content for seminars can be dictated by the teacher (Levia and Quiring, 2008; Casteel and Bridges, 2007; Ratkić, 2009; Weberschock et al, 2005) or by negotiation between student and teacher (Poirier, 2008; Daniel, 1991). A mixed approach is for students to present a seminar based on a key text (Healey, 1991) that may be specified by the teacher. One advantage here is that students need not write a full seminar paper, but can simply prepare a presentation on the key text, drawing on additional sources to highlight alternative ideas and arguments. Participants can read the key text themselves, allowing them to join in the debate. This method ensures all students engage with the key texts, and are exposed to a range of additional sources. All students also engage with the wider literature on their particular seminar topic.

Figure 15 Who uses assessed seminars?

Assessment is often seen as crucial to ensuring students engage with seminars (Daniel, 1991; Clarke and Lane, 2005; Levia and Quiring, 2008). Holley (2002) reports that in a traditional non-assessed seminar only around a quarter of participating students took notes, limiting their value as a learning opportunity. Jaarsma et al (2009) found that where participation was not assessed student participants spent only 0–0.5 hours preparing rather than the 1–1.5 hours students felt they needed to spend. Healey explains that "[a]lthough students preparing and giving a seminar can benefit greatly from the experience and learn a lot about the topic covered, the rest of the group often learn little" (1991, p.228). Healey (1991) sought to address this problem by emphasising that questions addressed in student-led seminars were as likely to appear on the final exam paper as questions on teacher-led material. Even when seminars are not summatively assessed there is assessment going on: in seminars observed by Fejes et al (2005) the academic felt they had to note which students contributed, even though this changed the dynamic of the seminar from collaborative to competitive. Thus there are good reasons to make assessment of seminars explicit, for both presenters and participants.

One significant issue in the use of assessed seminars is the workload. Holley (2002) developed an on-line seminar saving 6 hours of contact time, but this was more than offset by the approximately 200 hours of preparation required. Casteel and Bridges (2007) used seminars with small classes of advanced psychology undergraduates, but found the preparation and assessment load heavy. It included: written feedback to the seminar presenter on content and presentation skills; assessment of student participation in seminars attended; written feedback to each student on reaction papers to that week's seminar; other written assignments; and a final written paper. Levia and Quiring (2008) gave students feedback on multiple drafts of their seminar materials. Some students also reported a heavier workload than on non-seminar courses (Weberschock et al, 2005) and others found the approach inefficient (Jaarsma et al, 2009). Ratkić (2007) describes a dialogue seminar course where students engaged in 'slow reading' of sources, taking marginal notes which formed the basis for the subsequent seminar and a 'slow writing' session to produce a reflective summary of the sources. Again, the workload for students is quite heavy. Healey (1991) assessed the seminar itself and a written summary, prepared after the seminar, covering both the presentation and discussion. However, the written summary was only needed if the presentation handouts were not good enough; an incentive to prepare well, perhaps.

There are, despite the workload, benefits from seminars. Allowing students to choose the topic they investigated proved popular with my e-commerce students. Therese Poirier found that "[b]ecause students had a choice in what emerging issue they presented, they were more likely to be passionate about the perspective chosen" (Poirier, 2008, p.3): passionate students are likely to be both engaged and engaging. There is also the potential for instant feedback to student presenters. Student participants can use a 'clicker' system to provide ongoing feedback to the presenter (Poirier, 2008); I used a hand-writing capture device to allow me to give students my written feedback immediately after the seminar.

Many students attending seminars enjoy the small group work (Clarke and Lane, 2005) and value the opportunity to engage in discussions (Clarke and Lane, 2005; Weberschock et al, 2005; Jaarsma et al, 2009; Casteel and Bridges, 2007). I found that some seminars generated more discussion than others, and that sometimes the discussion veered off topic, but feedback from students suggested that, on the whole, they did enjoy the opportunity to engage in small group discussions. However, Jaarsma et al (2008) found that their students did not regard the interaction with other students as contributing much to their learning, rating the teacher's performance as having a much stronger influence. They felt that "the teacher should say which information is correct or incorrect, should draw conclusions, should draw schemes or maps". Students recognise the risk that they are just "sharing their ignorance" (Adams, 2000), and want input from an expert. Some students also worried that the seminars could be an uncomfortable experience (Clarke and Lane, 2005) and felt that the teacher should take a more active role in managing group dynamics to ensure a successful learning experience for all (Clarke and Lane, 2005). One approach to help nervous students cope is to have students work in pairs, or threes, to present the seminar (Healey, 1991; Daniel, 1991; Poirier, 2008; Levia and Quiring, 2008). Finally, Levia and Quiring (2008) found no improvement in grades from introducing student-led seminars onto a course (Note 2). Thus there is a worry that, while enjoyable, student-led seminars may not be that effective. I believe that this worry can be addressed by careful design of the learning and assessment strategy. On my e-commerce module there was a mix of teacher- and student-led seminars with regular traditional lectures; the module evaluations indicate that over 80% of students feel they have improved their understanding of e-commerce (rating it 4 or 5 on a Likert scale running from 0 to 5).

The literature provides additional ideas on how to ensure effective learning by involving students in the design and delivery of seminar courses. Weberschock et al (2005) discuss seminars on evidence-based medicine (EBM) delivered to final year medical students. Here, a group of six undergraduates would discuss case studies supervised by an assisting tutor, trained in EBM. The approach proved effective in improving knowledge and understanding, but what is interesting is that nine of the assisting tutors were themselves undergraduate students. This demonstrates that, with a structured approach to follow, peer-assisted learning can be very effective and may help reduce the staff workload associated with seminars.

Poirier (2008) reports a different use of students: to assess the seminars themselves. The student presenting the seminar circulated a pre-presentation abstract and learning objectives (based on Bloom's taxonomy) to all participants. Using a teacher-defined rubric, students marked these written materials and the presentation itself. The teacher also marked the presentation, although this mark was not used unless it differed from the mean student mark by more than 10%. Students can also help develop the marking criteria. At the start of each year both Daniel (1991) and Healey (1991) led discussions with their students of what made a good seminar. Both used these discussions to develop summative assessment criteria, with Daniel discussing the draft with his students and agreeing final criteria. At the prompting of the students Daniel also agreed that the final grade would be negotiated between himself and the student. The agreed criteria structured their discussion and, despite his initial reluctance to negotiate grades, he writes: "My experience has been that students are quite capable of evaluating the quality of the seminar and their effectiveness as a seminar leader" (Daniel, 1991, p.60). In contrast, Healey (1991) did not negotiate the final grade but used the student assessments of the presentation and his own assessment of a written summary submitted by the presenting students to determine an overall mark.

Levia and Quiring (2008) also used marking criteria to guide students, this time in the form of an analytical rubric similar to the ones I use. They used the rubric in a pre-seminar review meeting between students and teachers to guide the group's final preparations for presenting their seminar. However, they found that the analytical rubric was not effective in ensuring that all groups documented and defended assumptions during the subsequent seminar, and suggest that such review meetings need a written draft of the presentation to highlight such problems.

As mentioned above, many teachers allow students to present in pairs or threes, whereas my students presented alone. One problem with group presentations is how to distribute the group mark between students. Healey (1991) simply gave all students the same mark. Daniel (1991) encouraged, and Poirier (2008) required, students to present pro and con arguments, ensuring that the two students could be marked on their individual contribution. As my e-commerce students often presented divergent opinions on the same topic, this explicit pro/con approach seems an attractive option for both simplifying the organization of the seminars and providing peer support.

We end this section with a short list of recommendations, derived from the literature, on how to make effective use of seminar presentations for assessment purposes (Figure 16).

Figure 16 Best practices for assessed seminars, gleaned from the literature

5. Conclusions and Recommendations

The analysis of data from my e-commerce module indicates that feedback in the form of a graded rubric and general comments can help students to improve on an existing grade, although it is important to remind them regularly of these opportunities to improve. Students with middle range grades, and those on grade borderlines, seem to benefit most. There is strong evidence that students with the higher grades did less well in the presentations than in the written paper. My interpretation is that at this point the law of diminishing returns kicked in: as the likely return on time invested decreased they chose to focus their efforts on other assignments. These findings suggest that the current assessment strategy is beneficial for many students, but that it is inefficient since both the teacher and the stronger students end up doing unnecessary additional work.

The literature review provides a variety of ideas for improving my seminar-based assessment strategy. First, students should continue to choose their seminar topic as this is an effective way to encourage engagement. However, they will choose from a list of key texts and propositions relevant to those texts, then work in pairs, with one student preparing an argument for, and one against, the proposition. Together they will then deliver a single 30 minute seminar.

Rather than writing a seminar paper students will develop materials to support the presentation: abstract, learning objectives, reading list, and slides or handouts. These will be assessed formatively in negotiation with their teacher before their seminar presentation, and summatively as part of the assessment of the presentation itself. This provides early feedback, without the heavy overhead of summative assessment. There will then be an opportunity to submit a post-seminar written summary of the presentation as compensatory evidence if the original seminar materials are of poor quality. As this will be optional, students who are happy with their grade need do no further work. In effect, this inverts the current workflow of “write a paper then present it”, and should entail fewer written papers being submitted for summative assessment.

Participation will continue to be assessed as now, using a scatterplot, but rather than helping to assess borderline cases it will have a fixed proportion of the overall mark. If feasible, students will be given access to a rolling record of their participation, again to encourage them to improve.

Finally, the module will continue to use complementary teacher-led materials to ensure that students feel they are being taught, as well as learning for themselves.

Notes

1 From 2006–07 to 2008–09 only course evaluation questionnaires were used at the University of Huddersfield, so no data about this specific module is available for these years. Since 2009–10 module evaluation questionnaires have been run, and these provide the quoted statistics.

2 Though they did feel that their use of an analytical rubric in formative and summative assessment had improved student learning.

References

  • Adams, Scott (2000) Dogbert consults, Dilbert [online]. Available at <http://dilbert.com/strips/comic/2000-06-12/> [Accessed 2 June 2011].
  • Casteel, Mark A. and Bridges, K. Robert (2007) Goodbye lectures: a student-led seminar approach to teaching upper division courses, Teaching of Psychology, 43(2), pp. 107-110.
  • Clarke, Karen and Lane, Andrew M. (2005) Seminar and tutorial sessions: a case study evaluating relationships with academic performance and student satisfaction, Journal of Further and Higher Education, 29(1), pp. 15-23.
  • Daniel, Peter A. (1991) Assessing student-led seminars through a process of negotiation, Journal of Geography in Higher Education, 15(1), pp. 57-62.
  • Fejes, Andreas, Johansson, Kristina and Dahlgren, Madeleine Abrandt (2005) Learning to play the seminar game: students' initial encounters with a basic working form in higher education, Teaching in Higher Education, 10(1), pp. 29-41.
  • Healey, Mick (1991) Improving the effectiveness of student-led seminars, Journal of Geography in Higher Education, 15(2), pp. 228-231.
  • Holley, Debbie (2002) Which room is the virtual seminar in please?, Education + Training, 44(3), pp. 112-121.
  • Jaarsma, A. Debbie C., de Grave, Willem S., Muijtjens, Arno M. M., Scherpbier, Albert J. J. A. and van Beukelen, Peter (2008) Perceptions of learning as a function of seminar group factors, Medical Education, 42, pp. 1178-1184.
  • Jaarsma, A. Debbie C., Dolmans, Diana D. H. J. M., Muijtjens, Arno M. M., Boerboom, Tobias T. B., van Beukelen, Peter and Scherpbier, Albert J. J. A. (2009) Students' and teachers' perceived and actual verbal interactions in seminar groups, Medical Education, 43, pp. 368-376.
  • Levia, Del F. Jr and Quiring, Steven M. (2008) Assessment of student learning in a hybrid PBL capstone seminar, Journal of Geography in Higher Education, 32(2), pp. 217-231.
  • Poirier, Therese I. (2008) Instructional design and assessment: a seminar course on contemporary pharmacy issues, American Journal of Pharmaceutical Education, 72(2).
  • Price, Margaret and Rust, Chris (2004) Business assessment criteria grid, The Higher Education Academy. Available at <http://www.heacademy.ac.uk/resources/detail/resource_database/id347_assessment_grid_price_rust> [Accessed 26 April 2011].
  • Sarantakos, Sotirios (2005) Social Research, 3rd ed., Palgrave Macmillan, Basingstoke, UK.
  • University of Huddersfield (2010) Assessment and feedback strategy, University of Huddersfield. Available at <http://www2.hud.ac.uk/shared/shared_tli/documents/support/assessment_and_feedback_strategy_2010_FINAL.pdf> [Accessed 26 April 2011].
  • Weberschock, Tobias B., Ginn, Timothy C., Reinhold, Johannes, Strametz, Reinhard, Krug, Daniel, Bergold, Martin and Schulze, Johannes (2005) Changes in knowledge and skills of Year 3 undergraduates in evidence-based medicine seminars, Medical Education, 39, pp. 665-671.
