Assessment programs to enhance learning


Abstract

Background: Assessment is an essential part of the educational system. Usually, assessment is used to measure students’ knowledge, but it can also be used to drive students’ learning and behavior. Assessment programs should therefore be constructed in a well-balanced way, allowing us not only to measure students’ knowledge but also to shape students’ behavior.

Objectives: In this paper, we discuss different strategies that can be used to change students’ behavior and thereby improve their learning.

Main findings: Assessment as well as the learning material should be congruent with the objectives. Based on the literature, we summarize key points that enable assessment for learning rather than merely assessment of learning.

Conclusions: When constructing an assessment program, choices have to be made that depend on logistical, financial, organizational, and managerial aspects, on the educational culture, and on the cooperation of individual and key faculty members. It is important to realize that whatever decisions you make, students will always adapt their behavior to the way you construct your assessment program.

Introduction

It is important to realize that the way we assess our students has an enormous effect on their learning and their learning behavior. Assessment programs should therefore be constructed in a well-balanced and sensible way. In this paper, we discuss practical choices intended to optimize assessment strategies.

It has been said that assessment drives learning, i.e. by assessing students we force them to learn what we want them to learn.Citation1 Ian Hart illustrated this point with the phrase ‘Students learn what you inspect rather than what you expect them to learn’ at the AMEE conference in Prague in 1998. Looking at assessment from the perspective of the student brings us to the following well-known but often not explicitly formulated issues. The first two things students want to know at the start of a course are ‘When is the test?’ and ‘What happens if I fail?’ These two are strong determinants of study behavior (time on task). A third question they ask is ‘What do I need to know?’ The learning goals/objectives should therefore be clear to students and staff right from the beginning, and the assessment as well as the learning material should be congruent with those objectives. In this paper, we discuss specific strategies to drive students’ learning through assessment.

Tip 1. Regular testing = regular learning

It is well appreciated that the majority of students start learning only when they need to, i.e. 2–3 weeks before a test.Citation2 Procrastination is known to be a common problem, interfering with knowledge retention, let alone knowledge application. So how can we steer students to study more regularly? Regular study will increase their knowledge retention.Citation3 One way may be to give students assignments, but if these assignments do not count towards their final mark, they may not put sufficient effort into themCitation4 and therefore will not study regularly. When such an assignment has consequences for their mark, students will put serious effort into it.Citation4,5 Thus, an assessment program should steer students’ study strategies to enhance learning and long-term retention. Scheduling tests more regularly will also decrease students’ procrastination.Citation6 For example, a schedule with a test every 3–4 weeks will not allow students to lean back and postpone studying their learning material.Citation7

Tip 2. Time = limited

A week has a limited number of hours. Working hours per week vary from country to country; formal working weeks in Europe range from 36 to 40 h. Students also have a social life, practice sport, enjoy their family and friends, and of course need to sleep and work. A week has five working days and two weekend days; depending on the culture, a calculation can and should be made of how much study time is justifiable. The European Credit system equates 60 ECTS to 1680 h of study. Consequently, in a 42-week full-time study program a student should spend 40 h per week studying. In this context, it is important to realize that there is a relation between the number of contact hours and self-study time. The optimum number of contact hours per week is 12; self-study time goes down with an increasing number of contact hours.Citation8
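
Made explicit, the arithmetic behind these figures (the hours per credit follow directly from the stated 60 ECTS = 1680 h) is:

$$\frac{1680\ \text{h}}{60\ \text{ECTS}} = 28\ \text{h/ECTS}, \qquad \frac{1680\ \text{h}}{42\ \text{weeks}} = 40\ \text{h/week}.$$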

Tip 3. Avoiding competition

Competition between tests or other curriculum activities should be avoided. A test should not be considered standalone but seen in the context of the complete assessment program. When constructing such a program, tests and other assessment and educational activities should be scheduled in such a way that they do not compete for students’ time. Otherwise, students will study hard for one test (most often the first one in a row) and spend less time on the others. In other words, a program should have a feasible schedule of tests and educational activities in order to be ‘do-able.’

Tip 4. Tests are significant

The above-mentioned considerations are helpful in stimulating students to pass tests at the formal assessment moment. Students should take tests seriously. We would undermine this if the resit were very attractive to students, which is the case when the resit is scheduled soon after the formal test and assesses only part of the regular material. A resit within 2 weeks after the formal test not only stimulates procrastination before the formal test, but also competes with educational activities during the week of the resit. This should be avoided. The best place for a resit is therefore the summer vacation: it does not compete with formal educational activities and is, of course, unattractive for students, which is an extra stimulus to pass the formal test.

We need to gather enough test information about students. The more information we have, the more reliable high-stakes decisions will be.Citation9 That is another reason to use subtests. If a student has an unsatisfactory score after all subtests, the resit should cover the whole body of knowledge. In the summer vacation, however, there is no time for ‘spreading’ subtests, so the whole body of knowledge will be tested at once. This, too, is less attractive for students. The result is that students will spend more time on task for the regular tests.
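
The paper gives no formula, but a standard psychometric result, the Spearman–Brown prophecy formula, illustrates why combining more subtests yields a more reliable decision. If a single subtest has reliability $\rho_1$, a composite of $k$ comparable subtests has reliability

$$\rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1},$$

so, for example, four subtests of reliability 0.6 each combine to a composite reliability of $2.4/2.8 \approx 0.86$.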

Tip 5. Meet expectations

Students often feel insecure when facing a test. It is important to realize this and to communicate that all students can pass your test. Passing tests must be feasible; they should not be too difficult. Tests should reflect the study material in content and difficulty level. Suppose that only 10% of the study material is difficult; the test should then reflect this and contain more or less the same percentage of difficult items. Tests should contain items on easy as well as difficult parts of the study material, not just the difficult issues. It is unrealistic to assume that students will know the easy parts and therefore leave them unassessed. It would also be unfair to assess only the difficult parts, since students need to acquire the easy as well as the difficult material.
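
A minimal sketch of such a test blueprint (the 10% figure comes from the tip; the function name and item count are hypothetical):

```python
# Compose a test whose difficulty mix mirrors the study material:
# if 10% of the material is difficult, about 10% of the items should be.
def blueprint_by_difficulty(num_items: int, fraction_difficult: float = 0.10) -> dict:
    difficult = round(num_items * fraction_difficult)
    return {"difficult": difficult, "easy": num_items - difficult}

print(blueprint_by_difficulty(50))  # {'difficult': 5, 'easy': 45}
```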

Tip 6. Compensation increases motivation

Tests are not perfect: there is always noise in the measurement, and students should not become victims of that. Students who have ‘enough’ knowledge should not fail. If students fail a test while knowing enough, they may lose their motivation to continue. At the same time, capable students who did not prepare well enough should be given the chance to compensate for their ‘mistake.’ To achieve that, students should receive a strong stimulus to study harder. This stimulus is the opportunity to compensate a low (initial) result with a high mark on a subsequent test.Citation10 Instead of taking one high-stakes test, students would take several tests that together compose their final grade. Repetition of the content in sequential tests will stimulate students to restudy material on which they previously performed poorly.
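
A minimal sketch of how such compensation could be operationalized (the pass mark, floor, and equal weighting below are hypothetical choices, not rules from the paper):

```python
# Compensatory grading: the course result is the mean of several subtest
# marks, so one weak mark can be offset by stronger ones; a floor prevents
# passing with a very low mark on any single subtest.
def course_result(marks, pass_mark=5.5, floor=4.0):
    mean = sum(marks) / len(marks)
    passed = mean >= pass_mark and min(marks) >= floor
    return mean, passed

print(course_result([4.5, 6.0, 7.5]))  # (6.0, True): the 4.5 is compensated
print(course_result([3.0, 7.0, 8.0]))  # (6.0, False): 3.0 is below the floor
```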

Tip 7. Cognitive psychological considerations

The aspects mentioned above all fit findings from cognitive psychology. These suggest, on the one hand, that spacing study activities benefits students’ long-term retention; this is known as the spacing effect.Citation11 Thus, keeping students from cramming before a test and having them spread their study activities benefits their knowledge retention. This is explained by the fact that students retrieve the same information repeatedly over time, which makes it easier to retrieve later on. Another important finding from cognitive psychology is the so-called testing effect, which refers to the improvement of long-term retention through being tested rather than re-studying.Citation12 So, on the other hand, testing itself induces a learning effect. Combining both effects results in cumulative assessment, in which previous material is repeated over time, steering students to study in a spaced way instead of cramming before the test.
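
The cumulative format can be made concrete with a toy blueprint: each test samples items from every block taught so far, weighting recent blocks more heavily (the weighting scheme and item counts below are assumptions for illustration, not the design studied by Kerdijk et al.):

```python
# Cumulative test blueprint: test t covers blocks 1..t, with newer blocks
# weighted more heavily, so earlier material keeps reappearing (spacing) and
# keeps being tested (testing effect). Rounding may shift totals by an item.
def cumulative_blueprint(blocks_so_far, items_per_test=40, recency_weight=2.0):
    raw = [recency_weight ** b for b in range(blocks_so_far)]
    total = sum(raw)
    return [round(items_per_test * w / total) for w in raw]

for t in range(1, 4):
    print(f"Test {t}: items per block = {cumulative_blueprint(t)}")
# Test 1: [40]; Test 2: [13, 27]; Test 3: [6, 11, 23]
```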

Tip 8. What does demotivate students?

As previously mentioned, the assessment program should be constructed to motivate students to study optimally. The value of a test and the expectations students have are key motivational factors. On the other hand, competition between tests or other study activities is demotivating: you can only do one thing at a time. Too many resit possibilities and unfair standard-setting procedures are other major demotivators.

Both the absence of ‘repair’ possibilities and performances without consequences should be prevented.Citation2

Tip 9. Preparing an assessment program

Measuring knowledge

Students should be regularly assessed in a variety of ways, in line with the formulated learning objectives and at different levels. At the end of their university training, students should have acquired an extensive amount of knowledge. Often, assessment focuses on the knowledge students are currently studying, but not on knowledge at the end level. We argue that it is important to verify students’ learning by testing their current knowledge, but also to establish at which stage of knowledge acquisition students are. To enhance learning, feedback must be provided.Citation13,14

Part of an assessment program could be a progress test, a longitudinal, systematic assessment of students’ knowledge at the end level.Citation15 Since this test is at end level and curriculum independent, students cannot study specifically for it. It serves as a thermometer and feedback instrument, allowing us to verify students’ knowledge growth and whether their amount of knowledge matches their progression through the curriculum. Integrating a progress test in an assessment program brings many benefits, especially if a consortium of different universities is created.Citation15,16 Progress testing has been shown to be a valid and reliable tool for measuring students’ knowledge,Citation17 knowledge growth,Citation18 and different types of knowledge,Citation18 and for benchmarking.Citation15,19,20 However, a progress test cannot replace other knowledge tests, since it is necessary to engage students in more regular testing focused on specific learning material.

Especially when working with large study units, end-of-block tests have been shown to be ineffective for learning, since students cram before them. Besides that, end-of-block tests do not allow for regular feedback or compensation. Such a test also tends to become the only assessment students have in a block, which makes its stakes high and makes it difficult to pass.

In contrast, a cumulative assessment format increases students’ self-study hours,Citation7 spreads their study time throughout the semester,Citation7 and decreases the stakes of the individual tests.Citation9 Besides that, it allows regular feedback through which students become aware of their knowledge gaps as well as their knowledge growth.

Type of questions

The type of questions plays an important role in students’ learning.Citation21,22 Traditionally, questions have been classified according to Bloom’s taxonomy.Citation23 Lower-order questions are items requiring students to remember and/or basically understand knowledge. Higher-order questions are items requiring students to apply, analyse, and/or evaluate.Citation24 Students who practice solely with higher-order questions have been shown to perform better on both lower-order and higher-order questions than students who practice with just lower-order questions.Citation21,22

When designing an assessment program, questions that require both lower- and higher-order cognitive processing should be used. Lower-order questions are desirable for novice students who have not yet acquired the knowledge necessary for application. In progress testing, for example, novice medical students correctly answered more lower-order questions, whereas more advanced students correctly answered more higher-order questions.Citation18 Thus, the type of questions used should be aligned with the learning objectives and the intended test difficulty.Citation25

Measuring competency

With the change of curriculum paradigm from knowledge based to competency based, new ways of assessment were developed, especially for clinical practice. The measurement of clinical competence often occurs during professional practice, through observation.Citation26 Assessing clinical practice is more challenging than assessing students’ knowledge, since clinical competence takes other aspects into account, such as communication, practical skills, and collaboration. Besides assessing students in a controlled environment, it is important to assess students in their real practice, known as workplace-based assessment.Citation27 Implementing workplace-based assessment is key to ensuring that students possess the clinical competence necessary for unsupervised practice. This implementation, however, is time-consuming and requires well-trained teachers. In addition, a reliable measurement of clinical competence requires many observations of the same students by different raters.Citation28
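
A simple statistical argument (our addition, under the simplifying assumption of independent observations) shows why many observations are needed: if single observations carry rating error with standard deviation $\sigma$, the error of the mean of $n$ observations is

$$SE_{\text{mean}} = \frac{\sigma}{\sqrt{n}},$$

so quadrupling the number of observations halves the measurement error.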

Theoretically, the change from a knowledge to a competency framework allows students to learn at their own pace, since they have to master a certain competence instead of only acquiring knowledge. This may result in changing the current milestones (for a discussion, see Norman, Norcini, and Bordage, 2014).Citation29 It is also necessary to develop new ways of assessment that allow us to change those milestones. For example, Entrustable Professional Activities (EPAs)Citation30 are one form of assessment that allows students to be assessed at their own pace.

Tip 10. Using all the information possible

In an assessment program that contains both low- and high-stakes tests, there should be no single high-stakes test, but only a high-stakes decision at the end of a semester or even a year.Citation31 For this decision, all tests should be taken into account, and in such a system compensation between assessments should be possible. This gives more information about the students, since many tests at different levels, from different perspectives, and measuring different competencies can be taken into account.
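
One plausible way to combine such heterogeneous tests into a single end-of-semester decision (an assumed approach for illustration; the paper does not prescribe a method) is to standardize each score against the cohort and take a weighted average:

```python
# Combine scores from assessments on different scales by converting each to
# a z-score against the cohort, then averaging with weights per assessment.
from statistics import mean, stdev

def z(score, cohort):
    return (score - mean(cohort)) / stdev(cohort)

def aggregate(student_scores, cohorts, weights):
    zs = [z(s, c) for s, c in zip(student_scores, cohorts)]
    return sum(w * v for w, v in zip(weights, zs)) / sum(weights)

cohort_knowledge = [50, 60, 70, 80]   # knowledge test scores (0-100)
cohort_osce = [3.0, 3.5, 4.0, 4.5]    # OSCE ratings (1-5 scale)
print(aggregate([75, 4.2], [cohort_knowledge, cohort_osce], weights=[2, 1]))
```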

Conclusions

It may be clear from the above that many choices have to be made, and these touch upon more than the issues we discussed. Additional factors that influence these choices include logistical, financial, organizational, and managerial aspects, the educational culture, the cooperation of individual and key faculty members, and many others. Whatever the outcome, compromises will be needed. Which characteristic carries more weight in the compromise will depend on the context and the purpose of the assessment.Citation31

Finally, it should be emphasized that whatever assessment program you choose, students will always adapt their behavior in order to learn as efficiently as possible. The choices you make may have undesired effects on students’ learning behavior. It is therefore essential to monitor the process and make adjustments when and where necessary.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This research was partially funded by CAPES – Brazilian Federal Agency for Support and Evaluation of Graduate Education [grant number 9568-13-1] awarded to Dario Cecilio-Fernandes.

Notes on contributors

Dario Cecilio-Fernandes is a PhD student, Center for Education Development and Research in Health Professions (CEDAR), University of Groningen and University Medical Center Groningen.

Janke Cohen-Schotanus, PhD, is an emeritus professor, Center for Education Development and Research in Health Professions (CEDAR), University of Groningen and University Medical Center Groningen.

René A. Tio, MD, PhD, is an associate professor, Center for Education Development and Research in Health Professions (CEDAR) and Department of Cardiology, University of Groningen and University Medical Center Groningen.

References

  • Wood T. Assessment not only drives learning, it may also help learning. Med Educ. 2009;43(1):5–6. doi:10.1111/med.2008.43.issue-1
  • Cohen-Schotanus J. Student assessment and examination rules. Med Teach. 1999;21(3):318–321. doi:10.1080/01421599979626
  • Custers EJ. Long-term retention of basic science knowledge: a review study. Adv Health Sci Educ Theory Pract. 2010;15(1):109–128. doi:10.1007/s10459-008-9101-y
  • Wolf LF, Smith JK. The consequence of consequence: motivation, anxiety, and test performance. Appl Meas Educ. 1995;8(3):227–242. doi:10.1207/s15324818ame0803_3
  • Napoli AR, Raymond LA. How reliable are our assessment data?: A comparison of the reliability of data produced in graded and un-graded conditions. Res High Educ. 2004;45(8):921–929. doi:10.1007/s11162-004-5954-y
  • Tuckman BW. Using tests as an incentive to motivate procrastinators to study. J Exp Educ. 1998;66(2):141–147. doi:10.1080/00220979809601400
  • Kerdijk W, Cohen-Schotanus J, Mulder BF, Muntinghe FLH, Tio RA. Cumulative versus end-of-course assessment: effects on self-study time and test performance. Med Educ. 2015;49(7):709–716. doi:10.1111/medu.12756
  • Schmidt HG, Cohen-Schotanus J, Van Der Molen HT, Splinter TAW, Bulte J, Holdrinet R, van Rossum HJM. Learning more by being taught less: a “time-for-self-study” theory explaining curricular effects on graduation rate and study duration. High Educ. 2010;60(3):287–300. doi:10.1007/s10734-009-9300-3
  • van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205–214. doi:10.3109/0142159X.2012.652239
  • Norcini J, Guille R. Combining tests and setting standards. In: Norman GR, van der Vleuten CPM, Newble DI, editors. International handbook of research in medical education. Berlin: Springer; 2002. p. 811–834.
  • Carpenter SK, Cepeda NJ, Rohrer D, Kang SHK, Pashler H. Using spacing to enhance diverse forms of learning: review of recent research and implications for instruction. Educ Psychol Rev. 2012;24(3):369–378. doi:10.1007/s10648-012-9205-z
  • Roediger HL III, Karpicke JD. Test-enhanced learning: taking memory tests improves long-term retention. Psychol Sci. 2006;17(3):249–255. doi:10.1111/j.1467-9280.2006.01693.x
  • Butler AC, Roediger HL. Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Mem Cognit. 2008;36(3):604–616. doi:10.3758/MC.36.3.604
  • Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112. doi:10.3102/003465430298487
  • Tio RA, Schutte B, Meiboom AA, Greidanus J, Dubois EA, Bremers AJ. The progress test of medicine: the Dutch experience. Perspect Med Educ. 2016;5(1):51–55. doi:10.1007/s40037-015-0237-1
  • Heeneman S, Schut S, Donkers J, van der Vleuten C, Muijtjens A. Embedding of the progress test in an assessment program designed according to the principles of programmatic assessment. Med Teach. 2017;39(1):44–52. doi:10.1080/0142159X.2016.1230183
  • Schuwirth LWT, van der Vleuten CPM. The use of progress testing. Perspect Med Educ. 2012;1(1):24–30. doi:10.1007/s40037-012-0007-2
  • Cecilio-Fernandes D, Kerdijk W, Jaarsma ADC, Tio RA. Development of cognitive processing and judgments of knowledge in medical students: analysis of progress test results. Med Teach. 2016;38(11):1125–1129. doi:10.3109/0142159X.2016.1170781
  • Cecilio-Fernandes D, Aalders WS, de Vries J, Tio RA. The impact of massed and spaced-out curriculum in oncology knowledge acquisition. J Cancer Educ. 2017. doi:10.1007/s13187-017-1190-y
  • Cecilio-Fernandes D, Aalders WS, Bremers AJ, Tio RA, de Vries J. The impact of curriculum design in the acquisition of knowledge of oncology: comparison among four medical schools. J Cancer Educ. 2017. doi:10.1007/s13187-017-1219-2
  • Redfield DL, Rousseau EW. A meta-analysis of experimental research on teacher questioning behavior. Rev Educ Res. 1981;51(2):237–245. doi:10.3102/00346543051002237
  • Jensen JL, McDaniel MA, Woodard SM, Kummer TA. Teaching to the test … or testing to teach: exams requiring higher order thinking skills encourage greater conceptual understanding. Educ Psychol Rev. 2014;26(2):307–329. doi:10.1007/s10648-013-9248-9
  • Bloom BS. Taxonomy of educational objectives: the classification of educational goals. Handbook 1: Cognitive domain. New York: Longman; 1956.
  • Crowe A, Dirks C, Wenderoth MP. Biology in Bloom: implementing Bloom’s taxonomy to enhance student learning in biology. CBE Life Sci Educ. 2008;7(4):368–381. doi:10.1187/cbe.08-05-0024
  • Biggs JB. Teaching for quality learning at university: what the student does. London: McGraw-Hill Education (UK); 2011.
  • Wass V, van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet. 2001;357(9260):945–949. doi:10.1016/S0140-6736(00)04221-5
  • Norcini J. Workplace based assessment. In: Swanwick T, editor. Understanding medical education: evidence, theory and practice. 2nd ed. West Sussex, UK: Wiley Blackwell; 2010. p. 232–245. doi:10.1002/9781444320282
  • Wilkinson JR, Crossley JG, Wragg A, Mills P, Cowan G, Wade W. Implementing workplace-based assessment across the medical specialties in the United Kingdom. Med Educ. 2008;42(4):364–373. doi:10.1111/j.1365-2923.2008.03010.x
  • Norman G, Norcini J, Bordage G. Competency-based education: milestones or millstones? J Grad Med Educ. 2014;6:1–6.
  • Ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ. 2013;5(1):157–158. doi:10.4300/JGME-D-12-00380.1
  • Van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ Theory Pract. 1996;1(1):41–67. doi:10.1007/BF00596229