Letter to the Editor

Evaluating assessment programmes using programme evaluation models

Dear Sir

We read with great interest the insightful article entitled “Twelve Tips for programmatic assessment” by van der Vleuten et al. (Citation2015). The authors have addressed the important issue of “programmatic assessment” and have eloquently provided concrete tips for its implementation. Development of a comprehensive competency assessment programme, rather than a focus on individual assessment tools, is an often-forgotten aspect of student assessment.

Among their suggestions, in Tip 9, the authors point out that assessment systems should be evaluated, and they offer practical recommendations for evaluating the learning effect of the assessment programme. We would like to take this opportunity to elaborate on this issue by discussing the theoretical aspects of evaluating assessment programmes.

Assessment programmes, like educational programmes, comprise a series of components and activities designed in a specific context to attain intended goals. As a result, it seems advisable to employ programme evaluation theories and models to organise evaluation activities related to assessment programmes. Although most programme evaluation models have been properly applied in the context of educational programmes, their utilisation in assessment programmes has not yet been reported. The few studies that have addressed the evaluation of an assessment programme, such as that by Bok et al. (Citation2013), have not reported using any specific evaluation model.

A variety of well-established evaluation models exist in the context of educational programmes, from which assessment programme evaluators can choose to enrich their work. Quasi-experimental models and the well-known Kirkpatrick model are suitable if outcome achievement is the main concern. Goal-free evaluation models, on the other hand, focus on actual rather than predetermined outcomes, which in turn helps evaluators to disclose an assessment programme’s untoward effects. The Context, Input, Process, and Product (CIPP) evaluation model and the logic model can assist planners and evaluators through all phases of an assessment programme and can even be employed from scratch, while the programme is still being developed. The CIPP model is especially helpful because it pays special attention to the complex context in which assessment programmes are implemented.

In sum, educators can choose from a variety of programme evaluation models to develop, monitor, and evaluate assessment programmes in the medical education context. However, further studies are required to determine which of these models works best in the setting of assessment programme evaluation.

Declaration of interest: The authors report no conflicts of interest.

References

  • Bok HG, Teunissen PW, Favier RP, Rietbroek NJ, Theyse LF, Brommer H, Haarhuis JC, van Beukelen P, van der Vleuten CP, Jaarsma DA. 2013. Programmatic assessment of competency-based workplace learning: When theory meets practice. BMC Med Educ 13:123.
  • van der Vleuten CP, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. 2015. Twelve Tips for programmatic assessment. Med Teach 37(7). doi:10.3109/0142159X.2014.973388.
