Abstract
This paper reports a follow-on project that assessed a series of portfolios assembled by a cohort of participants attending a course for prospective general practice trainers. In an attempt to enhance reliability, participants were offered a framework for defining and addressing problems based on a reflective practice model. The reliability of the judgements made by a panel of assessors about individual 'components', together with an overall global judgement about performance, was studied. The reliability of individual assessors' judgements (i.e. their consistency) was moderate, but inter-rater reliability did not reach a level that could support a safe summative judgement. Although the framework offered a possible structure for demonstrating reflective processes, the levels of reliability reached were similar to those in the earlier work and in subjective assessments generally. This perhaps reflected the individuality of the personal agendas of both the assessed and the assessors, and variations in portfolio structure and content; even agreement among the assessors about evidence that the framework had been used was poor. Suggestions for future approaches are made. The conclusion remains that while portfolios may be valuable as resources for learning, as assessment tools they should be treated as problematic.