
Do GP trainers use feedback in drawing up their Personal Development Plans (PDPs)? Results from a quantitative study

Pages e718-e724 | Published online: 11 Apr 2012

Abstract

Background: General practice (GP) trainers play a key role in GP trainees’ education. To stimulate the development of trainer competencies, a Personal Development Plan (PDP) can be helpful, especially when feedback is incorporated.

Aims: To investigate to what extent GP trainers use feedback in PDPs.

Methods: GP trainers were provided with three feedback sources: trainees’ rating scores, trainees’ narrative comments, and self-assessment scores. Trainers were instructed to use these while drawing up PDP goals. Quantitative analyses determined the extent to which these feedback sources were used.

Results: Of the trainers, 93% submitted a PDP. More than 75% of goals were based on the provided feedback. Multiple sources addressing the same issue increased feedback use. If two sources pointed in the same direction, feedback was used more often when one of them was a narrative comment. Ratings were lowest for GP-related Expertise and Teaching Skills, and most goals concerned these domains. Fewer goals regarded Personal Functioning, and the proportion of feedback used was also lowest for this domain.

Conclusions: GP trainers use most of their feedback and address the issues most commented upon. Narrative comments deserve a prominent place when eliciting feedback. Research into the quality with which feedback is used in PDP goals should complement these results.

Introduction

In specialty training, the major part of trainees’ education consists of workplace-based learning. Clinical teachers are the principal facilitators of learning in clinical practice (Dolmans et al. Citation2002; Teunissen et al. Citation2007). The quality of the (teaching) competencies of clinical teachers should therefore be an area of great attention for training institutes (Prideaux et al. Citation2000; Kilminster et al. Citation2007). Over the past two decades, positive developments have been seen in training institutes’ efforts in this area (Steinert et al. Citation2006; Steinert & Mann Citation2006). It is becoming common practice for faculties to require clinicians involved in the training of postgraduate trainees to certify themselves as clinical teachers. Certification usually follows satisfactory completion of didactic courses specifically designed for clinical teachers. This has been an important step forward in professionalizing workplace-based learning in the medical setting (Bandiera et al. Citation2005; McLeod et al. Citation2008). It is, however, also important to focus on the period after certification. Faculties need to ensure that clinical teachers, once certified, maintain their gained competencies and, if needed, develop them further (Hatem et al. Citation2011).

To support this, promoting the use of structured feedback programmes for clinical teachers could play an important role. Feedback is generally considered an important tool to help clinicians reflect upon their competencies. It can provide incentives for self-developmental activities, provided that the feedback is accepted and used. Several studies have reported factors that influence the acceptance of feedback in general: adding narrative comments to rating scores (Sargeant et al. Citation2005; Overeem et al. Citation2010), linking questionnaires to a desired standard of assessed job skills (Dolmans et al. Citation2002; Lockyer Citation2003; Wood et al. Citation2006), ensuring the credibility of feedback sources (Veloski et al. Citation2006; Wood et al. Citation2006; Berk Citation2009), presenting feedback in an inviting format (DeNisi & Kluger Citation2000; Atwater & Brett Citation2006), and combining written feedback with a self-assessment (Stalmeijer et al. Citation2010) all improve the acceptance of feedback.

To structure and support the actual use of feedback, an often-used tool is the Personal Development Plan (PDP) or Personal Learning Plan (Challis Citation2000; Evans et al. Citation2002). PDPs are generally part of a portfolio, an accepted method to stimulate self-reflection in performance assessment (Carraccio & Englander Citation2004; Tochel et al. Citation2009). In a PDP, a person describes his or her intended goals and indicates the timeframe in which these goals should be achieved. Systematically gathering feedback on one's competencies is generally regarded as a valuable point of departure for writing a PDP (Challis Citation2000). It makes a person aware of his or her current levels of competency, and more specifically of areas in need of improvement, making it easier to formulate relevant goals for a PDP.

In addition, for those who are interested in professionalizing their clinical teachers, a PDP provides an opportunity to monitor whether teachers actually use their feedback in planning goals for competence development. This can be accomplished by systematically comparing the content of the goals drawn up in a PDP with the content of the feedback given.

In the current study, we investigate to what extent general practice (GP) trainers use their feedback. To this end, the content of the goals in the PDPs is related to the three sources of feedback received. In addition to determining the extent to which GP trainers use the feedback they receive to set goals in PDPs, we also explore which (combination of) feedback source(s) was most used and whether trainers preferred to address feedback from specific domains.

Methods

Context

The Dutch GP speciality training is a 3-year postgraduate training. Trainees spend their entire first and third year in a GP practice under the supervision of an experienced GP who is also a certified GP trainer. In GP speciality training, GP trainers are the clinical teachers. In order to become and stay certified, GP trainers are required to attend modular trainers’ courses to update existing and learn new (teaching) skills. These skills are described in the Dutch competency profile of GP trainers (Board of General Practice, Nursing Home Medicine, and Mentally Handicapped Medicine, CHVG). In addition, once (until 2007) or twice a year they receive structured feedback in a specified format from their trainee, who fills in an evaluation form covering the four domains of the competency profile: GP-related Expertise, Teaching Skills, Facilitating GP Trainees, and Personal Functioning. The evaluation provides two types of feedback: (1) scores on rating scales and (2) narrative comments: best points and points of improvement.

Participants

Our study was carried out between 2008 and 2009 at the Speciality Training for General Practice of the Academic Medical Center—University of Amsterdam (AMC-UvA).

All trainers of a first-year GP trainee in 2008 who had at least 2 years’ experience (n = 85) were invited to participate. The 2 years’ experience was required in order to be able to provide the trainer with feedback from more than one trainee.

Study design

A prospective cohort design was used. Trainers’ evaluations by their trainees of the past 4 years were collected from their files. Furthermore, participating trainers filled in a web-based self-assessment questionnaire. For each trainer an overview was generated in which the trainees’ rating scores (feedback source 1), trainees’ narrative comments (feedback source 2) and the trainers’ self-assessment scores (feedback source 3), were summarized. Participants were provided with this overview within 2 weeks after completing the self-assessment. They were subsequently asked to write a PDP using the overview.

Instruments

The trainee evaluation form was compiled by the Dutch National Board of GP Speciality Training Institutes and is based on the Dutch competency profile of GP trainers (Board of General Practice, Nursing Home Medicine, and Mentally Handicapped Medicine, CHVG). The current version consists of 29 items (Table 1) across four domains: GP-related Expertise, Teaching Skills, Facilitating GP Trainees, and Personal Functioning. Items are scored on a 5-point Likert scale, whereby 1 stands for totally disagree, 2 for disagree, 3 for neutral, 4 for agree, and 5 for totally agree. The evaluation form ends with an open-ended question asking trainees to name three points of improvement for the trainer, as well as the trainer's three best points (narrative comments). After filling in the evaluation form, GP trainers and trainees discuss the answers.

Table 1  Mean (standard deviation) scores on trainee evaluation, trainers’ self-assessment scores (1–5) and the percentage narrative comments related to the 29 items of the questionnaires

Until 2004, an older version of the trainee questionnaire was used, which contained no narrative comments. In order to be able to analyze it together with the current version, we converted the items of the older version to the items of the current version. Some items could not be converted, resulting in a smaller number of values for these items.

The trainer self-assessment is web-based and contains the same 29 items as the trainee evaluation form but formulated from the trainers’ perspective.

Procedures

Participants were informed about this study by the first author during a modular course and were encouraged to use the SMART criteria to evaluate their goals. SMART stands for Specific, Measurable, Acceptable, Realistic, and Time-bound. The layout of the overview was demonstrated and trainers were urged to use it for writing a PDP. Writing a PDP was made mandatory for all trainers as part of their annual evaluation. All participants received a handout summarising the information given in the meeting. They were all asked to give written informed consent for using their self-assessment scores and PDPs for our study.

Participants’ personal data were gathered from the administrative database of our institute. The ethical review committee of the Dutch Society of Medical Education (NVMO) approved the study.

Analyses

To make the narrative comments and the PDP goals suitable for quantitative analysis, all narrative comments and PDP goals were classified into the 29 items covering the four domains of the trainee evaluation form and the trainer self-assessment (Tables 1 and 2). This classification was done separately by two GP staff members (Judith M. van Es (JVE) and Margreet Wieringa-de Waard (MW)). PDPs and narrative comments were anonymized. Results were compared and, if not congruent, discussed until consensus was reached. Some narrative comments could not be categorized under any of the 29 items. These comments all concerned the personal functioning of the trainer, for example: “Should be a bit more relaxed” or “Should show a bit more self-confidence”. They were classified into item 28 of the domain Personal Functioning (“Is open to feedback and acts upon it”), thereby broadening this item to general personal functioning and leaving item 29 of that domain (“Is enthusiastic about training GP trainees”) as a more specific aspect of personal functioning.

Table 2  The number of goals referring to an item and the number (percentages) of goals that were based on feedback (trainee's scores, self-assessment scores or narrative comments)

Because we were interested in the items that need improvement, we included in our analyses only those items on the trainees’ feedback questionnaire that received less favourable feedback. This was operationalized as including the items that received a score lower than four from at least one trainee. For the self-assessment questionnaire, we used the same cut-off of a score lower than four. We analyzed only the narrative comments that concerned points of improvement.
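The item-selection rule above can be sketched in a few lines. This is a minimal illustration under assumed data structures, not the authors' actual analysis (which was done in SPSS); the item numbers and scores are hypothetical.

```python
# Illustrative sketch of the item-selection rule described above.
# trainee_scores maps an item number (1-29) to the list of scores that
# item received, one per trainee evaluation; self_scores maps an item
# number to the trainer's single self-assessment score. Hypothetical data.

CUTOFF = 4  # items scoring below 4 count as "less favourable feedback"

def items_needing_improvement(trainee_scores, self_scores):
    """Return the sorted item numbers flagged for analysis."""
    flagged = set()
    # Trainee ratings: include an item if at least one trainee
    # scored it lower than the cut-off.
    for item, scores in trainee_scores.items():
        if any(s < CUTOFF for s in scores):
            flagged.add(item)
    # Self-assessment: same cut-off, one score per item.
    for item, score in self_scores.items():
        if score < CUTOFF:
            flagged.add(item)
    return sorted(flagged)

trainee = {5: [4, 3, 5], 14: [4, 4], 28: [5, 2]}
self_assessment = {5: 4, 14: 3, 28: 5}
print(items_needing_improvement(trainee, self_assessment))  # [5, 14, 28]
```

Note how item 14 is flagged by the self-assessment alone: a single source scoring below the cut-off is enough for inclusion.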

Descriptive data analysis was performed with SPSS 16.0®.

Results

Response rate

Of the 85 trainers invited, 73 participated in the study. Of the 12 who did not, two were excluded due to conflicting roles (being both GP trainer and staff member), three retired before the end of the study, six were not available during the study period, and one could not participate due to logistical problems. Of the 73 who agreed to participate, five ultimately did not submit a PDP, despite several reminders by both mail and telephone. All five mentioned lack of time as the reason for non-compliance.

Characteristics of participants

Of the 68 participants, 21% were female, their average age was 50 (SD 6.5), and 19% worked in a single-handed practice. On average, they had been active as trainers for 4.2 years.

Characteristics of feedback and PDP

Trainers received on average 4.2 evaluations (range 1–8). There were several reasons for this wide range. Until 2007, trainees were only required to evaluate their trainers once a year. A considerable number of trainers (22%) had been active as trainers for less than 3 years, and not every trainer had exactly one trainee a year due to circumstances such as maternity leave of trainees, sick leave of trainees or trainers, or sabbaticals of trainers.

Trainees’ mean rating scores on most items were higher than trainers’ self-assessment scores (Table 1). The average rating score given by trainees was never 3 or lower (which would have indicated a neutral or less favourable judgement). “Using the promoted communication guidelines” scored lowest (3.87) and “Is adequately obtainable and available to trainee” highest (4.70). Likewise, none of the trainers’ average scores was below 3. “Demonstrating how to keep up to date with relevant medical literature” had the lowest score (3.32) and “Has an adequate Internet connection” the highest (4.69).

At the domain level, trainees’ scores averaged 4.23 (GP-related Expertise), 4.32 (Teaching Skills), 4.45 (Facilitating GP Trainees), and 4.55 (Personal Functioning). The trainers’ self-assessment scores were 3.88, 3.92, 4.16, and 4.22, respectively (data not shown).

The most frequently mentioned points of improvement concerned item 28 “Is open to feedback and acts upon it”, item 14 “Giving adequate feedback”, and item 5 “Having an orderly practice management”. Item 28, however, encompassed more narrative comments regarding personal functioning than the item itself describes (see Analyses section).

The amount of feedback received differed considerably per domain (Table 3). By far the most feedback addressed Teaching Skills, whereas the least was given on Personal Functioning. This difference remains when the number of items in each domain is taken into consideration.

Table 3  Received feedback per domain, arranged by number of feedback (FB) sources

On average, a PDP contained 2.4 goals. The domain GP-related Expertise accounted for 36% (n = 75) of the goals, Teaching Skills for 45% (n = 94), Facilitating GP Trainees for 15% (n = 32), and Personal Functioning for 4% (n = 8) (Table 2).

Results of the preparatory classification of narrative comments and PDP goals

As explained in the Analyses section, to make narrative comments and PDP goals suitable for quantitative analysis, they were classified into the competencies described in the 29 items of the questionnaires. It appeared that PDP goals and narrative comments frequently embodied more aspects than a single item covers. As a result, the 333 narrative comments given were related 437 times to one of the 29 items (on average, 1 point of improvement was related to 1.3 items) (Table 1). The 160 goals were related 209 times to one of the 29 items (on average, 1 goal was related to 1.3 items) (Table 2).
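The reported averages follow directly from these counts. A brief sketch, using the totals from the paragraph above, shows how the multi-label classification inflates the number of item relations relative to the number of texts:

```python
# Each narrative comment or PDP goal may map to more than one of the
# 29 items, so the item-relation count exceeds the text count.
# Totals are taken from the Results section above.

n_comments, comment_relations = 333, 437
n_goals, goal_relations = 160, 209

print(round(comment_relations / n_comments, 1))  # 1.3 items per comment
print(round(goal_relations / n_goals, 1))        # 1.3 items per goal
```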

Use of feedback to define goals

Three-quarters of the defined goals (76.1%) in the PDPs were related to items on which feedback was received (Table 2). For the four domains this was 61.3%, 86.2%, 81.3%, and 75.0%, respectively. Since the process of classifying narrative comments under the domain Personal Functioning (see Analyses section) differed slightly from that used for the other domains, we also calculated feedback use disregarding this domain. The results were similar.

Impact of feedback combinations on feedback use

The proportion of used feedback was on average higher when it was based on more than one feedback source (Table 3). It was lowest when the feedback was based only on the trainees’ scores. If feedback was based on two sources, it was used most when narrative comments were among them. The average proportion of used feedback was highest for GP-related Expertise and, again, lowest for Personal Functioning.

Discussion

Providing our GP trainers with an overview of written feedback from their trainees, together with their own self-assessment scores, inspired them to base more than three-quarters of the goals in their PDPs on these sources. For those who have an interest in professionalizing their staff of clinical teachers, this is encouraging. It supports our hypothesis that structured feedback programmes, followed by drawing up PDPs, are a good method to stimulate clinical teachers to keep themselves competent, and to monitor the goals they define for themselves in relation to the feedback and self-assessment received.

We also found that if feedback scores, narrative comments, and self-assessment scores addressed the same issue, the chance of that issue being addressed in a PDP increased. This finding is congruent with other studies (Seifert et al. Citation2003; Sargeant et al. Citation2008; Stalmeijer et al. Citation2010). The three sources did not seem to have the same impact, though. If the same feedback came from two sources, the chance of it being used was larger when one of them was a narrative comment. This quantitatively supports the qualitative findings of Overeem et al., who found that consultants consider narrative comments helpful in understanding and accepting feedback, which in turn influences feedback use (Overeem et al. Citation2010). We hypothesize that the strength of narrative comments lies in the fact that they usually contain rather concrete descriptions of (un)desirable behaviour. This makes it easier for the receiver to understand which behaviour is at stake and what should be done to improve it. The combination of trainer self-assessment scores and narrative comments appeared to have the most impact.

Most goals concerned the two domains that received the most feedback: GP-related Expertise and Teaching Skills. Nevertheless, a fairly large number of goals (23.9%) had no relationship with any of the three feedback sources. This was most pronounced for GP-related Expertise. The preference for goals in this domain, even in the absence of feedback, could be due to the double role of GP trainers: as both professionals and teachers, they benefit in both roles from goals that increase medical expertise.

Little feedback was given on the domain Personal Functioning. Since only two items represented this domain, this may come as no surprise. However, the feedback given per item and the proportion of used feedback were low too. This may be a consequence of the fact that in this domain most of the feedback came from one source only. Another explanation might be that comments on Personal Functioning were often directed at character rather than task-related aspects, for example: “should not ramble on about things”, “should be less of a perfectionist”, or “should be more patient”. This type of personalized feedback does not give clear clues directed at work-related tasks and, according to DeNisi and Kluger, can reduce feedback receivers’ motivation and distract the focus away from task improvement (DeNisi & Kluger Citation2000).

Our data showed that both GP trainers and trainees seem very satisfied with GP trainers’ job skills. This may well be justified, but self-assessment scores are known to have limited validity, and trainees’ scores may be somewhat inflated by socially desirable responses due to the dependent nature of their relationship with the trainer they are assessing (Davis et al. Citation2006; Lockyer et al. Citation2007; Colthart et al. Citation2008; Sargeant et al. Citation2008).

Limitations

We were unable to provide every GP trainer with the same amount of trainee feedback, due to changing policies concerning trainer evaluation over the years, maternity leave and illness of trainees, and in some cases administrative failures. Nevertheless, all but three trainers received narrative comments, and we think that this, together with the trainees’ scores, provided enough input for writing a PDP.

Another limitation concerns the difficulty of stimulating trainers to draw up a PDP. Considerable effort was needed to promote compliance, and it is questionable whether this is feasible in a non-research setting. More information about the effectiveness of PDPs in competency development is needed to know whether these efforts are worthwhile.

Recommendations for practice and further research

Our research highlights the importance of using different sources in feedback programmes. Moreover, when, as in our setting, few different feedback-givers are available, eliciting feedback by different methods, both rating scores and narrative comments, is recommended. The latter can make the rating scores more concrete and therefore easier for trainers to convert into goals. We therefore suggest that further research study the effectiveness of feedback elicited not only by asking for points of improvement in a general manner, but also by making an effort to elicit more specific narrative comments on individual items or domains. Another suggestion is to explore ways of facilitating the interpretation of feedback for its receiver (e.g., by giving ranges of colleagues’ scores, or by facilitating opportunities to obtain oral clarification from the feedback-givers). This can help to assimilate feedback that would perhaps otherwise go unnoticed (Sargeant et al. Citation2008; Archer Citation2010).

Special attention needs to be paid to interventions that stimulate the use of feedback in the domains Facilitating GP Trainees and Personal Functioning. Regarding the latter, more feedback should be gathered, for example by adding more items to this domain.

In this study we focused on the quantitative aspects of feedback use. It was a first step in exploring the use of feedback by GP trainers. We neither studied the quality of the feedback, nor the quality of the goals defined. Further research into this issue should be conducted to complement these results. Last, but not least, we would recommend further research into the relationship between feedback use and the ultimate achievement of goals.

Funding

This research was funded by the Dutch Foundation for the General Practitioner Speciality training.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

References

  • Archer JC. State of the science in health professional education: Effective feedback. Med Educ 2010; 44: 101–108
  • Atwater L, Brett J. Feedback format: Does it influence manager's reaction to feedback? J Occup Organ Psychol 2006; 79: 517–532
  • Bandiera G, Lee S, Foote J. Faculty perceptions and practice impact of a faculty development workshop on emergency medicine teaching. CJEM 2005; 7: 321–327
  • Berk RA. Using the 360 degrees multisource feedback model to evaluate teaching and professionalism. Med Teach 2009; 31: 1073–1080
  • Carraccio C, Englander R. Evaluating competence using a portfolio: A literature review and web-based application to the ACGME competencies. Teach Learn Med 2004; 16: 381–387
  • Challis M. AMEE Medical Education Guide No. 19: Personal learning Plans. Med Teach 2000; 22: 225–236
  • Colthart I, Bagnall G, Evans A, Allbutt H, Haig A, Illing J, McKinstry B. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide no. 10. Med Teach 2008; 30: 124–145
  • Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA 2006; 296: 1094–1102
  • DeNisi AS, Kluger AN. Feedback effectiveness: Can 360-degree appraisal be improved? Acad Manage Exec 2000; 14: 129–139
  • Dolmans DH, Wolfhagen HA, Essed GG, Scherpbier AJ, van der Vleuten CP. Students' perceptions of relationships between some educational variables in the out-patient setting. Med Educ 2002; 36: 735–741
  • Evans A, Ali S, Singleton C, Nolan P, Bahrami J. The effectiveness of personal education plans in continuing professional development: An evaluation. Med Teach 2002; 24: 79–84
  • Hatem CJ, Searle NS, Gunderman R, Krane NK, Perkowski L, Schutze GE, Steinert Y. The educational attributes and responsibilities of effective medical educators. Acad Med 2011; 86: 474–480
  • Kilminster S, Cottrell D, Grant J, Jolly B. AMEE Guide No. 27: Effective educational and clinical supervision. Med Teach 2007; 29: 2–19
  • Lockyer J. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof 2003; 23: 4–12
  • Lockyer JM, Violato C, Fidler HM. What multisource feedback factors influence physician self-assessments? A five-year longitudinal study. Acad Med 2007; 82: S77–S80
  • McLeod PJ, Brawer J, Steinert Y, Chalk C, McLeod A. A pilot study designed to acquaint medical educators with basic pedagogic principles. Med Teach 2008; 30: 92–93
  • Overeem K, Lombarts MJ, Arah OA, Klazinga NS, Grol RP, Wollersheim HC. Three methods of multi-source feedback compared: A plea for narrative comments and coworkers’ perspectives. Med Teach 2010; 32: 141–147
  • Prideaux D, Alexander H, Bower A, Dacre J, Haist S, Jolly B, Norcini J, Roberts T, Rothman A, Rowe R, Tallett S. Clinical teaching: Maintaining an educational role for doctors in the new health care environment. Med Educ 2000; 34: 820–826
  • Sargeant J, Mann K, Ferrier S. Exploring family physicians’ reactions to multisource feedback: Perceptions of credibility and usefulness. Med Educ 2005; 39: 497–504
  • Sargeant J, Mann K, van der Vleuten C, Metsemakers J. “Directed” self-assessment: Practice and feedback within a social context. J Contin Educ Health Prof 2008; 28: 47–54
  • Seifert CF, Yukl G, McDonald RA. Effects of multisource feedback and a feedback facilitator on the influence behavior of managers toward subordinates. J Appl Psychol 2003; 88: 561–569
  • Stalmeijer RE, Dolmans DH, Wolfhagen IH, Peters WG, van Coppenolle L, Scherpbier AJ. Combined student ratings and self-assessment provide useful feedback for clinical teachers. Adv Health Sci Educ Theory Pract 2010; 15: 315–328
  • Steinert Y, Mann KV. Faculty development: Principles and practices. J Vet Med Educ 2006; 33: 317–324
  • Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, Prideaux D. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach 2006; 28: 497–526
  • Teunissen PW, Scheele F, Scherpbier AJ, van der Vleuten CP, Boor K, van Luijk SJ, van Diemen-Steenvoorde JA. How residents learn: Qualitative evidence for the pivotal role of clinical activities. Med Educ 2007; 41: 763–770
  • Tochel C, Haig A, Hesketh A, Cadzow A, et al. The effectiveness of portfolios for post-graduate assessment and education: BEME Guide No. 12. Med Teach 2009; 31: 299–318
  • Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach 2006; 28: 117–128
  • Wood L, Hassell A, Whitehouse A, Bullock A, Wall D. A literature review of multi-source feedback systems within and without health services, leading to 10 tips for their successful design. Med Teach 2006; 28: e185–e191
