Research Article

SMART, SMARTER, SMARTEST: The influence of peer groups compared to practice visits on the quality of action plans

Pages e582-e588 | Published online: 11 Apr 2012

Abstract

Background: It has been reported that appraisal by peers can be effective.

Aim: To investigate whether feedback from a peer group (PG) compared to that by a staff member during a practice visit (PV) is as effective in improving the quality of action plans.

Methods: Seventy-three general practitioner (GP) trainers randomized into either a PG or PV were instructed to draw up action plans using the SMART criteria to realize the goals set in their personal development plans (PDPs). To improve action plans, feedback was given in either PG or PV. Quality of baseline and follow-up action plans, operationalized as the SMARTness with which plans were formulated, was assessed using a study-specific instrument.

Results: Response rate for submitting both baseline and follow-up action plans was 89% in the PG versus 79% in the PV. It was feasible to determine scores on all SMART criteria, except for the criterion ‘Acceptability’. Significant improvement was made on the remaining four criteria irrespective of the feedback setting.

Conclusions: PGs cost less and seem equally effective in improving the SMARTness of the action plans. Moreover, they also seem to stimulate GP trainers more to write a PDP. Therefore, they may be favoured over PVs.

Introduction

Self-directed learning is a process in which individuals take the initiative to diagnose their learning needs, design learning experiences, locate resources and evaluate their progress (Knowles Citation1975). The concept has been applied by many educators for the past three decades. To promote the self-directed learning processes and monitor their results, portfolios were introduced. These have proven to be effective in making learners more responsible for their own learning (Tochel et al. Citation2009; van Tartwijk & Driessen Citation2009).

One item very often included in a portfolio is the personal development plan (PDP), also referred to as a personal learning plan (Challis Citation2000). PDPs are usually the result of a structured formative assessment process that involves collecting and reviewing external feedback, as well as guided self-reflection activities. Self-reflection combined with external feedback enables learners to define areas in which further (educational) development is needed and to translate these needs into learning goals (Stalmeijer et al. Citation2009).

Commonly, in a PDP, the approach to achieve developmental goals is formulated by way of action plans. A common tool used to help improve the quality of an action plan in profit- and non-profit organizations like the educational setting is the SMART acronym (Newby Citation2003). SMART stands for five criteria: Specific, Measurable, Acceptable, Realistic and Time-bound (sometimes slightly different terms are used for this acronym: attainable instead of acceptable, for instance). These criteria are meant to help define goals and formulate action plans clearly and enhance their effectiveness (Newby Citation2003). As such, the SMARTness of an action plan can be seen as an aspect of an action plan's quality.

In our context, General Practitioner (GP) speciality training, the clinical teachers, the GP trainers, play a central role in training future GPs (Schol et al. Citation2005; Miller & Archer Citation2010). GP trainers (experienced GPs) are required to follow modular GP trainers’ courses to keep their knowledge and teaching skills up to date. To monitor and manage the performance of individual trainers, practice visits (PVs) are performed by the teaching staff (GPs and behavioural scientists) of the GP speciality training.

Unlike the standardized procedure that is generally followed for PVs as part of the appraisal of GP practices, our educational PVs to GP trainers were, until recently, not standardized. O’Brien et al. concluded that standardized (educational) PVs that use specific tools to assess developmental needs have an effect on professional practice, and Finlay found that GPs consider them to enhance their learning and improve their practice (O’Brien et al. Citation2007; Finlay & McLaren Citation2009).

In line with these findings, we recently implemented a format for our PVs that standardizes both the interview during the visit and the subsequent reporting of this interview. We also asked our GP trainers to write a PDP before receiving a PV, thereby introducing a specific tool that gives us better insight into their developmental needs. We helped them to define goals by providing them with feedback consisting of their scores on a self-assessment questionnaire and their scores on the evaluations by their GP trainees of the past 4 years (van Es et al. Citation2011). GP trainers used the feedback for more than 75% of the goals they defined.

In this study, we focus on GP trainers’ goal- and action-plan-writing abilities by investigating how SMARTly the action plans for the goals in their PDPs are defined: the SMARTness of the action plans. We assume that the SMARTer an action plan is defined, the greater the chance of achieving the goal; ultimately, this should lead to GP trainers managing their developmental needs more adequately.

PVs, however, take up a lot of staff time. Practices are located within a 100 km radius of the GP speciality training institute where the teaching staff is based. As a consequence, in addition to the preparation time and time for writing reports on the PVs, much time is spent travelling.

In the literature on GP practice appraisal, it has been reported that appraisal by peers can be just as effective as appraisal by trained non-peers, and that feedback is considered more worthwhile when it is given by peers (van den Hombergh et al. Citation1999; Bowie et al. Citation2009). Obviously, peer groups (PGs) take considerably less staff time: they address multiple GP trainers at once and reduce travel time. As such, they could be a more cost-effective way to manage GP trainers’ individual developmental needs than PVs.

Considering the above, we will investigate whether the effect of feedback on the SMARTness of action plans from PGs equals that of feedback in a PV.

Therefore, we need an instrument to assess the SMARTness of an action plan. Since, to the best of our knowledge, no such instrument exists, we will first develop one.

Methods

Participants

This study was conducted in 2008–2009 at the speciality training for general practice of the Academic Medical Center, University of Amsterdam. All GP trainers of first-year GP trainees in 2008 with at least 2 years’ experience as a GP trainer (n = 85) were invited to participate in the study.

GP speciality training is a 3-year post-graduate training programme. In years 1 and 3, trainees are allocated to a GP trainer for the entire year. They also attend modular courses at the GP training institute 1 day a week. In the second year, they work through rotations in clinical settings.

Study design and procedures

We performed a randomized trial. The SMARTness of the action plans was assessed before intervention (baseline) and after intervention (follow-up). GP trainers were randomized into either of the two intervention groups: PG meeting or PV. Subsequently, to help GP trainers choose relevant goals for their PDPs, we provided them with an overview of written competency based feedback. This feedback consisted of self-assessment scores and the rating scores and narrative comments from the evaluations of the trainees who had been allocated to their practice in the past 4 years (van Es et al. Citation2011). Trainers were asked to write a PDP (baseline) using the feedback provided. We did not limit the number of goals that could be addressed by action plans in a PDP. At the same time, a date was set for either participation in a PG or PV. During the PG meetings or PVs, they received feedback on the SMARTness of their PDPs. For each GP trainer, the first goal and accompanying action plan was used for this study. They were subsequently asked to review and adjust their action plans using the feedback they received. Both the initial (baseline) and the definitive (follow-up) action plans were sent to a research assistant to be anonymized before their SMARTness was scored by the first (Judy M. van Es (JVE)) and the last author (Margreet Wieringa-de Waard (MW)).

PGs comprised between three and five GP trainers and were presided over by teaching staff of the GP training institute. GP trainers could sign up for different dates and groups were compiled accordingly. PG meetings were held at the GP training institute, and some (to reduce travel time for GP trainers) at the GP practice of one of the participating GP trainers. GP trainers who worked at the same practice were not allocated to the same PG. PG meetings were scheduled to last 3 h.

PVs were also carried out by teaching staff and lasted 1.5 h. For both PGs and PVs, we ensured that teaching staff who had other professional or private relationships with a GP trainer were not assigned to that trainer. Two weeks prior to the PG meetings or PVs, the presiding teaching staff received the feedback overviews and baseline PDPs for preparation purposes.

Instruction of participants

All participants were informed about this study during a meeting on a modular-course day. At this meeting, JVE explained the principles of writing a PDP. Participants were shown the layout of the overview in which their feedback would be collected. They were urged to use the feedback in drawing up their PDP. Participants were further instructed to structure their goals in action plans using the subheadings Aim, Action, Result and Time span (AART) and they were encouraged to use the SMART criteria to evaluate their action plans.

Participation was part of the regular modular course and therefore obligatory. Each participant gave written informed consent for using his or her PDP for this study. GP trainers who did not submit PDPs were reminded to do so several times by both e-mail and telephone.

Data on the personal characteristics of participants were gathered from the administrative database of our institute. The ethics committee of the Dutch Society of Medical Education approved this study and exempted it from judgement by the medical ethical committee since it involves no patients.

Instruction of teaching staff

All teaching staff at our GP training institute are trained to run the group sessions and modular courses that GP trainers have to attend. Topics covered in these modular courses include GP-related expertise and teaching skills.

Teaching staff received training on how to conduct PVs and preside over the PGs during two separate 3-h sessions held for this study. They were trained to use a standardized agenda that specified the time allotted to each part of the programme; we developed this agenda and a format for the minutes of the PVs and PG meetings. In addition, the teaching staff were taught how to give (PVs) or trigger (PGs) feedback on how SMARTly action plans were defined.

The teaching staff prepared the PG meeting or PV by studying the feedback overviews and PDPs of the participating GP trainers.

Instruments

A five-point Likert scale was used to score each SMART criterion. Anchors were defined for the highest (five) and lowest (one) scores. For the criterion Measurable, the score of three was also defined. In addition, questions were formulated to aid scoring each criterion.

Subsequently, the instrument was tested by JVE and MW, both of whom are experienced GPs with long-term involvement in GP speciality training. They independently scored the goals of 10 randomly selected PDPs using the scoring instrument. If comparison of the scores showed differences greater than one point, or the authors could not decide on a score, the matter was discussed. If the questions and anchors appeared to be insufficient for capturing the differentiation in the material to be scored, then they were adjusted. The pilot made it clear that the action plans gave too little information on the criterion Acceptability (A). Therefore, this criterion was not included in the final instrument. The 10 action plans used in the pilot were scored again using the definitive instrument. In addition to scoring SMARTness, the use of the AART criteria was scored with a yes/no tick box. See Appendix for the final version of the scoring instrument used.
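The double-scoring procedure described above can be sketched in code. This is a hypothetical illustration only (the function and variable names are ours, and the scores are invented): two raters independently score each anonymized action plan on a five-point Likert scale per SMART criterion, and any pair of scores differing by more than one point is flagged for discussion, as in the pilot.

```python
# Hypothetical sketch of the double-scoring procedure: two raters score
# each action plan on a 1-5 Likert scale per SMART criterion, and any
# difference greater than one point is flagged for discussion.
# Criterion names follow the final instrument ('Acceptability' dropped).

CRITERIA = ["Specific", "Measurable", "Realistic", "Time-bound"]

def flag_discrepancies(scores_rater1, scores_rater2, max_diff=1):
    """Return the criteria on which the two raters' scores differ by
    more than `max_diff` points, i.e. those needing discussion."""
    flagged = []
    for criterion in CRITERIA:
        if abs(scores_rater1[criterion] - scores_rater2[criterion]) > max_diff:
            flagged.append(criterion)
    return flagged

# Invented example: one anonymized action plan scored by both raters
rater1 = {"Specific": 4, "Measurable": 2, "Realistic": 5, "Time-bound": 3}
rater2 = {"Specific": 3, "Measurable": 4, "Realistic": 5, "Time-bound": 3}

print(flag_discrepancies(rater1, rater2))  # → ['Measurable']
```

Only Measurable is flagged here: the one-point difference on Specific stays within the agreed tolerance, mirroring the pilot rule that differences greater than one point were discussed.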

Analyses

Dichotomous data are presented as percentages. Scores on a Likert scale are presented as averages. We used multivariate analyses for repeated measures to test the effect of the intervention (PG or PV) on the improvement in scores (baseline compared to follow-up) on the SMART criteria (Specific, Measurable, Realistic, Time-bound). In cases where the effects in this overall analysis were significant, post hoc analyses were performed to test the effects per SMART criterion.

Results

Response rate

We originally invited all 85 trainers who were active as trainers of first-year GP trainees. They were randomly assigned to either the PV group or the PG (Figure 1). Two were excluded because of their conflicting roles as both trainers and teaching staff members of the GP training institute at the time of our study; 10 others could not participate due to retirement before the end of the study, unavailability for the whole study period or logistical problems. A total of 25 trainers from the PV group and 32 trainers from the PG submitted baseline and follow-up PDPs; these are included in the analyses. GP trainers who did not submit a PDP gave lack of time as their reason for not doing so.

Figure 1. Flow chart of study group.


Personal characteristics

The average age of trainers was 51 years in the PV group and 49 in the PG (Table 1). Both groups consisted of more male than female trainers (PG 63%; PV group 68%). The proportion of part-time trainers in the PG was considerably higher than in the PV group (78% versus 52%).

Table 1.  Personal characteristics of GP trainers and their practices

SMARTness scores

The majority of GP trainers (>80%) defined their goals using the subheadings of AART (Table 2). This was the case in both the PV group and the PG, although the baseline scores were slightly higher in the latter. Use of the subheadings increased in both groups after the intervention, except for Result in the PV group; Time span in the PG decreased from 100% at baseline to 96.9%.

Table 2.  Percentage of PDPs in which subheadings of AART were used

The SMARTness scores improved from baseline to follow-up irrespective of the intervention (F(1, 56) = 56.41, p < 0.001) (Table 3). Although not all SMART criteria benefited equally from the intervention (F(1, 54) = 6.50, p < 0.001), post hoc analyses revealed that the improvement in scores was significant for all SMART criteria: Specific (F(1, 56) = 69.70, p < 0.001), Measurable (F(1, 56) = 21.97, p < 0.001), Realistic (F(1, 56) = 39.8, p < 0.001) and Time-bound (F(1, 56) = 19.74, p < 0.001). Overall, scores in the PG were higher than in the PV group (F(1, 56) = 5.4, p = 0.024).

Table 3.  Mean SMARTness scores

Discussion

Discussing the SMARTness of the action plans for the goals defined in a PDP improves that SMARTness. This is congruent with the literature on the positive effects of training the goal-setting abilities of case managers working in rehabilitation and mental healthcare (Clarke et al. Citation2009; Marsland & Bowman Citation2010).

Our study also shows that this improvement occurs after both a PV and participation in a PG: improvement in scores was seen for both modes of intervention, with no significant differences between the two. This is encouraging for any educational setting addressing the quality and development of its clinical teachers, since resources and time are usually limited and therefore have to be used as efficiently as possible. The cost-effectiveness of PGs may well prompt a change towards the increased use of peers in appraisal, but there are further potential benefits. In PGs, participants benefit from discussing the feedback received by the other participants too; in addition, feedback from peers seems to be highly appreciated (Bowie et al. Citation2009) and leads to better practice outcomes than feedback from non-physician observers (van den Hombergh et al. Citation1999).

There were also a number of remarkable results. The baseline SMARTness scores were fairly high. We could argue that this was because we instructed trainers to use the AART format, which is also used by GP trainees in their portfolios, so GP trainers were already familiar with it. Surprisingly, and despite randomization, baseline SMARTness scores were higher in the PG than in the PV group. Perhaps GP trainers pay more attention to their action plans in anticipation of feedback from peers than in anticipation of feedback during a PV from a single teaching staff member. Another explanation may be that in the PG, more GP trainers (50% versus 40%) had already participated in the voluntary GP practice appraisal conducted by the Dutch Society of General Practitioners (NHG, Nederlands Huisartsen Genootschap), for which practice improvement plans are required that also use the SMART criteria. Interestingly, the percentage of participants who submitted their baseline PDPs was also higher in the PG (89% versus 68%). It could be that the anticipation of meeting peers stimulates GP trainers to prepare themselves better. However, this stimulating effect did not persist in finalizing the follow-up PDPs after the PGs (90% versus 91%).

Another interesting finding is that the SMART criterion Specificity seemed to benefit most from the interventions. Based on our observations during this study, we hypothesize that this is the criterion on which most feedback was given both in the PGs and during the PVs.

We find it striking that it appears to be possible to anchor each SMART criterion in such a way that change in SMARTness can be measured. As far as we know, there are no previous publications on this subject.

Limitations

There were a number of limitations to our study. First, much effort was required to get GP trainers to submit their PDPs. Since it was part of a research project, however, we were able to invest considerably in compliance. Doing so in a non-research setting may well put too much strain upon the resources available. On the other hand, going through the PDP cycle several times will mean that GP trainers get used to it – experience it as less time consuming – and eventually may appreciate it more.

Another limitation is that we did not use a control group. We therefore cannot be sure whether the improvement in the SMARTness of the action plans was caused by the intervention or merely by the fact that GP trainers revised their goals.

The small number of participants is a limitation, too. Unfortunately, this burdens many research projects in medical education. Multicentre trials are needed to achieve larger numbers of participants, but this inevitably leads to difficulties in guaranteeing similar conditions at all participating centres.

Future research

Our presumption that ‘the SMARTer an action plan is defined, the likelier its successful realization will be’ has not been tested by us or, to our knowledge, by anyone else. It is the conclusion of a logical train of thought, but if proven wrong it would undermine the whole idea of the effectiveness of using the SMART criteria, or, for that matter, of any other intervention to improve goal-setting abilities. So it remains an assumption, not a fact. In future research, it would be a challenge to find evidence in its favour.

In this study, we looked at the quality of action plans from the methodological point of view: Are they defined SMARTly? In a previous article, we described the relationship between the content of the goals and written competency based feedback received by GP trainers (van Es et al. Citation2011). We have not investigated the quality of the content of the plans. It would be interesting to investigate the influence of PGs versus PVs regarding the quality of the content of the action plans.

Acknowledgements

The authors thank all GP trainers of the Academic Medical Center, University of Amsterdam, who participated in this study. They also thank Machteld IJff for her support in data collection.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

References

  • Bowie P, Cameron N, Staples I, McMillan R, McKay J, Lough M. Verifying appraisal evidence using feedback from trained peers: Views and experiences of Scottish GP appraisers. Br J Gen Pract 2009; 59: 484–489
  • Challis M. AMEE medical education guide no. 19: Personal learning plans. Med Teach 2000; 22: 225–236
  • Clarke SP, Crowe TP, Oades LG, Deane FP. Do goal-setting interventions improve the quality of goals in mental health services? Psychiatr Rehabil J 2009; 32: 292–299
  • Finlay K, McLaren S. Does appraisal enhance learning, improve practice and encourage continuing professional development? A survey of general practitioners’ experiences of appraisal. Qual Prim Care 2009; 17: 387–395
  • Knowles M. Self-directed learning: A guide for learners and teachers. Association Press, New York 1975
  • Marsland E, Bowman J. An interactive education session and follow-up support as a strategy to improve clinicians’ goal-writing skills: A randomized controlled trial. J Eval Clin Pract 2010; 16: 3–13
  • Miller A, Archer J. Impact of workplace based assessment on doctors’ education and performance: A systematic review. BMJ 2010; 341: c5064
  • Newby D. Personal development plans: Making them work, making them count. Adv Psychiatr Treat 2003; 9: 5–10
  • O’Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, Forsetlund L, Bainbridge D, Freemantle N, Davis DA, et al. Educational outreach visits: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2007; Oct 17(4): CD000409
  • Schol S, Goedhuys J, Notten T, Betz W. Individualised training to improve teaching competence of general practitioner trainers: A randomised controlled trial. Med Educ 2005; 39: 991–998
  • Stalmeijer RE, Dolmans DH, Wolfhagen IH, Peters WG, van Coppenolle L, Scherpbier AJ. Combined student ratings and self-assessment provide useful feedback for clinical teachers. Adv Health Sci Educ Theory Pract 2009; 15: 315–328
  • Tochel C, Haig A, Hesketh A, Cadzow A, Beggs K, Colthart I, Peacock H. The effectiveness of portfolios for post-graduate assessment and education: BEME guide no 12. Med Teach 2009; 31: 299–318
  • van den Hombergh P, Grol R, van den Hoogen HJ, van den Bosch WJ. Practice visits as a tool in quality improvement: Mutual visits and feedback by peers compared with visits and feedback by non-physician observers. Qual Health Care 1999; 8: 161–166
  • van Es JM, Wieringa-de Waard M, Visser MRM. Do GP trainers use feedback in drawing up their personal development plans? Results from a quantitative study. 2011 (in submission)
  • van Tartwijk J, Driessen EW. Portfolios for assessment and learning: AMEE guide no. 45. Med Teach 2009; 31: 790–801

Appendix

Assessment of meeting the SMART criteria of PDP goals Research number:   Date:

1) Tick the box to mark whether the format subheadings

A=Aim

A=Action

R=Result

T=Time span

are recognizable in the goal. (The letters A, A, R, T do not literally have to be used.)

2) Assess how well the goal is defined with regard to each individual SMART criterion (use the anchors to decide which rating, 1–5, you think is best).

Anchors SMART criteria

Specific

  • 1. No distinction made between aim and action. No judgement possible whether action will lead to defined aim. Goal is defined vaguely and no explanatory details are given.

  • 5. Aim, action and result are defined as separate entities. Action leads to a result that reflects the aim defined. Goal is written out in detail.

  • Questions that can support judgement on this criterion:

  • - Do defined results lead to attainment of aim?

  • - Are vague terms like a little, somewhat, sometimes used?

  • - Do defined actions lead to defined results?

  • - Is clear who is responsible for what?

  • - Is the subject defined too broadly or too abstractly?

  • - Are actions defined in such a way that they can be replicated by someone else?

  • - Are aim, action and result distinctly defined?

Measurable

  • 1. a. Defined result of this goal can only be measured with great efforts or in subjective terms or

  •   b. Results of this goal could be measured but no benchmarks are defined.

  • 3. Results of this goal are difficult to measure but attempts have been made to do so.

  • 5. Results are easy to measure objectively, both pre- and post-intervention. A description is given of how and when the attained result will be measured, qualitatively or quantitatively.

Questions that can support judgement on this criterion:

  • - Are the measurements of the result of this goal defined in such a way that they could be replicated or evaluated by somebody else?

  • - Is it the result that is being measured or something else?

Realistic

  • 1. Time investment does not seem realistic for an average GP trainer. There is no relationship between aim, action and result.

  • 5. Time investment seems realistic for an average GP trainer. Aim, action and result relate to one another.

Questions that can support judgement on this criterion:

  • - Does it seem realistic in stated time and with stated investment?

  • - Is it realistic to think that third parties want to collaborate towards this goal?

  • - Is the circle of aim–action–result–aim a closed one?

Time-bound

  • 1. Time schedule and needed time are not defined.

  • 5. Time schedule and needed time are defined.
