Research Article

Feedback data sources that inform physician self-assessment

Pages e113-e120 | Published online: 28 Jan 2011

Abstract

Background: Self-assessment is a process of interpreting data about one's performance and comparing it to explicit or implicit standards.

Aim: To examine the external data sources physicians used to monitor themselves.

Methods: Focus groups were conducted with physicians who participated in three practice improvement activities: a multisource feedback program; a program providing patient and chart audit data; and practice-based learning groups. We used grounded theory strategies to understand the external sources that stimulated self-assessment and how they worked.

Results: Data from seven focus groups (49 physicians) were analyzed. Physicians used information from structured programs, other educational activities, professional colleagues, and patients. Data were of varying quality, often from non-formal sources with implicit (not explicit) standards. Mandatory programs elicited variable responses, whereas data and activities the physicians selected themselves were more likely to be accepted. Physicians used the information to create a reference point against which they could weigh their performance, using it variably depending on their personal interpretation of its accuracy, applicability, and utility.

Conclusions: Physicians use and interpret data and standards of varying quality to inform self-assessment. Physicians may benefit from regular and routine feedback and guidance on how to seek out data for self-assessment.

Introduction

For physicians, many stimuli and data sources exist for practice-based improvement. These include colleagues, patients, audit and feedback, educational programs, systems changes, and financial rewards (Marinopoulos et al. Citation2007; Forsetlund et al. Citation2009; Straus et al. Citation2009; Varkey Citation2010). Systematic reviews conclude that change can be slow and unpredictable. For particularly difficult changes that involve new ways of thinking and working, physicians may require multiple sources of information, offered through many formats, over time (Marinopoulos et al. Citation2007; Forsetlund et al. Citation2009). Unfortunately, while these studies may guide curriculum designers, teachers, and those responsible for physician performance, they do not tell us how physicians themselves think about and make sense of the stimuli and data relative to their own clinical practices.

Physicians have demonstrated limited ability to assess themselves accurately (Davis et al. Citation2006). Providing objective data routinely to physicians has been identified as a potential solution to this problem (Davis et al. Citation2006; Duffy Citation2008). However, it is naïve to think that the challenge of improving self-assessment will be solved by programs of audit and feedback alone. Self-assessment is a complex, multifaceted, and multipurpose activity (Eva & Regehr Citation2008). An examination (Regehr & Eva Citation2006) of the theoretical and research base for self-assessment concludes that an interplay of self-efficacy, cognitive and metacognitive theory, social cognition, reflective practice, and the development of expertise informs our understanding of the processes involved. Further, self-assessment is a situationally bound cognitive process that is context specific and dependent on expertise. More recently, a study (Sargeant et al. Citation2010) of undergraduates, postgraduate trainees, and practicing physicians found that self-assessment was informed by dynamic and fluid processes in which data from external and internal sources were interpreted. When physicians interpret their data, they may accept it, reject it, ignore it, or seek out more information. Many conditions (e.g., emotions, work/school environment) and tensions (between and within people, and between people and the learning environment) can influence how data are used.

Examinations of the self-assessment literature are complicated by varying definitions of self-assessment. The term has been used to describe thought processes, pedagogical strategies for individual learners, and quality assurance endeavors for professionals (Eva & Regehr Citation2008). One definition appears to capture these multidimensional components: self-assessment is a process of interpreting data about one's own performance and comparing it to an explicit or implicit standard (Epstein et al. Citation2008). This definition requires consideration of how two major domains interact: the integration of high-quality external and internal data to assess current performance and promote future learning, and the capacity for ongoing self-monitoring during everyday clinical practice (Epstein et al. Citation2008).

The purpose of this study was to examine the external data sources that primary care physicians used to assess their performance and determine whether they were practicing appropriately. In particular, we were interested in how three types of activities informed self-assessment: formal structured programs designed by professional organizations to promote self-assessment and reflection; relationships with others (e.g., professional colleagues and learners); and other educational activities and resources (e.g., continuing medical education (CME) courses and clinical practice guidelines).

Methods

For this qualitative study using focus groups, we invited physician participants from three purposively selected programs. Each program provides a structured approach to self-assessment by providing self-assessment data and/or standards and encouraging reflection. We recruited from these programs to ensure groups of physicians who had had a common experience of receiving data. We believed that participants from such programs would be more aware of the self-assessment construct from personal experience and would therefore contribute more effectively to the discussion. Participation was voluntary.

Selected programs included the Physician Achievement Review (PAR) Program in Alberta (a multisource feedback program) (www.par-program.org), the American Board of Internal Medicine Practice Improvement Module (PIM) Program (http://www.abim.org/pims), and the Foundation for Medical Practice Education Practice Based Small Group Learning (PBSG) Program (http://www.fmpe.org).

In the PAR program, physicians complete a performance self-assessment questionnaire and receive questionnaire-based feedback about their medical expertise, professionalism, communication skills, and office management from patients, medical colleagues, and co-workers (e.g., nurses, pharmacists). Study credit can be obtained if physicians complete a reflective exercise following receipt of their data. The program was developed in Alberta, Canada, and is also used in Nova Scotia, Canada. Physicians must participate in the program every 5 years to maintain their license to practice. We asked the company that administers the program to invite family physicians/general practitioners who scored at the 90th percentile to participate. Previous work had shown that higher-performing physicians reflected on and used the data from the program (Sargeant et al. Citation2006), suggesting that high scorers might provide the best data pertaining to self-assessment.

In the PIM program, physicians select a clinical area, complete a medical record audit and an assessment of their office systems, and survey patients. Through a web-based performance report, physicians compare their data to clinical practice guidelines and use them to implement a quality improvement intervention. Physicians receive credit toward maintenance of certification once they reflect on the impact of the quality improvement intervention. Completing a PIM is required as part of the maintenance of certification program. The ABIM recruited, through an external agency, physicians who had completed at least one PIM.

The PBSG program is an accredited continuing education program. Physicians participate monthly in facilitated small-group learning with a stable group of physicians in which they discuss cases, review evidence-based information, and discuss the challenges of integrating this knowledge into day-to-day practice. Log sheets completed after each session document physicians' reflections on the material, their plans for practice implementation, and anticipated barriers to that implementation. Participation in PBSG is voluntary. Over 4000 participating Canadian physicians are organized into 500 small groups. Participants receive study credits based on completion of the log sheets. The Foundation for Medical Practice Education sent invitation letters to facilitators of groups in Nova Scotia inviting the groups to volunteer for the study.

We asked focus group participants to describe their use of and perceptions about their respective program as a mechanism for self-assessment and reflection. We also asked them to consider how the people around them (namely patients, consultants/specialist colleagues, peers, other health care professionals, residents, and students) informed them. Finally, we asked about the role played by other educational or self-assessment activities (e.g., self-assessment programs, clinical practice guidelines, CME, maintenance of certification examination preparation, audits). Physicians were encouraged to describe how they assessed themselves, including the sources of information used and how they determined what else they needed to learn. Figure 1 provides the template used for the focus group questions. We conducted a total of seven focus groups. The groups were facilitated by the same member of the research team, assisted by other members. The 90-min discussions were audio-recorded and transcribed.

Figure 1. Example of focus group questions used for PAR* program.

We conducted the analysis iteratively as a team. First, we independently read the transcripts; then, as a team, we discussed and explored emerging themes within each group and, finally, across groups and activities. As a second phase, we used a constant comparative approach to compare and contrast participants' descriptions of their experiences and perceptions across activities. The purpose was to develop an in-depth understanding of three central concepts: the degree to which the programs, people, or other sources of information were perceived as useful in informing self-assessments; participants' perceptions and reasoning about the utility of each data source; and an overarching comparative perspective on the strategies and conditions needed to inform practitioner self-assessment. In this analysis, we were guided by the principles of managing and analyzing qualitative data (Corbin & Strauss Citation2008). Based on these findings, we considered the implications for supporting and informing self-assessment among practicing physicians.

Ethics approval was obtained from participating institutions’ ethics review boards.

Results

The seven focus groups (PAR = 3; PBSG = 2; PIM = 2) provided data about self-assessment practices from 49 participants (Table 1). The groups freely discussed the topics introduced by the facilitator. Participants shared personal experiences that often prompted others to share their reflections or similar experiences. In some cases, physicians provided sensitive narratives about their own knowledge gaps or circumstances in which feedback was discordant with their personal self-assessments. Diverse opinions were expressed in all groups on a wide range of topics. Across groups, physicians described three major sources of information that stimulated self-assessment and the role(s) each source played: the structured programs (which we used to initiate the invitation and the discussion), the professionals and patients with whom the physician worked, and the educational activities in which they participated.

Table 1. Number of participants from focus groups

Role played by the structured programs in self-assessment

Physician Achievement Review

Perceptions about the use of PAR, a multisource feedback program, came primarily from focus groups conducted in Alberta; they also came from Nova Scotia physicians in the PBSG focus groups who had participated in the Nova Scotia PAR program. Participants provided variable accounts of its helpfulness in stimulating self-assessment. Some found PAR useful. Others found the process an encumbrance on their work and an imposition. Some physicians had ignored (not read) their feedback or had given it only a cursory look, suggesting they did not view the data as sufficiently credible or personally meaningful. For example, one physician stated, "Sometimes I have a hard time reading the report. I just thought, oh, I passed, well!" (L1).

The physicians who used PAR data for self-assessment described how the data stimulated them to think about how well they were doing based on the scores. They mentioned surprises, which led them to think about potential reasons for the feedback they received in specific areas. The data gave physicians practice changes to consider, and some identified changes they had made. The data could be reassuring to physicians who queried whether their performance had begun to "deteriorate". Others expressed that the data could be frustrating and engender feelings of helplessness, particularly when they concerned aspects of practice that the physicians could not change (e.g., length of appointments, size of waiting room). Repeated exposure to PAR reduced anxiety and increased its acceptability for some participants.

Overall, while PAR stimulated thinking about work and practice, physicians used the data variably.

Practice Improvement Module

Like PAR, PIM was viewed both positively and negatively by participants. Some used the data to confirm their practice and others to inform practice change. As one physician described,

For me I did it [the audit] on Hepatitis C. It was something that I wanted to do because I want to grasp the concept in the treatment guidelines more than just know about it. And it really made me learn more. (N3)

Other participants described how the chart reviews had served as a reminder of things to do within a specific clinical area. The process identified clinical issues to be addressed, problems with office staff and procedures, patient perceptions of physicians and their staff, and communication issues with patients. For one physician, it was a wake-up call: "But people [patients] I thought I'd been keeping up with things and I find actually, no. I am missing things." (P1). Some used the data to reflect on and identify practice changes needed at a systems level, related to the electronic medical record, procedures they could adopt, and ways they could work differently within their team. Alternatively, some physicians discounted aspects of PIM, questioning the credibility of certain measures (e.g., patient satisfaction) or the limits of numerical data in depicting quality of care, or attributing the results to external factors beyond the physician's control. Changes could be short lived, dependent on memory and recall of the appropriate protocol. Some regarded PIM as an irritant: it took time, and participants felt they were forced to undertake an audit.

In summary, there was variability in acceptance and use of the PIM process and data. While some physicians were able to incorporate their feedback to improve care and systems of practice, others did not, particularly those who saw the program as an obligation or did not feel they were empowered to change their approach to care.

Practice-based small groups

In contrast to their perceptions of PIM and PAR, physicians who participated in the learning groups were uniformly positive about the program and its ability to stimulate reflection and self-assessment. Relationships and group dynamics appeared central to its effectiveness; one physician noted that the PBSG experience could be less positive if the group dynamics were not conducive to an honest and open exchange among members.

Participants described how group discussion of the evidence and cases triggered them to assess their own practices and plan change. As peers, they learned from each other in a non-threatening way and recognized that physicians with different years and types of expertise could help them. They mentioned being grounded by, and judging their practices against, their colleagues' experiences. As one physician stated:

Well, it's sort of one of those where all of a sudden everybody's talking about something, and you’re sitting there very quiet going, I have no idea what they’re talking about, and in this group you can actually say that … that's an area that I need to go and read up on. (A5)

The discussion validated their practices and helped them identify gaps. The discussion was also a springboard for later reading. Importantly, they could ask questions and discuss their “really tough cases” without feeling vulnerable. They used the group to poll for approaches to care. Through these mechanisms, they vicariously experienced other people's practices. They noted these meetings helped them learn the art of medicine as they reflected on other people's approaches and weighed/assessed their own practice in comparison. They felt the discussions led the entire group to “raise the bar” and collectively adopt new tests, learn new skills and approaches to communicate with people.

PBSG was viewed as a safe collaborative place in which reflection on practice could occur, practice could be calibrated, and physicians confirmed or adjusted their perceptions about how well they were doing.

In summary, all three of the structured programs stimulated reflection about practice and how physicians were doing. The “obligatory” programs, PIM and PAR, also engendered irritation and annoyance, but nonetheless appeared to stimulate change for physicians open to feedback on how they were doing. By contrast, all the physicians who participated in the PBSG focus groups were positive about the group experience and its impact on accurate self-assessment.

Role of people around the physician in informing self-assessment

Patients

Data provided by patients stimulated physicians to think about their practices. Patients triggered self-assessment when they presented with symptoms or signs that were new or when a patient outcome was unexpected (good or bad). Patients who were puzzling caused physicians to think about the care they were providing. Patient questions and information they brought with them from the internet stimulated thinking and information seeking to fill knowledge gaps.

Several physicians described a purposeful and routine way of thinking about the patients and their own performance, “it comes down to reviewing every clinical decision you make. Not necessarily overtly, but just, did I get that right? … Did I get through to the patient?” (K3).

Patients who left the practice also triggered self-assessment, particularly if physicians believed they had tried their best. Conversely, patient stability and demand could be reassuring, and for some physicians appeared to be the sole indicator that they were performing well. As noted by one physician, "I think a lot of my self-assessment is not conscious, it's more—the patients like me okay. They seem to get better." (A4).

Patients stimulated self-assessment particularly when they or their condition challenged the physician. Conversely, a stable patient population for a primary care physician was reassuring.

Colleagues–peers

A physician's peers, particularly those in close proximity (e.g., within a group, building, or call-group) often stimulated self-assessment.

Some physicians described the value of being called in by, or calling in, colleagues to see a patient whose presentation was unusual. Similarly, reviewing another physician's records stimulated thought and practice comparison. Providing care for another physician's patients allowed the physician to see different standards of care and approaches to management.

Trusted colleagues were regarded as valuable sources of feedback. Informal discussions about care at the end of the day, over lunch, or over coffee were reflective opportunities and triggers for examining oneself. Yet many physicians reported not working in hospitals or in groups large enough to foster corridor and lunchroom conversations. As one physician noted,

In our office we tend to at 6 o'clock at night, one of the guys might come in to my office, or I might go into theirs and say, "I saw this person today." And then we might chat about it at that time, if they are still around … it doesn't always work in a busy, busy clinic. (J2)

Discussions with peers served to inform physicians about how well they were doing as well as to reassure and confirm their practice.

Colleagues–consultants

Consultants, in contrast to peers, provided a stimulus for assessment that came from a different, sometimes authoritative perspective.

Input provided by consultants was described as helpful and provided guidance in caring for a patient. Letters sent to the consultant were used to validate a possible solution that the generalist thought might work for the patient. Sometimes, operative reports and letters from the consultant led the physician to go back through the patient's chart to see if he/she had missed something. Information provided by consultants was often weighed carefully, particularly if the advice was critical of the physician's care. While the physicians indicated that they took the advice most of the time, they also described situations when the advice was irritating, variable in quality, or patronizing, making it more difficult to assimilate. When consultants who were respected for their knowledge and expertise provided criticism or unexpected feedback, it forced self-reflection. Depending on how it was provided, such feedback could be destabilizing and lead physicians to seek peer feedback to verify or validate their practice.

Especially when like [A3] said, she got a letter from the specialist criticizing her for not doing something, and then you know, she doubts herself. So her self-assessment all of a sudden is kind of screwed up. Well she comes back [to PBSG to] get grounded. (A2)

Consultant feedback stimulated thinking about approaches to care, led physicians to confirm or question their management, and provided information that could be used to inform care provided to other patients. When consultants provided information that was critical or different from what was expected, it could be destabilizing.

Other health care professionals

Some participants identified nurses, pharmacists, and physical therapists as sources of information. These professionals often stimulated self-assessment through discussions that enabled an exchange of information, triggered an awareness of knowledge and care gaps, and suggested areas to be addressed. Unlike feedback from consultants (sometimes perceived as patronizing or adversarial) or from externally imposed sources (e.g., PAR or PIM), feedback from these professionals was not described as resented or disregarded.

Feedback from other health care professionals appeared to stimulate physicians to think about their care of the patient being considered, other similar patients, and aspects of practice that were outside of the physician's usual scope of practice.

Students and residents

Learners often facilitated self-assessment. Preparing to teach required reading which stimulated thinking about the physician's knowledge base. Learner questions often led to further reading. Feedback in the form of teaching evaluations also stimulated thinking. The physicians acknowledged that teaching could be both stimulating and stressful, pushing them to keep up and keep ahead.

But whenever you teach the resident … [if] I feel that I am missing something or I am deficient on something, I go back and read on it and then come back and teach them. (P2)

Teaching stimulated physicians to think about their knowledge base and its currency.

Interpersonal and interprofessional contact stimulated self-assessment. Generally, the physicians used the data to reflect on their own knowledge and practice. When the feedback was particularly critical, there might be resistance, but it nonetheless evoked reflection about practice.

Role played by educational activities

Educational activities stimulated self-assessment.

CME courses appeared to have several roles. Reviewing course brochures helped physicians identify whether the listed topics represented a current learning need. Attendance at courses stimulated changes and provided opportunities for collegial interaction. Confirming knowledge was an important function. Physicians stated:

L2: If you go to a lecture and are familiar with most of what the lecturer said, I think you feel better than if you hadn’t. At least I knew what he was talking about.

L6: Those are the ones I like to go to!

L2: If somebody was talking about the current way to deal with hypertension or diabetes … and you felt that yes, you were aware of that, I think that is better than not feeling that. I think that is one way you can assess yourself.

Formal self-assessment programs that assessed knowledge and preparation for maintenance of certification examinations helped physicians recognize what they did know and highlighted areas that required attention. Similarly, audits and data provided by third parties (e.g., insurers and organizations) often stimulated thinking about the practice. These objective data could also force the physician to review care provided, particularly if results were at variance with the norm. Physicians who had the opportunity to audit another physician's care noted that it stimulated comparison with their own practice and resulted in making changes.

Practice guidelines were a stimulus for some, as they helped physicians identify what they did not know or do. They could also be a source of tension, leading physicians to query a protocol and its appropriateness for some patients (e.g., elderly patients) based on who developed it and whom it was intended to aid.

CME courses, audit, practice guidelines, and self-assessment programs helped physicians assess themselves. These sources appeared to stimulate thought about the physician's knowledge and skill.

Discussion

In this study, we set out to explore the external sources of information, both formal and informal, that practicing physicians used to inform their self-assessments. To discuss our findings, we return to the definition of self-assessment offered earlier, “self-assessment is a process of interpreting data about one's performance and comparing it to explicit or implicit standards” (Epstein et al. Citation2008). Three elements appear integral to self-assessment: the availability of an explicit or implicit standard; data about performance; and interpretation of the data against the standards available. The participating physicians provided helpful insights about each of these elements.

We studied three programs. Each program had a component of self-assessment, provided data and explicit standards for comparison, and encouraged interpretation and reflection. We also learned about informal sources of data (e.g., people and educational events) and other formal sources (e.g., audits) that physicians drew on to calibrate their performance. For these other formal and informal sources, the quality of the data and standards was variable.

The three programs used high-quality data and standards, yet differences in physicians' use of the data for self-assessment emerged. The PAR program (which uses normative standards that compare a physician to their peer group) and the PIM program (which uses evidence-based standards) both required physicians to collect data about their practices and behaviors to maintain licensure or certification. These programs are "high stakes" but infrequent, with PAR occurring every 5 years and PIM every 10 years. Physicians responded to these data variably: some accepted them for self-assessment, and others rejected them. In contrast, the PBSG program was voluntary. Its self-assessment was embedded within each group's social network; it emerged informally through the monthly discussion of evidence-based standards and case material, interpreted by the group and modified to the local context. Physicians responded to these data positively. Earlier research tells us that self-assessment, reflection, and practice change are influenced by physicians' professional networks and local standards of care (Gabbay & le May Citation2004).

Additionally, of the three programs, PBSG was the only one that regularly relied on interpretation of data and interaction with peers. Through discussion, each physician could explore how well he/she was doing relative to the standard and the group's interpretation of the standard. The depth and breadth of experience and knowledge, freely shared among trusted colleagues, were integral to the perceived effectiveness of this approach. These groups, and other high-quality professional networks and study groups, have the potential to nurture a culture of using external information for self-assessment.

We found that physicians used and reflected on many non-formal, non-explicit sources of data provided by their professional colleagues, patients, and educational resources. These data were self-selected and were typically interpreted by the physician in a solitary fashion, often in the absence of an explicit standard. For example, physicians told us that they felt they were "doing ok" (performing acceptably) because their patients got better or remained in the practice. Their self-perceived ability to address patient and learner questions also told them their knowledge and skills were acceptable. Similarly, they assessed how they were doing through collegial interactions with peers, consultants, and other health care professionals. CME activities often validated approaches. The standards for these sources often appeared to be implicit and set by the individual physician. Generally, information received from these self-selected sources appeared to be received positively. However, this raises concerns about the quality of the data and standards used, and whether they provided an accurate picture of performance.

As noted elsewhere, physicians provide various reasons for using or rejecting the data including their perceptions about the intent of the program, the credibility of the sources, their patients, and the time and resources required to act upon the data (Fidler et al. Citation1999; Overeem et al. Citation2009; Sargeant et al. Citation2009). Our observation that physicians were more likely to describe resistance to feedback provided by mandatory programs is consistent with findings of an earlier study, which showed that physicians were likely to make small limited adjustments when faced with regulations (Fox et al. Citation1989). This gives regulatory and professional organizations a reason to pause and reflect upon approaches to providing data and engaging physicians.

While our findings tell us that physicians assessed themselves by drawing on many sources of information, for the most part, they did not appear to routinely seek out information in a systematic way. Information-seeking was triggered by a variety of cues, especially if something occurred that did not seem quite right, or through interaction with others that created self-awareness of gaps.

In summary, physicians appeared to use and interpret data and standards of varying quality to inform self-assessment. They frequently lacked access to high-quality standards and data on a systematic basis to inform their self-assessment. They often used subjective data of anecdotal and variable quality. When they did receive formal personalized feedback (e.g., PAR and PIM), it could be threatening. Interpretation of data and subsequent decisions about its use appeared to be enhanced by informed discussion and interaction with trusted colleagues, as occurred in the PBSG groups.

There are limitations to the study. We conducted seven focus groups, drawing on physicians who participated in three programs we believed offered an opportunity to reflect upon and consider their work. Two of the programs were obligatory, while the third (PBSG) was voluntary. It is likely that physicians who had voluntarily committed to PBSG participation and were part of stable groups would report their experiences positively. Nonetheless, the PBSG approach and its voluntary context enrich the findings and serve as a comparison to the mandatory programs.

Our research opens up several new areas for study. If physicians were provided with objective data more frequently, would they be more comfortable incorporating these data into their practices? If mentorship or peer discussions were introduced to help physicians interpret and use their data, would they find “regulatory” data more acceptable? Finally, what is the quality (reliability and validity) of the implicit standards physicians have created for themselves from the variety of serendipitous sources they use to inform self-assessment?

In an era of quality improvement and patient safety, this study suggests a number of directions for health care organizations and professional and regulatory bodies in facilitating self-assessment. Consideration needs to be given to how performance standards can be made more explicit and how objective data can be made more routinely available to physicians. It appears that physicians may need guidance to systematically seek out and interpret performance data. Ways must be found to support physicians so that credible, relevant data are not rejected without consideration, so that physicians expect to seek out information about their performance and measure it against evidence-based standards of care, and so that a collegial, supportive environment exists that facilitates self-questioning, self-assessment, and benchmarking.

Acknowledgments

The Medical Council of Canada, the American Board of Internal Medicine, and the Office of Continuing Medical Education, Faculty of Medicine, Dalhousie University provided funding for the study. We thank Cees van der Vleuten and Kevin Eva for their contributions to the study design, data collection, and analysis.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

References

  • Corbin JM, Strauss AL. Basics of qualitative research: Techniques and procedures for developing grounded theory. 3rd ed. Thousand Oaks, CA: Sage; 2008
  • Davis DA, Mazmanian PE, Fordis M, van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA 2006; 296: 1137–1139
  • Duffy FD. Transforming continuing medical education through maintenance of certification. In: Hager M, Russell S, Fletcher SW, editors. Continuing education in the health professions: Improving healthcare through lifelong learning. Proceedings of a conference sponsored by the Josiah Macy Jr Foundation; November 28–December 1, 2007; Bermuda. New York: Josiah Macy Jr Foundation; 2008 [Published 2010 Feb 14]. Available from: www.josiahmacyfoundation.org
  • Epstein RM, Siegel DJ, Silberman J. Self-monitoring in clinical practice: A challenge for medical educators. J Cont Educ Health Prof 2008; 28: 5–13
  • Eva KW, Regehr G. "I'll never play professional football" and other fallacies of self-assessment. J Cont Educ Health Prof 2008; 28: 14–19
  • Fidler H, Lockyer J, Toews J, Violato C. Changing physicians' practice: The effect of individual feedback. Acad Med 1999; 74: 702–714
  • Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O'Brien MA, Wolf F, Davis D, Odgaard-Jensen J, Oxman AD. Continuing education meetings and workshops: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009; CD003030
  • Fox RD, Mazmanian PE, Putnam RW. Changing and learning in the lives of physicians. New York: Praeger; 1989
  • Gabbay J, le May A. Evidence based guidelines or collectively constructed "mindlines?" Ethnographic study of knowledge management in primary care. BMJ 2004; 329(7473): 1013
  • Marinopoulos SS, Dorman T, Ratanawongsa N, Wilson LM, Ashar BH, Magaziner JL, Miller RG, Thomas PA, Prokopowicz GP, Qayyum R, et al. Effectiveness of continuing medical education. Agency for Healthcare Research and Quality; 2007. Contract No. 290-02-0018
  • Overeem K, Wollersheim H, Driessen E, Lombarts K, van de Ven G, Grol R, Arah O. Doctors' perceptions of why 360-degree feedback does (not) work: A qualitative study. Med Educ 2009; 43: 874–882
  • Regehr G, Eva K. Self-assessment, self-direction, and the self-regulating professional. Clin Orthop Relat Res 2006; 449: 34–38
  • Sargeant J, Armson H, Chesluk B, Dornan T, Eva K, Holmboe E, Lockyer J, Loney E, Mann K, van der Vleuten C. Process and dimensions of self-assessment. Acad Med 2010; 85: 1212–1220
  • Sargeant J, Mann K, Sinclair D, Ferrier S, Muirhead P, van der Vleuten C, Metsemakers J. Learning in practice: Experiences and perceptions of high scoring physicians. Acad Med 2006; 81: 655–660
  • Sargeant J, Mann K, van der Vleuten C, Metsemakers J. Reflection: A link between receiving and using assessment feedback. Adv Health Sci Educ Theory Pract 2009; 14: 399–410
  • Straus S, Tetroe J, Graham ID. Knowledge translation in health care: Moving from evidence to practice. Chichester, West Sussex, UK: Wiley-Blackwell/BMJ Books; 2009
  • Varkey P, editor. Medical quality management: Theory and practice. Sudbury, MA: Jones and Bartlett; 2010
