
Assessment of professionalism: A consolidation of current thinking

Pages e952-e956 | Published online: 03 Sep 2012

Abstract

Context: Professionalism has become a hot topic in medical education. Professionalism needs to be assessed if it is to be viewed as both positive and relevant.

Objectives: The assessment of professionalism is an evolving field. This review aims to consolidate current thinking.

Implications: Assessment of professionalism has progressed from an initial focus on the development and attainment of professional identity, through identifying areas of deficiency, to the attainment of a set of identifiable positive attributes and behaviours. It is now beginning to recognise the challenge of assessing a multi-dimensional construct, looking beyond the measurement of behaviour to embrace a diversity of approaches.

Conclusions: Professionalism should be assessed longitudinally. It requires combinations of different approaches, assessing professionalism at individual, interpersonal and societal/institutional levels. Increasing the depth and the quality of reliability and validity of existing programmes in various contexts may be more appropriate than concentrating on developing new instruments. Increasing the number of tests and the number of relevant contexts will increase the reliability of the result; similarly, increasing the number of observers increases reliability. Feedback, encouraging reflection, can promote change in behaviour and identity formation.

Introduction

The medical profession's relationship with society has come under strain in recent years due to a combination of factors, including a reaction to high-profile examples of unprofessional behaviour. The profession has responded by redefining its core values and norms, mainly in terms of character traits and observable behaviours, and by imposing greater collegiate authority on its members.

The focus on normative definitions of professionalism misses the influence of context, institutions and socio-economic and political concerns, and leads to an over-emphasis on codes of behaviour (Martimianakis et al. 2009). Professionalism is a complex, multi-dimensional construct that varies across historical time periods and cultural contexts (Hodges et al. 2011). It has elements at all three levels of House's (1977) PSSP model (Figure 1), which draws on the psychological social psychology, symbolic interactionist and personality and social structure perspectives, and recognises the relevance of the three levels of analysis and their interactions. For example, a doctor's professional behaviour depends not only on their personal characteristics, but may be more strongly influenced by situational and contextual phenomena arising during learning and practice. During this process individuals may look to institutionalised norms and conventions, reproduced and reinforced in day-to-day interactions, to structure their behaviour, giving it meaning and justification. The post-modern view, however, is less deterministic. Individuals are viewed as making sense of institutions through their own unique backgrounds and in the current context in which the institution resides. Meaning is created rather than transmitted, and culture is constantly being re-created (Tierney 1997).

Figure 1. House's PSSP model.

Professionalism has, in turn, become a hot topic in medical education with growing recognition of the importance of medical students and doctors developing excellence in professionalism (Stern 2006). The lack of a consensus definition of professionalism has, however, limited its operationalisation (van Mook et al. 2009a). There is little evidence to support the assumption that simply defining outcomes and providing teaching and learning experiences will positively impact on attitudes towards professionalism and/or professional behaviour in medical school or subsequent practice (Jha et al. 2007). Professionalism needs to be assessed if it is to be viewed as both positive and relevant (Cohen 2006; Stern and Papadakis 2006). The assessment of professionalism is an integral component of the GMC's (2009) recommendations for undergraduate curricula and all four domains of its approach to appraisal and re-validation (GMC 2011).

The assessment of professionalism is an evolving field. This review aims to consolidate current thinking.

The scope of assessment

The academic literature on the assessment of professionalism has moved from an initial focus on the development and attainment of professional identity, through identifying areas of deficiency, such as the loss of ethical principles, to the attainment of a set of identifiable positive attributes and behaviours (Baldwin & Daugherty 2006). It is now beginning to recognise the challenge of assessing a multi-dimensional construct and the need to embrace a diversity of approaches.

The primary focus has been on the measurement of professional behaviour, the assumption being that behaviour reflects the underlying dimensions of professionalism: cognitive, attitudinal and personality characteristics. However, this has not been supported by evidence from socio-cognitive psychology. For example, attitudes have been found to be poor predictors of behaviour, particularly when external constraints, such as social pressure to behave in a particular way, are strong (Rees & Knight 2007). An individual's behaviour is more likely to be influenced by situational and contextual phenomena arising during learning and practice than by their underlying attitudes (Wallace et al. 2005; Rees & Knight 2008). This creates the potential for students and doctors to be unfairly labelled as 'unprofessional' in their attitudes by observers who ignore contextual circumstances. The concept of situationally specific professionalism challenges, dilemmas or lapses may therefore be more useful than a global concept of a characteristic or trait of unprofessional behaviour (Hodges et al. 2011). Assessment, therefore, must take account of the context-dependent nature of professional behaviours. It should include assessment of the decisions, responses and behaviours of all actors in each context, gathering longitudinal data from students and teachers as well as other key players, such as nurses, other healthcare professionals and patients. It ought to include monitoring of the learning/practice environment and doctors' interpersonal relationships, e.g. student-teacher, teacher-student and student-patient, for problematic interpersonal phenomena (Holtman 2008). In measuring professionalism, symmetry, i.e. where all levels in the organisational hierarchy are evaluated using the same methods, may help alleviate the tension produced by students being assessed by faculty members who do not always practise what they preach (Brainard & Brislen 2007). Inherent in this approach is the provision of feedback to improve the performance of teams as well as structural elements, e.g. the habitual, patterned and thus pre-reflexive ways of understanding and behaving that help generate and regulate the practices that make up the social life of the primary care team (Bourdieu 1990).

Similarly, with the emphasis on measurement of behaviours, doctors may be encouraged to 'fake' professional behaviours, with the potential for doctors with professionally acceptable behaviours, but unprofessional attitudes, to be assessed as professional (van Mook et al. 2009a). Assessment methods that capture both behaviours and attitudes need to be further developed and tested, for example observation coupled with conversations during which attitudes are revealed (Rees & Knight 2007). Feedback, particularly where it encourages reflection, may lead to change and promote identity formation (Goldie 2012).

The concentration on the measurement of behaviour has also ignored the knowledge base of professionalism. It is important that students possess this knowledge and that it is adequately tested (van Mook et al. 2009b; Hodges et al. 2011).

Professionalism at the societal/institutional level can be understood in the context of the goals, aspirations and collective behaviours of healthcare and educational institutions and the wider medical profession. Assessment at this level is in its infancy. It is likely to involve measuring, through dialogue and meaningful input from public stakeholders, the extent to which the profession, or one of its subgroups, for example GPs, meets the expectations of wider society. As such, it may take the form of institutional outcomes, e.g. patient outcomes, or processes, e.g. accreditation requirements (Hodges et al. 2011). Other approaches, such as critiquing how professionalism has been characterised and the power dynamics of its enforcement, may lead to improved institutional and organisational climate and practice (Hafferty & Castellani 2009).

General assessment principles

Medical competence has long been considered by educators to involve combinations of constructs which, although they cannot be observed directly, are measurable, e.g. knowledge, skills, problem solving and attitudes. Constructs were assumed to be stable, generic and independent of other constructs, and many assessment instruments were developed with the aim of being a single definitive test of a construct, e.g. the MCQ as a test of knowledge. However, the idea of stable and generic constructs has proved no longer tenable (Elstein et al. 1978), and assessment has moved on to competencies: tasks that a qualified medical professional should be able to perform successfully. Assessment in the medical education setting has concentrated on measuring individuals' performance under test conditions; ideally it should involve measuring performance in everyday practice (van Mook et al. 2009b).

No single instrument can be used for each competency; combinations of instruments need to be used (van der Vleuten & Schuwirth 2005). This is particularly pertinent when attempting to measure complex multi-dimensional constructs such as professionalism. Instruments that hold context static, e.g. MCQs that measure the knowledge base, nevertheless remain valuable in the assessment of competencies.

In establishing the reliability of instruments, traditional psychometric methods need to be extended to defend assessment decisions to the various stakeholders involved. Reliability may not always be conditional on objectivity and standardisation, but often on adequate sampling and the expertise of those making judgements (Schuwirth & van der Vleuten 2006). In considering which instruments to use, the following criteria need to be considered:

  • Validity

  • Reliability

  • Feasibility

  • Acceptability

The impact on education and learning is also of importance (van der Vleuten 1996).

The utility of an instrument is a function of the relationship between all these elements. In practice, a trade-off exists between these utility criteria (Thistlethwaite & Spencer 2008), and different weightings need to be applied depending on the context and purpose of the assessment. In high-stakes examinations, reliability will have higher priority in the choice of assessment method. In formative situations, where the final decision is based on a triangulation of different assessments, reliability can be compromised in favour of educational impact (van der Vleuten 1996).

Assessment tools for measuring professionalism

Building on earlier work by Lynch et al. (2004) and Veloski et al. (2005), Wilkinson et al. (2009) identified nine clusters of assessment tools for measuring professionalism. Box 1 lists the different clusters with examples of assessment tools in each category (see Wilkinson et al. (2009) for individual references). Many of the instruments used have not been fully tested in terms of their reliability and validity (Lynch et al. 2004; Veloski et al. 2005; Jha et al. 2007; Wilkinson et al. 2009).

Box 1 Assessment tools for measuring professionalism

In choosing which instruments to use, Miller's (1990) framework for the assessment of clinical skills, competence and performance is useful for illustrating their relative position and use (Figure 2). The lowest two levels test aspects of cognitive knowledge while the upper levels focus on behavioural aspects.

Figure 2. Miller's learning pyramid.

Currently the most commonly used instruments are peer assessments, OSCEs, observation by faculty members (often involving standardised checklists), learner portfolios and critical incident reports. Written comments and reports from formal evaluation sessions, completed by a supervisor and/or other staff, are also often used (van Mook et al. 2009b). Increasing the depth and the quality of reliability and validity of existing programmes in various contexts may be more appropriate than concentrating on developing new instruments (Hodges et al. 2011). However, as previously mentioned, methods that capture both behaviours and attitudes, and approaches to assessment at societal/institutional levels, require further development and testing.

General guidelines for assessing professionalism

General guidelines have been developed for assessing professionalism (van Mook et al. 2009b):

  • Professionalism is a multi-dimensional construct and as such should be assessed at individual, interpersonal and societal/institutional levels.

  • When measuring professionalism, no single instrument captures all its dimensions; combining multiple methods (triangulation) is necessary (Thistlethwaite & Spencer 2008). While assessment at performance level is important, knowledge, values and attitudes should also be measured (van Mook et al. 2009b).

  • The purpose of assessment must be made clear, particularly whether it is for formative and/or summative purposes. Feedback provided by formative assessment has the potential to change behaviour (Phelan et al. 1993; Papadakis et al. 2001; Goldie 2012), particularly when it is longitudinal, frequent and helps guide remediation (van Luijk et al. 2000; Projectteam Consilium Abeundi 2005). Instruments which provide descriptive comments are most effective (Hunt 1992).

  • The choice of outcome is related to the usefulness of the method. For example, while peer assessment works well as a formative assessment tool, when used summatively it often fails to discriminate due to students' reluctance to judge their peers in a negative light (Arnold et al. 2005).

  • Decide what the reference for assessors should be – norm or criteria. With the lack of a consensus, concrete, operationalisable definition of professionalism (Stern 2006), criterion-referenced standards are preferable. However, setting the reference standard can be difficult as the incidence of professional lapses is often low (Hafferty 2006).

  • Increasing the number of tests and the number of relevant contexts will increase the reliability of the result. The closer the assessment is to reality, the more valid it is likely to be (van der Vleuten 1996). Similarly, increasing the number of observers increases reliability (a brief illustration of this sampling effect follows this list). Assessors need to be trained to rate students' performance objectively and to avoid 'attribution bias', i.e. the tendency to generalise observed behaviours to all contexts (Stern 2006).

  • The assessment should ideally include a situation that involves conflict (Arnold 2002; Hafferty 2006; Stern 2006). Assessment should not only include proposing a solution to the dilemma, but also establish the reasoning behind the proposal (Stern 2006).

  • The assessment should incorporate a longitudinal trajectory. Professionalism is a process (Hilton & Slotnick 2005; Hafferty 2006). It should be assessed throughout medical school, post-graduate training and beyond.

  • The assessment should be supported by adequate guidance and suggestions for remediation, as well as decisions regarding continuation of training (Hodges et al. 2011). There is evidence from retrospective studies that practising doctors facing disciplinary action from licensing boards had a higher incidence of prior professional lapses (Papadakis et al. 2004; Ainsworth & Szauter 2006).

  • Feedback, encouraging reflection, should be provided during and directly after observations (van Mook et al. 2009b). Reflection contributes to individual learning and identity formation (Brailovsky et al. 2001; Goldie 2012). Provision of constructive feedback has been shown to improve professional behaviour (Phelan et al. 1993; Papadakis et al. 2001; Goldie 2012).

  • Where assessment tools are to be used in new contexts, re-validation with attention to cultural relevance is important (Hodges et al. 2011).
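The relationship between sampling and reliability referred to above can be made concrete. As an illustration only (this formula is not given in the source), under classical test theory assumptions the Spearman-Brown prophecy formula describes how the reliability of a composite judgement rises as the number of comparable observations (tests, contexts or observers) increases:

\[
  \rho_n = \frac{n\,\rho_1}{1 + (n-1)\,\rho_1}
\]

where \(\rho_1\) is the reliability of a single observation (one test, context or observer), \(n\) is the number of comparable observations combined, and \(\rho_n\) is the reliability of the combined judgement. For example, if a single observed encounter has reliability \(\rho_1 = 0.4\), combining six independent encounters gives \(\rho_6 = (6 \times 0.4)/(1 + 5 \times 0.4) = 0.8\). Each additional independent observation adds reliability, though with diminishing returns, which is consistent with the guideline that broad sampling across tests, contexts and observers matters more than perfecting any single instrument.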

Declaration of interest: The author reports no conflicts of interest. The author alone is responsible for the content and writing of this article.

References

  • Ainsworth MA, Szauter KM. Medical student professionalism: Are we measuring the right behaviours? A comparison of professional lapses by students and physicians. Acad Med 2006; 81(10)S83–S86
  • Arnold L. Assessing professional behaviour: Yesterday, today and tomorrow. Acad Med 2002; 77(6)502–515
  • Arnold L, Shue CK, Kritt B, Ginsburg S, Stern DT. Medical students’ views on peer assessment of professionalism. J Gen Intern Med 2005; 20(9)819–824
  • Baldwin DC, Daugherty SR. Using surveys to assess professionalism in individuals and institutions. Measuring medical professionalism, DT Stern. Oxford University Press, New York 2006; 95–117
  • Bourdieu P. The logic of practice. Polity Press, Cambridge 1990
  • Brainard AH, Brislen HC. Viewpoint: Learning professionalism: A view from the trenches. Acad Med 2007; 82(11)1010–1014
  • Brailovsky C, Charlin B, Beaulieu S, Côté S, van der Vleuten C. Measurement of clinical reflective capacity early in training as a predictor of clinical reasoning performance at the end of residency: An experimental study on the script concordance test. Med Educ 2001; 35(5)430–436
  • Cohen JJ. Professionalism in medical education, an American perspective: From evidence to accountability. Med Educ 2006; 40: 607–617
  • Elstein AS, Shulman LS, Sprafka SA. Medical problem-solving: An analysis of clinical reasoning. Harvard University Press, Cambridge, MA 1978
  • Jha V, Bekker HL, Duffy SR, Roberts TE. A systematic review of studies assessing and facilitating attitudes towards professionalism in medicine. Med Educ 2007; 41(8)822–829
  • General Medical Council. Tomorrow's doctors. GMC, London 2009
  • General Medical Council. The good medical practice framework for appraisal and re-validation. GMC, London 2011
  • Goldie J, 2012. The formation of professional identity in medical students: Considerations for educators. Med Teach. (In Press)
  • Hafferty FW. Measuring medical professionalism: A commentary. Measuring medical professionalism, DT Stern. Oxford University Press, New York 2006; 281–307
  • Hafferty FW, Castellani B. A sociological framing of medicine's modern-day professionalism movement. Med Educ 2009; 43: 826–828
  • Hilton SR, Slotnick HB. Proto-professionalism: How professionalisation occurs across the continuum of medical education. Med Educ 2005; 39: 58–66
  • Hodges BD, Ginsburg S, Cruess R, Cruess S, Delport R, Hafferty F, Ho MJ, Holmboe E, Holtman M, Ohbu S, et al. Assessment of professionalism: Recommendations from the Ottawa (2010) Conference. Med Teach 2011; 33(5)354–63
  • Holtman MC. A theoretical sketch of medical professionalism as a normative complex. Adv Health Sci Educ 2008; 13: 233–245
  • House JS. The three faces of social psychology. Sociometry 1977; 40: 161–177
  • Hunt DD. Functional and dysfunctional characteristics of the prevailing model of clinical evaluation systems in North American medical schools. Acad Med 1992; 67(4)254–259
  • Lynch DC, Surdyk PM, Eiser AR. Assessing professionalism: A review of the literature. Med Teach 2004; 26(4)366–373
  • Martimianakis MA, Maniate JM, Hodges BD. Sociological interpretations of professionalism. Med Educ 2009; 43: 829–837
  • Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990; 65(Supplement)S63–S67
  • Papadakis M, Loeser H, Healy K. Early detection and evaluation of professionalism deficiencies in medical students: One school's approach. Acad Med 2001; 76(11)1100–1106
  • Papadakis MA, Hodgson CS, Teherani A, Kohatsu ND. Unprofessional behaviour in medical school is associated with subsequent disciplinary action by a state medical board. Acad Med 2004; 79: 244–249
  • Phelan S, Obenshain SS, Galey WR. Evaluation of the non-cognitive professional traits of medical students. Acad Med 1993; 68(10)799–803
  • Projectteam Consilium Abeundi. 2005. Professional behaviour: Teaching, assessing and coaching students. Final report, Mosae Libris.
  • Rees CE, Knight LV. The trouble with assessing students’ professionalism: Theoretical insights from sociocognitive psychology. Acad Med 2007; 82(1)46–50
  • Rees CE, Knight LV. Banning, detection, attribution and reaction: The role of assessors in constructing students’ unprofessional behaviours. Med Educ 2008; 42: 125–127
  • Schuwirth LWT, van der Vleuten CPM. Medical education: Challenges for educationalists. BMJ 2006; 333: 544–546
  • Stern DT. Measuring medical professionalism. Oxford University Press, New York 2006
  • Stern DT, Papadakis M. The developing physician – Becoming a professional. N Engl J Med 2006; 355(17)1794–1799
  • Thistlethwaite JE, Spencer JE. Professionalism in medicine. Radcliffe Publishing Ltd, Abingdon, UK 2008
  • Tierney WG. Organizational socialization in higher education. J Higher Educ 1997; 68: 1–16
  • van Luijk SJ, Smeets SGE, Smits J, Wolfhagen IH, Perquin MLF. Assessing professional behaviour and the role of academic advice at the Maastricht Medical School. Med Teach 2000; 22(2)168–172
  • van der Vleuten CPM. The assessment of professional competence: Developments, research, and practical implications. Adv Health Sci Educ 1996; 1: 41–47
  • van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: From methods to programmes. Med Educ 2005; 39: 309–317
  • van Mook WN, van Luijk SJ, O'Sullivan H, Wass V, Zwaveling JH, Schuwirth LWT, van der Vleuten CPM. The concepts of professionalism and professional behaviour: Conflicts in both definition and learning outcome. Eur J Intern Med 2009a; 20(8)e85–89
  • van Mook WN, van Luijk SJ, O'Sullivan H, Wass V, Schuwirth LWT, van der Vleuten CPM. General considerations regarding assessment of professional behaviour. Eur J Intern Med 2009b; 20(8)e90–95
  • Veloski JJ, Fields SK, Boex JR, Blank LL. Measuring professionalism: A review of studies with instruments reported in the literature between 1982 and 2002. Acad Med 2005; 80: 366–370
  • Wallace D, Paulson R, Lord C, Bond CJ. Which behaviours do attitudes predict? Meta-analysing the effects of social pressure and perceived difficulty. Rev Gen Psychol 2005; 9: 214–227
  • Wilkinson TJ, Wade WB, Knock LD. A blueprint to assess professionalism – Results of a systematic review. Acad Med 2009; 84: 551–558
