
What are we preparing them for? Development of an inventory of tasks for medical, surgical and supportive specialties

Pages e1068-e1077 | Published online: 31 Oct 2012

Abstract

Background: Internationally, postgraduate medical education (PGME) has shifted to competency-based training. To evaluate the effects of this shift on the outcomes of PGME appropriate instruments are needed.

Aim: To provide an inventory of tasks specialists perform in practice, which can be used as an instrument to evaluate the outcomes of PGME across disciplines.

Methods: Following methodology from job analysis in human resource management, we used document analyses, observations, interviews and questionnaires. Two thousand seven hundred and twenty-eight specialists were then asked to indicate how frequently they performed each task in the inventory, and to suggest additional tasks. Face and content validity were evaluated using interviews and the questionnaire. Tasks with similar content were combined into a total of 12 clusters. Internal consistency was evaluated by calculating Cronbach's alpha. Construct validity was determined by examining predefined differences in task performance between medical, surgical and supportive disciplines.

Results: Seven hundred and six specialists (36%) returned the questionnaire. The resulting inventory of 91 tasks showed adequate face and content validity. Internal consistency of clusters of tasks was adequate. Significant differences in task performance between medical, surgical and supportive disciplines indicated construct validity.

Conclusion: We established a comprehensive, generic and valid inventory of tasks of specialists which appears to be applicable across medical, surgical and supportive disciplines.

Introduction

Calls from society for more transparency with regard to the content and quality of medical education have been the prime drivers of the fairly recent shift to competency-based postgraduate medical education and training in many, mainly Western, countries (Maudsley et al. Citation2000; Jones et al. Citation2001; Leach Citation2001; Carraccio et al. Citation2002; Frank Citation2005; Scheele et al. Citation2008; Lillevang et al. Citation2009). The introduction of competency-based curricula was based on the assumption that such training would provide future physicians with better preparation for the full scope of contemporary medical practice or, in the words of the Royal College of Physicians and Surgeons of Canada, would train ‘better physicians’ (Frank Citation2005). While many met this transformation of postgraduate medical education (PGME) with enthusiasm, it also stirred up considerable unease (Grant Citation1999; Talbot Citation2004; ten Cate & Scheele Citation2007; Reeves et al. Citation2009). Others have pointed to the huge investment of time, money and effort that has to be expended to reform medical education or have questioned whether these efforts can actually be expected to result in better physicians (Cooper Citation2006; Hays Citation2007). Although any curriculum should be evaluated regularly, the misgivings regarding the recent changes make it all the more urgent to evaluate the effects of competency-based postgraduate medical training using sound and appropriate measurement instruments.

Although the obvious way to evaluate the new postgraduate training programmes would be to ascertain whether they actually produce better physicians, the feasibility of this approach seems questionable, mainly due to the varied and often conflicting definitions of what is meant by a ‘good’ physician (de Bont Citation1997; Ashcroft Citation2002; Letters Citation2002; Miles & Leinster Citation2010). In order to circumvent this problem, we opted for a different approach, looking for ways to develop an evaluation instrument based on the tasks a physician is expected to be able to perform. This approach was prompted by the consideration that, basically, the goal of each phase of the medical education continuum should be to ensure optimal preparation of students and trainees to move on to the next phase of training or to professional practice. For senior postgraduate trainees, the obvious next step is independent performance of the full scope of the tasks routinely performed by a specialist. This suggests that outcomes of PGME can be effectively evaluated by determining to what extent training programmes deliver trainees who are competent to perform these tasks.

Although several studies have investigated how well recently registered specialists felt prepared for clinical practice, none of them covered the full scope of independent practice. Moreover, these studies focused mainly on competencies related to the role of medical expert and relied on a variety of insufficiently validated outcome measures (Liebelt et al. Citation1993; Lenton et al. Citation1994; Roberts et al. Citation1997; Beckett et al. Citation2006; Benstead Citation2006; Card et al. Citation2006; Lieberman & Hilliard Citation2006). To remedy the lack of research data and to accommodate the shift from input-oriented to output-oriented training, it has been advocated that a systematic approach to programme evaluation be designed that is uniformly applicable to all PGME programmes (Higgins et al. Citation2005; Musick, Citation2006). The present study was designed to systematically develop a generic inventory of the tasks of specialists that can be used in evaluating how well specialists are equipped for professional practice after completing their training.

Task inventories are a common feature of job analysis in human resource studies (Oswald Citation2003; Cascio & Aguinis Citation2005; Singh Citation2008). Although widely used in vocational training (Blackmore Citation1999), job analysis has no tradition in the professions, partly because professional groups tend to resist this type of evaluation of their activities (Blackmore Citation2000). In vocational training, the primary aim of the job analysis is to align the content of training programmes with the competencies required for independent practice. Systematic job analysis of the work of specialists could be similarly helpful in evaluating and improving the content of PGME programmes.

Apart from its usefulness as an instrument for evaluating training programmes, there are other potential uses for a comprehensive inventory of the tasks of specialists. The introduction of competency-based training has led to widespread concern about how to tease apart the different competencies that play a role in complex professional task performance and translate them to clinical teaching practice (Grant Citation1999; Leung Citation2002; ten Cate & Scheele Citation2007; Jones et al. Citation2011). Also, clinical work consists of tasks rather than competencies, and competency development is judged through observation and assessment of task performance in daily practice. To help bridge the gap between theory and practice, Entrustable Professional Activities (EPAs) have been introduced (ten Cate Citation2005; ten Cate & Scheele Citation2007). EPAs are defined as those professional activities that operationally define a profession and that a trainee can progressively be trusted to perform competently. Although very valuable, EPAs are limited to (critical) situations for specific specialties. In our opinion, in addition to rather specific EPAs and broadly defined competencies, an inventory of generic tasks of specialists could have value as an instrument to facilitate evaluation and improvement of curricula, training methods, coaching and assessment in specialty training.

Methods

Setting of the study

The study was conducted in the Netherlands among specialists representing all 28 medical specialties with accredited training programmes for surgical, medical, or supportive disciplines (Table 1). Training programmes are offered in eight geographically defined ‘education regions’. Participants were recruited in the north-eastern region from the three different hospital settings known in the Netherlands: (1) University Medical Centres, which provide tertiary care, undergraduate medical education and postgraduate specialty training, with a strong involvement in research; (2) large (>500 beds) general teaching hospitals providing secondary care (with some referral functions) and undergraduate and postgraduate training and (3) small (<500 beds) district hospitals providing basic secondary care.

Table 1  Overview of three groups of Dutch disciplines

In line with established ethical standards (Eva Citation2009; ten Cate Citation2009), and the legal requirements in the Netherlands, all participants were informed that participation was voluntary and the data would be treated confidentially and anonymously.

Specificity of tasks

Our main aim in creating a task inventory was to design an instrument to evaluate whether postgraduate specialty training programmes succeed in preparing trainees for the full scope of tasks they are expected to perform in professional practice. Since we aimed to create one inventory for surgical, medical and supportive specialties, there were two requirements to fulfil: the tasks had to be formulated in a sufficiently generic way, and tasks characteristic of surgical, medical, or supportive specialties had to be incorporated as well. So, on the one hand, we incorporated tasks that are not specifically discipline related, such as ‘keeping up to date with the professional literature’ or ‘collaborating with administrative support staff’. On the other hand, we included tasks like ‘performing surgical procedures in the operating theatre’ and ‘performing a laboratory test or imaging procedure’, because these tasks are routinely performed by surgical and supportive disciplines, respectively. We took care, however, to exclude tasks that were only a differentiation of a more general task, such as ‘performing coronary bypass surgery’, which is limited to a single surgical subspecialty.

Item development, initial phase

Since there is no established way to perform a job analysis of the tasks of specialists, we conducted an exploratory study in separate phases (Table 2). Using methods from job analysis in human resource management (Cascio & Aguinis Citation2005), we first performed a pilot study involving document analysis, observation and interviews. To identify the specific and generic tasks of every discipline, we reviewed curricular documents of all specialties. Subsequently we analysed field notes taken while observing a typical working day of 10 specialists from different specialties and transcripts of interviews with 17 specialists about their tasks in daily practice. Throughout this iterative process, we used initial findings as input for further data collection and analysis, continuously integrating and differentiating tasks to arrive at an optimal level of aggregation. Finally, a web-based questionnaire survey was sent to 328 specialists in order to establish task frequencies, validate findings and elicit critical comments and suggestions. This led to an initial inventory consisting of 73 tasks.

Table 2  Stages of inventory development

Item development, advanced phase

After the initial item development phase, we concluded that two important adjustments had to be made. First, several tasks had to be reworded because some concepts were interpreted differently across disciplines. ‘Procedures’ and ‘surgery’, for example, meant different things to surgeons and gastroenterologists. Second, the initial inventory needed further adaptation to improve applicability for the diagnostic/laboratory activities of the supportive disciplines. Therefore, we further observed typical representatives of the supportive disciplines, two specialists in pathology and two specialists in medical microbiology, during a typical working day, taking field notes to document tasks, activities and subjective impressions. Next, we conducted semi-structured interviews with the same specialists to verify our observations and gain further insight into their work. Subsequently, 28 specialists from all specialties were interviewed by JP and BB for approximately one hour to explore the comments from the questionnaire concerning the initial inventory, determine whether new tasks should be added and seek confirmation that all the relevant tasks had been covered at an appropriate level of aggregation. In order to ensure that proper attention was paid to any differences between hospital settings, we purposively selected interviewees from each of the three settings. The interviews were guided by a topic list and by the preliminary inventory. Starting questions were:

  1. Which tasks are typical of your specialty and distinguish it from other specialties?

  2. Which of your tasks are typical of your work and distinguish it from the work of your colleagues from your own specialty?

  3. Which tasks are most distinctive for you in the following environments: department, hospital, region and country (the professional association of your specialty)?

  4. Do you wish to make any further comments or suggest additions to the task inventory?

Before each interview, the interviewer analysed how the field of activity of the specialty in question was described in curricular documents, in order to identify specialty-specific tasks. We carefully checked that the task descriptions in the inventory were clear and the wording easy to understand and unambiguous. The respondents were asked to give examples of tasks and we compared examples given by different respondents. The interviews were audio-recorded, transcribed and analysed using qualitative data analysis software (Atlas.ti, Berlin, Germany). In an iterative process, the results of the analyses were used as input for subsequent interviews.

Validation phase

The analysis of the field notes and the interviews resulted in an advanced task inventory that was submitted to respondents in a web-based questionnaire. To ensure that the inventory was relevant to all hospital settings, we asked each of the 19 hospitals in the north-eastern education region to distribute the questionnaire among all specialists. Respondents were asked to indicate the frequency with which they performed each task on a seven-point scale: 1 = never; 2 = rarely; 3 = every year; 4 = every six months; 5 = every month; 6 = every week; 7 = every day. The questionnaire also collected demographic data and asked about respondents’ experience as a specialist. In order to promote reflection and evaluation, tasks with similar content were grouped and presented in 12 clusters. For member validation we invited comments, suggestions and additions at the end of each cluster of related tasks and at the end of the questionnaire.
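For illustration only (this is not part of the published methods), the sketch below shows how responses on this seven-point frequency scale might be aggregated into per-cluster mean scores; the task names, cluster mapping and data are hypothetical.

```python
import pandas as pd

# Hypothetical responses: one row per respondent, one column per task,
# coded on the seven-point scale (1 = never ... 7 = every day).
responses = pd.DataFrame({
    "task_01": [7, 6, 7],   # e.g. a direct patient care task
    "task_02": [5, 5, 4],   # e.g. a generic task
    "task_03": [1, 2, 1],   # e.g. a diagnostic task
})

# Hypothetical mapping of tasks to content-based clusters.
clusters = {
    "direct_patient_care": ["task_01"],
    "generic_tasks": ["task_02"],
    "diagnostic_tasks": ["task_03"],
}

# Mean frequency score per cluster, averaged over respondents.
cluster_means = {
    name: responses[tasks].mean(axis=1).mean()
    for name, tasks in clusters.items()
}
print(cluster_means)
```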

We assessed face, content and construct validity of the inventory and the internal consistency of the clusters of tasks (Terwee et al. Citation2007). Face validity – a qualitative judgement of the extent to which an instrument measures what it is supposed to measure – was assessed in the semi-structured interviews. Content validity – the extent to which the concepts of interest are captured by the measurement – was evaluated by examining the number of additional tasks proposed by the respondents and by analysing their comments and remarks. To further satisfy the criterion of content validity, we checked that the inventory contained no tasks that none of the responding specialists considered relevant. Moreover, clusters of similar tasks were expected to show internal consistency in terms of frequency scores. Finally, we determined the construct validity – the extent to which a score relates to other measures in a way that is consistent with predefined hypotheses – by determining whether differences in the frequency with which medical, surgical and supportive specialists performed specific (clusters of) tasks were in the expected direction, i.e. were in accordance with the following assumptions.

  1. Specialists in surgical and medical disciplines perform tasks directly related to patient care more frequently compared to specialists in supportive disciplines.

  2. There are no or only small differences in the frequency of performance of the generic tasks.

  3. Specialists in surgical disciplines perform tasks related to surgical procedures more frequently compared to specialists in medical and supportive disciplines.

  4. Specialists in supportive disciplines perform diagnostic tasks more frequently compared to specialists in surgical and medical disciplines.

Statistical analysis

Descriptive statistics were calculated to analyse task frequencies. Next, to determine whether the frequency scores reflected characteristic differences between the three disciplines and to establish construct validity, we compared the specialties that were the most typical representatives of their discipline. Over the years the boundaries between medical, surgical and supportive specialties have become increasingly blurred. Clinical genetics, for example, was traditionally regarded as a supportive discipline, but today's clinical geneticists have regular contacts with patients, involving not only genetic counselling but also taking patient histories, performing physical examination and initiating further investigations. Accordingly, clinical genetics can no longer be considered a typical representative of the supportive specialties and was therefore not suitable for determining whether the frequency scores reflected characteristic differences between the three main disciplines. To circumvent this problem, the authors decided by consensus which specialties were the most typical representatives of their discipline. The criteria were the relative importance of surgical procedures and of direct patient contacts: surgical specialties score high on both aspects, medical specialties score low on surgical procedures and high on direct patient contacts, and supportive specialties score low on both aspects (Table 1).

To enable statistical comparisons, the tasks were grouped into twelve clusters based on homogeneity of content. Cronbach's alpha was calculated to determine the internal consistency of each cluster, with values >0.7 being considered adequate (Bland & Altman Citation1997). To test whether the frequency scores of the task clusters differentiated between the three disciplines, the Games-Howell post hoc procedure was used to perform multiple comparisons in SPSS (Field Citation2009).
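The original analyses were run in SPSS; purely as an illustrative sketch (not the authors' code), the snippet below shows how Cronbach's alpha for one cluster and Games-Howell post hoc comparisons between disciplines could be computed in Python with the pingouin package, assuming a hypothetical data file and column names.

```python
import pandas as pd
import pingouin as pg

# Hypothetical data: per-respondent frequency scores for the tasks in one cluster,
# plus the discipline group (medical, surgical or supportive).
df = pd.read_csv("task_frequencies.csv")           # hypothetical file
cluster_items = ["task_01", "task_02", "task_03"]  # hypothetical tasks in one cluster

# Internal consistency of the cluster (>0.70 considered adequate).
alpha, ci = pg.cronbach_alpha(data=df[cluster_items])
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")

# Mean cluster score per respondent, then Games-Howell post hoc comparisons
# between the three disciplines (robust to unequal variances and group sizes).
df["cluster_score"] = df[cluster_items].mean(axis=1)
posthoc = pg.pairwise_gameshowell(data=df, dv="cluster_score", between="discipline")
print(posthoc[["A", "B", "diff", "pval"]])
```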

Results

As a result of the observations and interviews in the advanced item development phase, 26 new tasks were added to the initial inventory of 73 tasks. The special attention we paid to the supportive disciplines resulted in eight new tasks. Thirty-four task descriptions were reworded to improve precision and avoid ambiguity. For example, the ambiguity of the terms ‘procedures’ and ‘surgery’ was resolved by specifying the different locations where they are performed, such as the operating theatre or an office setting. Ten tasks were discarded as being further specifications of already included tasks; for example, ‘taking a history’ and ‘performing a physical examination’ form an integral part of a patient consultation. Other tasks were discarded because they were interpreted as a skill or cognitive activity rather than a task (e.g. ‘identifying gaps in my own knowledge’). This resulted in an advanced inventory of 89 tasks, which was further tested using a web-based questionnaire.

Seventeen of the 19 hospitals in the education region we studied (two small district hospitals declined to participate) distributed the questionnaire to all 2728 medical specialists working in these hospitals. We received 706 (38%) completed questionnaires (Table 3), yielding 112 comments, which resulted in the addition of two tasks: ‘e-mail correspondence with patients’ and ‘asking another medical specialist to perform a treatment as co-attending physician’. Four respondents proposed new tasks, including ‘discussing withdrawal of life support in newborns with severe congenital abnormalities’, which were considered too specific to warrant inclusion in the inventory. Similarly, several respondents proposed tasks which were already covered by the inventory. However, the large majority of the comments concerned personal information (working part-time or in different settings), or referred to personal opinions regarding general developments in the healthcare system affecting the work of specialists. Examples of such comments are: ‘I am concerned that the present emphasis on competencies will go at the expense of clinical knowledge’ and ‘increasing budget cuts force medical specialists to do more and more scut work’.

Table 3  Demographics

The average frequency of task performance ranged from 6.7 (every day) to 1.9 (rarely). Each task was performed at least every six months by at least 15% of the respondents (the complete inventory and accompanying frequency scores can be found in Table 4).

Table 4  Overview of all the tasks with mean frequency scores and percentage of respondents performing the task at least every six months

The internal consistency of the clusters of tasks was above the desired threshold of 0.70 (Bland & Altman Citation1997; Spiliotopoulou Citation2009) (Table 5). There were significant differences in frequency of performance of clusters of tasks between surgical, medical and supportive specialists (Tables 5 and 6). Tasks involving direct contact with patients (clusters 1–6) were performed more often by surgical and medical specialists than by supportive specialists. The only significant, albeit small, difference between surgical and medical specialists related to communication with patients and their families. The generic tasks (clusters 7–10) showed few and relatively small significant differences between the three groups of specialists, although supportive specialists performed fewer tasks related to education. Unexpectedly, surgical specialists performed significantly more management and research related tasks. To determine whether the latter finding might be due to oversampling of surgeons from the university medical centre, we performed an additional ANOVA with work setting (university medical centre, general hospital and district hospital) as covariate. Work setting proved to be significantly associated with the frequency of research tasks, F(1, 381) = 108.04, p < 0.001, but after controlling for the effect of work setting there remained a significant effect of specialty on the frequency of research tasks, F(2, 381) = 9.92, p < 0.001, suggesting that oversampling of surgeons in an academic setting was not the sole cause of the difference. In line with our hypotheses, surgical specialists performed significantly more surgical procedures than other specialists (cluster 11). Finally, supportive specialists performed significantly more diagnostic tasks compared to surgical and medical specialists (cluster 12).
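As above, the published analysis used SPSS; the sketch below merely illustrates how a comparable check, an ANOVA on research-task frequency with work setting entered as a single-degree-of-freedom covariate, could be set up with the pingouin package. The data file, column names and numeric coding of work setting are assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical data: mean frequency of research-related tasks per respondent,
# the specialty group, and a numeric code for work setting
# (e.g. 1 = university medical centre, 2 = general hospital, 3 = district hospital).
df = pd.read_csv("task_frequencies.csv")   # hypothetical file

# ANOVA of research-task frequency by specialty with work setting as covariate,
# mirroring the check described above (one degree of freedom for the covariate).
result = pg.ancova(data=df, dv="research_score",
                   between="specialty", covar="setting_code")
print(result[["Source", "F", "p-unc"]])
```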

Table 5  Comparison of frequency scores between disciplines: results from ANOVA and Cronbach Alpha

Table 6  Post hoc comparisons: 95% confidence intervals of mean differences between disciplines

Discussion

The outcomes of the study suggest that it is feasible to develop a generic inventory of the tasks of medical specialists that is applicable to surgical, medical and supportive specialties and has adequate face, content and construct validity as well as internal consistency.

The final inventory of 91 tasks met the predefined criteria of being applicable to all 28 specialties with training programmes in the Netherlands while also reflecting the major differences in tasks between typical medical, surgical and supportive specialties. Frequency of task performance ranged from every day to rarely. Although the frequency with which a task is performed does not necessarily equate to its importance or relevance, all tasks were performed regularly by at least 15% of the respondents and no tasks were denoted as irrelevant.

The significant differences in frequency of task performance were in the expected directions, and any differences in generic clusters were small. Supportive specialists reported lower frequencies of tasks related to education, which can be explained by the non-involvement of these specialties in undergraduate clinical training and by their limited role in patient education, due to the nature of their work (few contacts with patients). The significantly higher frequency of research and management tasks reported by surgical specialists was unexpected. We ruled out that it was solely attributable to oversampling of surgeons from an academic setting. Although this unexpected result may reflect reality, it remains to be fully explained. However, the comparisons of the inherent differences between medical, surgical and supportive disciplines in this study were not directed at defining the content of the disciplines but rather at using the typical differences between disciplines for establishing the validity of the task inventory. Given the generally good fit between results and expectations and the small size of the unexpected effects, it seems unlikely that the unexpected differences should give cause to question the validity of the inventory.

Strengths and limitations

To our knowledge, this was the first study to produce a comprehensive inventory of the tasks of specialists. The methodological rigour of the study, exemplified by triangulation of findings from document analyses, observations, interviews and questionnaires and the iterative process of data collection, is in accordance with the standard methods of job analysis in human resource management research, which are assumed to guarantee optimal comprehensiveness and objectivity (Tuckett Citation2005; Oliver-Hoyo & Allen Citation2006; Schwandt Citation2007). The comprehensiveness (content validity) of the inventory is supported by the fact that only two new tasks were introduced by the respondents to the questionnaire. The inclusion in the interviews of specialists from every specialty and the 706 questionnaires returned by respondents from all specialties and each of the different hospital settings suggest that the inventory is representative of all 28 specialties involved in specialty training in the Netherlands.

A general limitation of job analysis is that it inevitably relies on implicit and explicit conceptions that determine what is being measured and valued (Blackmore Citation2000). Our goal was to develop an instrument to evaluate 28 postgraduate training programmes, and we took great care to ensure that the inventory was comprehensive and relevant to all specialties. Consequently, the scope and content of the inventory are specifically suitable for this purpose. The frequency scores showed that supportive disciplines score relatively low on a substantial number of tasks, especially patient-related tasks. Because we devoted extra attention to the tasks of the supportive disciplines during the advanced item development phase, we have no reason to doubt the completeness of the inventory. Ultimately, the inventory was designed to evaluate several programmes simultaneously by asking recently registered specialists to indicate how well their training programme has prepared them for practice. Respondents then have the option to indicate whether a specific task is part of their routine work. This keeps the inventory generically applicable, because respondents themselves effectively filter out tasks that are redundant for their work. However, when the inventory is used for other purposes, e.g. within a single specialty, its applicability might be enhanced by removing redundant tasks and adding tasks with special relevance to that specialty or by appending specialty-specific Entrustable Professional Activities.

We believe the inventory can be used internationally, since most tasks seem to be relevant to the work of specialists in other Western countries. Nevertheless, account should be taken of international differences in culture, legal systems and the organisation of healthcare (Lynn Payer Citation1996). For example, tasks like ‘Avoiding litigation’ or ‘Setting up/running a business’, while not sufficiently relevant to warrant inclusion in the Dutch inventory, might have to be added to ensure validity in US settings.

Future research

As a next step, the inventory needs to be put to the test to determine whether it can fulfil its purpose. Several questions need to be addressed. Which tasks do recently registered specialists feel worst or best prepared for by their training programme? And will competency-based training lead to better prepared specialists? Although we believe the inventory to be internationally applicable, a cross-national validation study needs to be conducted to support this claim.

Conclusion

We established a comprehensive, generic, representative and valid inventory of tasks of medical specialists in the Netherlands. As a component of an advocated systematic approach (Higgins et al. Citation2005; Musick Citation2006), this inventory can be used to evaluate outcomes and content of training programmes and as an aid in curriculum development, training, coaching and assessment.

Acknowledgements

The authors would like to acknowledge and thank Professor Janke Cohen-Schotanus, Abe Meininger, Professor Yvonne Steinert and Professor Bernard Nijstad for their help and advice. Moreover, we thank Wilko van den Bergs, Janne Allema and Anna Yedema for their invaluable contribution to the pilot studies. Dr Andre Schultz and Mereke Gorsira are gratefully acknowledged for their assistance in translation. Finally, we thank all specialists who contributed to this study.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

Notes

*For the purpose of this article, the term ‘specialist’ refers to a physician who has completed a postgraduate specialty training programme in a medical, surgical or supportive specialty.

References

  • Ashcroft RE. Searching for the good doctor. Br Med J (International Edition) 2002; 325(7366)719
  • Beckett M, Hulbert D, Brown R. The new consultant survey 2005. Emergency Med J 2006; 23(6)461–463
  • Benstead K. What is valuable for specialist registrars to learn in order to become good consultant clinical oncologists?. Clin Oncol (Royal College of Radiologists (Great Britain)) 2006; 18(7)549–554
  • Blackmore P. A categorisation of approaches to occupational analysis. J Vocational Educ Training Vocational Aspect Educ 1999; 51(1)61–76
  • Blackmore P. A conceptual framework for approaches to occupational analysis. Res Post-Compulsory Educ 2000; 5(3)289–304
  • Bland JM, Altman DG. Cronbach's alpha. BMJ (Clin Res Ed.) 1997; 314(7080)572–572
  • Card SE, Snell L, O'Brien B. Are Canadian general internal medicine training program graduates well prepared for their future careers?. BMC Med Educ 2006; 6: 56–56
  • Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From flexner to competencies. Academic Medicine 2002; 77(5)361–367
  • Cascio WF, Aguinis H. Applied psychology in human resource management, 6th ed. Pearson/Prentice Hall, Upper Saddle River, NJ 2005
  • Cooper NA. Training doctors in the new English NHS. Political correctness or evidence based education?. BMJ (Clin Res Ed.) 2006; 333(7558)99
  • de Bont A. De goede dokter: Verstrikt in idealen. Medisch Contact 1997; 52(11)346–347
  • Eva KW. Research ethics requirements for medical education. Med Educ 2009; 43(3)194–195
  • Field A. Discovering statistics using SPSS, 3rd ed. SAGE Publications, London 2009
  • Frank J. The CanMEDS 2005 physician competency framework. Better standards. Better physicians. Better care. The Royal College of Physicians and Surgeons of Canada, Ottawa 2005
  • Grant J. The incapacitating effects of competence: A critique. Adv Health Sci Educ Theory Practice 1999; 4(3)271–277
  • Hays RB. Reforming medical education in the United Kingdom: Lessons for Australia and New Zealand. Med J Australia 2007; 187(7)400–403
  • Higgins R, Gallen D, Whiteman S. Meeting the non-clinical education and training needs of new consultants. Postgraduate Med J 2005; 81(958)519–523
  • Jones R, Higgs R, de Angelis C, Prideaux D. Changing face of medical curricula. Lancet 2001; 357(9257)699–703
  • Jones MD, Rosenberg AA, Gilhooly JT, Carraccio CL. Perspective: Competencies, outcomes, and controversy – Linking professional activities to competencies to improve resident education and practice. Acad Med J Assoc Am Med Colleges 2011; 86(2)161–165
  • Leach DC. Changing education to improve patient care. Quality Health Care 2001; 10(Suppl 2)54–58
  • Lenton SW, Dison PJ, Haines LC. A BPA survey of recently appointed consultants. Arch Dis Childhood 1994; 71(4)381–385
  • Letters. What's a good doctor & how do we make one. Br Med J 2002; 325: 711–716
  • Leung W. Competency based medical training: Review. Br Med J 2002; 325(7366)693–695
  • Liebelt EL, Daniels SR, Farrell MK, Myers MG. Evaluation of pediatric training by the alumni of a residency program. Pediatrics 1993; 91(2)360–364
  • Lieberman L, Hilliard RI. How well do paediatric residency programmes prepare residents for clinical practice and their future careers?. Med Educ 2006; 40(6)539–546
  • Lillevang G, Bugge L, Beck H, Joost-Rethans J, Ringsted C. Evaluation of a national process of reforming curricula in postgraduate medical education. Med Teach 2009; 31(6)e260
  • Lynn Payer. Medicine and culture: Varieties of treatment in the United States, England, West Germany and France, 5th ed. Holt, New York, NY 1996
  • Maudsley RF, Wilson DR, Neufeld VR, Hennen BK, DeVillaer MR, Wakefield J, MacFadyen J, Turnbull JM, Weston WW, Brown MG, et al. Educating future physicians for Ontario: Phase II. Acad Med J Assoc Am Med Colleges 2000; 75(2)113–126
  • Miles S, Leinster SJ. Identifying professional characteristics of the ideal medical doctor: The laddering technique. Med Teach 2010; 32(2)136–140
  • Musick DW. A conceptual model for program evaluation in graduate medical education. Acad Med J Assoc Am Med Colleges 2006; 81(8)759–765
  • Oliver-Hoyo M, Allen D. The use of triangulation methods in qualitative educational research. J College Sci Teach 2006; 35(4)42–47
  • Oswald FL. Job analysis: Methods, research, and applications for human resource management in the new millennium (book). Personnel Psychol 2003; 56(3)800–802
  • Reeves S, Fox A, Hodges BD. The competency movement in the health professions: Ensuring consistent standards or reproducing conventional domains of practice?. Adv Health Sci Educ 2009; 14(4)451–453
  • Roberts KB, Starr S, DeWitt TG. The university of Massachusetts medical center office-based continuity experience: Are we preparing pediatrics residents for primary care practice?. Pediatrics 1997; 100(4)E2 (1–6)
  • Scheele F, Teunissen P, Van Luijk S, Heineman E, Fluit L, Mulder H, Meininger A, Wijnen-Meijer M, Glas G, Sluiter H, et al. Introducing competency-based postgraduate medical education in the Netherlands. Med Teach 2008; 30(3)248–253
  • Schwandt TA. The SAGE dictionary of qualitative inquiry, 3rd ed. SAGE Publications, Thousand Oaks, CA 2007
  • Singh P. Job analysis for a changing workplace. Human Resour Manage Rev 2008; 18(2)87–99
  • Spiliotopoulou G. Reliability reconsidered: Cronbach's alpha and paediatric assessment in occupational therapy. Aust Occup Ther J 2009; 56(3)150–155
  • Talbot M. Monkey see, monkey do: A critique of the competency model in graduate medical education. Med Educ 2004; 38(6)587–592
  • ten Cate O. Entrustability of professional activities and competency-based training. Med Educ 2005; 39(12)1176–1177
  • ten Cate O. Why the ethics of medical education research differs from that of medical research. Med Educ 2009; 43(7)608–610
  • ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice?. Acad Med J Assoc Am Med Colleges 2007; 82(6)542–547
  • Terwee CB, Bot SDM, de Boer MR, van der Windt DAWM, Knol DL, Dekker J, Bouter LM, de Vet HCW. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol 2007; 60(1)34–42
  • Tuckett AG. Part II. Rigour in qualitative research: Complexities and solutions. Nurse Res 2005; 13(1)29–42
