
Can assessment be a barrier to successful professional development?


Abstract

In recent years, there has been a profound shift towards objectivity in high-stakes clinical assessments, to reassure regulators and the public that our graduating clinicians are competent. However, a recent report has suggested that since the introduction of high-stakes assessment the relative levels of patient harm have increased. Although direct causation has not been claimed, it is incumbent upon responsible educators to reflect on the possibility that our students are passing the assessments but that the assessments are not sophisticated enough to predict real-world competency. In addition, students increasingly see the purpose of their clinical education as passing assessments, rather than treating patients. This unfortunate focus likely stems from a school education that develops a fixed, task-focused mindset. In this manuscript, the authors explore the potential issues around mindset and assessment sophistication, and offer possible solutions to tackle both problems.

Introduction

The need to establish that our health care professionals are competent to undertake the role for which they have been trained and registered is universally recognised. It is also accepted that to demonstrate competency some form of assessment is needed. Therefore, it stands to reason that the use of competency-based assessment should address this need. However, the stark reality is that recent data suggest that since the introduction of high-stakes objective assessments, the harm to patients has not decreased as intended, but has in fact increased.Citation1,2 Cynics may argue that such statistics simply reflect the ‘blame culture’, and can be explained by the increase in patient complaints and ‘ambulance chasing’ legal practices. While the authors accept that no causal link has been established, it is noteworthy that many of our colleagues who are directly involved with teaching undergraduate students have recognised changes in the ability of many to handle perceived failure, as well as the numerous students with a directed, task-focused approach to learning, exemplified by the eternal question ‘Is this going to be on the test?’ Therefore, the authors contend that these changes in student behaviour, combined with the failure to alter teaching and assessment practices, could be leading to a situation where students are passing the assessments set, but the teaching and assessments are falling short of developing and predicting real-world clinical competency. To overcome these issues, it is necessary to explore the origin of these maladaptive student behaviours, their impact on learning, and the limitations of current approaches to the assessment of competency when undertaken against this behavioural background.

Student behaviours and learning

Self-efficacy is an individual’s belief in their capability to exercise control over their own level of functioning,Citation3 and is of critical importance to personal goal setting. The ability to set personal goals is fundamental to personal developmentCitation3,4 because the more an individual believes that they can do something, the higher the goal they will set for themselves, and the harder they will try to achieve it.Citation3 Essentially, individuals with a high sense of efficacy visualise success, while those with a low sense of efficacy see only potential failure.Citation5 Therefore, the higher the efficacy of an individual, the more likely they are to succeed academically, as well as maintain psychological health. It is also important to realise that the development of ‘a resilient sense of efficacy requires experience in overcoming obstacles through perseverant effort. Some difficulties and setbacks in human pursuits serve a useful purpose in teaching that success usually requires sustained effort’.Citation5

More recently, concepts around efficacy have been explored through the model of mindsets.Citation6 In this model, two basic types of mindset are described: fixed and growth. The individual with the fixed mindset believes that their intelligence is fixed. If intelligence cannot be expanded through endeavour, then any failure is interpreted as a failure of oneself, i.e. ‘I am a failure’. This is a destructive thought process, and if present means that every assessment is perceived as an opportunity for failure rather than an aid to future learning. The antithesis of the fixed mindset is the growth mindset. Here, individuals believe that their intelligence can be developed through endeavour and effort in response to feedback. Clearly, within the health professions, it is the growth mindset that is required in the areas related to the intended role, as it is fundamental to the reflective processes that drive changes to self-regulation,Citation7 which in turn underpin the continual striving for improved performance that is essential for patient care.

However, data from a study conducted at the University of Liverpool suggest that 57% of students entering the university may have a fixed mindset.Citation8 Although these data did not relate directly to students on medical professional programmes, and further work is needed to establish the size of the problem in this context, it is a worrying finding. Indeed, the authors would speculate that the proportion of students with fixed mindsets within medical programmes may be even greater, due to the highly competitive nature of entry into these programmes and the changes to our primary and secondary education systems. Over the last few decades, there has been an ever-growing appetite for standardised assessment outcomes against which to measure the performance of both children and schools, to such an extent that assessment performance seems to have become the goal of education, rather than its by-product. This has likely led to many of our children adopting a task focus, as well as potentially being labelled as a ‘success’ or ‘failure’ from a very young age. Many parents have responded by ‘check-boxing’ their children’s lives to ensure that their homework is done to perfection so that they get into the right schools, and ultimately gain a place at the ‘right’ university, doing the ‘right’ degree.Citation9 Schools have been subject to league tables, and many teachers have responded by teaching to the test to secure their jobs or to obtain promotion. Personal experience suggests that nowhere is this pressure likely to be more prevalent than upon those children who want to (or have been ‘encouraged’ to) enter the medical professions. Unfortunately, the continuous intervention from well-meaning parents and teachers to ensure that children get the top grades has resulted in students focussing purely on achievements rather than the learning process, and in some cases has destroyed the enjoyment gained from learning. In addition, data suggest that this over-involvement and micro-management of children, together with unrealistically high expectations, is leading to increased levels of depression amongst young people, who are lacking both self-efficacy and internal motivation.Citation10 Children subject to these types of interventions are more likely to be easily discouraged by failure, develop low efficacy, gain a task focus, have an increased tendency to cheat, avoid difficult situations and ultimately establish a fixed mindset. In this archetype, it is not a case of assessment driving learning, but one of assessment driving behaviour.

Over the last two decades, the authors and their colleagues have become all too familiar with the increase in these fixed-mindset behaviours amongst undergraduates and their consequences, which include:

Considering the main purpose of the information delivered in lectures to be gaining the knowledge needed to pass assessments, rather than supporting the treatment of patients;

Carefully selecting tasks and staff to ensure that Work-Based Assessments (WBAs) are passed, rather than using them as a mechanism to inform the required personal development;

Challenging feedback that is perceived as negative, rather than accepting it and endeavouring to improve;

Making collective efforts to memorise questions and place them into banks to be learned ahead of written examinations, rather than using the assessment to gauge what they do not know or understand;

Organising sessions to mechanistically practise Objective Structured Clinical Examinations (OSCEs), rather than using the feedback they receive to identify areas for deliberate practiceCitation4,11;

Regarding portfolios as a series of discrete tasks to complete, rather than an integrated longitudinal tool for personal reflection to drive future development; and,

Seeing patients as commodities from which to gain sufficient numbers of the treatments required to graduate, rather than as people who need care.Citation12

Basically, in the continuum from high school education, students with a fixed mindset see the staff responsibility as being to teach them what is going to be on the assessment, and their own responsibility as being simply to learn how to pass it. This is a long way from the ideal traits we would like to see in our medical professionals, i.e. individuals with ‘an enduring inner passion for learning’, who are ‘highly self-motivated, and seek out opportunities and challenges to advance their knowledge and skills’.Citation13

Overall, the prevailing fixed mindset in many of our students conditions them to be resistant to our carefully constructed assessment-for-learning strategies,Citation14 and all the effort that has gone into planning formative and summative learning opportunities evaporates into the milieu of a series of ‘tasks to pass’, establishing the wrong educational impact.Citation15 However, being aware of these behavioural issues empowers educators to make change: for health care curricula to cultivate and demonstrate the requisite real-world competencies, they must also establish the growth mindset, by having the right educational impact to drive appropriate holistic learner development.

Limitations of current approaches to assess competency

We have come a long way from the Aristotelian view that the assessment of competency simply requires ensuring that an individual possesses sufficient knowledge.Citation16 It is now widely accepted that competency is a multidimensional construct that encompasses not only knowledge and skills but also ‘communication, clinical reasoning, emotions, values, and reflection’.Citation17 To enable the objective assessment of these dimensions of competency, a range of highly objective tools has been developed and supported by a plethora of literature; these include: Single Best Answer (SBA) questionsCitation18; Script Concordance ItemsCitation19; Situational Judgement TestsCitation20; OSCEsCitation21; WBAsCitation22; and reflective portfolios.Citation23 Although there can be little doubt that the underlying principles of these tools are well grounded, it has been suggested that they may lack sufficient sophistication to measure real-world competency, as well as having the wrong educational impact.Citation2,12,15,24 The reasons for this include:

An assumption that ‘competency’ is both context-independent and stable. This assumption is misguided: competency is neither stable nor context-independent.Citation2,12,24–26 Moreover, modern conceptions of competency recognise that it is the ability to adapt flexibly to changing work circumstances.Citation26,27 Therefore, isolated tests of competency are likely to be inadequate for the prediction of real-world competency.

The outcomes from competency assessments inform the successful students that they are now competent. This signals to the students that any areas of ‘failure’ within the assessment are not important and can be ignored, which is worrying given the wide range of marks likely to be associated with the passing students. Moreover, student success in a competency assessment also indicates to teachers that they can apply less effort to observation and feedback, as the student is now ‘good’.Citation2 In the extreme, a positive outcome from a competency assessment risks putting the student into a state of ‘automation’, which is known to arrest ongoing development.Citation4,11

Competency cannot be established simply by measuring experience,Citation28–30 a situation that may undermine many current portfolio or target-based approaches, where the level of activity is used as an important gauge of competence.

Many of the assessment tools lend themselves to psychometric analysis, which unfortunately has too frequently become the wrong driver for assessment design because authenticity (assessment validity) has often been sacrificed for the sake of objectivity to ensure statistical reliability.Citation2,24

The quest for objectivity has also encouraged reductionist approaches to the design of assessment tools that risk their trivialisation by both teachers and students.Citation24 This is a situation that also applies to many reflective portfolios.Citation13

If the fixed-mindset, task-focused behavioural background is superimposed on an intermittent series of ‘assessment packets’, each of which comprises a series of reductionist check-listed items, options lists or tasks that must be passed, then it is easy to see why there may be significant problems with developing the holistic real-world clinical abilities of our graduates. In addition, we are also potentially damaging their approach to lifelong learning, because students who enter our programmes with a fixed mindset will leave with a more entrenched one, creating less adaptive clinicians.

A move from assessment to continual professional development

Irrespective of any issues with learners’ mindsets, there has been a growing realisation that current approaches to competency assessment within medicine may be too superficial. This has led to calls for an increase in sophistication at both the assessment and the programme level. At the assessment level, innovative suggestions have included linked OSCE stations, where errors can be followed across stations,Citation2 and ‘entrustability’, where WBAs are true to the holistic, trans-domain nature of the real-world task and are repeated until the learner is ‘trusted’ to complete the task with minimal supervision.Citation31,32 Meanwhile, at the programme level there have been long-standing arguments for embracing programmatic assessment, where all the assessments are carefully crafted together to form a longitudinal, integrated structure that enables a better understanding of how each learning outcome is being developed for each individual student.Citation24,25 This approach also requires a move from a simple numeric interpretation of ‘pass/fail’ to the use of more qualitative data that requires panels of experts to act as an ‘interpretive community’,Citation26 a concept that reintroduces professional judgement into the progression process.

These suggestions all have their attractions for addressing the potential lack of sophistication, especially programmatic approaches.Citation12 However, when programmatic approaches have been tried they have been subject to problems with aspects of utility (a construct that considers assessment in terms of validity, reliability, educational impact, feasibility and acceptabilityCitation15), which include: students seeing the proximity of the test as the driver for learning, and not seeing the difference between formative and summative assessmentsCitation33; the ability of an assessment element to support or inhibit learning depending on how it is perceived by the individual learnerCitation33; variance in the ability, or willingness, of staff to give feedback having a significant impact on learningCitation34; and staff and student resistance to the perceived increase in workload, combined with difficulty in aggregating data from multiple different assessment formats to enable their meaningful interpretation.Citation34

From the authors’ perspective, there is also the added problem that all of these approaches still treat assessment as an ‘event’, and irrespective of how these events are operationalised or integrated, the fact that they are events is not constructively alignedCitation35 with developing longitudinally reflective clinicians or with tackling the fixed mindset when present, which suggests that such approaches will still be likely to foster it, a conclusion that the available data would support for some students.Citation33,34 Therefore, if the assessment events are likely to be misaligned, then the solution would seem to be to effectively remove them by transforming them into a model of continuous professional development. In this paradigm, for each student, data within the workplace are continuously collected at high levels of granularity through daily observation by all staff, so that any issues on the day can be identified and rectified through ‘coaching’ (telling the student what they need to change next time and how to do it, as opposed to simple feedback, i.e. what they did right/wrong this time). These daily data can then be appropriately integrated, triangulated, analysed with the appropriate learning analytics, and displayed in real time to show holistic real-world performance.Citation31,32 Large data-sets of this type can then be used by students and staff to identify longitudinal trends, which can then inform personal/guided reflection on any need to change self-regulationCitation7 through embracing personal deliberate practice,Citation4,11 in combination with supported targeted training where needed. In addition, the workplace data could be integrated with data from knowledge assessments and simulated situations, which with the right design could also be daily (or at least regular) events contextually aligned to the requisite clinical activity (Figure 1).

In this model of continuous professional development, progression through the programme is measured against developmental goals, with an expert panel evaluating when the learner’s holistic performance across a defined range of capabilities has reached the requisite level of consistency, rather than simply when a series of isolated assessments has been passed. This approach also recognises that different learners will progress at different rates, and that the development of the growth mindset requires endeavour that must be supported and recognised, not ignored. Furthermore, part of the educational process would be to expect the learner to use the data to develop the essential insight to judge when they themselves are ready, so that any panel evaluation becomes a confirmatory process, not a ‘breaking news event’. This could be operationalised by the learner identifying their personal developmental needs ahead of the panel meeting, and the panel confirming or challenging this judgement to establish the degree of insight being shown.
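Purely as an illustration of how such daily data might be structured and summarised (a minimal sketch in Python, using hypothetical record and function names rather than the authors’ actual system), each observation could be aligned to a stakeholder outcome and a rolling trend computed per outcome to inform coaching:

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict
from statistics import mean

@dataclass
class Observation:
    """A single workplace observation for one learner (hypothetical structure)."""
    learner_id: str
    observed_on: date
    outcome_id: str          # the stakeholder outcome this observation maps to
    rating: float            # e.g. 1 (fully supported) .. 5 (independent)
    coaching_note: str = ""  # what to change next time, not just what went right/wrong

def longitudinal_trend(observations, learner_id, window=28):
    """Group one learner's observations by outcome and return the mean rating
    over the most recent `window` observations for each outcome, so that staff
    and the learner can see whether performance is consolidating over time."""
    by_outcome = defaultdict(list)
    for obs in sorted(observations, key=lambda o: o.observed_on):
        if obs.learner_id == learner_id:
            by_outcome[obs.outcome_id].append(obs.rating)
    return {outcome: mean(ratings[-window:]) for outcome, ratings in by_outcome.items()}
```

In practice, trend data of this kind would be triangulated with knowledge-assessment and simulation data before being used for reflection or any panel review.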

Figure 1 Illustrative diagram of a complete theoretical structure to support continuous professional development. A central database core (A) is populated with the relevant stakeholder outcomes, which are aligned in a ‘one-to-many’ relationship. Each of the modules sitting off the database is fully mapped to it, so that all data can be triangulated through the relevant stakeholder outcomes.

Notes: Daily performance data are collected for each learner in the workplace (B) and aligned in the core to the relevant outcomes. In addition, the workplace performance data inform the core of the current activity so that questions from the examination bank (D) can be ‘pushed’ to a student device (C), allowing relevant knowledge to be contextually embedded and triangulated with the work-based performance in the core. Furthermore, because all questions in the examination bank are aligned to the stakeholder outcomes, it is possible to link all modalities of assessment to each outcome to form the longitudinal pattern of development (E), combined with a detailed understanding of where the issues lie so that the learner can undertake deliberate practice. Through this approach, it is also possible to integrate multiple skills to understand real-world performance against the required outcomes, and to display this in a novel format, such as a barcode that shows the periodicity of issues and the progress towards appropriate independence.
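Again as a minimal, hypothetical sketch (the identifiers below are illustrative and not the school’s actual database design), the contextual ‘push’ of questions described in the notes amounts to selecting unanswered items from the bank that are tagged with the outcomes relevant to the day’s workplace activity:

```python
def push_questions(question_bank, todays_outcomes, already_answered, limit=5):
    """Select up to `limit` unanswered questions tagged with any outcome the
    learner is working on today, so that knowledge testing is contextually
    embedded in, and can be triangulated with, the day's clinical activity."""
    relevant = [
        q for q in question_bank
        if q["outcome_id"] in todays_outcomes and q["id"] not in already_answered
    ]
    return relevant[:limit]

# Toy example: today's workplace activity involved gaining consent,
# so a consent-related question is pushed to the learner's device.
bank = [
    {"id": "q1", "outcome_id": "periodontal_assessment", "stem": "..."},
    {"id": "q2", "outcome_id": "consent", "stem": "..."},
]
print(push_questions(bank, todays_outcomes={"consent"}, already_answered=set()))
```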

Overall, the authors are describing a model of development in which the current focus on assessment is lost within a framework of continuous support informed by large data sets. The purpose of the approach is to develop a holistic clinician through coaching towards real-world developmental goals that are aligned to a series of capabilities, with progression evaluated at fixed points in time by both the learner and an expert panel. It is also a model that would hope to overcome the all too familiar problem of staff failing to fail,Citation36 which is a barrier to assessment approachesCitation7 because coaching someone on how to improve is simply more pleasant than telling them that they are not good enough. However, it must be recognised that what is being suggested is not an all-encompassing panacea, because it would not work for those students who, for whatever reason, cannot develop within realistic timescales. Nevertheless, the approach would serve to identify such students sooner, so that they can be guided towards the path(s) appropriate for them.

Operationalising the model of continuous professional development described here requires significant innovation in both curriculum design and the technology-enhanced approaches that support it (Figure 1). However, relatively advanced designs have already been created and successfully operationalised within our curriculum.Citation37,38 This has required cooperation between all departments of the school to ensure a unified approach to assessment and feedback, so that both generic and subject-specific learning outcomes can be assessed within the construct of holistic development and pertinent feedback given. At our dental school, which has a non-modular programme, we are now able to support the development of our students with the provision of over 5000 observations per student, which can be holistically integrated to provide developmental coaching. Crucially, independent data from the UK National Student Survey suggest that there is a high level of student satisfaction (approx. 90%) in the domain of ‘assessment and feedback’ amongst our final-year students, which is remarkable because data suggest that programmatic approaches are not popular with students.Citation33,34 In addition, this amount of data supports defensible progression decisions because, ultimately, reliability is a function of data quantity, and validity a function of authenticity.Citation24,25 To move forward in this area, active research is underway to develop artificial intelligence algorithms to analyse the large data-sets in order to predict future performance, as well as to provide highly tailored coaching advice for every learner. In addition, the changes are subject to ongoing evaluation through staff and student feedback and validity and reliability analysis, as well as through work with postgraduate colleagues to establish readiness for practice by measuring performance; however, this is outside the scope of the current manuscript.
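The claim that reliability is a function of data quantity can be illustrated with a standard psychometric result, the Spearman-Brown prophecy formula (offered here as a generic illustration, not as the analysis used in the authors’ programme): lengthening a measurement by a factor k raises its reliability as

```latex
% Spearman-Brown prophecy formula: reliability of a measurement lengthened k-fold
\rho_{k} = \frac{k\,\rho_{1}}{1 + (k - 1)\,\rho_{1}}
% Illustration: a single observation with reliability rho_1 = 0.2 would give
% rho_50 = (50)(0.2) / (1 + 49(0.2)) = 10 / 10.8, i.e. approximately 0.93, across 50 observations.
```

In other words, it is the aggregation of many low-stakes observations per learner that makes the resulting judgements collectively defensible.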

In closing, the authors’ experience suggests that, irrespective of what changes are made to learning design, substantive improvements to student learning can be made by: (1) introducing training early in the programme of study to help affected students overcome their fixed mindsets and prepare them to learn appropriately; (2) identifying and training staff with a fixed mindset, so that they can engage with development rather than assessment; (3) putting regular time aside for staff training in the delivery of coaching, which also needs to be monitored; and (4) embracing the ethos of holistic continuous professional development to support lifelong learning, rather than multiple staged assessment events. This requires longitudinal approaches to integrating data that transcend the constraints of individual assessment events and track across them, because real-world skills are integrative, not separate.

Notes on contributors

Luke Dawson is Director of Undergraduate Education at the School of Dentistry and a senior lecturer in Oral Surgery. His current research interests relate to the development and measurement of competency using technology-enhanced approaches.

Kathryn Fox, PhD, is a senior lecturer and teaching lead in Restorative Dentistry at the School of Dentistry. Her current research interests relate to understanding the psycho-social factors that influence learning and the development of professional competence.

Disclosure statement

No potential conflict of interest was reported by the authors.

Acknowledgements

The authors would like to recognise all the staff working in the School of Dentistry at the University of Liverpool for their contributions to developing the approaches outlined in this manuscript.

References

  • James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf. 2013 Sep;9(3):122–8. doi:10.1097/PTS.0b013e3182948a69
  • Eva KW, Bordage G, Campbell C, Galbraith R, Ginsburg S, Holmboe E, et al. Towards a program of assessment for health professionals: from training into practice. Adv Health Sci Educ Theory Pract. 2015 Nov;21:1–17.
  • Bandura A. Perceived self-efficacy in cognitive development and functioning. Educ Psychol. 1993;28(2):117–48.
  • Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004 Oct;79(10 Suppl):S70–81. doi:10.1097/00001888-200410001-00022
  • Bandura A. Exercise of personal and collective efficacy in changing societies. In: Bandura A, editor. Self-efficacy in changing societies. New York (NY): Cambridge University Press; 1995. p. 3. doi:10.1017/CBO9780511527692
  • Dweck CS. The development of ability conceptions. In: Wigfield A, Eccles JS, editors. Development of achievement motivation. San Diego (CA): Elsevier; 2002. p. 57–88. doi:10.1016/B978-012750053-9/50005-X
  • Butler DL, Winne PH. Feedback and self-regulated learning: a theoretical synthesis. Rev Educ Res. 1995;65(3):245–81. doi:10.3102/00346543065003245
  • Forsythe A, Johnson S. Thanks, but no-thanks for the feedback. Assess Eval Higher Educ. 2016 Jul 5;1:1–10. doi:10.1080/02602938.2016.1202190
  • Lythcott-Haims J. How to raise an adult: break free of the overparenting trap and prepare your kid for success. New York (NY): Henry Holt and Company; 2015.
  • Luthar SS, Becker BE. Privileged but pressured? A study of affluent youth. Child Dev. 2002;73(5):1593–610.
  • Ericsson KA. An expert-performance perspective of research on medical expertise: the study of clinical performance. Med Educ. 2007 Dec;41(12):1124–30.
  • Dawson LJ, Mason BG, Bissell V, Youngson C. Calling for a re-evaluation of the data required to credibly demonstrate a dental student is safe and ready to practice. Eur J Dent Educ. 2016 Mar 1;21:130–135.
  • Hodges BD. Sea monsters & whirlpools: navigating between examination and reflection in medical education. Med Teach. 2015 Jun 12;37(3):261–66. doi:10.3109/0142159X.2014.993601
  • Black P, Wiliam D. Assessment and classroom learning. Assess Educ. 1998;5(1):7–74. doi:10.1080/0969595980050102
  • van der Vleuten CP. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ Theory Pract. 1996 Jan;1(1):41–67. doi:10.1007/BF00596229
  • Robb FC. Aristotle and education. Peabody J Educ. 1943;20(4):202–13. doi:10.1080/01619564309535765
  • Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002 Jan 9;287(2):226–35. doi:10.1001/jama.287.2.226
  • Case SM, Swanson DB. Constructing written test questions for the basic and clinical sciences. 3rd ed. Philadelphia (PA): National Board of Medical Examiners; 1998.
  • Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The script concordance test: a tool to assess the reflective clinician. Teach Learn Med. 2000 Oct;12(4):189–95. doi:10.1207/S15328015TLM1204_5
  • Patterson F, Zibarras L, Ashworth V. Situational judgement tests in medical education and training: research, theory and practice: AMEE Guide No. 100. Med Teach. 2015 Dec 24;38(1):3–17.
  • Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975 Feb 22;1(5955):447–51.
  • Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2009 Jul 3;29(9–10):855–71.
  • Gordon J. Assessing students’ personal and professional development using portfolios and interviews. Med Educ. 2003;37(4):335–40. doi:10.1046/j.1365-2923.2003.01475.x
  • van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205–14. doi:10.3109/0142159X.2012.652239
  • van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ. 2005 Mar;39(3):309–17.
  • Govaerts M, van der Vleuten CPM. Validity in work-based assessment: expanding our horizons. Med Educ. 2013 Dec;47(12):1164–74.
  • Fraser SW, Greenhalgh T. Complexity science: coping with complexity: educating for capability. BMJ. 2001 Oct 6;323(7316):799–803. doi:10.1136/bmj.323.7316.799
  • Dawes R. House of cards: psychology and psychotherapy built on myth. New York (NY): Free Press; 1996. p. 1.
  • Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005 Feb 15;142(4):260–73. doi:10.7326/0003-4819-142-4-200502150-00008
  • Butterworth JS, Reppert EH. Auscultatory acumen in the general medical population. JAMA. 1960 Sep 3;174(1):32–4. doi:10.1001/jama.1960.03030010034009
  • ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005 Dec;39(12):1176–7.
  • ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ. 2013 Mar;5(1):157–8. doi:10.4300/JGME-D-12-00380.1
  • Heeneman S, Oudkerk Pool A, Schuwirth LWT, van der Vleuten CPM, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015 Apr 28;49(5):487–98.
  • Bok HG, Teunissen PW, Favier RP, Rietbroek NJ, Theyse LF, Brommer H, et al. Programmatic assessment of competency-based workplace learning: when theory meets practice. BMC Med Educ. 2013 Sep 11;13(1):123.
  • Biggs J. Enhancing teaching through constructive alignment. High Educ. 1996;32(3):347–64. doi:10.1007/BF00138871
  • Bush HM, Schreiber RS, Oliver SJ. Failing to fail: clinicians’ experience of assessing underperforming dental students. Eur J Dent Educ. 2013 Nov;17(4):198–207.
  • Dawson L, Mason B. Developing and assessing professional competence: using technology in learning design. In: Bilham T, editor. For the love of learning – innovations from outstanding university teachers. Basingstoke: Palgrave Macmillan; 2013. p. 135.
  • Dawson L, Mason B, Balmer C, Jimmieson J. Developing professional competence using integrated technology-supported approaches: a case study in dentistry. In: Fry H, Ketteridge S, Marshall S, editors. A handbook for teaching and learning in higher education: enhancing academic practice. 4th ed. London: Routledge; 2015. p. 1–7.