
On accreditation standards, competence assessments and gate-keeping: Houston, we have a problem!

Pages 193-197 | Received 11 Nov 2021, Accepted 20 Jan 2022, Published online: 24 Feb 2022

ABSTRACT

This piece is a commentary on an important article, “An examination of accreditation standards between Australian and US/Canadian doctoral programs in clinical psychology”. The commentary complements and extends the original article by providing additional data on clinical supervision and examination requirements for clinical psychology training in the US and Australia. Indications that end-of-placement supervisor assessments are less reliable than expected, extremely low fail-rates during training, and the absence of a comprehensive and rigorous final examination for registration with AOPE together constitute a serious concern and raise the possibility of a compromised competence assessment system. Inadequate assessment is especially concerning in the context of the reduced clinical supervision requirements within the new accreditation standards.

KEY POINTS

What is already known about this topic:

  1. The commentary analyses and comments on an important submission to the journal, “An examination of accreditation standards between Australian and US/Canadian Doctoral programs in clinical psychology.”

What this topic adds:

  1. APAC requirements for clinical supervision are much lower than the APA requirements.

  2. Unlike their US counterparts, Australian clinical psychology trainees are not required to pass a final, bench-marked examination to gain registration with AOPE (the equivalent of licensure).

  3. Less than satisfactory validity of end-of-placement supervisor assessments, extremely low fail-rates during training, and the absence of a comprehensive and rigorous final examination for Registration with AOPE are indicators of a deficient system of competence assessment.

The current piece is a commentary on “An examination of accreditation standards between Australian and US/Canadian Doctoral programs in clinical psychology” (Norton et al., Citation2022), an article that addresses an important and timely topic. The deliberate and accelerated move to restructure and realign training to a competency-based model is a major pedagogical shift with cascading effects at multiple levels of education, training and practice. Consequently, it is of value to compare how this approach is operationalised in terms of competency frameworks and accreditation standards, and how standards and guidelines are interpreted at the programme, internship and trainee-client levels. Whilst there have been attempts to compare competency frameworks and accreditation standards (e.g., Gonsalvez, Shafranske, et al., Citation2021), the current article offers a helpful comparison of how standards may be interpreted at the programme level in clinical psychology. Norton et al. (Citation2022) compare two APAC-accredited and two APA/CPA-accredited doctoral programmes in clinical psychology in terms of coursework content, research and supervised placement hours. They consult the relevant accreditation standards and programme handbooks, identify similarities and differences among the programmes, highlight key observations and offer valuable comments. Their findings and comments are examined separately below for coursework, research and supervised placement/internship experience.

Clinical psychology coursework

Norton et al. analyse coursework hours across competency domains for the four programmes in terms of core, allied and elective coursework and present their findings in a table (Norton et al., Citation2022). The table clarifies overlaps and differences across the four PhD clinical psychology programmes. The data speak for themselves and, as the authors indicate, the coverage of content areas is fairly similar with regard to “core” domains. The obvious difference is that US programmes require a greater breadth of coursework. In fact, the two examples provided indicate substantially more coursework in both allied units (200+ hours in areas such as social and developmental psychology) and in elective units (200+ hours). It bears reiterating that, unlike earlier versions, the new standards in both the US and Australia (APAC, Citation2019) do not actually prescribe the number of coursework hours within each domain, so the comparisons reflect the operational interpretation of the standards by the specific clinical psychology programmes. Whilst the information provided by the authors may serve as a useful illustration, the extent to which the two “exemplar” programmes are truly representative is uncertain. There are at least 161 PhD programmes in clinical psychology in the US alone (Norcross et al., Citation2018), so any generalisation from n = 2 to the US and Canadian programmes must be tempered with extreme caution. Further, previous attempts to compare training across the US have asserted that “hundreds of diverse clinical programs … mask huge differences between types of programs” (Norcross et al., Citation2018, p. 388).

Table 1. Number of required hours for supervised practice (e.g., placements/internships) and clinical supervision (Sn) for initial, pre-doctoral and post-doctoral settings for clinical psychology in the US and Australia (Aus). AOPE = Area of Practice Endorsement.

Supervised clinical work during training

The term “placement hours” will be used as a generic term to cover supervised clinical practice in pre-doctoral placements, internships, registrar programmes and post-doctoral residencies. Unlike coursework, both sets of accreditation standards specify the number of mandatory placement hours required for programmes, making comparison between the two countries straightforward and protected against sampling errors. However, three complications arise. First, the number of pathways to General Registration and to General Registration with AOPE in Clinical Psychology in Australia makes the mapping of requirements difficult to grasp. Second, most but not all jurisdictions in the US require a post-doctoral internship (Schaffer et al., Citation2013). Finally, a comprehensive picture of clinical training between programmes must include clinical supervision hours – an element not addressed by the article. The inclusion of clinical supervision is important because it is both the most expensive component of training and possibly the single most important contributor to training effectiveness (Gonsalvez & Milne, Citation2010). For these reasons, a supplementary table that outlines in greater detail the placement and supervision requirements for the various pathways to licensure in the two countries complements the information provided by Norton et al. (Citation2022) and is presented in Table 1.

The information in Table 1 clarifies that, in an overall sense, the two APAC-accredited doctoral programmes are broadly comparable to US requirements when post-doctoral residency experience is not required, but require fewer placement hours than US jurisdictions where post-doctoral practice experience is required for licensure. Importantly, Table 1 reveals another key difference: the APAC prescription for clinical supervision across the three pathways to AOPE (DPsych, combined Masters-PhD, and Clinical Masters + 2-year Registrar programmes) is equivalent to no more than 50%, 45% and 52%, respectively, of the requirements mandated by US jurisdictions not requiring post-doctoral training for licensure (n = 307 supervision hours), and no more than 38%, 32% and 40% of requirements (n = 397 hours) when post-doctoral practice is required. In the past, clinical supervision prescriptions by the APS Clinical College (Australian Psychological Society, Citation2013) were identical to those required for pre-doctoral internships (180 hours). The reduction from 180 to 79 clinical supervision hours (a reduction of 56%) was initiated by the new version of the APAC Evidence Guide (Citation2019). Whilst this change reduces supervision and training delivery costs, no pedagogic, theoretical or empirical grounds supporting it have been offered to date.

Research

In comparing research requirements and outcomes for the doctoral programmes between the US/Canada and Australia, Norton et al. (Citation2022) highlight two differences: the lower number of input hours in statistics and research coursework required by APAC, and “the similarity in terms of research output”. They also draw attention to research indicating that “the field of clinical psychology continues to suffer from a substantial science-practice gap in that many clients receive suboptimal treatments despite effective treatments being available”. There is a growing body of evidence supporting this position, as well as evidence of “therapist drift” within psychology and other health disciplines. Therapist drift refers to the tendency for therapists to adhere closely to best-practice guidelines early in their careers but to drift away from these safeguards later, leading to poorer outcomes (e.g., Waller & Turner, Citation2016). Unlike in the US and Canada, doctoral training is not essential in Australia, so the large majority of practitioners who gain AOPE in clinical psychology do so via the Clinical Masters + 2-year Registrar pathway. In effect, there are more reasons to be vigilant in Australia, and because suboptimal treatments can be very expensive in terms of staffing and services, the recommendation of Norton et al. (Citation2022) “to routinely monitor clinical practice effectiveness after graduation” appears timely and relevant to all stakeholders – regulatory authorities, training institutions and the health sector.

The shift from inputs to demonstrated outcomes

A key principle of competency-based models of training is the shift from input-based metrics (e.g., coursework, practicum and supervision hours) to criterion-based outcomes, including demonstrated attainment of competencies. As Norton et al. (Citation2022) indicate, this guiding principle is embraced by, and is indeed a key feature of, the new accreditation standards in the two countries. Expressed differently, competency-based models of training encourage flexibility in terms of how competencies may be acquired, under the explicit assumption that a system of rigorous, objective, transparent and ecologically valid competency assessments during and at the end of training will ensure safe, competent and effective practice. From such a standpoint, any evaluation of the impact of accreditation standards should closely scrutinise the adequacy of, and processes governing, competency assessments.

Reliability and validity of placement supervisor assessments

Currently, in Australia, the majority of programmes continue to rely on clinic and placement supervisor judgements to chart the trainee’s journey towards end-point competence, with end-of-placement assessments serving as a proxy for developmental milestones. The best dataset is from a large, multisite study that systematically examined end-of-placement supervisor assessments from 1449 placements (Gonsalvez, Terry, et al., Citation2021). Regrettably, the findings highlight serious problems with the reliance on supervisor judgements. Whilst these results are disappointing, they are not surprising: outcomes from multiple studies within psychology and across disciplines have reiterated that placement supervisors’ assessments are seriously compromised by leniency, halo and other biases and fall short of the expected levels of reliability and validity (Gonsalvez, Terry, et al., Citation2021; Wolf, Citation2015).

Competency assessments at the final gate-post

There has been a range of initiatives in response to the problem that placement supervisor judgements, by themselves, are not fit for purpose. Several regulatory bodies have established final examinations that practitioners must pass before being certified competent. For instance, although wide variability is often the norm, it is remarkable that all 50+ states and territories in the US have made satisfactory completion of the Examination for Professional Practice in Psychology (EPPP) a mandatory hurdle for licensure (Association of State and Provincial Psychology Boards, Citation2019). In Australia, trainees who enrol in the 4 + 2-year or 5 + 1-year internship pathways are required to pass the National Psychology Examination; however, their counterparts enrolled in AOPE pathways are currently not required to undertake the national examination. A legitimate criticism of such generic national or licensure examinations is that they focus largely on knowledge and knowledge-application competencies, whereas other equally important aspects such as skills and attitudes-values may not be examined. New initiatives such as the EPPP-2 (Association of State and Provincial Psychology Boards, Citation2019) are more sophisticated and target knowledge and skills competencies by using clinical scenarios and video-based material. The EPPP-2 has been adopted by some US jurisdictions but is not currently mandatory for licensure. In any case, a fair-dinkum comparison of assessment processes in the two countries must acknowledge a salient difference: unlike the US, the Australian system currently lacks a final gate-post examination.

To improve the validity of competence assessments, some disciplines such as medicine have preferred a multi-method assessment approach and have included the objective structured clinical examination (OSCE), or one of its variants, to supplement other forms of assessment. In a positive development, some universities in Australia have begun to use OSCE-type assessments to gauge a trainee’s readiness to commence external placements or as part of competence attainment during training. However, unlike our New Zealand neighbours, who currently require all clinical psychology trainees to pass a comprehensive clinical examination at the finish line, the Australian regulatory system does not currently mandate or enforce such an assessment (see Gonsalvez, Terry, et al., Citation2021; Keong et al., Citation2021). From a pedagogical perspective, the demonstration of core competencies at the criterion required for safe and effective practice constitutes a bedrock principle and an absolutely essential requirement. Demonstration that a trainee is progressing well at intermediate pit-stops (e.g., at intermediate placements) is helpful but insufficient if trainees are assessed against thresholds titrated below the competence mark required at the final gate. Among other possibilities, a carefully designed and adequately bench-marked (e.g., by external trained raters) OSCE-type examination conducted immediately before final certification has the potential to satisfy fit-for-purpose criteria.

The extremely low fail-rates during training and placements are another indicator that current gate-keeping processes may not be sufficiently rigorous. In Australia, 12% of trainees fail the National Psychology Examination for General Registration; fail-rates for the EPPP in the US range from 18% (PhD applicants) to 31%; and in psychiatry, 36% of registrars fail the final Australian and New Zealand College of Psychiatry examination (see Gonsalvez, Terry, et al., Citation2021; Schaffer et al., Citation2012). In stark contrast, only about one percent (1.17%) of clinical psychology trainees fail their placements (Gonsalvez, Terry, et al., Citation2021). In summary, end-of-placement supervisor assessments with poor reliability, extremely low fail-rates during training, and the absence of a comprehensive and rigorous final examination for registration with AOPE are features of a flawed assessment system. A compromised assessment system is a serious concern, especially when the new accreditation standards have substantively reduced previous safeguards such as clinical supervision hours. Houston, we have a problem!

Conclusions

The competency revolution is here to stay. The competency paradigm has led to valued insights in terms of the way practitioner competence is conceptualised, nurtured, assessed and researched. The move away from checklists of input criteria is designed to improve access, foster innovation and enhance training efficiencies, and is a key merit of competency training. A commitment to a system of rigorous, comprehensive, reliable and valid assessment is an equally critical part of the competency equation. In fact, rigorous and comprehensive assessment is the inherent quality-control feature of the system and is meant to guarantee competent practice despite the input variability within the curriculum and flexible modes of delivery. The concern is that the Australian standards have rushed to dismantle the scaffolding safeguards (e.g., supervision hours) before establishing reliable and valid competency instruments and enforcing best-practice assessment processes (e.g., a final, bench-marked OSCE). By contrast, our US counterparts have favoured a more considered approach, retaining the weathered but time-tested “supervision input” and demonstrating a commitment to improved gate-keeping by investing in an improved final examination at licensure entry.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

References

  • American Psychological Association, Commission on Accreditation. (2019). Standards of accreditation for health service psychology and accreditation operating procedures. https://www.apa.org/ed/accreditation/about/policies/standards-of-accreditation.pdf
  • Association of State and Provincial Psychology Boards. (2009). ASPPB guidelines on practicum experience for licensure. https://cdn.ymaws.com/asppb.site-ym.com/resource/resmgr/guidelines/final_prac_guidelines_1_31_0.pdf
  • Association of State and Provincial Psychology Boards. (2019). The examination for professional practice in psychology (EPPP) candidate handbook. https://www.asppb.net/page/CandHandbook
  • Australian Psychological Society. (2013). College course approval guidelines for postgraduate professional courses.
  • Australian Psychology Accreditation Council. (2019). Accreditation standards for psychology programs. https://www.psychologycouncil.org.au/sites/default/files/public/Standards_20180912_Published_Final_v1.2.pdf
  • Gonsalvez, C. J., & Milne, D. (2010). Clinical supervisor training in Australia: A review of current problems and possible solutions. Australian Psychologist, 45(4), 233–242. https://doi.org/10.1080/00050067.2010.512612
  • Gonsalvez, C. J., Shafranske, E. P., McLeod, H., & Falender, C. (2021). Competency-based standards and guidelines for psychology practice in Australia: Opportunities and risks. Clinical Psychologist, 25(3), 244–259. https://doi.org/10.1080/13284207.2020.1829943
  • Gonsalvez, C. J., Terry, J., Deane, F. P., Nasstasia, Y., Knight, R., & Hoong Gooi, C. (2021). End-of-placement failure rates among clinical psychology trainees: Exceptional training and outstanding trainees or poor gate-keeping? Clinical Psychologist, 25(3), 294–305. https://doi.org/10.1080/13284207.2021.1927692
  • Keong, Y., Sheen, J., Nedeljkovic, M., Milne, L., Lawrence, K., & Hay, M. (2021). Assessing clinical competencies using the Objective Structured Clinical Examination (OSCE) in psychology training. Clinical Psychologist, 25(3), 260–270. https://doi.org/10.1080/13284207.2021.1932452
  • Norcross, J. C., Sayette, M. A., & Pomerantz, A. M. (2018). Doctoral training in clinical psychology across 23 years: Continuity and change. Journal of Clinical Psychology, 74(3), 385–397. https://doi.org/10.1002/jclp.22517
  • Norton, P. J., Norberg, M. M., Naragon-Gainey, K., & Deacon, B. J. (2022). An examination of accreditation standards between Australian and US/Canadian doctoral programs in clinical psychology. Clinical Psychologist. Advance online publication. https://doi.org/10.1080/13284207.2021.1949944
  • Psychology Board of Australia. (2019). Guidelines on area of practice endorsements.
  • Schaffer, J., Rodolfa, E., Hatcher, R., & Fouad, N. (2013). Professional psychology competency initiatives: Reflections, contrasts, and recommendations for the next steps. Training and Education in Professional Psychology, 7(2), 92–98. https://doi.org/10.1037/a0032038
  • Schaffer, J., Rodolfa, E., Owen, J., Lipkins, R., Webb, C., & Horn, J. (2012). The examination for professional practice in psychology: New data – practical implications. Training and Education in Professional Psychology, 6(1), 1–7. https://doi.org/10.1037/a0026823
  • Waller, G., & Turner, H. (2016). Therapist drift redux: Why well-meaning clinicians fail to deliver evidence-based therapy, and how to get back on track. Behaviour Research and Therapy, 77, 129–137. https://doi.org/10.1016/j.brat.2015.12.005
  • Wolf, K. (2015). Leniency and halo bias in industry-based assessments of student competencies: A critical, sector-based analysis. Higher Education Research & Development, 34(5), 1045–1059. https://doi.org/10.1080/07294360.2015.1011096