Assessing competence in sport psychology: An action research account


ABSTRACT

Competent practice in sport psychology is of utmost importance for the professional status of the field, and hence proper assessment of competence for sport psychology practice is needed. We describe three cycles of action research to improve the assessment of competence in a sport psychology education program. The cycles were directed at (a) empowering supervisors in their assessing role, (b) improving the assessment checklist, and (c) investigating an alternative assessment method. Although challenges remain (e.g., improving the still low interrater reliability), the action research has contributed to an improved quality and higher acceptability of the assessment in the education program.

Sport psychology consultants work in a “highly professional environment, often under the public eye and under high time pressure and efficiency requirements” (FEPSAC, Citation2006, p.1). Therefore, consultants need to be “on the highest level of competence and to maintain this level over time.” (FEPSAC, Citation2006, p. 1). Various other authors have also expressed that competent practice is of utmost importance for the field (e.g., Andersen, Van Raalte, & Brewer, Citation2000; Cropley, Hanton, Miles, & Niven, Citation2010; Fletcher & Maher, Citation2013). This cognizance of competence and competent practice raises the question of what competence in sport psychology actually is. In general terms, professional competence was defined as: “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values and reflection in daily practice for the benefit of the individual and community being served.” (Epstein & Hundert, Citation2002, p. 226). Competence can be considered to consist of subcomponents called competencies. Competencies are context-dependent ability constructs (Klieme, Hartig, & Rauch, Citation2008). More precisely, Fletcher and Maher (Citation2013, p. 267, Citation2014, p. 172) defined competencies as “complex and dynamically interactive clusters of integrated knowledge, skills, and abilities; behaviors and strategies; attitudes, beliefs, and values; dispositions and personal characteristics; self-perceptions; and motivations (Mentkowski & Associates, 2000) that enable an individual to execute a professional activity (Marrelli, 1998).”

Not unlike professional psychology (e.g., Nash & Larkin, Citation2012; Roberts, Borden, Christiansen, & Lopez, Citation2005), the field of sport psychology appears to be struggling with delineating competence for its practitioners (Fletcher & Maher, Citation2013; Practice Committee, American Psychological Association [APA], Division 47, Exercise and Sport Psychology, Citation2011). However, important efforts have been made to understand and define competence, for instance by studying characteristics of practitioners (e.g., Fifer, Henschen, Gould, & Ravizza, Citation2008; Sharp & Hodge, Citation2011), preferences of clientele (e.g., Anderson, Miles, Robinson, & Mahoney, Citation2004; Pain & Harwood, Citation2004), developmental stages (e.g., Tod, Citation2007; Tod, Andersen, & Marchant, Citation2011), and particularly novice consultants (e.g., Hutter, Oldenhof-Veldman, & Oudejans, Citation2015; Stambulova & Johnson, Citation2010; Tod, Andersen, & Marchant, Citation2009); by defining (effective) practice (e.g., Aoyagi, Portenga, Poczwardowski, Cohen, & Statler, Citation2012; Cropley, Hanton, Miles, & Niven, Citation2010; Practice Committee, APA, Division 47, Exercise and Sport Psychology, Citation2011); and by outlining competencies (e.g., American Psychological Association, Citation2005; Association for Applied Sport Psychology, Citation2012; Ward, Sandstedt, Cox, & Beck, Citation2005; see Fletcher & Maher, Citation2013 for a summary and critique of these competency outlines). Drawing on these efforts, Tod, Marchant, and Andersen (Citation2007) conceptualized competent service delivery as “a multidimensional process in which practitioners (a) meet clients' needs and expectations, (b) develop and maintain mutually beneficial relationships […] (c) understand psychological interventions and apply them to assist athletes in specific situations, (d) empathize with athletes' situations and interpret them through the lens of suitable theory […], and (e) reflect on how they (the practitioners) have influenced the interactions and outcomes of service provision” (p. 318).

From an educational or licensing perspective, the question of defining competence and delineating competencies should go hand in hand with the question of how to assess competence and/or competencies (e.g., Gonsalvez et al., Citation2013; Kaslow et al., Citation2007; Leigh et al., Citation2007). According to Kaslow (Citation2004), "the assessment of competence fosters learning, evaluates progress, assists in determining the effectiveness of the curriculum and training program, and protects the public." (p. 778). Moreover, it was argued that assessment of competence is a prerequisite for empirical evaluation of protocols and interventions, because of the vital role that practitioners' competence plays in the delivery of these protocols and interventions (Muse & McManus, Citation2013). This seems of particular importance for sport psychology, because a firm evidence basis of sport psychological interventions is still a work in progress (e.g., Moore, Citation2007). Finally, Fitzpatrick, Monda, and Wooding (Citation2015) stated that the field will be advanced professionally if sport psychology graduates develop into productive professionals. Proper assessment of competence in training and at graduation will help put on the market those candidates who have the potential to become productive professionals.

Assessment of competence thus serves many functions that could directly or indirectly contribute to professional status and quality of sport psychology practice. In other fields (e.g., professional psychology, medicine, nursing, teaching), assessment of competence is a topic of study, debate, and development. In sport psychology, the literature and debate on the assessment of competence are limited at best. With this article we aim to contribute to a debate on assessment, encourage educators and institutions to share their views and practices, and in general bring the importance of assessment of competence to the attention of readers. We are, in different roles, responsible for the assessment of competence of students in the post-master program in applied sport psychology in the Netherlands. The program's aim is to provide students with the knowledge and skills needed in sport psychology practice. Graduates are accredited as sport psychology practitioners by the national sport psychology association. To graduate, students are required to complete seven cases with athletes, coaches, and teams, during which experienced sport psychologists supervise them. The program's mission states that graduates should be highly qualified professionals, ready to work in the field of sports (Postacademische opleiding tot praktijksportpsycholoog, Citationn.d.). This implies a responsibility of the program to assess a sufficient level of competence of trainee sport psychologists at the time of graduation, a responsibility that should not be treated lightly, and one that has challenged us to critically reflect on the assessment methods applied in the program.

Here, we share our journey towards better assessment of competence as demonstrated in the casework of students. Our journey fits the purposes and framework of action research. Action research is participatory in nature; practitioners conduct research in their practical contexts, with the aim of improving both (Townsend, Citation2014). Coghlan and Brannick (Citation2014) described a cycle for action research, in which first the context and purpose of the action research are established, after which a cycle takes place of constructing an issue, planning action, taking action, and evaluating the action. This cycle may lead to a new construction of an issue, new planning of action, etc. (see Figure 1). This article follows Coghlan and Brannick's structure of action research. First the context and purpose are described, followed by the three cycles of our action research. In addition to our aim to contribute to the knowledge base on assessment of competence in sport psychology, we hope that the manuscript illustrates the merits of action research for sport psychology education.

Figure 1. Coghlan & Brannick's (Citation2014) spiral of action research cycles, retrieved from: https://staticssl.sagepub.com/sites/default/files/Figure%201.3.pdf (reprinted with permission).


Establishing context and purpose of our action research on assessing competence

Context of the action research

The context in which our action research takes place is the post-master program, and the applied framework for casework of the program. These include a number of distinct features:

  • a central role for supervisors in the guidance of the casework;

  • a facilitative role of the program management in the casework, that is facilitating both supervisees and supervisors in the execution of their respective tasks;

  • the use of external supervisors who are selected on the basis of specific criteria (i.e., an assessment using a competency profile for supervisors [see Hutter, Citation2014], the requirement to be currently practicing as a sport psychologist, to have at least 5 years of experience as an applied sport psychologist and a minimum of 50 completed cases, and to take yearly training provided by the program);

  • a model of indirect supervision of supervisees, meaning that supervisees execute the casework without the supervisor directly observing their actions; and

  • assessment by both the supervisor and a more distant/objective assessor, that is, a member of the exam committee.

The competence assessment literature in professional psychology generally distinguishes three developmental levels: readiness for practicum, readiness for internship, and readiness for entry to practice (e.g., Fouad et al., Citation2009; Kaslow et al., Citation2009). The current study focuses on assessment of competence for entry to practice.

Purpose of the action research

Kemmis (Citation2009) stated that "action research aims at changing three things: practitioners' practices, their understandings of their practices, and the conditions in which they practice [sic]." (p. 463). The purpose of our action research was threefold, and aligns well with Kemmis' description. Our purposes were as follows.

  • Strive for optimal assessment of competence, as demonstrated in the casework of supervisees. More precisely, we strive for assessment that is valid, reliable, objective, and transparent (e.g., van Berkel & Bax, Citation2013; Kaslow et al., Citation2007), and that provides valuable feedback for professional development of supervisees (e.g., Hattie & Timperley, Citation2007; Roberts, Borden, Christiansen, & Lopez, Citation2005). This purpose relates to changing practitioners' practices.

  • Empower the assessors in fulfilling their assessing role. We aim to contribute to a better understanding and knowledge base of assessment by the assessors, and the development of self-efficacy of the assessors for their assessing tasks (e.g., Kaslow et al., Citation2007; Roberts, Borden, Christiansen, & Lopez, Citation2005). This purpose relates to changing practitioners' understandings of their practice, and (thereby) the conditions in which they practice.

  • Develop a positive assessment culture, by which we mean a culture of acceptability and accountability. This purpose relates to changing the conditions in which practitioners practice. The assessment applied should be highly accepted by the people involved (e.g., van der Vleuten, Citation1996), in our context students, assessors, program management, and the local field of sport psychology practitioners. By accountability we mean that assessors should be able and willing to reflect on, clarify, and substantiate the outcome of their assessment (e.g., Gonsalvez et al., Citation2013; Roberts, Borden, Christiansen, & Lopez, Citation2005).

Three cycles of action research

So far three cycles of action research on assessment of competence have taken place in the post-master program. Parts of these have been reported in other publications (Hutter, Citation2014; Hutter, Pijpers, & Oudejans, Citation2016) and parts have only been reported internally, within the program and to its collaborators. In this overview each cycle is described in terms of Coghlan and Brannick's (Citation2014) cycle for action research.

Cycle 1

Constructing the issue

At the start of the program, the supervisors struggled with their role as assessors. Almost all were neophyte supervisors and were not familiar with judging who is “ready for the job” and who is not (yet). Moreover, supervisors feared that their role as assessor might impair the openness and honesty that is required for effective supervision. They were uncomfortable with combining the role of “helper/consultant” and the role of “examiner/judge.” To summarize, the supervisors felt awkward and unequipped in their role as assessors (see also, Hutter, Citation2014).

Assessments by supervisors are credible and have high ecological validity (Gonsalvez et al., Citation2013), but can indeed come with a number of challenges. First of all, assessors may need training to become effective, accountable evaluators (Roberts, Borden, Christiansen, & Lopez, Citation2005). Moreover, the combination of supervision and assessment may have a negative impact on three different levels: the supervisee, the supervisor, and the assessment. Concerning the supervisee, Collins, Burke, Martindale, and Cruickshank (Citation2015) warned that assessment may compromise learning, and argued that assessment may hinder criticality, openness, and experimenting on the part of the trainee (comparable to the fear of our supervisors that their assessment role inhibited openness of the supervisees). However, we argue (with Fletcher & Maher, Citation2014; Kaslow, Citation2004; Kaslow et al., Citation2007) that assessment can facilitate learning, as long as it is guided by a developmental perspective, and summative and formative assessments are appropriately integrated. Concerning the supervisor, the combination of supervision and assessment requires taking on dual roles: supervisors perform both formative evaluation (ongoing, developmentally informed feedback during training to ensure learning and performance improvement) and summative evaluation (an end point or outcome measure; Roberts, Borden, Christiansen, & Lopez, Citation2005; Kaslow et al., Citation2007), and they have to manage these dual roles (Kaslow et al., Citation2007). Concerning the assessment itself, the combination of supervision and assessment may bias the outcome: Halo and leniency biases have been reported to be a serious concern in assessment by supervisors (Gonsalvez et al., Citation2013).

Despite these challenges, it is recommended to include supervisors in the assessment of supervisees, among other reasons because of their professional qualifications and practice-expertise (Gonsalvez et al., Citation2013). Moreover, formative and summative evaluations are considered mutually informative processes, and therefore it is strongly recommended to integrate them (e.g., Kaslow, Citation2004; Kaslow et al., Citation2004, Citation2007; Roberts, Borden, Christiansen, & Lopez, Citation2005). The challenge thus is to equip supervisors optimally for their supervising and assessing role, and the combination of both.

Planning action

We explored ways to resolve the issues encountered by the supervisors in our program, by first talking to the supervisors to come to a better understanding of their perceived lacunas, barriers, and needs. We then turned to expertise from the field of educational sciences to learn more about the assessment role, and looked into the assessing role as fulfilled by teachers. As a result, we explicated the concepts of “assessing for progress” and “assessing for qualification” (similar to the concepts of assessment for learning and assessment of learning (Earl & Katz, Citation2006), and formative and summative evaluation as described above). We felt that these concepts could be useful to help supervisors manage dual roles, and planned to introduce them to the supervisors.

Taking action

A workshop was convened with the supervisors in which we introduced the concepts of “assessing for progress” and “assessing for qualification.” We explained that in the role of consultant, a supervisor continuously assesses the progress of a supervisee, to guide the developmental process in supervision. The supervisor will try to establish what the supervisee is already capable of, and what still needs development, to decide on the next step in supervision. This “assessing for progress” is meant to help the supervisee develop and is part of the job of the supervisor as consultant. In the role of examiner, the supervisor also tries to establish the competence of the supervisee, but in this case needs to determine whether the supervisee is competent enough to proceed or graduate. This is what is meant by “assessing for qualification.”

In the workshop, the supervisors discussed what knowledge, skills, attitudes, and responsibilities were needed for each concept ("assessing for progress" and "assessing for qualification"). They were then asked to reflect on their self-efficacy concerning the knowledge, skills, attitudes, and responsibilities listed, and to look for potential conflicts between the roles. The supervisors discovered that they felt capable of executing both roles and saw virtually no conflicts between the roles as defined in the workshop.

Evaluating action

Within the workshop we checked whether the presentation, and the reflective discussion that followed, had been helpful to the participants. The supervisors appeared to feel more capable of executing and separating both roles. The elaboration in the workshop is thought to have helped the supervisors resolve (part of) their role conflict. Having resolved, at least partly, the matter of combining the supervision role with an assessment task, we evaluated which issues remained. This then led to the second cycle of action research.

Cycle 2

Constructing the issue

Although the supervisors were more comfortable with their role as examiners, they indicated that they still struggled with judging who is "ready for the job" and who is not. Supervisors were required to fill out an assessment checklist to assess the casework of their supervisees. Checklists or rating forms are commonly used to assess competence in the final stages of training, because they are typically easy to use, inexpensive, and versatile (Gonsalvez et al., Citation2013). However, the supervisors found the assessment checklist hard to use, and perceived it as inadequate for proper assessment. This is not an uncommon problem in the field of sport psychology. Fletcher and Maher (Citation2014) summarized that the checklists in the existing training and development documentation lack individual and contextual sensitivity. Other authors have warned that checklist-style assessments may fail to capture the intricacies of problem solving, professional judgment, and decision making (e.g., Thompson, Moss, & Applegate, Citation2014). These were indeed the problems with the original checklist used for assessment: It was perceived to be too rigid to apply to the complex nature of service delivery, and failed to assess problem solving and decision making skills.

The exam committee and the program management shared this sentiment. There was a need for a better and easier-to-use assessment checklist. Fletcher and Maher (Citation2014) and Kaslow et al. (Citation2007) advocated collaboration between multiple organizations to develop assessment methods, instead of isolated initiatives. We agree that collaborative efforts could strongly advance assessment of competence in sport psychology, but in the absence of such collaborative initiatives we proceeded within our own program.

Planning action

We decided to design a new assessment checklist, rather than to adapt the old one. In collaboration with an external expert on assessment methods, we devised a two-step approach to designing the new checklist. The first step was to have the exam committee compile a draft of a new assessment checklist. The second step was to discuss the draft with the supervisors, and adapt the draft accordingly. We scheduled two meetings with the exam committee, and one meeting with the supervisors.

Taking action

Kaslow et al. (Citation2007), in their guiding principles for the assessment of competence, stated that assessment must reflect fidelity to practice. In addition, several authors have stressed that competence (and competencies) should be broken down into essential components (e.g., Fletcher & Maher, Citation2014; Fouad et al., Citation2009). Congruent with both these guidelines, the first meeting of the exam committee was centered on the question: What does "good casework" look like? The committee members discussed "what good practice looks like," "what a good session looks like," and "what a good case report looks like," and listed all characteristics emerging from the discussion. Based on the outcomes of the discussions, two distinct steps were decided upon. The first was to split the assessment form into two parts, one for the overall case description and one for the session reports. The second was to compose a list of conditional criteria, meaning that case reports would only be fully assessed when the conditional criteria were met. The conditional criteria outlined specifically which components had to be in the report; for instance, the requirement that "the guiding principles are described and recognizable in the report" or that "for each session, time, place and duration must be listed." These conditional criteria enabled the program management to check whether all required information was present in the reports, before the assessment by supervisors and exam committee proceeded.
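To make the gate-keeping role of the conditional criteria concrete, the following minimal sketch (in Python) shows what such a completeness check could look like. The criterion texts, report fields, and example data are hypothetical simplifications for illustration only; they are not the program's actual checklist items or tooling.

```python
# Minimal sketch of a conditional-criteria completeness check.
# All criterion texts and report fields below are hypothetical.

CONDITIONAL_CRITERIA = {
    "guiding_principles": "The guiding principles are described and recognizable in the report",
    "session_logistics": "For each session, time, place and duration are listed",
}

def check_report(report: dict) -> list[str]:
    """Return the conditional criteria that the report fails to meet."""
    missing = []
    if not report.get("guiding_principles"):
        missing.append(CONDITIONAL_CRITERIA["guiding_principles"])
    for session in report.get("sessions", []):
        if not all(field in session for field in ("time", "place", "duration")):
            missing.append(CONDITIONAL_CRITERIA["session_logistics"])
            break
    return missing

# A report with one incomplete session entry fails the logistics criterion
# and would be returned to the supervisee before full assessment proceeds.
example_report = {
    "guiding_principles": "Client-centered, goal-directed approach ...",
    "sessions": [
        {"time": "10:00", "place": "clubhouse", "duration": "60 min"},
        {"time": "10:00", "place": "clubhouse"},  # duration missing
    ],
}
print(check_report(example_report))
```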

In the second meeting of the exam committee, all the characteristics listed in the first meeting (i.e., the components of competence) were divided over the two separate assessment checklists: the session checklist and the case description checklist. The characteristics on each of these forms were then clustered and categorized. From this categorization, the draft checklists emerged, with higher-order themes as main assessment areas and lower-order themes as separate assessment criteria within the assessment areas.

Kaslow, Falender, and Grus (Citation2012) advocated transformational leadership to foster a culture shift towards assessment of competence. They recommended involving all relevant parties in the process and ensuring buy-in at all levels. We agree that the commitment of the supervisors to the assessment method and material is crucial, and their expertise invaluable, and therefore included them in the process of designing the assessment checklist. In a meeting with the supervisors, the structure and content of the drafts were discussed and criteria adapted (i.e., formulated differently, omitted, or added). The definitive checklists were then established and subsequently used in the program (see http://www.exposz.nl/sport/checklists/).

With the checklists, we broke competence down into subcomponents and essential elements (i.e., the higher-order assessment areas and lower-order assessment items on the checklists). The next step to be taken was to formulate benchmarks or behavioral anchors for the assessment of competence (e.g., Fletcher & Maher, Citation2013, Citation2014; Fouad et al., Citation2009; Muse & McManus, Citation2013). We attempted to collectively formulate behavioral anchors or operational definitions of when to evaluate each criterion as unsatisfactory, satisfactory, or good. By behavioral anchors we mean a description of what supervisees should demonstrate, or fail to demonstrate, to obtain a particular score. According to Kaslow et al. (Citation2007): "This entails careful analysis of which competencies and aspects of these competencies should be mastered at which stages of professional development (e.g., novice, intermediate, advanced, proficient, expert, master). This will result in benchmarks, behavioral indicators associated with each domain that provide descriptions and examples of expected performance at each developmental stage. Such an analysis will incorporate an understanding of the gradations of competence at each level, ranging from competence problems, to minimum threshold of competence, to highly distinctive performance" (p. 443).
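As an illustration of what a behaviorally anchored criterion could look like, consider the following sketch in Python. The assessment area, criterion, and anchor descriptions are invented for the purpose of illustration; they are not anchors produced by the program (as described below, formulating such anchors proved very difficult in practice).

```python
# Hypothetical illustration of a behaviorally anchored checklist criterion.
# Area, criterion, and anchor texts are invented; they are not the program's.
from dataclasses import dataclass

@dataclass
class AnchoredCriterion:
    area: str        # higher-order assessment area
    criterion: str   # lower-order assessment item
    anchors: dict    # score label -> observable behavior that earns the score

example = AnchoredCriterion(
    area="Case conceptualization",
    criterion="Links the intervention plan to the intake findings",
    anchors={
        "unsatisfactory": "Plan is generic; no reference to intake information",
        "satisfactory": "Plan refers to intake findings for the main goal",
        "good": "Each intervention step is explicitly justified by intake findings",
    },
)

def justify(criterion: AnchoredCriterion, score: str) -> str:
    """Return the behavioral description an assessor could use to substantiate a score."""
    return f"{criterion.criterion} ({score}): {criterion.anchors[score]}"

print(justify(example, "satisfactory"))
```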

The formulation of behavioral anchors turned out to be very challenging. Supervisors found it hard to describe explicitly what actions, reflections, or behaviors of the supervisee would lead to which score. They mainly attributed their struggle to the diversity of sport psychology practice and the importance of the specific context in determining what is good practice and what is not (in line with the lack of individual and contextual sensitivity observed by Fletcher and Maher, Citation2013). They felt, therefore, that generalizable anchors or operational definitions were hard, or even impossible, to generate.

Because of the importance of behavioral anchors for proper assessment (e.g., Fletcher & Maher, Citation2013, Citation2014; Fouad et al., Citation2009; Kaslow et al., Citation2007, Citation2009) it was then decided to include an action research cycle within the current cycle. All assessors were sent the same case report and session report, and asked to score the reports using the new criteria lists and to substantiate their scores by explicating three things:

  • what the trainee showed in the reports that made them decide to give the score that they did;

  • an example or explanation of what the trainee could or should have shown to obtain a higher score (if the highest score of “good” was given this question could be ignored); and

  • an example or explanation of what the trainee could have shown that would have resulted in a lower score (if the lowest score of “unsatisfactory” was given this question could be ignored).

We had hoped to use the answers of the supervisors to supplement the new checklists with descriptions of what constituted unsatisfactory, satisfactory, and good performance on each criterion. Such descriptions may help standardize scoring between assessors. Moreover they would be beneficial for supervisees to better understand what actually constitutes competent practice at their level, and as such could strongly support the learning and feedback function of assessment. According to Hattie and Timperley (Citation2007), feedback should address the three questions of where am I going, how am I going, and where to go next. The combination of obtained scores and descriptors of insufficient, sufficient, and good performance may provide supervisees with answers to these questions, thus providing valuable feedback.

Unfortunately, only a few supervisors completed this exercise, even though all supervisors that were present at the workshop agreed upon this step. The reasons that were given for not completing the exercise were lack of time, and not seeing the feasibility, benefit, or importance.

Evaluating action

We were successful in designing a new assessment checklist, or rather two new checklists. The collaborative approach to designing the checklists is thought to have contributed to the quality and acceptability of the new checklists. Moreover, the conditional criteria for the case and session reports were perceived to work well. The program management (i.e., the assistant of the program manager) was able to check at a glance whether the reports met the conditional criteria, and assessors were relieved of evaluating incomplete reports. The assessors felt, therefore, that they were better able to assess the quality of the work, instead of giving feedback on information that had to be added to the reports. In addition, the conditional criteria provided the supervisees with a template or structure for their reports. This has been perceived as both a pro and a con: although supervisees welcomed a clear structure for the report, some shared that the conditional criteria were too directive or rigid.

We were unsuccessful in establishing anchors for the different scores of unsatisfactory, satisfactory, and good. This lack of operationalization of the criteria scores led to concerns about the validity and interrater reliability of the assessment checklists. This concern was strengthened over time, when we gained more experience with the use of the new checklists by supervisors and exam committee members. Together, this led us to undertake Cycle 3 of our action research.

Cycle 3 (also reported in Hutter, Pijpers, & Oudejans, Citation2016)

Constructing the issue

The issue for the third cycle stemmed partly from Cycle 2, and partly from additional experiences with assessment of casework in the post-master program. Moreover, we acknowledge the call of Kaslow et al. (Citation2007) that education programs should provide evidence about the validity of the methods being used. They recommended developing assessment methodologies that are psychometrically sound and comprehensive, and investigating the fidelity, reliability, validity, utility, and cost-benefit balance of various methods. The impetus for the third cycle was our wish to take a critical look at the assessment method applied in the program, and to investigate an alternative way of assessing competence.

At the time of this cycle of our action research, the casework of students was assessed by means of a written case report. Both students and assessors had the impression that the written reports did not completely capture the how, what, and why of the students' professional actions (see also Hutter, Citation2014). This concern may partly emerge from the fact that not all information is included in the reports (e.g., Kaslow et al., Citation2009), but may also be inherent to the assessment of written case reports (e.g., Muse & McManus, Citation2013). In some cases (wide) discrepancies occurred between the assessment of the supervisor and that of the exam committee. The available literature suggests that over 50% of score variability may stem from measurement error, and stresses that assessors need considerable practice to be able to produce a reliable score (see Muse & McManus, Citation2013). On a pragmatic level, both students and assessors perceived the written reports to be time consuming and tedious.

Although the previous action research cycles had improved some aspects of the assessment, room for improvement remained. Particular issues of concern that persisted were the acceptability, validity, and reliability of the written case report assessment.

Planning action

We planned to take two simultaneous actions. The first stemmed from our growing concern about the interrater reliability of the checklists. We planned to select a number of cases that were assessed by both the supervisor and a member of the exam committee, and to calculate interrater reliability (see Hutter, Pijpers, & Oudejans, Citation2016). The second action we planned was to explore different ways of assessing the casework of supervisees. We discussed the needs, challenges, and available methods for assessment with stakeholders (such as students, assessors, and supervisors). In addition, we conducted a study of the literature on competency assessment in sport psychology (e.g., Fletcher & Maher, Citation2013, Citation2014; Tashman, Citation2010), professional psychology (e.g., Fouad et al., Citation2009; Gonsalvez et al., Citation2013; Kaslow et al., Citation2009; Muse & McManus, Citation2013; Newell, Newell, & Looser, Citation2013; Petti, Citation2008; Schulte & Daly, Citation2009; Yap, Bearman, Thomas, & Hay, Citation2012), and medicine (e.g., Andrews, Violato, Ansari, Donnon, & Pugliese, Citation2013; Dijkstra, van der Vleuten, & Schuwirth, Citation2009; Epstein, Citation2007; McMullan et al., Citation2003; Schuwirth & van der Vleuten, Citation2011).
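For readers unfamiliar with such a check, the brief sketch below shows one common way to quantify agreement between two assessors on ordinal checklist scores. The choice of statistic (quadratically weighted Cohen's kappa) and the toy data are assumptions made for illustration only; they do not reproduce the actual analysis reported in Hutter, Pijpers, and Oudejans (Citation2016).

```python
# Sketch of an interrater agreement check on paired ordinal checklist scores.
# The statistic and the toy data are illustrative assumptions, not the study's analysis.
from sklearn.metrics import cohen_kappa_score

# One score per checklist criterion for the same case:
# 0 = unsatisfactory, 1 = satisfactory, 2 = good.
supervisor_scores     = [2, 1, 2, 1, 0, 2, 1, 1, 2, 1]
exam_committee_scores = [1, 1, 2, 0, 0, 2, 2, 1, 1, 1]

kappa = cohen_kappa_score(
    supervisor_scores,
    exam_committee_scores,
    weights="quadratic",  # larger disagreements are penalized more heavily
)
print(f"Weighted kappa between the two assessors: {kappa:.2f}")
```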

As a result, we decided to try out the structured case presentation assessment (SCPA) as described by Petti (Citation2008). In SCPA, cases are assessed on the basis of a combination of a written report and a structured case presentation meeting between assessor(s) and trainee. Assessors first read the written presentation of the case. Next, a 60-min meeting with the student takes place to discuss the case in more detail, after which the final evaluation is completed. This assessment method was first described by Swope (1987, as cited in Petti, Citation2008). Dienst and Armstrong (Citation1998) stated that a written report combined with an interview would render an assessment with high fidelity and validity. Recently, Goldberg, DeLamatre, and Young (Citation2011) compared SCPA to two other methods for assessing the performance of interns in clinical psychology. They concluded that SCPA was the superior method; SCPA provided the most clarity, was the simplest, and had high fidelity. Finally, it has been stated that case presentations are helpful for evaluating several different competencies, such as case conceptualization, metaknowledge, and reflective skills (Hadjistavropoulos, Kehler, Peluso, Loutzenhiser, & Hadjistavropoulos, Citation2010).

Based on the evidence base for SCPA, we hoped and expected that SCPA would improve some of the troublesome aspects of the assessment of competence in our program. Moreover, SCPA fitted well with the existing assessment logistics within our program. We agree with Kaslow et al. (Citation2007) that assessment methodologies should be practical and feasible in terms of administration, cost, and burden; and SCPA seemed both practical and feasible. To put this cycle of action research in motion, approval was sought and obtained from the steering committee of the post-master program to assess a number of cases with both SCPA and assessment of the written report only (WRA, the method of assessment applied thus far).

Taking action

Eighteen cases were assessed with both SCPA and WRA. In each SCPA meeting the assessed students were asked about their experience of the meeting and invited to give feedback to the assessors. In addition, the assessors often discussed (informally and among themselves) how the meeting went. They typically reflected on the communication flow of the meeting, and were able to give each other feedback on style of questioning, timekeeping, and so on. After the SCPA an online questionnaire was sent to assessors and assessed students to obtain information on the (perceived) transparency, (perceived) validity, and feedback function of SCPA and WRA.

Evaluating action

We evaluated the assessment methods applied in this cycle of action research on two aspects: interrater reliability, and the perception of the methods by assessors and supervisees. Interrater reliability was calculated for WRA conducted by the supervisor and the exam committee, and for SCPA conducted by the exam committee members. The interrater reliability of the original method (WRA) was indeed problematic. That is, the evaluation by the supervisor and the evaluation by a member of the exam committee of the same report varied widely. When members of the exam committee conducted a SCPA, their assessment was still not consistent with the WRA by the supervisor, but interrater reliability between members of the exam committee improved significantly with SCPA. Therefore, we concluded that SCPA improved the interrater reliability of assessment by the exam committee. However, interrater reliability was still fairly low, and thus remains an issue of concern, as also reported elsewhere in the literature (e.g., Hutter, Pijpers, & Oudejans, Citation2016; Jonsson & Svingby, Citation2007; Muse & McManus, Citation2013).

For the evaluation of the assessors' and supervisees' perception of the assessment methods, we asked supervisors, supervisees, and exam committee members for their opinion on the assessment methods. They rated the applied assessment methods on transparency, (perceived) validity, and feedback function, and expressed their preference for assessment methods. For assessment by the exam committee, both students and assessors rated the transparency, validity, and feedback function of SCPA higher than those of WRA. In addition, they generally expressed a higher preference for SCPA. In the introduction of this manuscript the importance of the acceptability of assessment methods was highlighted. We argue that the preference for, and the higher perceived transparency and validity of, SCPA contributes to the acceptability of this assessment method. In addition, we wish to emphasize the importance of the feedback function of assessment. We strongly agree with the guideline that assessment of competence should be built on a developmental perspective (Kaslow et al., Citation2007). Epstein and Hundert (Citation2002) aptly stated that "good assessment is a form of learning and should provide guidance and support to address learning needs" (p. 229). Proper assessment of competence has the ability to inform supervisees about their strengths and weaknesses, and thus contribute to their professional development (e.g., Gonsalvez et al., Citation2013; Muse & McManus, Citation2013), particularly when combined with remediation and learning plans (Epstein & Hundert, Citation2002; Fletcher & Maher, Citation2013, Citation2014). Thus, the higher rating of the feedback function of SCPA compared to WRA was an important finding to us. Overall, we concluded that the structured case presentation was the preferable method for assessment by the exam committee, and SCPA is therefore now applied in the post-master program (Hutter, Pijpers, & Oudejans, Citation2016).

Where to next?

With our post-master program in applied sport psychology

The evaluation of the actions has led to a number of changes in the assessment of casework in the post-master program. In assessments in which both the supervisor and the exam committee are involved, assessment by the exam committee will be done by SCPA. However, the interrater reliability of the assessments is still fairly low, even with SCPA. The next step that will be taken and evaluated is to adapt the use of the criteria lists from analytic to semi-holistic assessment, meaning that instead of scoring each criterion on the assessment lists separately, scores will be given for clusters of criteria on the lists (for an explanation of analytic and holistic assessment see, e.g., Sadler, Citation2009); a simple sketch of this difference follows below. In fact, a fourth action research cycle is already in motion in which we address the issue of the interrater reliability of the SCPA, have planned and taken action by switching to semi-holistic assessment, and will evaluate whether this switch raises the interrater reliability of the assessment further. With this fourth cycle of action research we continue our journey towards high-quality assessment in terms of validity, reliability, objectivity, transparency, and feedback function; empowerment of the assessors; and a positive assessment culture of acceptability and accountability.
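The sketch below, in Python, contrasts analytic scoring (one score per criterion) with semi-holistic scoring (one score per cluster of criteria). The cluster names, criteria, and scores are hypothetical and serve only to illustrate the structural difference; they are not taken from the program's checklists.

```python
# Illustrative contrast between analytic and semi-holistic scoring.
# Cluster names, criteria, and scores below are hypothetical.

clusters = {
    "Case conceptualization": [
        "Describes the athlete's request for help",
        "Links the guiding principles to the intake findings",
    ],
    "Intervention": [
        "Chooses techniques that fit the conceptualization",
        "Adjusts the plan in response to session outcomes",
    ],
}

# Analytic assessment: the assessor records one score (0-2) per criterion.
analytic_scores = {
    criterion: None for items in clusters.values() for criterion in items
}

# Semi-holistic assessment: the assessor records one score (0-2) per cluster,
# informed by, but not mechanically derived from, the criteria listed under it.
semi_holistic_scores = {"Case conceptualization": 2, "Intervention": 1}

print(f"Analytic: {len(analytic_scores)} separate judgments")
print(f"Semi-holistic: {len(semi_holistic_scores)} cluster judgments")
```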

As a concluding point of this section, we would like to briefly reflect on the action research methodology adopted. In our striving for better assessment of supervisees, we have found action research a highly valuable and very practical methodology to direct our efforts. Action research is commonly applied in educational research (see, for example, the journal Educational Action Research), and based on our experiences we recommend that educators and training institutions consider action research as a method to improve aspects of training.

With the field of applied sport psychology

Fletcher and Maher (Citation2013) suggested that the field of sport psychology should follow the lead taken in professional psychology towards competency-based training and professional development. More particularly, they suggested adopting the cube model of competencies in professional psychology (Rodolfa et al., Citation2005), organizing an international conference to discuss competence and competencies for applied sport psychology, breaking competence and competencies down into essential components and defining behavioral anchors for each, and discussing the assessment of competence. We strongly agree that these recommendations would contribute to a focus on competence in training and education for sport psychology and would advance the field. In addition to these recommendations, we suggest also taking into account the criticism that has been voiced in professional psychology (see the next paragraph), and, in line with the scope of this manuscript, we particularly draw attention to the assessment of competence.

Authors have warned against overoptimistic views on available assessment methods and their ability to inform decisions on competence (e.g., DeMers, Citation2009; McCutcheon, Citation2009; Schulte & Daly, Citation2009). Schulte and Daly (Citation2009) made an appealing case for first analyzing the specific decisions that have to be made in training, and then matching or developing appropriate assessment methods for each decision. For sport psychology this could entail establishing the different professional development levels at which competence should be assessed, and establishing whether these assessments serve a formative or summative function. Summative assessment would, for example, be required for the selection of students entering a sport psychology training program. Fletcher and Maher (Citation2013) briefly discussed that training may not be able, or designed, to remediate specific deficiencies of students at the onset of training, underlining the importance of appropriate assessment for the admission of students. As another example, summative assessment of competence would be required for licensing purposes. For licensing, typically a minimum level of competence is established, and assessment would have to ensure that this minimum level is warranted in the assessed person. Fletcher and Maher (Citation2013, Citation2014) aptly contrast the summative assessment of minimum requirements with the more expertise-directed goal of "optimal" practice. They contend that professionals should, throughout their career, strive for a goal that will never be fully achieved. This requires formative, rather than summative, assessment of competence, and the decisions involved (made by the professionals themselves, training institutions, or sport psychology or other licensing organizations) are markedly different from those in the previous examples. The example of formative assessment of competence throughout the career hopefully illustrates that the benefits of a culture of competence and competence assessment are not limited to initial training. Rather, assessment of competence also has the potential to inspire and direct the continued professional development efforts of practitioners.

To summarize, we suggest with Schulte and Daly (Citation2009) that an analysis of the decisions to be made in training and professional development for sport psychology practice is an important starting point for better assessment of competence. Next, appropriate assessment methods should be developed to fulfill the outlined functions. Several authors have called for psychometrically sound instruments (e.g., Kaslow et al., Citation2009; DeMers, Citation2009). In line with DeMers (Citation2009), we recommend negotiating which assessment methodologies fit which purposes. To be able to do so, more has to be known about assessment practices in sport psychology. We therefore hope this manuscript will inspire others to share their views and practices on the assessment of competence, and we would like to support the call of Fletcher and Maher (Citation2013) to convene an international conference directed at competence in sport psychology, and the assessment of competence of sport psychology students and practitioners.

References

  • American Psychological Association. (2005). Sport psychology: Knowledge and skills checklist. Retrieved from: http://www.apadivisions.org/division-47/about/resources/checklist.pdf
  • Andersen, M. B., Van Raalte, J. L., & Brewer, B. W. (2000). When sport psychology consultants and graduate students are impaired: Ethical and legal issues in training and supervision. Journal of Applied Sport Psychology, 12, 134–150. doi:10.1080/10413200008404219
  • Anderson, A., Miles, A., Robinson, P., & Mahoney, C. (2004). Evaluating the athlete's perception of the sport psychologist's effectiveness: What should we be assessing? Psychology of Sport & Exercise, 5, 255–277. doi:10.1016/S1469-0292(03)00005-0
  • Andrews, J. J. W., Violato, C., Al Ansari, A., Donnon, T., & Pugliese, G. (2013). Assessing psychologists in practice: Lessons from the health professions using multisource feedback. Professional Psychology: Research and Practice, 44, 193–207. doi:10.1037/a0033073
  • Aoyagi, M. W., Portenga, S. T., Poczwardowski, A., Cohen, A. B., & Statler, T. (2012). Reflections and directions: The profession of sport psychology past, present, and future. Professional Psychology: Research and Practice, 43, 32–38. doi:10.1037/a0025676
  • Association for Applied Sport Psychology. (2012). Standard application form: Certified consultant association for applied sport psychology. Retrieved from https://www.appliedsportpsych.org/site/assets/files/1039/cc-aasp_standard_application_form_2015-02.pdf
  • Coghlan, D., & Brannick, T. (2014). Doing action research in your own organization (4th ed.). London, England: Sage.
  • Collins, D., Burke, V., Martindale, A., & Cruickshank, A. (2015). The illusion of competency versus the desirability of expertise: Seeking a common standard for support professions in sport. Sports Medicine, 45, 1–7. doi:10.1007/s40279-014-0251-1
  • Cropley, B., Hanton, S., Miles, A., & Niven, A. (2010). Exploring the relationship between effective and reflective practice in applied sport psychology. Sport Psychologist, 24, 521–541.
  • DeMers, S. T. (2009). Real progress with significant challenges ahead: Advancing competency assessment in psychology. Training and Education in Professional Psychology, 3, S66–S69. doi:10.1037/a0017534
  • Dienst, E. R., & Armstrong, P. M. (1998). Evaluation of students' clinical competence. Professional Psychology: Research and Practice, 19, 339–341.
  • Dijkstra, J., van der Vleuten, C. P. M., & Schuwirth, L. W. T. (2009). A new framework for designing programmes of assessment. Advances in Health Sciences Education, 15, 379–393. doi:10.1007/s10459-009-9205-z
  • Earl, L., & Katz, S. (2006). Rethinking classroom assessment with purpose in mind: Assessment for learning, assessment of learning, assessment as learning. Winnipeg, Canada: Manitoba. Retrieved from: http://www.edu.gov.mb.ca/k12/assess/wncp/full_doc.pdf
  • Epstein, R. M. (2007). Assessment in medical education. New England Journal of Medicine, 356, 387–396. doi:10.1056/NEJMra054784
  • Epstein, R. M., & Hundert, E. M. (2002). Defining and assessing professional competence. Jama-Journal of the American Medical Association, 287, 226–235.
  • FEPSAC. (2006). Quality of applied sport psychology services, 1–2. Retrieved from: http://www.fepsac.com/index.php/download_file/-/view/37
  • Fifer, A., Henschen, K., Gould, D., & Ravizza, K. (2008). What works when working with athletes. Sport Psychologist, 22, 356–377.
  • Fitzpatrick, S. J., Monda, S. J., & Wooding, C. B. (2015). Great expectations: Career planning and training experiences of graduate students in sport and exercise psychology. Journal of Applied Sport Psychology, 1–14. doi:10.1080/10413200.2015.1052891
  • Fletcher, D., & Maher, J. (2013). Toward a competency-based understanding of the training and development of applied sport psychologists. Sport, Exercise, and Performance Psychology, 2, 265–280. doi:10.1037/a0031976
  • Fletcher, D., & Maher, J. (2014). Professional competence in sport psychology: Clarifying some misunderstandings and making future progress. Journal of Sport Psychology in Action, 5, 170–185. doi:10.1080/21520704.2014.965944
  • Fouad, N. A., Grus, C. L., Hatcher, R. L., Kaslow, N. J., Hutchings, P. S., Madson, M. B., et al. (2009). Competency benchmarks: A model for understanding and measuring competence in professional psychology across training levels. Training and Education in Professional Psychology, 3, S5–S26. doi:10.1037/a0015832
  • Goldberg, R. W., DeLamatre, J. E., & Young, K. (2011). Supplemental material for intern final oral examinations: An exploration of alternative models of competency. Training and Education in Professional Psychology, 5, 185–191. doi:10.1037/a0024151.supp
  • Gonsalvez, C. J., Bushnell, J., Blackman, R., Deane, F., Bliokas, V., Nicholson-Perry, K., et al. (2013). Assessment of psychology competencies in field placements: Standardized vignettes reduce rater bias. Training and Education in Professional Psychology, 7, 99–111. doi:10.1037/a0031617
  • Hadjistavropoulos, H. D., Kehler, M. D., Peluso, D., Loutzenhiser, L., & Hadjistavropoulos, T. (2010). Case presentations: A key method for evaluating core competencies in professional psychology? Canadian Psychology/Psychologie Canadienne, 51, 269–276. doi:10.1037/a0021735
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112. doi:10.3102/003465430298487
  • Hutter, R. I. (2014). Sport psychology supervision in the Netherlands: Starting from scratch. In J. G. Cremades & L. S. Tashman (Eds.), Becoming a sport, exercise, and performance psychology professional: A global perspective (pp. 260–267). New York: Routledge, Taylor & Francis Group.
  • Hutter, R. I. V., Oldenhof-Veldman, T., & Oudejans, R. R. D. (2015). What trainee sport psychologists want to learn in supervision. Psychology of Sport & Exercise, 16, 101–109. doi:10.1016/j.psychsport.2014.08.003
  • Hutter, R. I. (V.), Pijpers, J. R., & Oudejans, R. R. D. (2016). Assessing competency of trainee sport psychologists: An examination of the ‘Structured Case Presentation’ assessment method. Psychology of Sport & Exercise, 23, 21–30. doi:10.1016/j.psychsport.2015.10.006
  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2, 130–144. doi:10.1016/j.edurev.2007.05.002
  • Kaslow, N. J. (2004). Competencies in professional psychology. American Psychologist, 59, 774–781. doi:10.1037/0003-066X.59.8.774
  • Kaslow, N. J., Borden, K. A., Collins, F. L., Forrest, L., Illfelder-Kaye, J., Nelson, P. D.,... Willmuth, M. E. (2004). Competencies Conference: Future directions in education and credentialing in professional psychology. Journal of Clinical Psychology, 80, 699–712.
  • Kaslow, N. J., Falender, C. A., & Grus, C. L. (2012). Valuing and practicing competency-based supervision: A transformational leadership perspective. Training and Education in Professional Psychology, 6, 47–54. doi:10.1037/a0026704
  • Kaslow, N. J., Grus, C. L., Campbell, L. F., Fouad, N. A., Hatcher, R. L., & Rodolfa, E. R. (2009). Competency Assessment Toolkit for professional psychology. Training and Education in Professional Psychology, 3, S27–S45. doi:10.1037/a0015833
  • Kaslow, N. J., Rubin, N. J., Bebeau, M. J., Leigh, I. W., Lichtenberg, J. W., Nelson, P. D., et al. (2007). Guiding principles and recommendations for the assessment of competence. Professional Psychology: Research and Practice, 38, 441–451. doi:10.1037/0735-7028.38.5.441
  • Kemmis, S. (2009). Action research as a practice‐based practice. Educational Action Research, 17, 463–474. doi:10.1080/09650790903093284
  • Klieme, E., Hartig, J., & Rauch, D. (2008). The concept of competence in educational contexts. In J. Hartig, E. Klieme, & D. Leutner (Eds.), Assessment of competencies in educational contexts (pp. 3–22). Cambridge, MA: Hogrefe & Huber Publishers.
  • Leigh, I. W., Smith, I. L., Bebeau, M. J., Lichtenberg, J. W., Nelson, P. D., Portnoy, S., et al. (2007). Competency assessment models. Professional Psychology: Research and Practice, 38, 463–473.
  • McCutcheon, S. R. (2009). Competency benchmarks: Implications for internship training. Training and Education in Professional Psychology, 3, S50–S53. doi:10.1037/a0016966
  • McMullan, M., Endacott, R., Gray, M. A., Jasper, M., Miller, C. M., Scholes, J., & Webb, C. (2003). Portfolios and assessment of competence: A review of the literature. Journal of Advanced Nursing, 41, 283–294.
  • Moore, Z. E. (2007). Critical thinking and the evidence-based practice of sport psychology. Journal of Clinical Sport Psychology, 1, 9–22.
  • Muse, K., & McManus, F. (2013). A systematic review of methods for assessing competence in cognitive–behavioural therapy. Clinical Psychology Review, 33, 484–499. doi:10.1016/j.cpr.2013.01.010
  • Nash, J. M., & Larkin, K. T. (2012). Geometric models of competency development in specialty areas of professional psychology. Training and Education in Professional Psychology, 6, 37–46. doi:10.1037/a0026964
  • Newell, M. L., Newell, T. S., & Looser, J. (2013). A competency-based assessment of school-based consultants' implementation of consultation. Training and Education in Professional Psychology, 7, 235–245. doi:10.1037/a0033067
  • Pain, M. A., & Harwood, C. G. (2004). Knowledge and perceptions of sport psychology within English soccer. Journal of Sports Sciences, 22, 813–826. doi:10.1080/02640410410001716670
  • Petti, P. V. (2008). The use of a structured case presentation examination to evaluate clinical competencies of psychology doctoral students. Training and Education in Professional Psychology, 2, 145–150. doi:10.1037/1931-3918.2.3.145
  • Postacademische opleiding tot praktijksportpsycholoog. (n.d.). Studiegids [Study guide; brochure]. Amsterdam, the Netherlands: VU University Amsterdam.
  • Practice Committee, Division 47, Exercise and Sport Psychology, American Psychological Association. (2011). Defining the practice of sport and performance psychology. Retrieved from http://www.apa47.org/pdfs/Defining%20the%20practice%20of%20sport%20and%20.performance%20psychology-Final.pdf
  • Roberts, M. C., Borden, K. A., Christiansen, M. D., & Lopez, S. J. (2005). Fostering a culture shift: Assessment of competence in the education and careers of professional psychologists. Professional Psychology: Research and Practice, 36, 355–361. doi:10.1037/0735-7028.36.4.355
  • Rodolfa, E. R., Bent, R. J., Eisman, E., Nelson, P. D., Rehm, L., & Ritchie, P. (2005). A cube model for competency development: Implications for psychology educators and regulators. Professional Psychology: Research and Practice, 36, 347–354.
  • Sadler, D. R. (2009). Transforming holistic assessment and grading into a vehicle for complex learning. In G. Joughin (Ed.), Assessment, learning and judgement in higher education (pp. 49–64). Dordrecht, The Netherlands: Springer.
  • Schulte, A. C., & Daly, E. J. (2009). Operationalizing and evaluating professional competencies in psychology: Out with the old, in with the new? Training and Education in Professional Psychology, 3, S54–S58. doi:10.1037/a0017155
  • Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2011). General overview of the theories used in assessment: AMEE Guide No. 57. Medical Teacher, 33, 783–797. doi:10.3109/0142159X.2011.611022
  • Sharp, L.-A., & Hodge, K. (2011). Sport psychology consulting effectiveness: The sport psychology consultant's perspective. Journal of Applied Sport Psychology, 23, 360–376. doi:10.1080/10413200.2011.583619
  • Stambulova, N., & Johnson, U. (2010). Novice consultants' experiences: Lessons learned by applied sport psychology students. Psychology of Sport & Exercise, 11, 295–303. doi:10.1016/j.psychsport.2010.02.009
  • Tashman, L. S. (2010). Be a performance enhancement consultant: Enhancing the training of student sport psychology consultants using expert models. Electronic Theses, Treatises and Dissertations. Paper 1683.
  • Thompson, G. A., Moss, R., & Applegate, B. (2014). Using performance assessments to determine competence in clinical athletic training education: How valid are our assessments? Athletic Training Education Journal, 9, 135–141. doi:10.4085/0903135
  • Tod, D. (2007). The long and winding road: Professional development in sport psychology. Sport Psychologist, 21, 94–108.
  • Tod, D., Andersen, M. B., & Marchant, D. B. (2009). A longitudinal examination of neophyte applied sport psychologists' development. Journal of Applied Sport Psychology, 21, S1–S16. doi:10.1080/10413200802593604
  • Tod, D., Andersen, M. B., & Marchant, D. B. (2011). Six years up: Applied sport psychologists surviving (and thriving) after graduation. Journal of Applied Sport Psychology, 23, 93–109. doi:10.1080/10413200.2010.534543
  • Tod, D., Marchant, D., & Andersen, M. B. (2007). Learning experiences contributing to service-delivery competence. Sport Psychologist, 21, 317–334.
  • Townsend, A., (2014). Collaborative action research. In D. Coghlan & M. Brydon-Miller (Eds.), The sage encyclopedia of action research (pp. 116–119). London, England: Sage.
  • van Berkel, H., & Bax, A. (2013). Toetsen: Toetssteen of dobbelsteen [Assessment: Acid test or dice]. In H. van Berkel, A. Bax, & D. Joosten-ten Brinke (Eds.), Toetsen in het hoger onderwijs (3rd ed., pp. 15–27). Houten, The Netherlands: Bohn Stafleu van Loghum.
  • van der Vleuten, C. P. M. (1996). The assessment of professional competence: Developments, research and practical implications. Advances in Health Sciences Education, 1, 41–67.
  • Ward, D. G., Sandstedt, S. D., Cox, R. H., & Beck, N. C. (2005). Athlete-counseling competencies for US psychologists working with athletes. Sport Psychologist, 19, 318–334.
  • Yap, K., Bearman, M., Thomas, N., & Hay, M. (2012). Clinical psychology students' experiences of a pilot objective structured clinical examination. Australian Psychologist, 47, 165–173. doi:10.1111/j.1742-9544.2012.00078.x