Research Article

Factors affecting speech-language pathologists’ language assessment procedures and tools – challenges and future directions in Sweden

Received 21 Apr 2022, Accepted 08 Dec 2022, Published online: 28 Dec 2022

Abstract

Purpose: National surveys of speech-language pathologists' (SLPs') practices play an important role in professional development, and previous research shows that many of the challenges faced by the profession are similar across the globe. This study aims to describe Swedish SLP assessment practices, examine factors that may affect this practice, and discuss the results in the light of international studies.

Methods: Data from 584 SLPs were collected through an online questionnaire with multiple-choice and open-ended questions. A mixed-methods design was used, in which a deductive qualitative analysis of free-text responses complemented the quantitative data.

Results: In line with previous results from English-speaking countries, the respondents used both standardized discrete-skill tests and contextualized assessments, but fewer used language sample analysis and dynamic assessment procedures, despite international recommendations. There were few differences based on experience, work setting, proportion of multilingual assessments, and socio-economic status of the health catchment area. The main challenges reported were lack of time and difficulty prioritizing, and the assessment and/or diagnosis of multilingual/L2 children, which is similar to challenges faced by SLPs in other countries. Swedish SLPs also reported the lack of national clinical guidelines as a main challenge. Factors contributing to better assessments included experience and the combination of many sources of information, including professional and interprofessional discussions.

Conclusions: The accumulated evidence from this and previous studies shows that to address challenges and build on strengths, changes on a systemic level are needed. This includes more time and resources for continuing education and for the implementation of recommended assessment methods, as well as for professional and interprofessional collaborations.

INTRODUCTION

Difficulties with speech, language and communication are among the most common developmental concerns in children and adolescents, affecting more than 10% of children worldwide [Citation1,Citation2]. Even though conditions for speech-language pathologists (SLPs) vary across countries, comparing countries and describing and discussing the factors that contribute to or hinder best practice may support the professional development of SLPs globally. In addition, many challenges related to the evaluation and assessment of children's language abilities are similar across the world. Most published studies are from English-speaking countries, and the present study is the first to describe the language assessment practices of Swedish paediatric SLPs and the factors that may influence practice. The results are discussed in the light of previous studies to further the SLP profession and form a base for future studies and policy work.

Swedish is a relatively small language with around 10 million speakers, and there are fewer than 2000 SLPs employed in health care settings [Citation3]. Most paediatric SLPs work in clinics/hospitals with children with primary speech and language difficulties, or in habilitation centres serving children with more complex needs regarding speech, language and/or communication, such as autism or intellectual disability. In contrast to, for example, the United States (US), SLPs in preschools or schools are quite uncommon in Sweden, with a total of around 250 SLPs employed by a municipality (the lower-level local government entity responsible for preschools and schools) [Citation4]. In all work settings, assessments of children from culturally and linguistically diverse (CALD) backgrounds are common: around one fourth of all children in Sweden have a CALD background (51,400 children) [Citation5]. As in other countries in Europe (and beyond), this proportion has risen and continues to rise with global migration, with almost one million children seeking asylum in the European Union between 2015 and 2017 [Citation6], which affects speech-language pathology practice in many countries. There is a monolingual tradition and norm in Sweden, and currently most Swedish SLPs are monolingual speakers of Swedish. Thus, there is a lack of multilingual SLPs, especially those who speak the languages of the large migrant groups.

In Sweden, health care, including SLP services, for all children younger than 18 is tax funded with no co-pay (including for asylum-seeking children), but access to SLP services varies, resulting in long waiting times for SLP evaluations and less access to treatment in some parts of the country. The total number of Swedish SLPs now corresponds to around 25 SLPs per 100,000 people, but there is large variability between regions [Citation3]. Thus, the overall density of SLPs is about the same in Sweden as in the United Kingdom (UK), but only about half of that found in the US. Swedish SLPs have relatively large autonomy: after obtaining an SLP degree, SLPs receive a permanent license from the Swedish National Board of Health and Welfare, which does not require continuing education and can only be revoked due to, for instance, malpractice [Citation7]. The combination of rapidly changing patient demographics, no mandated continuing education, and a lack of nationally adopted Swedish clinical guidelines or recommendations for paediatric speech, language, or communication assessments risks disparities in the assessment process and in the clinical decisions that follow an SLP assessment. This was confirmed in a recent survey of 131 Swedish SLPs on assessment practices for speech sound disorders (SSDs) in 4–6 year old children, which concluded that assessment procedures and tools vary widely among Swedish SLPs, and that Swedish SLPs do not always follow international research and recommendations for the assessment of children with SSDs [Citation8].

Previous research on SLP assessment practices

In this study, we will use a taxonomy for describing language assessment procedures and tools presented in Denman et al. [Citation9]. According to this taxonomy, a language assessment can be described according to I) its modality and the domains targeted, II) assessment purpose, III) service delivery (direct testing, proxy-reported or software-assisted, clinical or community context), and IV) assessment form. Assessment form classifies tests into four complementary categories, each containing mutually exclusive aspects of test content, procedures, and data: discrete-skill or contextualized/activity-focused assessment, static or dynamic assessment, standardized or non-standardized administration procedures, and test data, which can be norm-referenced, criterion-referenced or descriptive.
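
To make the structure of this taxonomy concrete, the following is a minimal Python sketch of how the four assessment-form dimensions could be encoded; the class and value names are illustrative assumptions for this article, not definitions from Denman et al. [Citation9].

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding only: the taxonomy is from Denman et al. [Citation9],
# but these class and value names are a hypothetical sketch.

class Content(Enum):
    DISCRETE_SKILL = "discrete skill"
    CONTEXTUALIZED = "contextualized/activity-focused"

class Procedure(Enum):
    STATIC = "static"
    DYNAMIC = "dynamic"

class Administration(Enum):
    STANDARDIZED = "standardized"
    NON_STANDARDIZED = "non-standardized"

class DataType(Enum):
    NORM_REFERENCED = "norm-referenced"
    CRITERION_REFERENCED = "criterion-referenced"
    DESCRIPTIVE = "descriptive"

@dataclass
class AssessmentForm:
    """One value per dimension: the four categories are complementary,
    and the values within each category are mutually exclusive."""
    content: Content
    procedure: Procedure
    administration: Administration
    data: DataType

# Example: a typical norm-referenced test battery such as CELF-4
celf4_form = AssessmentForm(
    content=Content.DISCRETE_SKILL,
    procedure=Procedure.STATIC,
    administration=Administration.STANDARDIZED,
    data=DataType.NORM_REFERENCED,
)
```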

Several recent studies, mainly from English-speaking countries, have investigated what assessment tools and procedures SLPs use for different assessment purposes and populations. In general, static, standardized norm-referenced tests have been reported as the predominant tool of English-speaking paediatric SLPs [Citation10–13], despite recommendations to combine different sources of information in language assessment and to use dynamic, descriptive and contextualized tasks to a larger extent [Citation14,Citation15]. Examples of such tasks are structured observations of functional communication, dynamic assessment (DA) and language sample analysis (LSA). DA involves investigating a child's learning potential, and what support the child needs in order to learn, in contrast to static tests, which focus on the current level of performance [Citation16]. Regarding LSA, studies have found that even though a majority of paediatric SLPs in Australia and school-based SLPs in the US collect language samples [Citation17,Citation18], only around half record the language samples, and few transcribe and analyse the samples in detail.

The assessment of children from CALD and lower SES backgrounds

Several studies have investigated the assessment of multilingual children and/or children from CALD backgrounds. In the CATALISE consensus studies [Citation15,Citation19] on identifying language disorders (LDs) in children, the importance of assessing all languages of multilingual clients was stressed, but the authors acknowledged that this might not always be possible [Citation15], for example, when the SLP is monolingual or does not speak the language or dialect of the client, and interpreters are not available. International recommendations stress that norm-referenced tests in the majority language should not be used with CALD children and encourage the use of contextualized and dynamic assessments [Citation20,Citation21]. However, previous survey studies have found an over-reliance on standardized norm-referenced tests in the majority language when assessing multilingual children [Citation10,Citation12,Citation13,Citation22–24]. In addition, Australian paediatric SLPs and US school-based SLPs with a higher proportion of multilingual and L2 children on their caseload did not use DA to a greater extent [Citation10,Citation25].

The studies referenced above have not investigated whether parental socioeconomic status (SES) affects SLPs' choices regarding assessment, although SES is a factor that might be associated with language development [Citation26]. A recent Swedish study found that multilingualism and SES together explained 54% of the variance in index scores from the Swedish standardized, norm-referenced test battery Clinical Evaluation of Language Fundamentals (CELF-4: [Citation27]) in a sample of primary school children, but that multilingualism explained only 9% unique variance [Citation28]. The sample included children from two school districts with different SES, 47% were multilingual, and their CELF-4 index scores were lower than expected based on published norms. Thus, SES might have a stronger association with CELF-4 test performance than multilingualism, with an increased risk of overidentifying language disorders in groups with lower SES. The results highlight the importance of including dynamic and contextualized assessments also when assessing the language of monolingual children from lower SES backgrounds.

Factors affecting SLP assessment practices

How can we understand SLPs' clinical practices in relation to recommendations and evidence on the one hand, and factors such as client background and SLP knowledge on the other? It is important to consider what factors might affect practice, since (for example) non-adherence can have many different underlying reasons. In a systematic review, Flottorp et al. [Citation29] presented a taxonomy of seven factors that may either enable or prevent best practice in healthcare professions: Guideline factors (e.g. quality of evidence, clarity, feasibility), Individual health professional factors (e.g. knowledge and skills, attitudes), Patient factors (e.g. needs, beliefs), Professional interactions (e.g. team processes, influence from professional opinions), Incentives and resources (e.g. resources, continuing education system, incentives), Capacity for organizational change (e.g. regulations and rules, mandate), and Social, political and legal factors (e.g. economic priorities, malpractice liability, funder policies).

Previous SLP surveys have primarily investigated the effect of Individual health professional factors (e.g. work experience, work setting), Resources (e.g. time, tool availability, continuing education) and Guideline factors (e.g. recommendations regarding CALD children or children with complex needs) on assessment practices. In an overview of previous surveys, Denman [Citation11] notes that the work experience of SLPs might affect assessment practice: SLPs with longer work experience seem to rely on their own judgement and create their own assessment protocols to a larger extent [Citation10,Citation18,Citation30]. In her own study, Denman [Citation11] surveyed 407 Australian SLPs with experience assessing 4–12 year old children. She found that SLPs with longer work experience used contextualized tasks more than those with two years or less of work experience. In addition, DA was used significantly more by those with 3–5 years of work experience and less by those with shorter or longer work experience. The top challenges reported by her sample were lack of time for assessment and limited time to meet with parents, limited assessment materials, and lack of skills or confidence assessing children with CALD backgrounds. Other studies have reported limited time, limited access to training and budget constraints as major challenges [Citation12,Citation17,Citation18,Citation22].

Studies have also found that SLPs who work in disability services or with children with complex needs use norm-referenced tests less frequently [Citation11], and contextualized and criterion-based assessments more frequently [Citation31] than those working in other settings. A survey of SLPs working with children with autism from 35 different countries (n = 852) found that a majority (68%) reported using standardized norm-referenced tests, but that the use of criterion-referenced and dynamic assessment tools was also high (78% and 76%) [Citation32].

Aims and research questions

This study aims to describe Swedish SLP assessment practices, examine factors that may affect this practice, and compare the results to previous international studies, including those of English-speaking SLPs, with the goal of informing the SLP profession at large. We will answer the following research questions through both quantitative and qualitative analyses:

  1. What assessment procedures and tools do Swedish paediatric SLPs use?

  2. Is there a difference in assessment procedures and tools depending on

    1. SLP factors: the work setting (hospital/clinic/private practice, habilitation centres, schools/municipality) or the professional experience of the SLP?

    2. Client factors: The estimated proportion of multilingual assessments or the socioeconomic status (SES) of the health catchment area of the SLP?

  3. What are some additional challenging or enabling factors affecting language assessment tools and procedures that SLPs themselves identify?

Methods

We created an anonymous online survey using Qualtrics XM software (Version 09/2019, Qualtrics, Provo, UT) with 28 questions in five parts: Background data, Your assessments and evaluations, Speech and language sample analysis, Dynamic assessment, and Final comments. In this paper, the background data, the questions from the second part related to our research questions, and the final comments are analysed (15 questions in total, a mix of multiple-choice questions and free-text responses).

According to Swedish law [Citation33], ethical approval is only needed for research that involves physical intervention, biological materials, genetic or biometric data, or sensitive personal data. Thus, no ethical review was required for this survey study.

Before data collection, the questionnaire was piloted on six SLPs working in different settings, who gave feedback on questions and questionnaire length, and the questionnaire was somewhat adjusted and shortened based on their feedback. The survey took 15–25 min to complete (based on data from Qualtrics). Before the first part, the potential respondents were given information about the purpose of the questionnaire, how long it would take, how data would be handled and used, and that participating was anonymous and voluntary. They gave informed consent before the first question.

Participants were recruited during four weeks in November and December 2019. Potential respondents were contacted via e-mail, through 1) personal contacts of the first author, 2) local/national networks of SLPs working in different areas (e.g. dyslexia-SLPs, school-SLPs), and 3) the e-mail list of the Swedish Association of Speech and Language Pathologists. In addition, information about the study was posted in 4) professional networks in social media (Swedish SLP-only groups on Facebook), and 5) the professional online forum for Swedish SLPs (“Logopedforum”).

Analysis

Both descriptive and inferential statistics were used to analyse the quantitative data. Chi-square tests of homogeneity (2xC) were used to determine whether the binomial proportions differed between three independent groups (groups defined by work setting, work experience, SES of the health catchment area and proportion of multilingual assessments, respectively) regarding the use of different assessment procedures and tools, in line with the research questions. Post-hoc pairwise comparisons using the z-test of two proportions with a Bonferroni correction were used to locate where the differences lay.
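
As an illustration of this analysis pipeline, the following Python sketch runs a 2xC chi-square test of homogeneity followed by Bonferroni-corrected pairwise z-tests of two proportions, assuming scipy and statsmodels are available; the counts are hypothetical placeholders, not the study data.

```python
# Minimal sketch of the reported analysis: 2xC chi-square test of homogeneity
# followed by Bonferroni-corrected pairwise z-tests of two proportions.
# The counts below are illustrative only, not the survey data.
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

groups = ["Clinical", "Habilitation", "Municipal"]
users = np.array([120, 40, 100])     # respondents reporting use of a given tool
totals = np.array([321, 101, 152])   # respondents per group (hypothetical)

# 2x3 table of (users, non-users) per group -> chi-square test of homogeneity
table = np.vstack([users, totals - users])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
print("all expected cell counts > 5:", (expected > 5).all())

# Post-hoc pairwise z-tests of two proportions with Bonferroni correction
pairs = list(combinations(range(len(groups)), 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance level
for i, j in pairs:
    z, p_pair = proportions_ztest(users[[i, j]], totals[[i, j]])
    flag = "significant" if p_pair < alpha else "n.s."
    print(f"{groups[i]} vs {groups[j]}: z = {z:.2f}, p = {p_pair:.3f} ({flag})")
```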

Data from the free text questions were analysed with qualitative content analysis. A coding system to describe the content in categories was established, and the categories were then collapsed into descriptive themes to interpret and summarize the data [Citation34,Citation35].
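
As a complement to the description above, the sketch below shows one way the assigned category codes could be tallied into themes and percentages of the kind reported in the Results; the codes, theme mapping and comments are hypothetical examples, not the study's actual coding scheme.

```python
# Hypothetical sketch of tallying category codes into descriptive themes.
# Each comment carries up to four codes, each tagged as a challenge or a
# positive factor; the codes and mapping below are illustrative only.
from collections import Counter

coded_comments = [
    [("lack_of_time", "challenge")],
    [("multilingual_assessment", "challenge"), ("experience_helps", "positive")],
    [("need_for_guidelines", "challenge"), ("team_discussion", "positive")],
]

code_to_theme = {
    "lack_of_time": "Scheduling and Resources",
    "experience_helps": "Scheduling and Resources",
    "multilingual_assessment": "Challenges assessing specific groups of children",
    "need_for_guidelines": "Guidelines and consensus",
    "team_discussion": "Professional interactions",
}

all_codes = [code for comment in coded_comments for code in comment]
theme_counts = Counter(code_to_theme[code] for code, _ in all_codes)
challenge_counts = Counter(
    code_to_theme[code] for code, kind in all_codes if kind == "challenge"
)

total = len(all_codes)
for theme, n in theme_counts.most_common():
    pct = 100 * n / total
    pct_challenge = 100 * challenge_counts.get(theme, 0) / total
    print(f"{theme}: {pct:.1f}% of codes ({pct_challenge:.1f}% challenges)")
```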

Results

We received 815 responses to the survey. Respondents who did not consent to participate (n = 5) or who did not respond past part 1 (Background data, n = 226) were excluded from the analyses. The final sample comprised 584 Swedish SLPs, representing about 25% of all SLPs in Sweden. Most respondents had earned their SLP degrees at one of the six universities with SLP programmes in Sweden: University of Gothenburg (n = 130), Lund University (n = 127), Karolinska Institutet, Stockholm (n = 119), Linköping University (n = 87), Uppsala University (n = 72), and Umeå University (n = 39). Ten respondents had a combination of degrees from different Swedish universities or an international degree.

Demographics

A description of the respondents regarding number of years of work experience, current main work setting, and the age groups they work with is found in Table 1. We decided to group SLPs in clinics, hospitals and private practice together based on knowledge of the Swedish health care system in general, and of SLPs in Sweden in particular. These workplaces share many similarities in terms of scope of practice regarding language assessments of children, and the differences between individual clinics and hospitals can be greater than the differences between private practices and clinics.

Table 1. Characteristics of participants.

Many respondents (37.5%) had worked for five years or less since completing their degree, which is in line with data showing that the number of SLPs in Sweden increased by 22% between 2014 and 2019 [Citation3]. SLPs working in municipalities/preschools/schools were overrepresented, comprising 26% of the study sample, while only about 7.5% of all SLPs in Sweden work in these settings.

Respondents estimated what percentage of their working hours they used for assessment (on a sliding scale 0–100%), reporting a mean of 44.3% (SD = 25.9). They also reported the purposes of their assessments: 68% reported that they assessed children with the purpose of diagnosing, 72.8% assessed to follow development or evaluate intervention effects, and 76.5% assessed to plan intervention. Almost all respondents (93.2%) reported that one purpose of assessment was to give recommendations to other relevant parties, such as parents/caregivers, teachers, or other professionals.

SLPs working in clinics, hospitals, or private practice (55%, n = 321) reported that they use on average 55.2% (SD = 24.7) of their total work time for assessments. Diagnosing and giving recommendations were the most common purposes of assessment in this group, reported by 93.6% and 93.3% of respondents respectively (follow-up: 79.0%, plan intervention: 78.4%). SLPs working at habilitation centres/disability services (17%, n = 101) estimated that 35.2% (SD = 18.3) of their time is used for assessments; in this group fewer (48.5%) reported assessing with the purpose of diagnosing, while almost all (98.0%) reported assessing to plan intervention (give recommendations: 91.1%, follow-up: 72.3%). The group that reported the least time spent on assessments, 27.3% (SD = 20.4), worked in preschools, schools or municipalities, and their most frequently reported purpose of assessment (94.8%) was to give recommendations (diagnosing: 26.6%, follow-up: 59.7%, plan intervention: 58.4%).

Respondents estimated that on average 41% (SD = 26.5) of their assessments concerned multilingual children (on a sliding scale 0–100%). This is higher than population estimates of multilingual children (25%), which might reflect an over-referral of multilingual clients to SLP evaluations, or an overrepresentation of respondents in health catchment areas with a higher proportion of multilingual clients. Participants were divided into three groups: 1) Low: lower than the proportion of multilingual children in Sweden (0–20% of assessments, 22.5% of respondents), 2) Mixed: from around the population estimate up to half of all assessments (21–50%, 38.5% of respondents), and 3) High: a majority of assessments (51–100%, 39.1% of respondents).

The respondents also reported the socio-economic status (SES) of their health catchment area. In the survey question, SES was defined in terms of (client) parental level of education and income, and the socio-geographic structure of housing. Respondents could choose one of five categories: low SES (11.5% of respondents), mainly low SES (8.7%), mixed SES (69.4%), mainly high SES (7.0%) and high SES (2.5%). For the analyses, responses were collapsed into three groups: 1) Low/mainly low, 2) Mixed, and 3) Mainly high/high.

Quantitative analyses

What assessment procedures and tools do Swedish SLPs use?

Table 2 shows the assessment procedures and tools that the respondents reported regularly using (multiple-choice question, several choices possible). 82% reported that they used four or more procedures/tools, and only 3.8% reported two or fewer. Almost all SLPs reported using at least one type of contextualized assessment (standardized or non-standardized observations, CBA or "other" non-standardized materials, which were exemplified in the survey with sequence picture series and non-standardized narrative retell tasks). In addition, almost all SLPs collected background information as well as using standardized tests. In a follow-up question, all participants were asked to report which published standardized tests they use on a regular basis (free-text response, subsequently sorted and analysed by the first author, who has clinical experience with most available Swedish test materials). All tests used by 15% or more of respondents were static, discrete-skill tests or test batteries, except for the narrative Renfrew Bus Story Test (see supplementary materials for a full list). The two most used standardized norm-referenced tests (used by 70% or more of respondents) were the comprehension assessment Test for Reception of Grammar (TROG-2: [Citation36]) and the test battery Clinical Evaluation of Language Fundamentals (CELF-4: [Citation27]).

Table 2. Assessment procedures and tools used by Swedish paediatric SLPs.

Assessment procedures and tools by SLP and client factors

Table 3 shows assessment procedures and tools by SLP factors (work setting and professional experience) and by client factors (proportion of multilingual assessments and SES of the health catchment area).

Table 3 (a). Swedish paediatric SLPs’ assessment procedures and tools by workplace and work experience (short: 0–5 years, medium: 6–15 years, long: 16 years and longer).

Table 3 (b). Swedish paediatric SLPs’ assessment procedures and tools by SES (low/mixed/high) and proportion of multilingual assessments (high: 51–100%, mixed: 21–50%, low: 0–20% of assessments).

Based on our review of previous studies, we expected differences in the use of standardized tests and CBA between work settings, and differences in contextualized tools and procedures based on work experience; these were compared using chi-square tests of homogeneity (2xC). All expected cell counts were greater than five. The proportions differed significantly between the three work-setting groups (Clinical, Habilitation and Municipal) for both standardized tests (p = .05) and CBA (p = .01), while the chi-square test of differences in the use of contextualized assessments between the three work-experience groups (short, medium, long) was not significant. However, post-hoc pairwise comparisons of the use of standardized tests, using the z-test of two proportions with a Bonferroni correction, showed no statistically significant differences. The corresponding post-hoc comparisons for CBA showed that the municipal group used CBA to a significantly higher degree than the clinical and habilitation groups. It is worth noting that in the habilitation group (who do diagnostic assessments to a lesser extent and work with children with complex needs), 100% of respondents reported using contextualized assessments.

For the client factors (multilingualism and SES of the health catchment area, Table 3(b)), we expected differences in the use of DA and LSA based on our review and international recommendations. Inspection of the average percentages warranted a statistical analysis of the use of CBA as well. All expected cell counts were greater than five, and chi-square tests of homogeneity (2xC) were conducted. The proportions did not differ between the three groups with different proportions of multilingual assessments. In contrast, the comparisons of DA and CBA between the three SES groups were significant. Post-hoc pairwise comparisons using the z-test of two proportions with a Bonferroni correction showed that the use of DA was significantly more common in SES group 1 (low/mainly low SES) than in SES group 3 (mainly high/high SES) (p = .013). The corresponding comparisons for CBA showed that its use was also significantly more common in SES group 1 (low/mainly low SES) than in SES group 2 (mixed SES) and SES group 3 (mainly high/high SES). This seems to be a consequence of which work settings were represented in the different SES groups, however, since 47.4% of the SLPs reporting low/mainly low SES (SES group 1) belonged to the municipality group.

Qualitative analyses

Factors affecting assessment practices

Respondents were invited to comment on how well they thought their assessment practices worked in general, after being presented with the different purposes of assessment and asked to reflect on and rate their satisfaction with their own assessment practice. At the end of the survey, there was a final opportunity to add additional comments (if any), and after reviewing the responses to both questions, we decided to analyse the two questions together. The first author assigned up to four category codes to each respondent's comment, and the categories were discussed between the authors until agreement was reached. Comments not related to assessment practices were not coded or included in this analysis. 527 respondents (90%) added a comment that received at least one category code. Of those 527 comments, 46% received one category code, 28% received two codes, 17% received three codes and 9% received four codes. In total, 1076 codes were assigned; 28% of these were coded as positive (factors that contribute to better assessments) and 72% were descriptions of challenges.

The categories were sorted into five descriptive themes [Citation35], all but one of which included both positive categories and categories regarding challenges: "Assessment procedures and tools" (38.5% of all category codes, 22.3% regarding challenges), "Scheduling and Resources (time, money, education, experience)" (20.6% of all category codes, 16.3% regarding challenges), "Challenges assessing specific groups of children" (17.5% of all category codes), "Professional interactions" (16.0% of all category codes, 9.5% regarding challenges), and "Guidelines and consensus" (7.4% of all category codes, 6.8% regarding challenges).

The most frequent category (from “Scheduling and Resources”) was lack of time and difficulties prioritizing (24% of the respondents who added at least one comment): “Unfortunately, there is not always time for an assessment that is as thorough and meticulous as I would like.” The second most frequent category (from “Challenges assessing specific groups of children”) was the assessment and/or diagnosis of multilingual/L2 children (19% of respondents): “It is always complex to assess the multilingual children. The evaluations feel blunt and the test results seem less reliable.” The third most frequent category (from “Guidelines and consensus”) was the need of guidelines to aid the assessment and diagnostic process (11% of respondents): “Weak consensus regarding SLP diagnoses. There is a lack of clear guidelines how severe difficulties must be for a child to receive a diagnosis. Currently, it is mostly based on subjective estimations by the assessing SLP.”

In terms of positive comments, 9% of respondents stressed that their tools and assessment practices work well for their specific area or purpose (from "Assessment procedures and tools"): "My assessments work well for my defined area of practice," "I am aware that several of the tests I have are not optimal (…) but I feel that I do the best given the circumstances and that I continuously try to increase my knowledge to improve my assessments." Furthermore, 8% mentioned that their assessment practices work well thanks to their own experience and knowledge (from "Scheduling and Resources"): "My experience together with the methods I use make me feel satisfied with my assessment practices." Finally, 7% of respondents stressed the importance of combining several sources of information, including team cooperation and discussion with colleagues (from "Assessment procedures and tools"): "A combination of standardized testing, informal assessment and observation/participation in everyday situations gives a more just picture of the language level of the child," "Important to have colleagues to discuss with when needed."

Discussion

This study presented survey data from 584 Swedish paediatric SLPs with a focus on language assessment practices and the factors that may affect their assessment procedures and tools. The study employed a mixed-methods design with significance testing of group differences (N = 584) and a deductive qualitative analysis of responses to open-ended questions (n = 527).

Results showed that Swedish SLPs employ a considerable variety of tools and sources of information in their assessments, which is in line with current international best practices and recommendations [Citation15,Citation21]. The results also showed that Swedish SLPs make extensive use of contextualized assessments, including observations, curriculum-based assessments, and other non-standardized materials to elicit contextualized language; however, in line with survey studies from other countries, standardized tests were also a very common assessment tool overall, second only to interviews and background questionnaires to parents/clients [Citation10–13,Citation17]. The frequent use of a small range of standardized tests was an expected result, since there is a lack of published standardized test tools in Sweden [Citation37], a problem that is not present in the largest English-speaking countries, where the challenge is rather to choose among a large number of tests and find the ones with the best psychometric properties [Citation38,Citation39]. The scarcity of standardized and norm-referenced tests in Swedish is most likely due to the cost of developing tests and collecting normative data, combined with the fact that Swedish is a small language and that there are relatively few SLPs (also compared to, e.g., psychologists). The few available standardized tests might also be part of the explanation of why the SLPs in this sample report using contextualized non-standardized tasks (including observations) to a higher degree than SLPs from English-speaking countries in some studies [Citation11,Citation13]. Dynamic assessment (DA) was used by about a fifth of the sample, which is comparable with results from Australian SLPs [Citation11], but less than in the survey of international SLPs by Teoh et al. [Citation13]. In addition, transcription and analysis of language samples was used by only a third of our sample, less than in previous studies [Citation17,Citation18]. However, these previous studies showed that many paediatric SLPs report collecting language samples, but few record, transcribe and analyse them in detail. In our survey, we explicitly asked whether SLPs "transcribe and analyse elicited spontaneous speech with the purpose of analysing language, e.g. MLU or grammatical complexity", which implies recording, transcription and structured analysis. Thus, even though the use of LSA has long been recommended as a part of any child language assessment [Citation40], it is still relatively rare among both Swedish paediatric SLPs and SLPs in major English-speaking countries.

The effect of work setting and work experience

There was a clear effect of work setting on the total time devoted to assessments and on the purposes of assessment. In habilitation centres, the most common purpose was to plan intervention, which might also reflect that many clients in habilitation centres already have a diagnosis when they are added to the SLP's caseload, and the focus is on habilitation services. Consequently, the use of standardized tests differed significantly between work settings, and even though post-hoc comparisons were non-significant, the clinical group reported the highest average use, which can be explained by the fact that they perform more diagnostic evaluations. There was a trend for SLPs working in habilitation centres to use observations to a larger extent than SLPs in other work settings, and standardized tests to a slightly lesser extent, which is also in line with previous studies [Citation11,Citation31]. Contrary to the international survey study by Gillon et al. [Citation32], DA was not more common among SLPs working in habilitation centres, although contextualized assessments were very common. Gillon et al. [Citation32] specifically surveyed SLPs who worked with children with autism, however, while Swedish SLPs in habilitation centres typically work with children with a wider range of diagnoses, which could explain the differences. DA procedures were used by only a fifth of the total sample, and the low use of DA by SLPs in habilitation centres most likely reflects a general lack of training in DA, but this has to be followed up with additional research. As expected, curriculum-based assessment (CBA) was used significantly more frequently by municipality-employed SLPs than by other groups.

The results showed no significant differences in reported assessment procedures and tools based on years of work experience. Previous studies have reported that longer work experience is associated with more use of contextualized and/or non-standardized procedures and tools [Citation10,Citation11,Citation18,Citation30]. This difference might be explained by the fact that procedures and tools for assessments with different purposes could not be separated in this study. Differences in what is considered "long" or "short" work experience, as well as in what counts as "contextualized" or "non-standardized" assessment, could also have contributed to the divergent findings.

The effect of proportion of multilingual assessments and SES

Contrary to our expectations, a greater use of DA was not associated with a higher proportion of multilingual assessments. This is similar to findings in Denman [Citation11] and Caesar and Kohler [Citation10]. Previous studies have reported that SLPs use norm-referenced standardized tests in the majority language with multilingual and L2 clients, against recommendations [Citation10,Citation12,Citation13,Citation22–24]. The results from the present survey cannot reveal whether Swedish SLPs follow the recommendation not to use standardized norm-referenced tests with multilingual children, but the qualitative analysis revealed that almost one fifth of respondents mentioned assessing multilingual children as a challenge, and many mentioned that new or alternative assessment tools for this group are needed. The limited use of alternative procedures stressed in international recommendations, such as LSA and DA [Citation20,Citation21], is most likely a consequence of a lack of training and a lack of nationally adopted guidelines, but lack of time might also be a contributing factor, which is discussed further below. Finally, we do not know how many of the SLPs who responded to the survey were multilingual themselves, but there is a known lack of multilingual SLPs in Sweden. Importantly, many of the same challenges still apply to multilingual SLPs, since they will meet children with many different language backgrounds.

In line with our expectations, DA was used significantly more by SLPs working in areas with low or mainly low SES than by those in mainly high/high SES areas. In addition, the use of CBA was also higher in this group, and inspection of the data showed that municipality-based SLPs were overrepresented in the group that reported working in areas with low or mainly low SES. This might indicate that municipalities with lower SES hire SLPs to a larger extent, given that around 60% of all municipality-employed SLPs in Sweden responded to the survey. On the one hand, it is encouraging that DA was used more in areas with low SES, given the results reported in Andersson et al. [Citation28], which showed a risk of over-diagnosing language disorder in children from low SES backgrounds if norms from the Swedish version of CELF-4 are used. On the other hand, this result was driven by SLPs employed by municipalities, the group that reported spending the least time on assessment (27.3% of total working time) and that rarely assessed with the purpose of diagnosing (26.6% of respondents in this group). In addition, even though the difference was significant, fewer than one third of respondents in the low-SES group reported using DA, and fewer than a quarter of respondents in the group with 50% or more multilingual assessments did so. To our knowledge, no previous studies have investigated the relationship between client SES and SLP assessment practices, but in many countries groups with lower SES and L2 learners overlap (e.g. newly arrived migrants), and as Andersson et al. [Citation28] showed, low SES may be associated with lower performance on standardized norm-referenced tests. Therefore, this is an important avenue for future research.

Additional factors affecting assessment practices

In order to surface larger trends within the material, all categories that arose from the deductive qualitative analysis of the comments on respondents' own assessment practices were collapsed into five descriptive themes, four of which contained both challenges and positive aspects. The two most frequent categories of positive factors in the present survey were the importance of experience and the combination of several sources of information, including professional and interprofessional discussions. It should be noted that professional interactions can work both ways: seven percent of respondents reported them as a positive factor, but slightly more (ten percent) reported them as a challenge, especially regarding finding enough time and effective structures for interprofessional collaboration. Lack of time was one of the main challenges reported in this survey, in addition to lack of resources, the assessment and/or diagnosis of multilingual/L2 children, and lack of guidelines. A closer look at the responses regarding lack of time showed that a high tempo and a (too) high caseload were mentioned by many respondents. Long waiting times and a shortage of SLPs in some areas add to the pressure and the feeling of insufficient time. Even though there are differences in design and methodology between studies, these challenges correspond well with previous results from 407 Australian paediatric SLPs [Citation11], as well as with other survey studies in the US and Australia [Citation12,Citation17,Citation18,Citation22]. This shows that these challenges are by no means limited to one country or one work setting, and the results strengthen the evidence of the challenges the professional field is up against.

One difference compared to studies from English-speaking countries is that Swedish SLPs point out the lack of nationally adopted guidelines as a major challenge. With limited time, and without mandated continuing education or national recommendations, SLPs themselves express that they would like national consensus and direction regarding the assessment of child language and the diagnosis of language disorder (LD). The main points of the CATALISE consensus studies on identifying LD in children [Citation15,Citation19] have been translated into Swedish [Citation37], but this document has not been adopted as an official recommendation. Its use is further complicated by the disparities between CATALISE and the classification system ICD-10 [Citation41], which all SLPs employed in health care settings in Sweden use (the Swedish version of ICD-11 is not finalized). One example is that the Swedish version of ICD-10 includes nine different subtypes of LD, of which few have a solid research base [Citation37], while the CATALISE panel advised against subtypes.

To summarize the results of the qualitative analysis through the lens of the seven factors of Flottorp et al. [Citation29], we can conclude that Incentives and resources and Guideline factors risk affecting Swedish paediatric SLPs' assessment practice negatively. Since the assessment of certain groups of children is also viewed as a challenge, this could be interpreted as a Patient factor, but many respondents noted the lack of suitable assessment materials and the difficulty of interpreting assessment results for these groups, again highlighting the need for more resources, continuing education, and clear guidelines. Factors that respondents listed as positive for assessment practices were primarily related to Individual health professional factors and Professional interactions. Thus, the challenges belong to a more systemic level, while the aspects that enable better SLP assessments, according to the practitioners themselves, are individual: experience, self-reflection, and successful professional collaboration.

Limitations and future directions

This study included a larger sample of SLPs than many previously published survey studies from other countries and corroborates many of the previous results by showing similar patterns of factors that affect assessment practices. The way some questions were worded in the present study might have obscured potential relationships between the use of different assessment procedures and tools and other factors, since no questions asked which specific tools SLPs use for different purposes of assessment. However, both the quantitative and the qualitative analyses revealed differences between work settings, as well as challenges in line with previous studies, and the fact that the deductive analysis with pre-defined categories yielded similar results to previous studies strengthens both those results and the results of the present study.

Future studies should continue to investigate both positive and negative factors affecting speech-language pathology practice; many previous studies have focused mainly on negative factors, challenges, or (non-)adherence to existing recommendations or guidelines, without asking professionals why or including existing enabling factors. Future studies on language assessment practices should also ask more specific questions regarding the assessment of, for example, different populations and different purposes, and the associated enabling factors and challenges, which could allow a more fine-grained quantitative analysis than was possible in the current study. Research on how to best implement transcription and analysis of language samples (LSA) and dynamic assessment (DA) in paediatric SLP practice is also a very important future direction.

Conclusions and clinical implications

We want to highlight two main conclusions. Firstly, similar to SLPs in other countries, Swedish paediatric SLPs use many different sources of information in their assessments, with few differences related to work setting, length of work experience, proportion of multilingual clients, or the socioeconomic status of the area in which they work. Tools that international guidelines recommend as complements to discrete-skill standardized tests, such as dynamic assessment and transcription and analysis of language samples, are not widely used in Sweden either, and are not used more by SLPs with a higher proportion of multilingual clients or by those working in areas with lower SES. Secondly, the factors that affect the language assessment practices of Swedish SLPs positively and negatively are similar to those reported by SLPs in other countries: having enough time and resources for best-practice assessments, as well as time for professional and interprofessional collaborations. Thus, to develop and improve assessment practices, clear recommendations, education, and more time are much needed.

We believe that the results of this study should be used by SLPs and SLP educators, clinic and habilitation centre directors, school principals, and policy makers as a source of discussion and a push for development, both in Sweden and internationally. Many of the challenges SLPs face need to be addressed from the top, at a systemic level, while at the same time building on and encouraging the existing enabling factors that promote the development of paediatric SLPs' language assessment practices.

Supplemental material

Supplementary_materials__most_used_standardized_tests_.docx

Acknowledgements

The authors want to thank all respondents and the SLPs who generously provided feedback on the first version of the survey: Julia Andersson, Matilda Andersson, Maria Archenti, Anna Nyman, Sofia Strömbergsson and Gerda Widarsson.

Disclosure statement

The authors report no conflicts of interest.

References

  • Norbury CF, Gooch D, Wray C, et al. The impact of nonverbal ability on prevalence and clinical presentation of language disorder: evidence from a population study. J Child Psychol Psychiatry. 2016;57(11):1247–1257.
  • Eadie P, Morgan A, Ukoumunne OC, et al. Speech sound disorder at 4 years: prevalence, comorbidities, and predictors in a community cohort of children. Dev Med Child Neurol. 2015;57(6):578–584.
  • Statistikdatabas för hälso- och sjukvårdspersonal (1995–2018) [Statistics database for health care personnel; electronic resource]. Stockholm: Socialstyrelsen; 2020 [cited 22 November 2020]. https://sdb.socialstyrelsen.se/if_per/val.aspx.
  • Slof. Kommunlogopedi på frammarsch [Municipality SLP services are increasing]. 2020 [cited 5 May 2021]. https://logopeden.se/kommunlogopedi-pa-frammarsch/.
  • Statistics Sweden. Uppväxtvillkor för barn med utländsk bakgrund [Living conditions for children with a foreign background]. Demografiska rapporter. 2020.
  • Hjern A, Kadir A. Health of refugee and migrant children: technical guidance. World Health Organization; 2018.
  • SFS 2010:659. Patientsäkerhetslag, kap. 8, 3 § [The Patient Safety Act]. Swedish Code of Statutes; 2010.
  • Wikse Barrow C, Körner K, Strömbergsson S. A survey of Swedish speech-language pathologists’ practices regarding assessment of speech sound disorders. Logopedics Phoniatrics Vocology. 2021;1–12. DOI: 10.1080/14015439.2021.1977383.
  • Denman D, Kim J-H, Munro N, et al. Describing language assessments for school-aged children: a delphi study. Int J Speech Lang Pathol. 2019;21(6):602–612.
  • Caesar L, Kohler P. A survey of language assessment procedures used by school-based speech-language pathologists. Communication Disorders Quarterly. 2009;30(4):226–236.
  • Denman DA. Language assessment of school-age children: examining the evidence and describing clinical practice. 2019.
  • Fulcher-Rood K, Castilla-Earls AP, Higginbotham J. School-based speech-language pathologists’ perspectives on diagnostic decision making. Am J Speech Lang Pathol. 2018;27(2):796–812.
  • Teoh WQ, Brebner C, McAllister S. Bilingual assessment practices: challenges faced by speech-language pathologists working with a predominantly bilingual population. Speech, Language and Hearing. 2018;21(1):10–21.
  • World Health Organization. International Classification of Functioning, Disability, and Health: children & Youth Version: ICF-CY. World Health Organization; 2007.
  • Bishop D, Snowling M, Thompson P, CATALISE consortium, et al. CATALISE: a multinational and multidisciplinary delphi consensus study. Identifying language impairments in children. PLoS One. 2016;11(7):e0158753.
  • Hasson N, Joffe V. The case for dynamic assessment in speech and language therapy. Child Language Teaching and Therapy. 2007;23(1):9–25.
  • Westerveld MF, Claessen M. Clinician survey of language sampling practices in Australia. Int J Speech Lang Pathol. 2014;16(3):242–249.
  • Pavelko SL, Owens RE, Ireland M, et al. Use of language sample analysis by school-based SLPs: results of a nationwide survey. Lang Speech Hear Serv Sch. 2016;47(3):246–258.
  • Bishop DVM, Snowling MJ, Thompson PA, and the CATALISE-2 consortium, et al. Phase 2 of CATALISE: a multinational and multidisciplinary delphi consensus study of problems with language development: terminology. J Child Psychol Psychiatry. 2017;58(10):1068–1080.
  • Orellana CI, Wada R, Gillam RB. The use of dynamic assessment for the diagnosis of language disorders in bilingual children: a meta-analysis. Am J Speech Lang Pathol. 2019;28(3):1298–1317.
  • Scharff Rethfeldt W, McNeilly L, Abutbul-Oz H, et al. Common questions by speech and language therapists/speech-language pathologists about bilingual/multilingual children and informed, evidence-based answers. 2020.
  • Arias G, Friberg J. Bilingual language assessment: contemporary versus recommended practice in American schools. Lang Speech Hear Serv Sch. 2017;48(1):1–15.
  • McLeod S, Baker E. Speech-language pathologists’ practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders. Clin Linguist Phon. 2014;28(7-8):508–531.
  • Skahan S, Watson M, Lof G. Speech-language pathologists’ assessment practices for children with suspected speech sound disorders: results of a national survey. Am J Speech Lang Pathol. 2007;16(3):246–259.
  • Denman D, Cordier R, Kim J-H, et al. What influences speech-language pathologists’ use of different types of language assessments for elementary school–age children? LSHSS. 2021;52(3):776–793.
  • Purpura DJ. Language clearly matters; methods matter too. Child Dev. 2019;90(6):1839–1846.
  • Semel E, Wiig EH, Secord WA. CELF-4: Clinical Evaluation of Language Fundamentals, fourth edition. Swedish version (2013) (C. Miniscalco & A. Frylmark). Stockholm: Pearson Assessment; 2003.
  • Andersson K, Hansson K, Rosqvist I, et al. The contribution of bilingualism, parental education, and school characteristics to performance on the Clinical Evaluation of Language Fundamentals: Fourth Edition, Swedish. Front Psychol. 2019;10:1586.
  • Flottorp SA, Oxman AD, Krause J, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implementation Sci. 2013;8(1):1–11.
  • Roulstone S. Exploring the relationship between client perspectives, clinical expertise and research evidence. Int J Speech Lang Pathol. 2015;17(3):211–221.
  • Watson RM, Pennington L. Assessment and management of the communication difficulties of children with cerebral palsy: a UK survey of SLT practice. Int J Lang Commun Disord. 2015;50(2):241–259.
  • Gillon G, Hyter Y, Fernandes FD, et al. International survey of speech-language pathologists’ practices in working with children with autism spectrum disorder. Folia Phoniatr Logop. 2017;69(1-2):8–19.
  • SFS 2003:460. Om etikprövning av forskning som avser människor [Act concerning the ethical review of research involving humans]. Swedish Code of Statutes; 2003.
  • Creswell JW, Klassen AC, Plano Clark VL, et al. Best practices for mixed methods research in the health sciences. Bethesda (Maryland). National Institutes of Health. 2011;2013:541–545.
  • Graneheim UH, Lindgren BM, Lundman B. Methodological challenges in qualitative content analysis: a discussion paper. Nurse Educ Today. 2017;56:29–34.
  • Holmberg E, Lundälv E. TROG-2 svensk manual [TROG-2 Swedish Manual]. Specialpedagogiska Institutet Läromedel. 2002.
  • Sandgren O, Hedenius M. CATALISE Svensk sammanfattning [CATALISE Swedish summary]. 2017.
  • Dockrell JE, Marshall CR. Measurement issues: assessing language skills in young children. Child Adolesc Ment Health. 2015;20(2):116–125.
  • Denman D, Speyer R, Munro N, et al. Psychometric properties of language assessments for children aged 4–12 years: a systematic review. Front Psychol. 2017;8:1–28.
  • Evans JL, Miller J. Language sample analysis in the 21st century. Semin Speech Lang. 1999;20(2):101–115; quiz 116.
  • World Health Organization. The ICD-10 classification of mental and behavioural disorders. Vol. 2. Genève, Switzerland: World Health Organization; 1993.