Research Article

Measuring the Perceived Importance of Indicators of the Quality of Social Care Services


ABSTRACT

Purpose

We aimed to develop and validate an instrument measuring the perceived importance of indicators of the quality of social care services among different groups of stakeholders.

Method

A total of 671 respondents (249 representatives of public administration, 217 social service providers and 205 service users) completed a 40-item questionnaire developed on the basis of relevant quality assessment models.

Results

Item analysis indicated that the data were suitable for further analysis and that no items needed to be removed. Exploratory factor analysis (EFA) revealed a five-factor solution, which was subsequently confirmed by confirmatory factor analysis. Scales derived from the EFA exhibited high internal consistency, with Cronbach’s alpha ranging from 0.83 to 0.95.

Discussion and Conclusion

The study offers a novel approach to the development of a measure of the quality of social care services based on separate investigation of perceived importance and service satisfaction. It is therefore relevant to the ongoing scientific debate on importance weighting in service satisfaction research.

Improving assessment processes in social services is of interest in many countries. For example, in March 2021, the European Commission published the new European Strategy on the Rights of Persons with Disabilities 2021–2030, which included the aim of developing a European Framework for Social Services of Excellence for Persons with Disabilities by 2024 (European Commission, n.d.). In the US, states are required to measure the impact and quality of the services they deliver to people with disabilities in order to receive funding. Several different frameworks and tools have been developed to measure service quality and the outcomes of service users, with different measures used in different ways across states and service providers. Although the conceptualization of the quality of social services is an important area of research, especially in the field of disability, in policy and practice the conceptualization of, and agreement on, what constitutes good outcomes remains a subject of debate and a source of ambiguity (Šiška et al., 2021).

Significant attention has been devoted in the last decade to measuring and operationalizing the quality of social services, for persons with disabilities in particular. Quality has been treated as a multidimensional construct involving different kinds of potentially relevant quality indicators (Malley & Fernández, 2010). Although it can be assessed in different ways and several models for such assessment can be found in the literature (Dowling, 2008), user satisfaction with services is widely accepted as a fundamental indicator, in line with a client-centered approach (Fraser & Wu, 2016). Thus, it is not surprising that many satisfaction-related instruments have been developed and used in practice (Fraser & Wu, 2016). However, the measurement of satisfaction has faced several challenges, especially in terms of the multidimensionality of this construct (Hsieh, 2012). One of the most commonly discussed issues in this context is whether importance weighting is needed in such measures (Wu, 2008). The concept of importance weighting is based on the idea that different items should contribute differently (based on their perceived importance) to the construction of the total score of the tool. This means that the total score is not a simple sum of the points from the individual items (the typical approach) but a sum of the products of the points and the weights attributed to these items; a minimal sketch follows this paragraph. It is essentially the same concept as the weighted average routinely used in many fields, such as education. In addition, it is important for social services to acknowledge the need for “triangulation” of different sources of evidence on the quality of social services for persons with disabilities. It is critical to combine subjective measures that look at what is important to people and how they experience life and the services they receive with objective indicators (usually requiring observation) that ensure that basic needs are being met and that people have the same rights and opportunities as people without disabilities (Šiška & Beadle-Brown, 2022).
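To make the contrast concrete, the following minimal Python sketch (not the authors’ scoring code) compares a simple sum of item ratings with an importance-weighted total; the item names and weight values are illustrative assumptions.

```python
# Illustrative 1-5 ratings and perceived-importance weights (assumed values).
ratings = {"item_1": 4, "item_2": 5, "item_3": 3}
weights = {"item_1": 0.5, "item_2": 0.3, "item_3": 0.2}  # weights sum to 1

# Typical unweighted approach: a simple sum of the points.
simple_total = sum(ratings.values())

# Importance weighting: a sum of products of points and weights,
# i.e. a weighted average when the weights sum to 1.
weighted_total = sum(ratings[item] * weights[item] for item in ratings)

print(simple_total)             # 12
print(round(weighted_total, 2)) # 4.1
```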

The scientific discussion about importance weighting originates in life satisfaction and job satisfaction research. There are many similarities between these constructs and that of service satisfaction (e.g., multidimensionality and the possibility of assessing global satisfaction by a single item; for details, see C.-M. Hsieh (2018)). Some researchers have stated that any rating of satisfaction already includes the concept of importance (Locke, 1976) and that importance weighting is thus redundant. This concept is called implicit weighting (Hsieh, 2017; McFarlin et al., 1995) and, together with the idea that the importance of a given domain or item is guaranteed by its inclusion in the scale (Trauer & Mackinnon, 2001), constitutes the basis of the arguments put forward by opponents of importance weighting (Hsieh, 2017). On the other hand, the proponents of importance weighting argue that importance should not be treated as a dichotomous variable (important/not important) but rather as a continuum, with some domains potentially more relevant than others (Hsieh, 2013). Similarly, the concept of implicit weighting concerns the relationships between discrepancy, importance and satisfaction within a single facet, while importance weighting focuses on the relationships among satisfaction with many facets (C.-M. Hsieh, 2018). More detailed discussion of the relevance of importance weighting in the field of life, job or service user satisfaction can be found in the work of Wu (2009) or C.-M. Hsieh (2018).

Even in studies accepting the relevance of importance weighting, the typical approach is to develop subscales (corresponding to the satisfaction domains) based, for example, on psychometric analysis of satisfaction data, and then to calculate subscale scores from individual items or total scores from subscale scores. Typically, individual importance weights are set directly by the respondents of the study, who simultaneously provide information about the satisfaction and importance of the individual items (Cummins, 1997; Felgoise et al., 2009; Lindner et al., 2013), for example, by asking: “How many friends do you have? Do you have enough friends/are you satisfied with the number of friends you have? How important to you is having lots of friends?” Sometimes universal importance weights are instead derived indirectly from a statistical regression model estimated in the validation study (Cramer et al., 1998; Saris-Baglama et al., 2010). Note that the latter approach is used quite often in the field of health-related quality of life and that several issues with it have been discussed in the literature. These include 1) changes in the individual weights after an important event such as surgery (referred to as “reprioritization response shift”; see Sajobi et al., 2014) and 2) issues with reporting total scores computed contrary to the recommendations of the authors of the instrument (Lins & Carvalho, 2016).
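As a hedged illustration of the indirect, regression-based route (one plausible instantiation, not the cited instruments’ actual procedure), the sketch below fits a linear model in which simulated domain satisfaction scores predict a single global satisfaction rating; the normalized coefficients then serve as universal importance weights. All data and names are simulated assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Simulated 1-5 domain satisfaction scores for 200 respondents and 3 domains.
domains = rng.integers(1, 6, size=(200, 3)).astype(float)

# Simulated global satisfaction built with "true" weights 0.5 / 0.3 / 0.2.
global_sat = domains @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.3, 200)

# Regress global satisfaction on the domain scores; the coefficients,
# normalized to sum to 1, act as universal importance weights.
model = LinearRegression().fit(domains, global_sat)
weights = model.coef_ / model.coef_.sum()
print(np.round(weights, 2))  # close to [0.5, 0.3, 0.2]
```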

In the field of satisfaction with social care services, the dimensionality of the investigated concept remains unclear due to insufficient focus on the validity of the relevant tools and on the use of appropriate psychometric techniques (Fraser & Wu, 2016). When applied, these techniques are used on data obtained from satisfaction surveys (typically among service users), not on the perceived importance of the individual items from the point of view of service users, providers of social care services and/or policy makers. Nevertheless, balancing different stakeholders’ perspectives (especially those of service providers, service users, representatives of public administration and, in some cases, proxies of the service users) in the assessment of the quality of social care services is crucial to make a sound judgment of quality possible, as emphasized by Blom and Morén (2012). The relevance of the items in a satisfaction survey, together with their identity and observability, is often ensured by the qualitative part of the research, including a panel of experts and/or focus groups (Verdugo et al., 2010).

However, satisfaction with social services is influenced by the needs and expectations of the individual service user as well as by the characteristics of the particular care provider and the environment. Perceived importance is, from our point of view, a more general construct because (i) it is applicable without restriction even to participants in the process who, for various reasons, could not easily express their (dis)satisfaction with social services (e.g., policy makers), and (ii) it should not depend strongly on the characteristics of the provider and the environment. In factor analysis, the construction of the factors (which usually become subscales or domains) is based on the correlations between the individual items. In the case of service user satisfaction, however, these correlations may vary strongly across particular social services, activities or elements of support (for example, service users satisfied with service A might also be satisfied with service B because A and B are provided by the same people in the same manner; conversely, there could be a negative correlation because the approaches used in A and B are completely different, and service users who are satisfied with A are typically dissatisfied with B). This is not the case for perceived importance, because there is no reason to expect that people assessing A as very important would also assess B as very important at one provider but as unimportant at a second provider. Thus, in our opinion, it would be more useful (contrary to current practice) to develop a measure of service quality and determine its individual subscales based on perceived importance rather than on reported satisfaction. The bottom line is that perceived importance and service satisfaction are two partially related but distinct constructs. Perceived importance could be taken as the key construct in the first stage of the development of a service satisfaction questionnaire in order to (i) derive dimensions largely independent of a particular social service and of the specifics of the groups of respondents and (ii) determine importance weights if importance weighting is to be included. In the next stage, the developed tool could be adapted directly for the measurement of service satisfaction.

In this study, we aimed (i) to develop an instrument measuring the perceived importance of indicators of quality in social care services that would be applicable to individual groups of stakeholders (i.e. service users, service providers, policy makers), and (ii) to investigate the reliability and validity of the developed tool using analytical techniques such as exploratory and confirmatory factor analysis. The validated instrument is intended to be used in a next step for the assessment of the quality of social care services.

Method

Development of the instrument

The construction of our tool was based on the concept of social services as a process of interaction between provider and user, in which production/provision and consumption take place at the same time (Blom & Morén, 2012). The measurement of the quality of the service must include, in addition to the quality of the service provision process, the quality of life of the service user. According to Blom and Morén, “a comprehensive judgment of quality in social-work practice must consider both what is going on inside the agency or organization (i.e., quality of service) and the effects of the services on a client’s life-situation (i.e., quality of life)” (Blom & Morén, 2012, p. 77). As frequently noted, a comprehensive approach to measuring quality should also be sensitive to the perspectives of stakeholders directly involved, such as providers, service users and their families, and of those involved indirectly, such as regulators or politicians (Brown, 2007; Malley & Fernández, 2010; McGlynn, 1997).

Such a complex conceptualization of quality requires a multi-level approach, and indicators would usefully include all three elements of Donabedian’s conceptualization of service quality, i.e. structures, processes and outcomes (Donabedian, 1966). In our approach, structural indicators refer to the stable characteristics of a service, including internal and external conditions; processes refer to the nature and quality of the interaction between carers and users, including rules, procedures and working practices. According to Malley and Fernández (2010), there is a close and intimate interaction between carer and user, which underlines the collaborative aspect of social care as fundamental to assessing quality. Indicators related to the behavior of staff and their accountability, trustworthiness and security as perceived by users are essential to include. In this case, how users experience the service, the support from staff, the environment etc. are seen as the key indicators of outcomes.

These factors informed the selection of items, focused on aspects of the structure, process and outcomes of the service from the perspectives of three groups of stakeholders: the regulator represented by public administration, providers and users.

After the measurement framework had been defined, the next step involved a thematic analysis of relevant domestic and international models of quality assessment, which in turn was also used to inform the selection of items, based on the assumption that the content of these models had already been validated and therefore had appropriate content validity. These existing models and measures included: 1) the national Quality Standards of Social Services (Czech Republic, 2006), which represent the legal regulatory framework, with indicators focused mostly on processes, structures and users’ rights; and 2) the Quality Standards for Residential Facilities (Association of Social Service Providers, 2011) and the Quality Mark (Association of Social Service Providers, 2014), which cover the same three elements as the national Quality Standards (i.e. processes, structures and users’ rights) but add the elements of environment and health/physical well-being. Indicators for the wider context in which services are provided were drawn from the European model (The Social Protection Committee, 2011). The other two models that provided a broad spectrum of indicators were the international residential care model (Hoffmann & Leichsenring, 2010) and the British study for Quality Watch (Pittam et al., 2015).

The analysis of the content of items identified from existing models and measures resulted in a pool of 166 items. This pool was subsequently reviewed by four reviewers: two academics with substantial experience in the field of social work, one representative of the Association of Social Service Providers and one representative of the Ministry of Labor and Social Affairs of the Czech Republic. In the selection process, items were grouped in a way that covered the main areas of service provision and the impact of services on users’ quality of life. In the next step, seven core domains were defined. Three of these domains are thematically similar to those present in some form in the national quality standards: Social Care, Principles and Processes, and Ethics. Following a critical review of the rather narrow approach taken in the national quality standards, the remaining four domains were chosen to supplement and enrich the breadth of areas covered by the measure: Health Care, Users’ Perception of Quality, Environmental Quality and Context.

In the final steps, items within each domain were critically analyzed, and duplicates or thematically related items were excluded. The final list contained 77 items. However, some of these items were relevant only for particular groups of stakeholders (typically service providers and public administration) but not for others (e.g., service users), and some items related to social prevention services, as part of a wider study, rather than to social care services. As such, the analysis presented here relates only to the 40 items that were presented to all three groups of stakeholders (public administration, service providers and service users) in social care services. The questionnaires for service users included an easy-read version to support the inclusion of those with intellectual disabilities or older people with cognitive disabilities or communication difficulties. Note that, in accordance with the rationale presented in the Introduction, all items focused on the importance perceived by stakeholders, not on their satisfaction with a particular service.

Procedure and sample

The Czech-language questionnaires were piloted with a sample of 10 respondents: six service providers or public administration representatives and four service users. This pilot study resulted in minor re-wording of some items to increase understandability. The final versions were made available online through Google Forms for an eight-week period in spring 2019. Service providers and public administration representatives were contacted by representatives of the Ministry of Labor and Social Affairs of the Czech Republic (based on its database) with an e-mail invitation to participate containing a link to the survey. Of the 2,610 registered services providing long-term care, 1,000 providers were selected and contacted. The selection was random, but the regional distribution of the services was taken into consideration. In the case of public administration, the social services departments of all 14 Czech regional authorities and of 204 municipalities with extended competences were contacted. Note that, in contrast to the group of service providers, more than one representative of each authority/municipality could participate independently in the survey.

Service users were recruited through service providers, who were contacted initially through research team networks and then via snowball sampling. We aimed to recruit service users across all 14 regions of Czechia from the main types of social services, in approximately the same proportions as found in the total sample of registered services, and also to have similar numbers of older adults and people with disabilities. The needs of service users had to be taken into consideration in administering the questionnaire: many of the users recruited by providers had mild intellectual disabilities and/or cognitive or communication difficulties. Thus, the service user questionnaire was administered face-to-face by six specially trained research assistants who visited the service users where they lived and supported them to complete the questionnaire.

Respondents were asked to assess the importance of each item for the quality of a service on a five-point scale: 1 = not at all important, 2 = of little importance, 3 = I’m not sure, 4 = important and 5 = very important. The principles of good ethical practice were adhered to during the research study, which was approved by the corresponding authorities at the Ministry of Labor and Social Affairs of the Czech Republic. All participants took part voluntarily and were informed about the objective of the study and about how the anonymity of the data would be ensured during analysis and reporting.

In total, 671 responses were received, with approximately equal representation of each stakeholder group (249 public administration representatives, 217 service providers and 205 service users). The basic characteristics of the sample are given in Table 1. Women significantly prevailed in all three groups of stakeholders. The age distribution was comparable for service providers and public administration, while the group of service users was significantly older on average due to the high percentage of users of services for elderly people in the sample. The vast majority (almost 95%) of service users used residential services; only a few received in-home support.

Table 1. Basic characteristics of the respondents from the individual groups of stakeholders.

Data analysis

Overall, the data were analyzed and presented in line with the recommendations given by Cabrera-Nguyen (2010). Missing data were rare (less than 1% of cases in all groups of stakeholders) and assumed missing at random (Sterne et al., 2009); they were imputed with the item mean (the average of the observed data). The sample was randomly divided into two approximately equal halves, in line with the recommendation of Anderson and Gerbing (1988). A descriptive analysis of the items and an exploratory factor analysis (EFA) using the maximum likelihood method were performed on one half of the sample (337 subjects in total: 101 service users, 107 service providers and 129 representatives of public administration) to identify the factorial structure of the scale. MS Excel with the XLSTAT add-in was used for these calculations. The EFA assumptions (sample size, factorability, linearity; for details, see Beavers et al., 2013) were tested by appropriate methods (e.g., the Kaiser-Meyer-Olkin criterion in the case of factorability) and none of them was significantly violated. A confirmatory factor analysis (CFA) using the second half of the sample (334 subjects in total: 104 service users, 110 service providers and 120 representatives of public administration) was conducted to confirm the factorial structure derived from the EFA. The CFA was carried out using LISREL 9.2. Cronbach’s alpha was calculated to determine the internal consistency of the individual scales of the questionnaire. Results of the significance tests for the correlation coefficients were given with p-values; a p-value of less than 0.05 was considered statistically significant. The dataset and the detailed results of the computations, such as the reproduced and residual correlation matrices, are available upon request.
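For readers who wish to reproduce these preprocessing steps, the following minimal Python sketch approximates them (mean imputation, a random half-split and a KMO factorability check) using the open-source factor_analyzer package rather than the MS Excel/XLSTAT workflow actually used; the file name and item columns (item_1 … item_40) are hypothetical.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo

# Hypothetical file with one row per respondent and columns item_1 ... item_40
# holding the 1-5 importance ratings.
df = pd.read_csv("importance_ratings.csv")

# Mean imputation of the rare (<1%) missing values, item by item.
df = df.fillna(df.mean())

# Random split into two approximately equal halves: one for EFA, one for CFA.
efa_half = df.sample(frac=0.5, random_state=42)
cfa_half = df.drop(efa_half.index)

# Kaiser-Meyer-Olkin factorability check; values above 0.6 are commonly
# taken to indicate that the correlation matrix is factorable.
kmo_per_item, kmo_total = calculate_kmo(efa_half)
print(f"overall KMO = {kmo_total:.3f}")
```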

Results

Descriptive statistics and item analysis

The descriptive statistics (mean, median and standard deviation) for the items of the questionnaire were computed. As can be seen from Table 2, the mean and median values were in all cases higher than 3 (the middle of the Likert scale used), demonstrating higher than moderate perceived importance of the issues for the stakeholders involved. Standard deviation values were close to unity, suggesting good discriminatory value of the items. Cronbach’s alpha computed from polychoric correlation coefficients for all 40 items was 0.957, indicating very good internal consistency. The Kaiser-Meyer-Olkin (KMO) value was 0.777 for the whole questionnaire, exceeding the recommended value of 0.6 even for all individual items (see Table 2). This suggests the factorability of the correlation matrix. As expected for this type of questionnaire (Fleury-Bahi et al., 2013), all items were negatively skewed (with skewness coefficients ranging from −0.9 to −2.3; see Table 2).
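Because the alpha reported here is based on polychoric correlations, the standardized form of Cronbach’s alpha, which takes a correlation matrix as input, is the relevant computation. The sketch below (an illustration, not the paper’s actual code) implements that standard formula; the polychoric matrix itself would come from a dedicated routine (e.g., the psych package in R), and the toy values are assumptions.

```python
import numpy as np

def alpha_from_corr(corr: np.ndarray) -> float:
    """Standardized Cronbach's alpha from a (e.g., polychoric) correlation
    matrix: alpha = k * r_bar / (1 + (k - 1) * r_bar), where k is the number
    of items and r_bar the mean off-diagonal correlation."""
    k = corr.shape[0]
    r_bar = corr[~np.eye(k, dtype=bool)].mean()
    return k * r_bar / (1 + (k - 1) * r_bar)

# Toy 3-item correlation matrix (illustrative values only).
corr = np.array([
    [1.00, 0.60, 0.50],
    [0.60, 1.00, 0.55],
    [0.50, 0.55, 1.00],
])
print(round(alpha_from_corr(corr), 3))  # 0.786 for this toy matrix
```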

Table 2. Descriptive statistics of the individual items of the questionnaire (n = 337).

Exploratory factor analysis

An EFA based on polychoric correlations was performed, and the number of factors was not restricted. Initial communalities were squared multiple correlations, and the iteration process was stopped when the maximum change in communality fell below 0.0001, which occurred at the 29th iteration. The number of factors was estimated from the scree plot (see Figure 1) using commonly used methods such as the Kaiser-Guttman rule (Kaiser, 1960), Cattell’s scree test (Cattell, 1966) and the scree test acceleration factor (Raîche et al., 2013). In all cases, a five-factor structure accounting for 53.5% of the total variance was suggested (see Figure 1). Hence, we rotated five factors using the Oblimin method, which is more suitable for correlated factors.
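A hedged sketch of this extraction and rotation step, again using factor_analyzer in place of XLSTAT (note that factor_analyzer works from Pearson rather than polychoric correlations), might look as follows; efa_half is the half-sample from the preprocessing sketch above.

```python
from factor_analyzer import FactorAnalyzer

# Unrotated probe fit to obtain eigenvalues for the Kaiser-Guttman /
# scree-based decision on the number of factors.
fa_probe = FactorAnalyzer(rotation=None)
fa_probe.fit(efa_half)
eigenvalues, _ = fa_probe.get_eigenvalues()
print("factors with eigenvalue > 1:", (eigenvalues > 1).sum())

# Five-factor maximum likelihood solution with oblique (Oblimin) rotation.
fa = FactorAnalyzer(n_factors=5, rotation="oblimin", method="ml")
fa.fit(efa_half)
loadings = fa.loadings_                  # 40 x 5 matrix of factor loadings
communalities = fa.get_communalities()   # per-item sums of squared loadings
```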

Figure 1. Scree plot and cumulative variance from exploratory factor analysis.


Table 3 shows the factor loadings for the five-factor solution. The highest factor loading was higher than 0.45 in all cases, and the same is true for the communalities (the sums of squares of the five factor loadings). This suggests, in line with the recommendations of Comrey and Lee (1992), that all items fit well into the factor structure. There are some cross-loaded items (e.g., items 1, 8 and 16, with factor loadings higher than 0.4 on more than one factor), which were candidates for deletion in accordance with the recommendation of Worthington and Whittaker (2006); a simple screen for such items is sketched below. However, re-computation of the EFA without these items did not result in a clearer factor structure. Thus, we preferred to retain all 40 items and take the cross-loading into consideration in the interpretation of the results, in line with the recommendations of Hair et al. (2009). Based on a content analysis of the items contributing to the individual factors, they were named as follows: Social care (F1), Subjective quality of life (F2), Health care (F3), Quality of the environment (F4) and Ethics (F5). More detailed information about the individual domains and their hierarchy in the individual groups of stakeholders is presented in a separate paper (Šiška et al., 2021).
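The cross-loading screen mentioned above can be expressed in a few lines; in this hypothetical snippet, loadings is the 40 × 5 matrix from the EFA sketch and the item/factor labels are assumptions.

```python
import pandas as pd

# Label the loading matrix for readability (hypothetical item names).
load_df = pd.DataFrame(
    loadings,
    index=[f"item_{i}" for i in range(1, 41)],
    columns=["F1", "F2", "F3", "F4", "F5"],
)

# Flag items loading above 0.4 (in absolute value) on more than one factor,
# the deletion criterion attributed to Worthington and Whittaker (2006).
cross_loaded = load_df[(load_df.abs() > 0.4).sum(axis=1) > 1]
print(cross_loaded.index.tolist())
```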

Table 3. Factor loading and communalities for the individual items of the questionnaire (Oblimin rotation).

Confirmatory factor analysis

We conducted a CFA to determine whether the data fit the five-factor model derived from the EFA, with each variable linked to only one latent variable, corresponding to the factor with the highest factor loading (see Table 3). No competing models were tested because this is a newly developed questionnaire and there is currently no theoretical or empirical reason to expect that any other factor solution would be reasonable. The CFA was carried out using LISREL 9.2 with the maximum likelihood method. For simplicity, no error covariances between items were considered. Correlations between all factors were taken into account. The results of a CFA are typically interpreted using fit indices, and we followed the recommendation to report one absolute fit index, one relative fit index and one index expressing model parsimony (Hooper et al., 2008). The root-mean-square error of approximation (RMSEA) was selected from among the absolute indices; a smaller value indicates better data-model fit. The comparative fit index (CFI) was chosen from among the relative indices; it has a range of 0–1, with higher values corresponding to better data-model fit. The parsimony normed fit index (PNFI) was selected from the indices reflecting model parsimony; lower values correspond to a higher level of parsimony of the model. A sketch of this step follows this paragraph.
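For readers without access to LISREL, an analogous CFA can be specified in the open-source semopy package, as sketched below. The model description is truncated for space (each of the 40 items would load on exactly one of the five factors), the item names are hypothetical, and PNFI, which semopy does not report directly, is derived here from NFI and the model/baseline degrees of freedom under the usual definition, an assumption on our part.

```python
import semopy

# Truncated measurement model: extend with the remaining items and the
# factors F3-F5 so that every item loads on exactly one factor.
desc = """
F1 =~ item_1 + item_2 + item_3
F2 =~ item_4 + item_5 + item_6
"""

model = semopy.Model(desc)
model.fit(cfa_half)  # second half of the sample, from the earlier sketch

stats = semopy.calc_stats(model)
print(stats[["DoF", "chi2", "RMSEA", "CFI"]])

# PNFI = NFI * (df_model / df_baseline); semopy reports NFI and both dfs.
pnfi = (stats["NFI"] * stats["DoF"] / stats["DoF Baseline"]).iloc[0]
print(f"PNFI = {pnfi:.3f}")
```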

The results show that all the correlations among the latent variables were positive and significant, ranging from 0.45 to 0.91 (p < .001). Relatively high and significant (p < .001) standardized factor loadings, ranging from 0.42 to 0.85, were found for the individual items. The five-factor model had 90 estimated parameters (10 correlations between latent variables, 40 factor loadings and 40 error variances), while the saturated model would have 820 parameters (p(p + 1)/2 for p = 40 items). This corresponds to 820 − 90 = 730 degrees of freedom for the model. Chi-square was 1736.56 (p < .001), and RMSEA was 0.064 (95% confidence interval 0.057 to 0.071). CFI and PNFI were 0.873 and 0.578, respectively. These values of the fit indices suggest an acceptable fit (Hooper et al., 2008), and the CFA thus confirms the relevance of the five-factor model derived from the EFA. No hierarchical models involving second-order factors were assessed because of the nature of the questionnaire, which focuses exclusively on perceived importance (an overall factor for perceived importance would be meaningless, in contrast to an overall score for quality of life or quality of service, which is often reasonable to determine).

Internal consistency

The internal consistency of the scale, as an indicator of its reliability, was estimated by calculating Cronbach’s alpha for the whole sample (n = 671) and for the individual groups of stakeholders. Cronbach’s alpha was computed for the five factors produced by the factor analysis. As can be seen from Table 4, Cronbach’s alpha ranged from 0.86 to 0.97 across the individual domains for the group of service providers, from 0.82 to 0.95 for public administration, and from 0.71 to 0.82 across four of the five factors in the group of service users. Internal consistency was substantially lower (0.42) for this group on Factor 4, Quality of the environment. For the whole sample, Cronbach’s alpha ranged from 0.83 to 0.95, suggesting high internal consistency of the tool developed.

Table 4. Internal consistency (Cronbach´s alpha) for the individual factors of the questionnaire and groups of stakeholders.

Discussion and conclusions

This study takes an innovative approach to the development of a measure of the perceived quality of social care services. The findings are expected to initiate debate and possibly consensus on how service quality can be conceptualized and how it should be assessed. Such consensus would counter a frequent bias introduced by policy systems, which tend to prioritize the view of public administrators or service providers. The proposed instrument, however, raises central questions about how subjective quality should be assessed and measured. Mansell’s (2011) observational methodologies could inspire further development of our approach. He sets out the importance of such observational methodologies as a research tool for gathering information about people’s lived experiences, particularly in relation to people with severe and profound intellectual disabilities.

The focus of this paper has been on the first step of this process: the development of a measure of the perceived importance of the individual aspects (covered by the relevant questionnaire items) among different stakeholders (service providers, service users and public administration). The relationship between perceived importance and perceived quality of care (service satisfaction) is explained in detail in the Introduction. From the data obtained, we derived a five-factor questionnaire structure by EFA, confirmed it by CFA and demonstrated its high reliability in terms of the internal consistency of the individual scales.

The study had a number of limitations. Firstly, the test-retest reliability of the measure was not assessed, and therefore the stability of perceived importance over time has not been ascertained. This may be particularly relevant for the service user version, as the impact of cognitive difficulties on the stability of importance ratings is not known. Secondly, the group of service users consisted almost exclusively of users of residential social care settings, with only a few people receiving in-home services included. Perceived importance could potentially differ for those using residential social care settings (who also tended to be older) compared to those being supported in their own home. In addition, only the views of those who could complete the measure with trained support were obtained; as such, what is important to those with more severe cognitive and/or communication difficulties may not be captured. Relatively low internal consistency (α = 0.42) was observed in the group of service users in the Quality of the environment domain. This was partly due to lower values in the subset of respondents with mild intellectual disabilities (who completed the easy-read version), suggesting a potential problem with the understanding of some items. However, this lower value could also reflect the small number of items in this domain (only four) and the possibility that service users perceive differently the importance of items focusing on the quality of accommodation (in terms of privacy and the equipment of areas for activities) and items related to the quality of the additional services provided (such as laundry and catering). Finally, the response rate for service providers was relatively low (only approximately 22% of the 1,000 invited providers), which could bias the results in that those who participated were likely to be more interested in issues related to the quality of social care services.

Despite these limitations, this study delivers a tool that can now be used to develop a measure of service quality including elements such as user satisfaction. There are only a limited number of psychometrically sound instruments focusing on users’ satisfaction with their life and care services. Verdugo et al. (2010) developed and validated the 8-factor GENCAT Scale, with Cronbach’s alpha for the individual scales ranging from 0.47 (physical well-being) to 0.88 (self-determination). The development of a psychometrically sound importance measure increases the possibility of developing a robust measure of the quality of social services, including the satisfaction of users with the services they receive and the impact these have on their quality of life. In addition to the advantages of assessing importance and satisfaction separately, as noted in the Introduction, significant mutual coupling of importance and satisfaction and the existence of an importance-satisfaction discrepancy have been reported in previous studies. However, the relationship between these two constructs can vary under different conditions and situations. For example, Rohrer and Schmukle (2018) found that respondents who rated individual domains as more important also said they were more satisfied with these domains (significant correlations ranging from 0.21 to 0.45 were observed). On the other hand, Larsson et al. (2007) reported that patients with endocrine gastrointestinal tumors who assigned a higher importance than satisfaction rating to an aspect reported a significantly lower quality of life for the same aspect, and that the importance-satisfaction discrepancy was a valid indicator of patient distress.

This paper has focused on testing a measure of importance as the first stage of developing a measure of the quality of social care services. The findings from the measure, and a discussion of what was rated as important by the different groups, are presented in a separate paper (Šiška et al., 2021), together with a detailed account of differences based on age, gender and type of service. Only very small effects of these variables were observed, suggesting stable psychometric properties of the tool across different demographic groups. The next step is to develop and test a tool for the assessment of the quality of social services. The findings of this study make it possible to derive importance weights for the individual items easily, if importance weighting is to be included in the final tool focusing on service satisfaction. In such a scale, the domain scores would be computed as the sum of the products of the satisfaction points and the importance weights attributed to the individual items (a sketch follows this paragraph). Further research could also usefully test the importance ratings with more people living in community-based settings and potentially with other stakeholders, such as family members, especially with respect to those with more severe disabilities. Finally, it will be important to ensure that any measure of service quality is sensitive enough to be used to explore the situation of those who may not be able to complete a questionnaire even with support.
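As a forward-looking sketch of that proposed scoring (no such instrument has yet been implemented; all numbers and item groupings are illustrative assumptions), the weights for one domain could be derived by normalizing the mean importance ratings of its items and then applied to a respondent’s satisfaction points:

```python
import numpy as np

# Mean importance ratings of one domain's items (assumed values on the
# 1-5 scale used in this study), normalized to yield the domain's weights.
mean_importance = np.array([4.6, 4.1, 3.8])
weights = mean_importance / mean_importance.sum()

# A respondent's hypothetical 1-5 satisfaction points on the same items.
satisfaction = np.array([4, 5, 3])

# Domain score as the sum of products of satisfaction points and weights.
domain_score = float(satisfaction @ weights)
print(round(domain_score, 2))  # 4.02 for these illustrative values
```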

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

Data are available on request from the corresponding author of the manuscript.

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References

  • Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modelling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. https://doi.org/10.1037/0033-2909.103.3.411
  • Association of Social Service Providers. (2011). Working Version of Recommended Quality Standards for Residential Services. Czech Republic. Retrieved September 10, 2021, in Czech, from https://www.apsscr.cz/files/files/Doporu%C4%8Den%C3%BD%20standard_FINAL(2).pdf
  • Association of Social Service Providers. (2014). Quality mark: Overviews of the assessed areas for ambulatory services, care services, homes for the elderly. Retrieved September 10, 2021, in Czech, from http://www.znackakvality.info/manual-zq/prirucka
  • Beavers, A. S., Lounsbury, J. W., Richards, J. K., & Huck, S. W. (2013). Practical considerations for using exploratory factor analysis in educational research. Practical Assessment, Research, and Evaluation, 18(1), Article 6.
  • Blom, B., & Morén, S. (2012). Evaluation of quality in social-work practice. Nordic Journal of Social Research, 3(1), 71–87. https://doi.org/10.7577/njsr.2062
  • Brown, C. R. (2007). Where are the patients in the quality of health care? International Journal for Quality in Health Care, 19(3), 125–126. https://doi.org/10.1093/intqhc/mzm009
  • Cabrera-Nguyen, P. (2010). Author guidelines for reporting scale development and validation results in the journal of the society for social work and research. Journal of the Society for Social Work and Research, 1(2), 99–103. https://doi.org/10.5243/jsswr.2010.8
  • Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1(2), 245–276. https://doi.org/10.1207/s15327906mbr0102_10
  • Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Erlbaum.
  • Cramer, J. A., Perrine, K., Devinsky, O., Bryant‐Comstock, L., Meador, K., & Hermann, B. (1998). Development and cross‐cultural translations of a 31‐item quality of life in epilepsy inventory. Epilepsia, 39(1), 81–88. https://doi.org/10.1111/j.1528-1157.1998.tb01278.x
  • Cummins, R. A. (1997). Comprehensive quality of life scale – Adult: Manual. University Australia.
  • Czech Republic. (2006). Quality standards of social services, Annex 2, Social Services Act 108/2006.
  • Donabedian, A. (1966). Evaluating the quality of medical care. The Milbank Memorial Fund Quarterly, 44(3), 166–206. https://doi.org/10.2307/3348969
  • Dowling, M. (2008). Client empowerment and quality assurance. The Innovation Journal: The Public Sector Innovation Journal, 13(1), 3.
  • European Commission. (n.d.). Disability rights strategy for 2021-30. Retrieved August 11, 2021, from https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12603-Disability-rights-strategy-for-2021-30_en
  • Felgoise, S. H., Stewart, J. L., Bremer, B. A., Walsh, S. M., Bromberg, M. B., & Simmons, Z. (2009). The SEIQoL-DW for assessing quality of life in ALS: Strengths and limitations. Amyotrophic Lateral Sclerosis, 10(5–6), 456–462. https://doi.org/10.3109/17482960802444840
  • Fleury-Bahi, G., Marcouyeux, A., Préau, M., & Annabi-Attia, T. (2013). Development and validation of an environmental quality of life scale: Study of a French sample. Social Indicators Research, 113(3), 903–913. https://doi.org/10.1007/s11205-012-0119-4
  • Fraser, M. W., & Wu, S. (2016). Measures of consumer satisfaction in social welfare and behavioral health: A systematic review. Research on Social Work Practice, 26(7), 762–776. https://doi.org/10.1177/1049731514564990
  • Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2009). Multivariate data analysis (7th ed.). Pearson Prentice Hall.
  • Hoffmann, F., & Leichsenring, K. (2010). Quality management by result-oriented indicators: Towards benchmarking in residential care for older people. European Centre for Social Welfare Policy and Research.
  • Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modelling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53–60.
  • Hsieh, C. M. (2012). Incorporating perceived importance of service elements into client satisfaction measures. Research on Social Work Practice, 22(1), 93–99. https://doi.org/10.1177/1049731511416826
  • Hsieh, C. M. (2013). Issues in evaluating importance weighting in quality of life measures. Social Indicators Research, 110(2), 681–693. https://doi.org/10.1007/s11205-011-9951-1
  • Hsieh, C. M. (2017). A client satisfaction measure of homecare services for older adults. Journal of Social Service Research, 43(4), 487–497. https://doi.org/10.1080/01488376.2017.1307308
  • Hsieh, C.-M. (2018). Importance weighting in client satisfaction measures: Lessons from the life satisfaction literature. Social Indicators Research, 138(1), 45–60. https://doi.org/10.1007/s11205-017-1664-7
  • Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20(1), 141–151. https://doi.org/10.1177/001316446002000116
  • Larsson, G., von Essen, L., & Sjödén, P. O. (2007). Are importance–satisfaction discrepancies with regard to ratings of specific health‐related quality‐of‐life aspects valid indicators of disease‐and treatment‐related distress among patients with endocrine gastrointestinal tumours? European Journal of Cancer Care, 16(6), 493–499. https://doi.org/10.1111/j.1365-2354.2007.00781.x
  • Lindner, P., Andersson, G., Öst, L.-G., & Carlbring, P. (2013). Validation of the Internet-administered quality of life inventory (QOLI) in different psychiatric conditions. Cognitive Behaviour Therapy, 42(4), 315–327. https://doi.org/10.1080/16506073.2013.806584
  • Lins, L., & Carvalho, F. M. (2016). SF-36 total score as a single measure of health-related quality of life: Scoping review. SAGE Open Medicine, 4, 2050312116671725. https://doi.org/10.1177/2050312116671725
  • Locke, E. A. (1976). The nature and causes of job satisfaction. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 1297–1349). Rand McNally.
  • Malley, J., & Fernández, J. L. (2010). Measuring quality in social care services: Theory and practice. Annals of Public and Cooperative Economics, 81(4), 559–582. https://doi.org/10.1111/j.1467-8292.2010.00422.x
  • Mansell, J. (2011). Structured observational research in services for people with learning disabilities. NIHR School for Social Care Research. Retrieved January 20, 2022, from https://www.sscr.nihr.ac.uk/wp-content/uploads/SSCR-methods-review_MR010.pdf
  • McFarlin, D. B., Coster, E. A., Rice, R. W., & Cooper, A. T. (1995). Facet importance and job satisfaction: Another look at the range-of-affect hypothesis. Basic and Applied Social Psychology, 16(4), 489–502.
  • McGlynn, E. A. (1997). Six challenges in measuring the quality of health care. Health Affairs, 16(3), 7–21.
  • Pittam, G., Dent, M., Hussain, N., Griffin, M., Hovard, L., & Blackwood, R. (2015). A multi method study to inform the development of qualitywatch: Consensus on quality. Solutions for Public Health.
  • Raîche, G., Walls, T. A., Magis, D., Riopel, M., & Blais, J. G. (2013). Non-graphical solutions for Cattell’s scree test. Methodology, 9(1), 23–29. https://doi.org/10.1027/1614-2241/a000051
  • Rohrer, J. M., & Schmukle, S. C. (2018). Individual importance weighting of domain satisfaction ratings does not increase validity. Collabra Psychology, 4(1), 6.
  • Sajobi, T. T., Fiest, K. M., & Wiebe, S. (2014). Changes in quality of life after epilepsy surgery: The role of reprioritization response shift. Epilepsia, 55(9), 1331–1338.
  • Saris-Baglama, R. N., Dewey, C. J., Chisholm, G. B., Plumb, E., King, J., Kosinski, M., Bjorner, J. B., and Ware, J. E. (2010). QualityMetric health outcomes™ scoring software 4.0. QualityMetric Incorporated.
  • Šiška, J., & Beadle-Brown, J. (2022). Innovative frameworks for measuring the quality of services for persons with disabilities (in print). European Association of Service Providers for Persons with Disabilities (EASPD).
  • Šiška, J., Čáslava, P., Kohout, J., Beadle-Brown, J., Truhlářová, Z., & Holečková, M. K. (2021). What matters while assessing quality of social services? Stakeholders’ perspective in Czechia. European Journal of Social Work, 24(5), 864–883.
  • The Social Protection Committee. (2011). A voluntary European quality framework for social services, SPC/2010/10/8. Retrieved November 18, 2020, from https://ec.europa.eu/social/main.jsp?catId=1169&langId=en
  • Sterne, J., White, I., Carlin, J., Spratt, M., Royston, P., Kenward, M., & Carpenter, J. (2009). Multiple imputation for missing data in epidemiological and clinical research: Potential and pitfalls. BMJ: British Medical Journal, 339(7713), 157–160.
  • Trauer, T., & Mackinnon, A. (2001). Why are we weighting? The role of importance ratings in quality of life measurement. Quality of Life Research, 10(7), 579–585. https://doi.org/10.1023/A:1013159414364
  • Verdugo, M. Á., Arias, B., Gómez, L. E., & Schalock, R. L. (2010). Development of an objective instrument to assess quality of life in social services: Reliability and validity in Spain. International Journal of Clinical and Health Psychology, 10(1), 105–123.
  • Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34(6), 806–838. https://doi.org/10.1177/0011000006288127
  • Wu, C. H. (2008). Can we weight satisfaction score with importance ranks across life domains? Social Indicators Research, 86(3), 469–480. https://doi.org/10.1007/s11205-007-9180-9
  • Wu, C. H. (2009). Weight? Wait! Importance weighting of satisfaction scores in quality of life assessment. In L. B. Palcroft & M. V. Lopez (Eds.), Personality assessment: New research (pp. 109–139). Nova Science Publishing.