Research Article

Open comparisons of social services in Sweden—Why, how, and for what?

Article: 1404735 | Received 25 Apr 2017, Accepted 03 Nov 2017, Published online: 21 Nov 2017

Abstract

Open Comparisons is a soft power governance method that uses data collected from the municipalities to benchmark their activities. Open Comparisons in Sweden may serve different national goals, such as measuring quality, supporting policy development and promoting democratic openness. The overall aim of this paper is to describe and analyze the use of Open Comparisons to monitor the social welfare services—social assistance, child and youth care, and treatment of alcohol and drug abuse. Four focus group interviews were carried out with professionals from eight municipalities. Results show that Open Comparisons measures the technical quality of activities by reviewing what is available, such as routines and agreements, but says little about how effects trickle down to the client level. The findings are discussed in the wider context of the overall goals of Open Comparisons, related to soft power governance and governmentality, including how the social services may internalize this way of thinking.

Public Interest Statement

Open Comparisons is a soft power governance method that uses data collected from the municipalities to benchmark their activities. The overall aim of this paper from Sweden is to describe and analyze the use of Open Comparisons to monitor the social welfare services—social assistance, child and youth care, and treatment of alcohol and drug abuse. Four focus group interviews were carried out with professionals from eight municipalities. Results show that Open Comparisons is not a simple tool for quality assessment in the social services. It measures the technical quality of activities by reviewing what is available, such as routines and agreements, but says little about how effects trickle down to the client level. By understanding what kind of results one can expect from Open Comparisons, it becomes easier for the municipalities to set realistic goals for what can be achieved through this method.

1. Open Comparisons—A new tool for evaluation and steering

New requirements constantly arise for the monitoring and evaluation of public sector activities. A relatively new form of benchmarking is Open Comparisons, also known as public comparisons, which is now becoming something of an international trend. Within the OECD a number of reports have already appeared (see, for example, Society at a Glance, 2014), and in the European Union (EU) the so-called Open Method of Coordination (OMC) has been launched, which aims to help countries see and learn from each other through systematic comparisons of what works best. The idea is that countries will be able to improve their social policies by formulating goals, comparing results and adopting favorable solutions (see, e.g. Berg & Spehar, 2011).

Open Comparisons is not just an evaluation tool, however, but also a means of control. This has attracted considerable interest from political scientists, because Open Comparisons is a soft form of administrative control, unlike, for example, legislation or regulations (see, e.g. de la Porte & Pochet, 2012), which means that it can be introduced and implemented in countries with very different traditions and political systems. In recent years, the OMC has increasingly been incorporated into the EU’s overall strategy Europe 2020, including the platform against poverty and social exclusion (European Commission, 2016).

In political science research, soft power is a means of governing without coercion: the key word is co-opting rather than commanding. The concept of soft power was originally used internationally as a means for one country to influence others; this way of thinking goes back to Joseph Nye (e.g. Nye, 1990; further elaborated in later publications). Other scholars, such as Michel Foucault, have been interested in how governments can influence citizens’ thinking in a disciplining process without using violence, sometimes referred to as governmentality (Foucault, 1991).

In Sweden, Open Comparisons is used at national level for local government evaluation, and as this monitoring is basically intended for administrative purposes there is as yet little research on it. This study examines how Open Comparisons is perceived and received by social services staff in a number of Swedish municipalities. In the discussion section the findings are placed in the broader context of overall goals and soft power governance.

1.1. The Swedish administrative structure

A brief description of the relevant Swedish administrative structure is in order here. At national level is the Ministry of Social Affairs, with much of its executive powers performed through Socialstyrelsen (the National Board of Health and Welfare—NBHW).

The other major organization in this context is the Swedish Association of Local Authorities and Regions (SALAR), which is both an employers’ organization and an advocate for local government. As presented on its website, “the mission is to provide municipalities, county councils and regions with better conditions for local and regional self-government. The vision is to develop the welfare system and its services” (http://skl.se/tjanster/englishpages.411.html). There are 290 municipalities and 20 counties/regions in Sweden (2016).

1.1.1. Open Comparisons in Sweden

The first Open Comparisons report on Swedish conditions was published by the NBHW in June 2006 and concerned the quality of health care. But there is a pre-history to Open Comparisons: SALAR’s predecessor, Svenska kommunförbundet (the Swedish Association of Local Authorities), ran several projects comparing expenses between municipalities in the early 1990s. An important difference, however, is that Open Comparisons is an unambiguously top-down approach to monitoring, intended to steer the municipalities. What this means is that problems identified at national level may not necessarily correspond to local needs.

Open Comparisons in health care is largely based on health care quality registers—records previously available only to the profession. The new use of these registers means that they may be used differently from how they were originally intended (see, e.g. Blomgren & Waks, 2011). For example, registers developed for certain types of administrative analyses are now used as a tool for ranking counties (for a more in-depth debate on new or expanded uses of health registers see, e.g. Järholt, Engström, & Lindström, 2008). Open Comparisons in the social services context is mainly based on NBHW questionnaires to the municipalities.

1.1.2. Open Comparisons in the municipalities—A method that tends to swell

According to SALAR’s website (November 2015) Open Comparisons is performed in the following areas:

Business environment.

Health care: public health, health care, pharmaceuticals, oncology.

Planning and security: public transportation; environment, energy and climate; safety and security.

Schools: primary, secondary.

Social services: crime, economic assistance, homelessness, abuse and dependency care, child welfare, elderly and disability care.

SALAR has also introduced Open Comparisons in the labor sector on its own initiative. A wide range of important public services is thus involved.

1.2. Motives for Open Comparisons

Various arguments have been put forward in current discourses for making use of Open Comparisons. There is no explicit definition of Open Comparisons, but at least four different motives can be found on the NBHW and SALAR websites. These motives are: to

promote democracy

measure quality

aid policy development

achieve an evidence-based practice

1.2.1. Democracy

The NBHW writes that with Open Comparisons, “you can compare the quality of social services and health care in Sweden …” (http://www.socialstyrelsen.se/oppnajamforelser). Here there is direct address to the citizen, “you can compare the quality …”, a formulation intended to encourage citizens to be more active.

1.2.2. Quality measurement and policy development

In its introductory statement on Open Comparisons SALAR writes that its purpose is to encourage counties and municipalities to analyze their operations, learn from each other, and improve standards of quality and efficiency. The NBHW has also listed the goals the authority sees as most important: to

create and improve transparency in publicly funded health care

provide a basis for improvement, monitoring, analysis and learning activities

initiate local, regional and national analyses and discussions on the quality of activities and efficiency

provide a basis for management and control

be part of the information and evidence on health and other care options.

1.2.3. Open Comparisons—For an evidence-based practice

In recent years a discussion has emerged dealing with the knowledge management of social services (see, for example, Alm, 2015; NBHW – National Board of Health and Welfare, 2015). The core of the discussion is about increasingly providing an evidence-based practice through the dissemination and application of quality-assured knowledge. The spread of knowledge is presented as an implementation chain in which superordinate bodies provide support and guidance to the organizations, mainly local governments and their activities, that are responsible for the actual implementation. The National Liaison for Knowledge Management in the Social Services (NSK-S) believes that knowledge management is the system needed to achieve an evidence-based practice—that is, one where individual interventions on behalf of clients (service users) are based on the professional’s appraisal of the best available scientific knowledge, personal experience, the user’s experience and wishes, and the surrounding context (SALAR, 2014). The collaboration encompasses efforts at national, regional and local level to achieve this. At the local level, decision-making can be based on statistics, registry data, Open Comparisons (author’s italics) and monitoring. In this document, Open Comparisons is thus inserted into a strategy for achieving an evidence-based practice.

1.2.4. Reporting of Open Comparisons

The results of Open Comparisons can be presented in different ways. In addition to the figures, the NBHW produces compilations in which the traffic light colors red-yellow-green are used to highlight the good examples (“name”) or to emphasize the bad ones (“shame”). Presenting color schemes that can easily be “ranked” also increases media interest (see, e.g. SALAR, 2014). However, there are no formal sanctions against municipalities and county councils whose results show many red fields.

1.3. The study’s framework

1.3.1. Aim of the research

Overall, there are different kinds of expectations linked to Open Comparisons. It should promote democracy, improve quality, contribute to social services development and provide the basis for an evidence-based practice in social work. How is this vision perceived at the local level?

The overall aim of this article is to describe and analyze the use of Open Comparisons with examples from three fields of social welfare—social assistance (means-tested economic assistance), child and youth care, and treatment of alcohol and drug abuse. The emphasis is on social services professionals’ reasoning about, and attitudes towards, Open Comparisons while trying to implement it.

Research questions:

How do professionals judge Open Comparisons and what role do they believe that Open Comparisons plays in the local development of social services?

How do professionals view the opportunities to use the Open Comparisons results to achieve concrete effects in their own administration?

Does the use of Open Comparisons fulfill the objectives to promote democracy, improve quality, contribute to social services development and provide the basis for an evidence-based practice in the municipalities involved?

“Professionals” here refers primarily to the personnel responsible for completing the NBHW surveys in the municipalities.

1.3.2. Methodology and material

The study, which emanated from a development project about Open Comparisons in the social services, was carried out in eight municipalities of the greater Stockholm area in 2015–2016. The project was led by FoU-Nordväst, a regional social services R & D unit owned by the municipalities involved, and its goal was to improve the use of Open Comparisons and to find common issues to continue working with in all eight municipalities.

The Open Comparisons model is structured as an annual cycle of three meetings arranged by FoU-Nordväst. At the first meeting, the municipal staff responding to the NBHW survey come together to agree on how the indicator questionnaires should be interpreted, the purpose being to increase the quality and comparability of the answers. The NBHW provides information about the indicators, and there is also a library of indicators available. A question might concern the number of children living in families with long-term social assistance. The indicator library specifies that this indicator is important for understanding the child perspective, and also offers technical information defining, e.g. “child” (below 18 years) and “long-term support” (10–12 months during the year), etc.

The second meeting is held when the NBHW has compiled the municipal data and presented the results. These results are reviewed and a first round of discussions concerning possible areas of improvement/development takes place. During the third meeting the ambition of the municipalities is to suggest, based on the results, common goals that they can continue to develop together. This is then decided upon by the municipal heads of social services.

The role of the researcher was to follow the work done in the municipalities and to formulate research questions. Besides attending the meetings of municipal staff, basically as a participatory observer, the author conducted focus group interviews with the group members when the formal meetings ended.

The study was performed according to the schedule below.

Focus group interviews were carried out in connection with the interpretation of indicators and the completion of questionnaires (only for social assistance, as shown in Table 1), and at the third meetings, which is when the municipalities discuss how the Open Comparisons results can be used. The first interview, with social assistance staff, lasted for one hour. Recording time for the other three interviews was shorter, from 33 to 48 min. Seven to nine persons from the municipalities participated in the focus groups. On one occasion one person from one municipality was missing, and once there were two persons from the same municipality. There are more quotes regarding social assistance, since there were two interviews and some questions were only addressed to the focus group on social assistance. When this project started, the questionnaires regarding child and youth care as well as treatment of alcohol and drug abuse had already been completed for the year. Results are shown for all three social service fields, but only for social assistance has it been possible, given the project timelines, to follow the process from securing data to the discussion of what the results might lead to. The labels “social assistance”, “child and youth”, and “abuse” are used to show which focus group quotes come from. The first focus group interview was conducted in April 2015 and the last in late February 2016. The interviews were recorded and most of the material has been transcribed.

Table 1. Schematic overview of meetings and interviews

1.3.3. The focus group method

A focus group interview is typically a more or less structured group discussion of predetermined themes. It is headed by the researcher, who acts as moderator, but group members can discuss quite freely. In the interviews conducted, the intention was to let the participants discuss with each other. They also had the opportunity of introducing new themes. My role as a researcher was to open the discussion and sometimes to guide the discussion back to focus on Open Comparisons as well as to elicit the participants’ own views and experiences.

The focus group method is a means of generating data. What knowledge is produced will depend on who the participants are, what they know and want, how much they choose to contribute, how they interact in the group, etc. Each group is thus unique. During the conversation a way of talking about the phenomenon under review, in this case Open Comparisons, develops. The questions asked are shown in Appendix 1. The processing of answers was done in two steps. First, the number of questions to present was reduced and related questions were merged. For example, there were two questions about quality, but in the results they are presented under the same heading; similarly, responses to the question about hopes and fears were merged with responses to a question about merits and drawbacks and presented together. These responses are presented as themes in the results section. Secondly, the information was further elaborated and interpreted in response to the research questions (Appendix 2).

1.3.4. Ethical considerations

The recordings of the interviews were conducted with informed consent. To make the interview discussions as open and tolerant as possible, all participants are anonymized and there is no disclosure of how the representatives of the different municipalities responded.

2. Results

In this compilation the discussions of the focus groups on social assistance, child and youth care, and treatment of alcohol and drug abuse are reported. All eight municipalities participated on at least one occasion. One idea was that the same personnel, for example the division head responsible for social assistance in the municipality, would attend. However, this was not always the case, since attendance depended on who in fact completed the questionnaires; sometimes the same person worked with Open Comparisons in all three social service sectors.

2.1. What does quality mean in Open Comparisons?

One question asked of the focus group participants was: What is meant by quality in work with social assistance? A wide range of viewpoints emerged:

… if I care about the person, the person’s quality of life, [quality] is how rapidly I can help this person to be able to support her-/himself … to assist the person to attain health care, get the right benefits.

Quality was also considered to be a matter of making the right judgment about each individual based on their specific circumstances—in other words, of defining the right requirements for that person: how quickly can this person be helped to self-sufficiency, based on a correct assessment?

That you make the right judgment based on the individual circumstances – How much can you demand of a person who is mentally ill?

The lively discussion about quality pointed roughly in two different directions, where one was focused on results. Examples of answers were

[Quality for me is] that the individual comes to self-sufficiency as soon as possible.

To help individuals gain self-sufficiency through work, studies or other compensation - quality is to succeed with that.

The second direction rather emphasized how to proceed.

Quality is what we do. The road to getting there is quality.

(Speediness could be connected to both directions.) A question in Open Comparisons about staff workload was discussed as a measure of quality, since generally speaking it is easier to do a good job if you do not have too many clients. There was a long discussion about the number of cases per officer and how payouts are made. The differences between the municipalities were very large, with figures of 30 to 60 cases per officer mentioned. In some municipalities social workers do everything themselves, while other municipalities have assistants who also do calculations and payments. Some municipalities also had additional resources, such as housing support and rehabilitation officers. Overall, this demonstrated that it is very difficult to compare staff workload in any simple way through Open Comparisons.

An additional measure related to quality was the extent to which municipalities have to correct cases appealed in court. If the percentage is high, this may say something about how the municipality operates. But even this measure is difficult to interpret in isolation: it says nothing about the extent to which municipalities help clients to appeal or how decisions are justified. Or, as one of the participating social service professionals put it:

Number of decisions, refusals, how many go into appeal? How many convictions are changed? There are many parts. Simply measuring the number of convictions is pretty useless, an isolated part. (social assistance)

There was no direct question about “quality” put to the other focus groups, but different perspectives on quality issues still came up. The overall view was that the quality of the work done is not shown in Open Comparisons, and participants emphasized that consultation with the client is missing. When asked whether there are areas that are not made visible through Open Comparisons, the replies often emphasized that daily developments are not visible and that too few quality aspects are covered.

The focus is on documentation, what papers you have, but nothing on the results. Do we make any difference, are we doing any good? [There is] no connection to how it works in practice. (child and youth)

2.2. How has Open Comparisons been used so far?

All responses are from the focus group on social assistance. The responses indicated very large variations in local government practices. Two voices can illustrate this:

My former boss loved Open Comparisons, so Open Comparisons I know well without having participated myself.

I have never looked at it.

One participant pointed out that in her experience, it was the politicians who decided that they would work with Open Comparisons and here one could sense some resistance against how Open Comparisons is used:

I’m anxious to make it good for the clients; they [the politicians] are keener that things should look good.

The responses indicate that the interest both in participating and in making use of Open Comparisons varied widely between municipalities, at least as expressed by the focus group participants. Two different perspectives reflect how the data are used. One emphasizes precisely the importance of looking good compared to other municipalities,

I have not used it, but my director used it to talk about how someone else is better or that we are better, in no other way.

The other perspective was that results can be used specifically to highlight and make improvements; that they can serve as eye openers. They can also be used to follow developments over time.

You can see how you improve yourself if you compare through the years.

Quite a number of comments, however, suggest that it is difficult to use the results of Open Comparisons in practice, for example because questions are interpreted differently. Another problem highlighted is that the results are sometimes too abstract for daily use; several persons mention that results are on a very general level and therefore difficult to apply.

2.3. What are the merits and drawbacks of Open Comparisons compared with the data that the municipalities usually use for development?

Some respondents told of recurrent issues that they would have addressed anyhow in their regular planning. Another, more positive, formulation of the same message was that although Open Comparisons did not reveal new issues to be tackled, the discussions it prompted nevertheless led to consensus on how to develop the work. It was also highlighted that Open Comparisons tends to lead to a more systematic treatment of issues.

Open comparisons puts the finger on some areas of improvement [and that] there might be areas of development we should put a spotlight on. (child and youth)

There are also examples of issues raised in the questionnaires that had not previously been considered in the municipalities, for example the importance of functioning dental care and the routines around it. In some cases, participants gave examples of a more comprehensive approach:

We can see what deficiencies our administration has if we add up all the other fields [of Open Comparisons]. (child and youth)

The difference between Open Comparisons and other planning instruments, respondents said, is precisely the comparative aspect, though they also pointed out that in practice it is difficult to make comparisons.

At the moment it is like comparing apples and pears because the municipalities are so differently organized. (social assistance)

Among the drawbacks mentioned are important questions that are not captured by Open Comparisons, such as the recruitment of competent social workers, which is quite a big issue but one on which there are no items in the questionnaire.

It may not appear in the Open Comparisons that we find it difficult to recruit skilled staff. (abuse)

2.4. How does Open Comparisons influence the work with clients?

When asked whether their clients derived any benefit from Open Comparisons, the participants merely said that the various agreements between authorities and other routines might prevent people from falling between the cracks.

It could be an eye-opener … when you sit and moan about this – shit! Perhaps you should do so, so it may be a reminder … (social assistance)

However, several people did not believe that clients would be able to see any direct “benefit” from Open Comparisons. The following quotations may illustrate this:

I do not think our clients care if we have procedures or not … that is for our internal job, they just want it to work. (child and youth)

Difficult to see that this could be client oriented … it is right unwieldy … there are very many issues, some on a very broad level, some on a level of more detail … not user friendly if you are not familiar … (social assistance)

Citizens cannot make use of it. Our clients must adhere to the municipality [where they live]. (social assistance)

2.5. Presentation with red and green fields

Municipality responses are color coded as red or green (in some other Open Comparisons, results can also be reported as yellow). The participants mentioned difficulties in completing the forms. If the question is “do you always …” and the municipality normally has a routine but occasionally cannot follow it—should they answer “yes” or “no”? This might affect the outcome, and here the participants seemed to reason differently.

There is constant assessment of the questions. We have been stricter – when it says “always” we have followed it.

When it came [i.e. when Open Comparisons was introduced] it was a crisis … until you realized that this was about areas of improvement … that results would become clear. (abuse)

Others argued that the presentation in red and green fields is problematic in itself—then the one is right and the other wrong.

All strive to get as many green fields as possible to make it look a little good. (social assistance)

Other voices expressed much the same thing:

You think that green is good and red is bad. You get red if you answer no, green if you answer yes … it is intellectually dishonest … The dimensions actually say nothing about the actual quality. (social assistance)

It’s a way to get it clear … Then there are huge numbers, a cemetery of figures so I don’t know if it really becomes clear. (social assistance)

In the discussion it was said that yes, the presentation is clear, but every time you have to explain to the politicians and the media that the results say nothing about the quality.

Participants also mentioned basic methodological flaws and shortcomings that cannot be ignored.

[It is] a simple truth, that is why they use signal colors, for it to be a simple truth. There must be further analysis of the numbers before we can draw conclusions, so therefore I am not completely keen on this use of color, I must say. (social assistance)

It is easy to follow, but it does not say everything. (social assistance)

[Open Comparisons] comes every year, you do not have time to start working with the results, there is so much to work with, wide fields … it will take time to implement, [to] change anything in the municipalities takes time. (abuse)

At the same time it was mentioned that results can be presented to the politicians in different ways (“now we choose to work with this …”). Other voices said that Open Comparisons can be used strategically:

They want to have more green fields; one might have a discussion, pursue certain issues. (social assistance)

When one person points out that red does not necessarily mean panic, another clarifies that the important thing is

how we report to politicians. We report to our politicians in writing [information about what, for example, red stands for] (child and youth).

It must be clear. It may be red, but this may just mean we have chosen to prioritize other things. (child and youth)

2.6. Is there a clear direction/ideology in what is being asked?

The most common type of objection was that Open Comparisons is time-consuming, bureaucratic and overly detailed.

You fill in the open comparisons because you feel you have to; because it is an edict from above. (social assistance)

Another perspective that some thought emerged in Open Comparisons is how the municipalities should relate to other actors.

That we should interact – interactive thinking – so there is one form of control. (child and youth)

Participants in the focus groups also noticed that the issues differ between the various Open Comparisons areas. In some fields the NBHW asks about the amount of staff training and experience—for staff working with children and youth, but not for staff working with abuse. As one participant put it, the staff [in the field of abuse] “is made invisible”.

Some clearly perceive steering by the NBHW, for instance in the way new questions are successively added.

It controls your development; you would of course like to receive praise. For each new area they put in it starts very red, then it gets greener and greener. (child and youth)

Even if just eight percent responded in a certain way, the question remains. (The participants mentioned a few agreements with different actors.)

They will keep on with the questions that are red until they turn green! (abuse)

The NBHW – even if it is mild governing – is for our monitoring and control, and it is clear that we see it this way, when it comes from there. (social assistance)

2.7. Development issues

The work with Open Comparisons also aimed to arrive at joint development issues for the three social service fields in the eight municipalities. According to the working model for Open Comparisons, the municipal social services managers consider and decide upon proposals from the working groups for joint development goals. At the third meeting (see the introduction) it was decided which topics the municipal officials wanted to work on together for presentation to the social services managers.

In the focus groups, the participants were asked directly whether Open Comparisons had led to new development areas. Participants in all groups had difficulty identifying such areas. The municipalities have identified certain issues and certain procedures, such as investigation times and clients’ waiting times, but it has been difficult to give concrete examples of new development areas. One person said, for example, that Open Comparisons has not had any significant impact on the actual work with social assistance: it is not [the results from] Open Comparisons that are the source of development.

The joint development goals from the three social service sectors finally presented for the managers were:

Child and youth care: Review of an introductory program, a basic course for newly recruited staff. This was justified on the basis of a general need for trained personnel and was not a direct result of Open Comparisons responses.

Treatment of alcohol and drug abuse: Participants agreed on a plan to increase the use of the monitoring and evaluation instrument, Addiction Severity Index, on a limited group (to begin with) of externally placed clients. This was also an issue that was previously discussed among social workers in the field of addiction, but there was also some indication from the Open Comparisons responses that showed the need for systematic monitoring.

Social assistance: Finding out more about clients with health difficulties; and, finding internal routines for work with children and young people among social assistance recipients. The first question was motivated by a “general interest” while the question of routines with children and young people was more obviously based on the results of the Open Comparisons.

Overall, it was possible to find common areas of development in all three social service areas. Some of these can be attributed specifically to the Open Comparisons results, while others rather respond to questions and needs already known in the municipalities. Since the municipal civil servants have been ordered by their bosses, the social services managers, to work with Open Comparisons, this has become the reason to meet and has resulted in the above suggestions. The following comment summarizes the process:

Open Comparisons is not client-related, I don’t think. It is an assessment instrument; there are routines and so on. But the fact that we have met like this because of Open Comparisons has led to discussions that have development potential.

3. Discussion and conclusions

The overall aim of this article is to describe and analyze the use of Open Comparisons with examples from the three social services fields—social assistance, child and youth care, and treatment of alcohol and drug abuse. The issues raised in this article are how the professionals in the municipalities have received and view Open Comparisons, the role it plays for social services development and what results may be used in their own local administrations.

According to central authorities in Sweden, Open Comparisons should give more influence to users and citizens, help to ensure/improve the quality of activities and highlight development needs, contribute to policy development and support an evidence-based practice of social services. Headings under these goals are democracy, quality, policy and evidence.

3.1. Methodological considerations

It is obvious that the exchange of experience between municipalities in the three social service fields has been important. Several of the participants in all the focus groups testify that they now have a greater awareness of the indicators’ significance and the importance of the municipalities agreeing on how to complete the NBHW surveys. Meeting with colleagues and exchanging experiences is much appreciated.

It should be emphasized that this is the first time these municipalities have collaborated jointly and systematically on these issues. Therefore, there was initially some uncertainty among participants about the point of it all and what it would entail. The joint work with Open Comparisons—and hence this research project—started in the spring of 2015. This meant that that year’s edition of the Open Comparisons questionnaire had already been completed in the fields of child and youth care and treatment of alcohol and drug abuse. Only in the field of social assistance was it possible to interpret the indicators together before the questionnaires were completed.

When designing the study, it was expected that the municipalities would be represented by department heads or heads of service units, since they would be best suited to answering the questionnaires, but the municipalities chose different solutions. Sometimes it was the managers or unit heads, sometimes a controller and sometimes a person with some other function. The focus groups were therefore somewhat more heterogeneous than expected. What this means for the end result is difficult to say for sure, but group composition matters for how the discussion in a focus group develops. Every focus group produces a small story in itself, a small discourse about what Open Comparisons is and how responses are taken care of in the municipalities. The composition of the groups, however, reflects the actual situation of the persons who at the time worked with these issues, and their responses should thus be representative of the municipalities that participated. Most likely the understanding of and objections to Open Comparisons, as well as the ambivalent feelings about the coloring of the results, would be the same in other municipalities. (It is beyond the scope of this article to discuss the use of indicators per se.)

The research questions are discussed below. The findings about how professionals judge Open Comparisons and how they consider its usefulness are quite “hands on”. The question about overall goals of Open Comparisons is discussed in a wider context of soft power governance.

3.2. How do professionals judge Open Comparisons and what role do they believe that Open Comparisons plays in local development?

In SALAR’s manual (Handbok 2013), there are different types of indicators—structural, process and outcome indicators. This division has not been so clear in the focus groups; discussions have mainly focused on how Open Comparisons particularly captures technical quality. This means above all the various formalities, agreements and documentation. A merit of written procedures is that responsibility rests with the organization and not the individual social worker. It is surely quite valuable, for instance, to have written procedures in municipalities with high staff turnover. Various agreements and practices can also benefit the clients, among other things by preventing people from falling through the cracks. But it is not obvious that the existence of written procedures measures the quality of the help that clients actually get. That a written routine exists does not necessarily mean that it is particularly functional or even that it is used. On the other hand, if it does not exist you have no chance of doing anything about it.

A very relevant question is whether it is at all possible to construct indicators that measure not only what is done, but also what the clients actually receive. The question is important because it indicates which goals are reachable through Open Comparisons and what cannot be measured with this method. In the focus groups there was some frustration that the indicators do not measure the qualitative aspects of the social work carried out. An example of a focus group comment: [Quality is] to make the right assessment based on the individual’s particular circumstances; how much can you demand of a person who is mentally ill? It is a set of skills, the social work professional’s competence to meet and respond to clients, that is not shown in Open Comparisons. Open Comparisons does not appear to be a particularly good tool for measuring client relationships, and it is also difficult to measure the result for the clients (see also Carlstedt, Citation2015; Lindgren, Citation2014). Lindgren (Citation2008, p. 110) notes that the professional abilities that are important in social work, such as “tacit knowledge”, the ability to see the context and to read an emotional climate, lose importance.

One issue that consistently arose concerned the color scheme of the presentation of the answers. It is arguably better to have a large number of green fields than many red ones. However, not all issues are equally important, and it may be more useful to follow the development over a longer period of time rather than a single year. The color scheme is clear but at the same time problematic. An advantage is that it makes highly visible the areas where improvement is clearly needed. The problem is that the colors are so strongly interpreted as a measure of quality, and also that they sometimes are influenced by quite small changes (cf. Kajonius & Kazemi, Citation2014). Societal developments also play a role in a way that affects the pre-conditions for responding. The fact that there are many private schools and that health care providers in many parts of the country are largely privatized means that there are very many actors to keep track of. The NBHW wants municipalities to have written procedures for schools and primary care, but it may in practice be very difficult to get the various agreements and collaborations in place. It may simply be difficult to qualify for green on the survey questions. (One of the participants in a focus group wondered: why not use the colors blue and yellow instead?) There is also a risk that the color scheme contributes to an involuntary race where the focus is on hierarchies and competition. It would thus be possible to say that municipality x has a shorter office turnaround time than municipality y, but not what a reasonable office turnaround time is from a client perspective.

3.3. How do professionals view the opportunities to turn Open Comparisons results into concrete effects in their own administration?

Local government representatives in the focus groups had some difficulty clarifying what separates Open Comparisons from other measuring instruments. The emphasis falls mainly on the ability to make comparisons and the fact that it is a recurrent survey, so that changes can be tracked over time. One can also spot previously hidden areas that may be important to grasp. Those who are more resistant to Open Comparisons simply say that they take no notice of it.

Something that is becoming quite clear—especially if one only cares to make comparisons—is that municipalities would have to invest heavily with (limited) resources in adequate analysis and handling of the material. It is not as simple as the numbers speaking for themselves; they must be interpreted and put into context. Analysis requires statistical expertise as well as good knowledge of each respective municipality. To only care about green, yellow and red fields is not enough. The municipalities’ capacities as well as their organizations have a bearing on the results, which must be taken into account. There are also issues that might be considered sensitive, for example the number of clients per member of staff. It can, for instance, be difficult for a municipality to justify its workload if the neighboring municipality has far fewer cases per officer. The difference can thus sometimes be due to organizational factors and job titles rather than actual caseload.

To what extent has Open Comparisons been used to find common areas for development in the municipalities during this work? The answer is that the selection of joint municipal areas for development is only partially based on the results of red and green fields. Municipalities may also take into consideration certain issues noted in the past, such as the recruitment of staff, and therefore see this as a suitable area for cooperation. This was already an important issue that became more visible through the mutual discussions, and it did not really come about as a result of Open Comparisons. In this case, Open Comparisons is rather used as legitimization for something they want to do anyhow. However, the choice to work with internal procedures for matters relating to social assistance among children and youth is motivated by the results of Open Comparisons and is probably a question that would not otherwise have arisen in all the municipalities involved.

3.4. Does the use of Open Comparisons fulfill the objectives to promote democracy, raise quality, contribute to the development of social services and provide the basis for an evidence-based practice in the municipalities involved?

3.4.1. Democracy

It turned out quite early in the interview responses that it was hard for the municipalities to see how the users/clients themselves could make use of the results (“the democracy argument”). A patient can sometimes choose to visit another county for (better/faster) health care, but a dissatisfied social services client can hardly go to the neighboring municipality and get a second opinion, better treatment or more income support. Our clients must adhere to the municipality, as it was put in one focus group. As revealed in the interviews, Open Comparisons measures whether a routine exists, but not how important it is for planning and implementation. Since the answers often are difficult to interpret and must be put into context, it is not primarily the client or the public that is in focus. So even though transparency may be more available to the citizen since all results are public, benchmarking still remains more a tool for policymakers and directors when discussing such issues as quality, availability, costs and performance.

3.4.2. Quality

The NBHW maintains that Open Comparisons is only one of several evaluation instruments used and that it cannot in itself be equated with quality, but must be seen in conjunction with other information about the municipalities. Although Lindgren (Citation2014) and Alm (Citation2015) suggest that “quality” can be seen as a sliding and elusive concept, Carlstedt (Citation2015)—specifically studying how quality is treated in Open Comparisons—is of the opinion that despite these reservations there is a gradual shift over time. The NBHW and SALAR appear to have been caught by their own traffic light metaphor, where green is equated with good quality. The risk then is that the municipalities, through their politicians, uncritically adopt the name-and-shame debate, where media interest can be huge. A municipality topping a ranking list will most likely broadcast this to the world. Could Open Comparisons, originally intended as a basis for a policy discussion, then become the policy itself? On the one hand, focus group comments show an awareness in the municipalities that the color scheme is problematic and has limited range. On the other hand, it is easy to adapt both the completion of the forms and the work itself, because all strive for as many green fields as possible “to make it look a little good”.

The NBHW is also in some cases trying to influence the municipalities twice over. For example, the NBHW may advocate a particular method, such as the Addiction Severity Index, a manual-based approach for the assessment and monitoring of drug abuse problems. In Open Comparisons the NBHW then follows up whether or not municipalities actually use this method. A municipality not using that method will be coded red, even if it uses a different but comparable method, which, since it is not among the NBHW-approved methods, is not “counted”.

The NBHW may also choose to add fields that it considers important. The first time a new field is introduced many municipalities score red but learn to adapt so as to get more green. This indicates that there may be an element of self-discipline involved in responding to questionnaires. It should look good and you would like to receive praise. This can be understood in Foucauldian power terms. Foucault argues that (political) control systems affect people’s thinking, and self-discipline may then be a consequence if the governed internalize this thinking (= “governmentality”, see below).

3.4.3. Policy

The process of municipalities working together with Open Comparisons shows that it is possible to find joint development goals. This is not to say that Open Comparisons is the best way to reach agreement or that the most relevant targets emerge, but at least it is possible.

Those who can make use of Open Comparisons are mainly managers and other professionals working with the strategic development of social services. In this way, Open Comparisons resembles the Swedish national guidelines for treatment of addiction, which clearly address policy issues but offer little to the individual social worker dealing with these client groups.

This can of course lead to resistance to working with Open Comparisons among staff, who feel forced to spend time on tasks that offer little reward. They also think that the surveys appear too often, and they desire more feedback. There is possibly some untapped potential here to be proactive: to use Open Comparisons to highlight areas they would like to invest in, and perhaps request funding for. For an administration’s management, it might be interesting to see what the pattern looks like if the different social service fields are compared simultaneously, i.e. some form of meta-analysis across several social areas. Such an analysis could show whether there are issues that repeatedly stand out.

3.4.4. Evidence

There seems to be a certain order in which methods and approaches are introduced in Sweden. It is perhaps not surprising that the NBHW, with its medical history and tradition, gladly picks ideas from medicine (and the natural sciences). The discussion of evidence-based knowledge appeared earlier in medicine than in social work. Open Comparisons in Sweden was introduced in health care and could then, via comparisons of eldercare (which bears the strongest resemblance to health care), have an impact in other fields of social services some years later. Open Comparisons is also said to contribute to evidence-based knowledge.

During the focus group interviews it came to attention that in Open Comparisons the three fields of social services are measured with different yardsticks. Consider staff education, for example. A check of the questionnaires shows that the items relating to this are different (all responses relate to the 2015 results). Under the heading knowledge-based activities in the questionnaire about child and youth care, the NBHW asks for staff with social work education and at least three years of experience in child investigations. In the social assistance questionnaire, the NBHW asks for social worker staff with a bachelor’s degree and work experience of more than two years. Within the field of abuse and addiction care there are no questions about education at all (besides questions about planned training and access to tutorials). This of course raises questions about how the NBHW chooses indicators. Is it more important to have access to experienced personnel when working with children than when working with people with addictions? How can this difference be explained? Is it based on science, ideology or something else? One guess is that the NBHW formulated the survey questions based on separate processes in the various fields rather than on a comprehensive view of social service sector development. Even taking into account that different fields may have different conditions, it is arguably a problem when the indicators are allowed, without justification, to vary so much in a method that is said to contribute to an evidence-based social service.

To provide an evidence-based practice requires dissemination and application of quality-assured knowledge. Open Comparisons can sometimes serve as an eye opener, can visualize processes in municipalities and can help improve the system, but the method is not evidence-based. Open Comparisons does not meet the requirements for quality-assured knowledge and it is uncertain whether the efforts made improve the situation for the service users.

3.5. Governance and governmentality

The implementation of Open Comparisons is an example of how the government is trying to steer the local authorities despite the long-established autonomy of local government. Since the merger of the municipal and county organizations into SALAR, this organization has become a more obvious partner for the national government to have discussions with. Statskontoret (The Swedish Agency for Public Management, Citation2011, p. 64) writes: “The organization has also become more of a partner to the government and to some extent, in cooperation with the state, a standards authority over its members, such as in the work of public performance. SALAR also works often as informal coordinator of government control.” Most likely there is a potential role conflict when the municipal association SALAR takes on the tasks of implementation and audit normally associated with the exercise of official authority.

Since the NBHW and SALAR cooperate, it is difficult for local authorities to refuse to participate. The NBHW and SALAR mutually negotiate a wide range of agreements in the social services sphere (see, for example, Socialdepartementet [Ministry of Social Affairs], Citation2015). Open Comparisons is a form of control, an expression of soft power governance. Other expressions of soft governance are the national policy documents, agreements and national coordinators (The Swedish Agency for Public Management, Citation2011). This form of governance is simultaneously a weak governance form because it is based not on laws and legislation but on reciprocity. No accountability can be exacted on the basis of Open Comparisons results. Still, the municipalities can use it to define and regulate their social services and to establish good practice. But at the same time as the NBHW wants to ensure national uniformity, a soft power strategy such as Open Comparisons makes the implementation process more opaque and difficult to maneuver. Accordingly, the NBHW (together with SALAR) needs to induce municipalities to adapt to their requests through the implementation itself and the questions asked.

In a lecture discussing the history of ideas, Foucault (Citation1991, p. 88) discusses “how to be ruled, how strictly, by whom, to what end, by what methods”. Some of his thoughts are captured in the idea of governmentality, a term sometimes used to explain the technology, mental processes and rationality that makes individuals and organizations allow themselves to be guided in a desired direction.

Perhaps Open Comparisons, where the state (NBHW) creates peer pressure with the help of (political) benchmarking, can be seen as a form of “governmentality”, with emphasis on how the technological side of the exercise of power affects the local social services and their mindset. The control is operationalized by coloring the results of Open Comparisons. No one wants red results, since they may be interpreted as a sign of being ineffective, lacking in quality or simply not doing a good job. This can lead professionals to adjust to the Open Comparisons questions despite the known constraints and difficulties of interpretation. Thus, the coloring contributes to a form of self-discipline. In the long run, as a consequence, the planning of social service practices may be arranged in ways that make them suitable for auditing with these methods. If this is a reasonable analysis, it means that the use of Open Comparisons illustrates Foucault’s concepts as well as the downside of the audit society (cf. Power, Citation1999).

Open Comparisons may also reflect a post-political vision of governance, as these regulatory practices downplay the political conflicts behind seemingly objective figures and color charts. The same critique of soft power regulation can be found at EU and global levels (e.g. Garsten & Jacobsson, Citation2011). One possible way to relate to the problem of monitoring and evaluating social services may be to reflect on what social work is about and on the professional’s mission. It is both legitimate and important to seek knowledge, to find forms for a common consensus and to try in different ways to develop the social services and social work. The basic problem in the social services is that people often live under difficult conditions and end up in vulnerable positions that make them more or less voluntary recipients of social services support. It is important that this perspective remains, so that basically political issues are not made invisible or renegotiated into questions of a purely technical nature. Open Comparisons in the social services measures the technical quality of public sector operations by reviewing what is available, such as routines and agreements, but says very little about what help clients actually get or whether they are even aware of the efforts made.

Funding

The author received no direct funding for this research.

Additional information

Notes on contributors

Sven Trygged

Sven Trygged’s research concerns social work and social policy in a broad sense. He has written about labour market programmes, health issues and international social work. This study of Open Comparisons was carried out together with FoU Nordväst, a regional unit for research and development work within the social services.

References

  • Alm, M. (2015). När kunskap ska styra – om organisatoriska och professionella villkor för kunskapsstyrning inom missbruksvården [When knowledge is the ruling force – on organizational and professional conditions for knowledge governance in substance abuse treatment] (Dissertations No 215/2015). Växjö: Linnaeus University.
  • Berg, L., & Spehar, A. (Eds.). (2011). EU och välfärdens Europa. Familj – arbetsmarknad – migration [EU and the welfare of Europe. Family – labor market – migration]. Malmö: Liber.
  • Blomgren, M., & Waks, C. (2011). Öppna jämförelser inom hälso- och sjukvården – en utmaning mot den professionella byråkratin? [Open Comparisons in health care – a challenge for the professional bureaucracy]. Arbetsmarknad & Arbetsliv, 17(4), 95–108.
  • Carlstedt, E. (2015). En socialtjänst med kvalitet? Öppna jämförelser som skyltfönster och verktyg [Social work with quality? Public Comparisons as show-window and tool]. Masteruppsats, Socialhögskolan: Lunds universitet.
  • de la Porte, C., & Pochet, P. (2012). Why and how (still) study the open method of co-ordination (OMC)? Journal of European Social Policy, 22(3), 336–349. doi:10.1177/0958928711433629
  • European Commission. (2016). Social protection and social inclusion. Retrieved August 11, 2016, from http://ec.europa.eu/social/main.jsp?catId=750
  • Foucault, M. (1991). Governmentality. In G. Burchell, C. Gordon, & P. Miller (Eds.), The foucault effect: Studies in governmentality (pp. 87–104). Chicago, IL: University of Chicago Press.
  • Garsten, C., & Jacobsson, K. (2011). Post-political regulation: Soft power and post-political visions in global governance. Critical Sociology, 39(3), 421–437.
  • Järholt, B., Engström, S., & Lindström, K. (2008). Kan kvalitetsregister värdera vårdkvalitet? [Can quality registers assess quality of care?]. Läkartidningen, 47(105), 3452–3455.
  • Kajonius, P. J., & Kazemi, A. (2014). Rankning av Sveriges kommuners äldreomsorg i Öppna jämförelser [Ratings of elderly care in Swedish municipalities in Open Comparisons]. Socialmedicinsk tidskrift, 4, 323–331.
  • Lindgren, L. (2008). Utvärderingsmonstret. Kvalitets- och resultatmätning i den offentliga sektorn [The evaluation monster. Quality and performance measurement in the public sector]. Lund: Studentlitteratur.
  • Lindgren, L. (2014). Nya utvärderingsmonstret. Om kvalitetsmätning i den offentliga sektorn [The new evaluation monster. Quality and performance measurement in the public sector]. Lund: Studentlitteratur.
  • NBHW – National Board of Health and Welfare. (2015). see Socialstyrelsen (below).
  • Nye, J. (1990). Bound to lead: The changing nature of American power. New York, NY: Basic Books.
  • OECD. (2014). Society at a glance. OECD Social Indicators. The crisis and its aftermath. Paris: OECD Publishing.
  • Power, M. (1999). The audit society. Rituals of verification. Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198296034.001.0001

  • SALAR – see Sveriges kommuner och landsting (below).

Appendix 1

Focus group questions

(First meeting with staff representing social assistance, 24 April 2015)

(1) What does “quality” stand for in your work with social assistance?

(2) Is it possible to use the same measures of quality in Open Comparisons even when conditions between municipalities differ?

(3) How has Open Comparisons been used so far and what may be its future use?

(4) What hopes and fears do you have about the use of Open Comparisons?

(5) Do you plan to share Open Comparisons results with the public? Why/why not?

(Third meeting with the same questions asked of staff from all three fields of the social services, 2 June 2015–29 February 2016)

(1) What are the merits and drawbacks with Open Comparisons compared to other information that the municipalities usually use for development work?

(2) Has Open Comparisons resulted in turning up new areas for development? If so, which?

(3) Has Open Comparisons influenced your direct work with clients? How?

(4) How do you view the NBHW/SALAR reporting with red/green coding?

(5) Are there areas that are missed/not made visible (or even rendered invisible) by Open Comparisons?

(6) Open Comparisons reflects what the state considers to be important in public services. Can you see a clear direction/ideology in what is being asked?

Appendix 2

Processing themes and research questions