Evaluating the usability of two salutogenic instruments on health and work experience, using cognitive interviewing

Pages 241-259 | Received 06 Sep 2017, Accepted 05 Sep 2018, Published online: 12 Jan 2019

Abstract

Workplace surveys are used in workplace health promotion as a basis for improvements at the workplace, but there is a lack of psychometrically and qualitatively validated work-health-related instruments with a salutogenic approach. The purpose of this study was, therefore, to evaluate two such instruments, the Salutogenic Health Indicator Scale and the Work Experience Measurement Scale, among staff of different professions in a healthcare setting. The instruments were evaluated with cognitive interviews conducted at a hospital in Sweden. The respondents were purposively selected based on criteria such as profession, age, and sex (N = 14). The respondents read the items aloud and then described how they experienced them. A deductive (and partly inductive) content analysis was performed based on Tourangeau's four concepts of respondent actions: comprehension, retrieval, judgment, and response. Two main categories emerged, (1) interpreting and (2) responding, together with six subcategories: difficulty, essence, direction, keywords, strategy, and alternatives. The results showed strengths and weaknesses of the instruments and are discussed in relation to several validity aspects: face validity, content validity, and user validity. These validity aspects were connected to the concepts of respondent actions as well as to questionnaire and respondent factors for motivation.

Background

Using surveys is a common method in public health research, and many organizations use instruments to measure the health status and experiences of their employees. Workplace-related instruments are often used in workplace health promotion (WHP) as a baseline for health interventions (Lee, Blake, & Lloyd, 2010), but the items in such instruments are seldom defined from a positive point of view (Nilsson, 2010). This is probably because work-related health issues have traditionally been devoted to stress problems and pathogenic risk factors (Garrosa, Rainho, Moreno-Jimenez, & Monteiro, 2010; Hertting, 2003; Kira & Forslin, 2008; Maslach, Schaufeli, & Leiter, 2001). A complementary perspective is the salutogenic one, which focuses on resources for promoting or maintaining health for individuals and favorable conditions for organizations (Bauer, Davies, & Pelikan, 2006; Bauer & Gregor, 2013; Nilsson, Andersson, Ejlertsson, & Troein, 2012). Regardless of perspective (pathogenic or salutogenic), it is important that the instruments used in WHP have shown psychometric reliability and strong evidence of validity across multiple studies. Currently, few reliable and valid salutogenic survey instruments are available for practical use in a work context (Jenny, Bauer, Vinje, Vogt, & Torp, 2017; Nilsson, 2010; Streiner & Norman, 2003). Therefore, two instruments were developed by the authors of this article: the Salutogenic Health Indicator Scale (SHIS) (Bringsén, Andersson, & Ejlertsson, 2009) and the Work Experience Measurement Scale (WEMS) (Nilsson, Bringsén, Andersson, & Ejlertsson, 2010). Their items are defined from a salutogenic point of view, and their intended practical application is to inform salutogenic improvements, such as WHP. SHIS and WEMS have been evaluated psychometrically in earlier studies (see Method), but qualitative cognitive pretesting was also an important step in the development and evaluation of the instruments, in order to understand their usability in practice.

In a survey process, it is vital to understand how to motivate potential respondents to commit themselves to a survey (Nilsson & Blomqvist, 2017; Wenemark, Hollman Frisman, Svensson, & Kristenson, 2010). Three factors have been described as affecting the motivation to participate in a survey: (1) questionnaire factors, (2) respondent factors, and (3) data collection factors (Wenemark et al., 2010). Questionnaire factors relate to the respondents' experience of completing a survey, such as cognitive burden, content relevance, and time burden. Respondent factors include interest in the item topics, attitude to surveys, and the respondents' decision to participate. All these factors influence the respondents' effort and commitment to the survey. Data collection factors, in terms of information before the survey, incentives, and anonymity, influence the respondents' overall experience of survey satisfaction or burden (Wenemark et al., 2010).

Questionnaire and respondent factors, such as understanding and responding, influence the respondents' effort and commitment to the survey (Nilsson, Andersson, Ejlertsson, & Blomqvist, 2011). To investigate the respondents' views on an instrument, the cognitive interviewing method is often used (Miller, 2011; Willis, 2005). It comes from the interdisciplinary field of cognitive aspects of survey methodology (CASM), which began to attract interest in the 1980s (Belli, Conrad, & Wright, 2007). CASM illuminates the cognitive processes involved in survey responding (Belli, Shay, & Stafford, 2001; Tourangeau, 1984). Later, an alternative viewpoint, called the integrative approach (Miller, 2011), added a sociological perspective to survey responding. Miller (2011) argues that response behavior, that is, both the interpretation and the answering process, is influenced by the cultural context.

Analysis of the respondents' thoughts about instrument items is often based on the model of Tourangeau (Tourangeau, 1984; Tourangeau, Rips, & Rasinski, 2000), which consists of four respondent actions: (1) comprehension, (2) retrieval, (3) judgment, and (4) response. Survey responding is probably not that linear but rather more complex (Tourangeau, 1984; Conrad & Blair, 1996). Comprehension concerns the consistency between the intended meaning of an item and the respondent's understanding; inconsistency may lead to incomparable survey results and a lack of validity. Item comprehension has two aspects: literal and intended. Literal comprehension is about understanding the words in the item, whereas intended comprehension focuses on the constructor's intention (Tourangeau et al., 2000). Retrieval concerns the respondent's response strategy, which can be factual or attitudinal. A factual response strategy relates to the facts of an event or situation, for example, whether it happens on a daily basis or is rare. An attitudinal response strategy relates more to attitudes and feelings about the situation as experienced over time (Tourangeau et al., 2000). Judgment is an often quick deliberation process in which the respondent decides whether to respond to an item. The judgment process may include thoughts about whether the item is understandable, appropriate, feasible, and manageable in relation to the respondent's situation (Tourangeau et al., 2000). Response starts when the respondent begins to answer, and there are two components to consider: the response alternatives and item editing. The response alternatives are important to the respondent's interpretation of the item as a whole and thereby affect the respondent's reply. The response is also influenced by the order of the items and the experienced layout, which is called editing (Tourangeau, 1984; Tourangeau et al., 2000).

Social and educational factors may also influence survey response processes (Miller, 2011); this means that the profession of the intended survey population could have an effect, which needs to be taken into account as well. Thus, it is important to explore whether instrument items are understood as intended and whether they function properly in practice. Only then can the survey results become a useful basis for discussion and improvements in WHP (Drennan, 2003; Garcia & McCarthy, 1996). Few studies in health-related quality-of-life research involve cognitive interviews in instrument development processes (McColl, Meadows, & Barofsky, 2003), and the trend is similar in public health research, where psychometric tests of instruments are more frequent (Sudman, Bradburn, & Schwarz, 1996; Wenemark, 2010). Therefore, the results of this study are related to several aspects of validity. The first is face validity, which indicates whether the instrument seems, at first glance, to measure what it intends to measure (Kaplan & Saccuzzo, 2013). The second is content validity, which concerns how well an instrument covers its intended area (Kaplan & Saccuzzo, 2013). The third, user validity, focuses on the potential usability and relevance of the instrument in practice (MacIver, Anderson, Costa, & Evers, 2014). Finally, reliability concerns whether data are gathered and processed systematically and honestly, but also the respondents' understanding of and attitude to the survey process (Kaplan & Saccuzzo, 2013; Wenemark, 2010).

There is, to our knowledge, a lack of work-health-related instruments with a salutogenic approach, useful for WHP, that show strong evidence of validity and reliability across multiple studies. To obtain a holistic picture of an instrument from the respondents' perspective, psychometric tests and qualitative exploration are complementary and both necessary for practical usefulness. The purpose of this study was, therefore, to evaluate the two instruments, the SHIS and the WEMS, among staff of different professions in a healthcare setting, using the cognitive interview method. It was important to include different professions because the instruments may also be used in areas other than healthcare in the future, as the items are not specific to a healthcare setting and its professions.

Method

Setting and respondents

A cognitive interview study (Beatty & Willis, 2007; Willis, 2005) was conducted in 2013 at a hospital in the south of Sweden. The hospital has around 1,100 employees, and its care focus is primary care and planned specialist care. This hospital was chosen because it had previously been involved in research projects about WHP and was where the two instruments, SHIS and WEMS, were partly developed. This study was thus a continuation of those research projects.

Previous studies of SHIS and WEMS at the hospital mainly included registered nurses and assistant nurses. In this study, the aim was to see whether the instruments were also useful and valid for other professions in the healthcare setting, and for registered nurses and assistant nurses who had not participated in previous instrument development studies. In a cognitive interview study, sample sizes may vary according to the organizational context of the study (Miller, 2011) but are typically in the range of 5 to 15 (Willis, 2005). Respondents for cognitive interviews are selected strategically based on characteristics of importance to the study at hand, for example, education and profession (Miller, 2003). The sampling was therefore purposive, and the personnel department (human resources [HR]) received the following set of criteria on which to base the strategic selection of potential respondents (N = 15): working in various hospital units or functions, level of education, age, sex, and profession. The purpose was to obtain as much variation as possible in the respondents' characteristics. Names of potential respondents (who gave permission to provide their names and e-mail addresses) were obtained from the personnel department, and the researcher (PNL) invited them to the study with an informational letter sent by e-mail. Fourteen people agreed to participate, ten women and four men. The professions were registered nurse (n = 1), medical technical nurse (n = 1), assistant nurse (n = 2), physician (n = 2), paramedic (n = 3), administrative personnel (n = 2), and maintenance staff (n = 3). Eight of the participants had a college degree, five had a high school education, and one had a diploma from a vocational university. Respondent ages ranged from 31 to 66 years, and they had been in their current employment from 6 to 35 years. Information about the study and how ethical aspects were taken into account was sent by e-mail to each respondent before the interview and was repeated orally before the interview started. Written informed consent was obtained from all study participants before the interviews started. As the interview study was neither based on an experimental design nor involved sensitive personal information, there was no need for ethical approval according to the Swedish law on research ethics (SFS 2003:460).

Instruments

The two instruments evaluated were the SHIS (Bringsén et al., 2009) and the WEMS (Nilsson et al., 2010). These two instruments are unique in that they have a salutogenic approach (Table 1). The SHIS and the WEMS were developed by this article's authors. All four authors have a PhD in medical sciences with a focus on public health. PNL and ÅB have several years of experience with individual and focus group interviews, as well as with instrument development. GE is a professor in public health and X3 is an MD and associate professor in public health, and both have solid experience in quantitative methods and instrument development. The items in SHIS and WEMS were developed from qualitative and quantitative studies among healthcare employees (Bringsén, 2010; Nilsson, 2010). SHIS emanated from theories about health and was first tested in a quantitative study among healthcare employees (Bringsén et al., 2009). WEMS was tested psychometrically on healthcare employees (Nilsson et al., 2010), and the results from the two studies were combined with results from a qualitative focus group study (Nilsson et al., 2012). The two instruments have also been tested together psychometrically, with high-quality outcomes (Nilsson, Andersson, & Ejlertsson, 2013). The content, Cronbach's alpha, and other characteristics of the instruments are shown in Table 1. Aspects of the practical use of SHIS and WEMS in a survey process were explored in a participatory study with registered nurses and assistant nurses from two hospital wards (Nilsson et al., 2011). The SHIS can be used on its own to measure health indicators or in combination with other instruments. The combination of SHIS and WEMS is intended for use in WHP processes to assess work-health experience from a salutogenic approach (Bringsén et al., 2009; Nilsson et al., 2010).

Table 1. Descriptive properties and Cronbach's alpha (CA) for the Salutogenic Health Indicator Scale (SHIS) and the Work Experience Measurement Scale (WEMS).
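As a point of reference for the internal-consistency values reported in Table 1, Cronbach's alpha for a subscale of k items can be stated as follows (this is the standard formula, not reproduced from the original table; here sigma_i^2 denotes the variance of item i and sigma_X^2 the variance of the total subscale score):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)

Values closer to 1 indicate higher internal consistency among the items of the subscale.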

Procedure

The interviewer (PNL) had been known at the hospital for a long time because of previous research projects, but the people included in this study had not previously taken part in studies with any of the authors. All individual interviews were performed during work hours at the hospital, in the respondent's own office or in a meeting room. The interviews lasted from 30 minutes to one hour and were digitally recorded.

Initially, the respondents described their own general experience of participating in survey processes at the workplace; the interviewer then asked them to read the items in SHIS and WEMS aloud, one by one. The respondents then described their understanding of the items, their feelings about responding in general, and the response alternatives in particular (Conrad & Blair, 1996; Tourangeau, 1984; Tourangeau et al., 2000). The respondents did not need to elaborate on their answer to the item itself. During the interview, the researcher asked additional probing questions, such as how the respondent evaluated specific words in each item, what the respondent was thinking, how the respondent decided on the answer to an item, and what they thought about the response alternatives. Respondents were free to tell their stories in relation to the items, in line with Miller's (2011) thoughts about the link between item responses and social context.

Analysis

The analysis of the data was based on a deductive content analysis (Elo & Kyngäs, 2008) focusing on Tourangeau's (1984; Tourangeau et al., 2000) four concepts of respondent actions: comprehension, retrieval, judgment, and response. However, the analysis also developed into an inductive procedure as the data showed new possibilities for labeling categories and subcategories. The interview data consisted of the respondents' accounts of how they interpreted the items and of their thoughts when responding to them. Miller (2011) points out that social contexts, in this study interpreted as work context and professions, mediate item response. The analysis highlighted the positive and negative aspects of the instruments, illustrated in categories and subcategories supported by quotes in Table 2.

Table 2. Overview of categories, subcategories, and numbered quotes illuminating each subcategory in the results.

The analysis was performed by three of the authors (PNL, GE, and ÅB). No specific software was used for the data analysis. First, the first author (PNL) listened to all the interviews and read all transcripts to get an overall picture of the data. Then, the five interviews that seemed to differ the most were analyzed by PNL and GE. The authors sorted the data into the four categories from Tourangeau (1984; Tourangeau et al., 2000), (1) comprehension, (2) retrieval, (3) judgment, and (4) response, and then met to compare and discuss the analysis. PNL then analyzed the remaining nine interviews and searched for possible deviations, but nothing additional was found. Finally, PNL and ÅB analyzed the data in the four categories and realized that the data within the concepts of comprehension and retrieval described various experiences of how the respondents interpreted the items, such as difficulty of understanding an item, intended item essence, positive or negative item direction, and understanding of keywords. Therefore, these two concepts were merged into a new category labeled "interpreting." The same applied to the data in the concepts of judgment and response, which concerned the respondents' answering strategies and answering alternatives; a new category was therefore labeled "responding." Within the interpreting category, four subcategories emerged: difficulty, essence, direction, and keywords. In the responding category, two subcategories were identified: strategy and alternatives.

Results

Quotes illuminating the results are shown in Table 2 and referenced by numbers in parentheses.

Interpreting

Difficulty

The respondents' first impression of each item's comprehensibility was rated as easy, medium, or hard. All statements in SHIS and 27 of the 32 statements in WEMS were experienced as easy. Five statements in WEMS were experienced as medium or hard to understand. Three of these statements were about autonomy, and the respondents questioned whether autonomy should be understood within all rules and regulations or in general (1). One statement about supportive working conditions, "I feel that my employer invests in my health," was rated as medium or hard. Most respondents understood it as the directly offered benefit of a health check, not as overall work-environment efforts such as ergonomic resources, workplace surveys, and self-administered schedules (2). One statement about the process of change, "The process of change was a result of the needs and wishes of my workgroup," was experienced as hard, as the organizational level and the type of change process were not clearly defined (3).

Essence

There was consensus in how the respondents interpreted the essence of the statements in WEMS and the items in SHIS. Even when the interpretation was broad, the essence of the statements remained the same. One example was the starting item in SHIS, which asked for an estimation of the respondent's health experience in the past 4 weeks. Some respondents did not even read the 4-week time frame, but most respondents gave a response about "the last two weeks or so" and saw no problem with the time frame based on the essence of the item (4).

Direction

The respondents evaluated the statements in WEMS as having a positive direction, in line with the constructors' salutogenic intention. The respondents experienced the positive wording of the statements as good because it opened up for either a more positive or a more negative answer (5). The fact that all the statements pointed in the same direction made responding more logical to the respondents. They saw mixed directions in a survey as a risk, because responses could go wrong if one does not read carefully (6). In SHIS, the semantic differential contributed to the respondents' comprehension, as it gave a positive and a negative wording (e.g., have lots of ideas, have been creative vs. have been lacking ideas, have not been creative) anchoring each item's response alternatives (7).

Keywords

In general, the respondents had a similar literal comprehension of keywords. Two keywords, feedback and challenge, in the statements "I get feedback on the work I do" and "My work is a great personal challenge," were evaluated as positive by the respondents, but with a negative component (8–9). There was one exception to the shared understanding of keywords: the word workplace, used in some statements in WEMS. The problem was that some of the respondents (administrative personnel, registered and assistant nurses) worked at one workplace, whereas others (physicians, paramedics) worked at more than one workplace at the hospital. Some of the respondents working at several workplaces found it difficult to decide which workplace they should relate their responses to. The respondents considered the word work more suitable.

Responding

Strategy

The respondents experienced the statements in WEMS as relevant and related them to their local workplace and daily work when responding. The statements about the process of change and leadership were, in comparison to the other statements in WEMS, considered by the respondents to be more general over time (10). The majority of the respondents read all WEMS statements in one area at a time to get a comprehensive view of what the area was about, and then responded. If they hesitated about a statement, they went on to the next statement and returned to the previous one later (11). In contrast, the respondents most often responded to the statements in SHIS one after the other, and the response process was often fast. The statements in SHIS were interpreted as relating to their lives as a whole, with work and private life intertwined. This related to the underlying structure in SHIS of individual and social indicators of health (12). The respondents said they seldom skipped an item when responding to surveys. Even if they were uncertain about a statement, they always responded, as they felt obliged to do so and wanted their answers to count (13).

Alternatives

The respondents found the 6-point Likert-type scale satisfactory. A scale with an even number of steps forces the respondents to answer more positively or more negatively, and the respondents felt this was a good thing when later discussing the survey results in their workgroup. The opposite alternatives in WEMS of "totally agree" (6) and "totally disagree" (1), together with the unlabeled boxes in between, gave the respondents freedom in their evaluation, and six alternatives were just enough to enable nuanced responses.

The positive and negative wordings in SHIS, together with the six alternatives in between, were sufficient and clearly gave nuanced responses to the various health indicators. The respondents said that the item order in WEMS and SHIS was natural and that the layout was clear (14).
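As a purely hypothetical illustration of the forced-choice property described above (a minimal sketch, not the authors' scoring procedure, using fabricated responses), the following code assumes a 6-point scale coded 1–6 with no midpoint, so that every response necessarily falls on either the more negative (1–3) or the more positive (4–6) side, which is what gives a workgroup a clear starting position for discussion:

from collections import Counter

def side(response: int) -> str:
    """Classify a 1-6 response as leaning negative (1-3) or positive (4-6)."""
    if response not in range(1, 7):
        raise ValueError("response must be an integer from 1 to 6")
    return "negative" if response <= 3 else "positive"

# Fabricated example: responses from one small workgroup to a single statement,
# where 1 = "totally disagree" and 6 = "totally agree" (boxes 2-5 are unlabeled).
responses = [5, 4, 6, 2, 5, 3, 4]
print(Counter(side(r) for r in responses))  # Counter({'positive': 5, 'negative': 2})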

Discussion

The results of the study with regard to validity (Bowling, 2003; Kaplan & Saccuzzo, 2013; MacIver et al., 2014) and reliability aspects (Kaplan & Saccuzzo, 2013), as well as questionnaire and respondent factors (Wenemark et al., 2010) and respondent actions (Tourangeau, 1984; Tourangeau, Rips, & Rasinski, 2000), are shown in Table 3. The table should be read from left to right, and an in-depth discussion of its contents follows below.

Table 3. An overview (from left to right) of how different quality aspects relate to the study results, various questionnaire and respondent factors, and respondent actions.

Face validity is important to ensure that respondents perceive the survey as relevant and thus feel motivated to answer (Kaplan & Saccuzzo, 2013). Bowling (2003) describes the respondent's reaction to face validity as an immediate assessment of whether the item conforms to the respondent's own view. One outcome of assessed face validity is the degree of cognitive burden, a questionnaire factor that may or may not motivate the respondent to complete the survey (Wenemark et al., 2010). The category called interpreting, with its two subcategories difficulty and essence, relates to the assessment of cognitive burden (Wenemark et al., 2010). The respondents' ratings of difficulty, from easy to hard, gave an assessment of cognitive burden. SHIS and WEMS show a low cognitive burden, as the respondents could easily understand the majority of the items. Only five statements in WEMS were experienced as medium or hard to understand, and for these, clarifications have now been made in the instructions of WEMS. Table 4 shows the identified strengths and weaknesses of SHIS and WEMS, with corresponding suggestions for improving the identified weaknesses. Cognitive burden is also related to the respondents' interpretations of an item. Interpretations may vary because people are different, and in our case have different professions, but the important thing is that the essence of the item remains the same. Characteristics such as socioeconomic status and education can influence how a survey is completed (Miller, 2003). Encouragingly, the results of this study did not show any differences between professions regarding the essence of SHIS and WEMS. Comparing the difficulty and essence subcategories with Tourangeau's comprehension concept (Tourangeau, 1984; Tourangeau et al., 2000), literal comprehension is linked to perceived difficulty in understanding the words in an item, whereas intended comprehension is linked to essence, because it focuses on the constructor's intention of how the respondents should understand the item.

Table 4. Strengths and weaknesses from the evaluation of the Salutogenic Health Indicator Scale (SHIS) and the Work Experience Measurement Scale (WEMS), and suggestions made for the improvement of identified weaknesses.

A second validity aspect was content validity, which should be assessed by experts with regard to whether the items together are able to capture the latent constructs (in this study, work experience and health indicators) (Bowling, 2003). Well-operationalized and relevant content is also a questionnaire factor related to survey motivation (Wenemark et al., 2010). In this study, the experts were the respondents, who assessed the relevance of the content and whether any parts were missing. It is important that the instrument contains items that are relevant to the target group, and in this study the direction and keyword subcategories relate to the importance of item relevance. The statements in WEMS are all assessed in a positive direction, and SHIS has positive and negative wordings as response alternatives. The assessment of direction relates to the focus of a work intervention, meaning that items from a positive (salutogenic) perspective are directed toward health promotion, and items from a negative (pathogenic) perspective are directed toward disease prevention (Bowling, 2003). The assessment of item direction also affects the respondents' attitudes and feelings about the survey content (Hagelin, Nilstun, Hau, & Carlsson, 2004), and it can be seen as a questionnaire factor that could affect respondents' interest in the survey (Wenemark et al., 2010). The respondents' interpretations of keywords in SHIS and WEMS were in general similar, except for the word workplace, which was therefore changed to work in WEMS (Table 4). The results point to a broadly shared feeling that the items and statements are relevant, appropriate, and meaningful aspects of the respondents' daily work. Thereby, keyword understanding connects to judgment and intended comprehension (Tourangeau, 1984; Tourangeau et al., 2000), as SHIS and WEMS seem to contain keywords that are well known and understood by the respondents. The respondents compared SHIS and WEMS with other surveys they had completed before and highlighted SHIS and WEMS positively from a practical point of view, indicating usefulness for salutogenic WHP processes.

A third aspect of validity is user validity, which focuses on usability and relevance in practice (MacIver et al., 2014). One questionnaire factor that affects motivation for survey participation is the time burden (Wenemark et al., 2010), meaning the respondents' experience of the time needed to respond, in terms of retrieving item-related information from memory. The respondent assesses whether the utility exceeds the time burden; instruments that are experienced as too extensive and too difficult to respond to impose a large time burden (Wenemark et al., 2010). In the results, time burden is connected to the responding category and its strategy subcategory. Tourangeau et al. (2000) state that retrieval and a factual response strategy concern the respondent's thought process when answering an item: depending on whether the item content concerns something that happens on a daily basis or is rare, it can be easier or harder to remember and answer. In this study, the respondents experienced the items in WEMS as relating to their daily work; they could respond directly, and the time burden was low. Many respondents read an entire item area in WEMS first to get a picture of the content before responding. A different strategy was used for SHIS, where all indicators were read and answered immediately in sequence. Both of these response strategies seemed effective from a time-burden perspective.

A fourth aspect, also related to user validity (MacIver et al., 2014) and motivational factors for survey participation, is effort and commitment (Miller, 2003). Effort and commitment to surveys can be related to the strategy and alternatives subcategories in the results. When a respondent had read an item with its response alternatives, a decision was made on whether to respond, based on relevance. Tourangeau (1984; Tourangeau et al., 2000) calls this judgment. Another Tourangeau-related aspect is response, where one side is response editing, influenced by the respondent's holistic experience of the survey, and the other is assessment of the response alternatives (Tourangeau, 1984; Tourangeau et al., 2000). The results of this study showed that the respondents had a positive holistic experience of WEMS and SHIS. The items come in a logical order and have valid response alternatives. Suitable alternatives help generate truer survey results and increase the practical usefulness of those results (Sanchez, 2007). The respondents described the usefulness of a 6-point answering scale in WEMS and SHIS: they were forced to give a response in a positive or negative direction, resulting in a more clearly positive or negative starting position when later discussing the survey results in their workgroup. These results are strengthened by a previous study (Nilsson et al., 2011), in which the practical use of a 6-point scale was tested and discussed with another group of healthcare professionals.

A reliability aspect connected to the respondents' participation emerged and was related to the strategy subcategory. Strategy encompasses attitudes to surveys and the decision to participate, which is a motivational respondent factor (Wenemark et al., 2010). Respondents said they generally felt obliged to complete surveys at work, as they wanted their answers to count (see also Table 2). This could create a problem in trusting survey results, as respondents who do not really want to participate may not answer truthfully. It could, however, also be seen as a strength, showing the respondents' commitment to and involvement in workplace development. Connecting to Tourangeau's (1984; Tourangeau et al., 2000) aspect of retrieval and an attitudinal response strategy, this relates to attitudes and feelings about the situation as experienced over time. Thus, respondents' attitudes to participation in surveys fluctuate and should be seen as a natural source of error, yet they could pose a reliability problem over time. This problem is not specific to SHIS and WEMS, but it is a perception of surveys worth general consideration. Despite this identified reliability aspect, SHIS and WEMS seem to have high validity based on the validity aspects presented, which also provides the prerequisites for high reliability.

Methodology considerations and limitations

The study needs to be discussed in terms of quality criteria such as credibility, dependability, confirmability, and transferability (Lincoln & Guba, 1985). Credibility in the study is enhanced by the participatory method. Participation is highlighted as important in WHP, meaning that involving the participants is efficient, as employees have experience of their daily work and are the best people to know what to enhance, improve, or change at work (Drennan, 2003; McColl et al., 2001). It is therefore best to ask the employees whether the items are usable, and some kind of participatory approach in all steps of preparing and performing a survey process is preferred to reach quality (Wenemark et al., 2010; Nilsson et al., 2011). From another credibility point of view, an abundance of research has been conducted on the theoretical background of respondent perspectives on instruments (Tourangeau, 1984; Tourangeau et al., 2000) and on the method of cognitive interviews (Willis, 2005). The number of participants in this study was within the typical range, as 5 to 15 participants are often considered enough for cognitive interviews (Beatty & Willis, 2007; Willis, 2005), and a strategic selection of participants was preferred so that possible differences in opinion could be captured. Still, the low number of respondents per profession may be a limitation, as there is a risk that they responded more on the basis of personal opinion than as representatives of their specific profession. Another limitation of the study is that it included only one hospital setting with its specific professions, and that the hospital had participated in previous, related instrument development studies. A strength, however, is that the respondents in this study had not participated in the previous studies. The study would need to be repeated in other settings before the results can be considered transferable.

Dependability was ensured by the descriptions of the setting, respondents, instruments, procedure, and analysis in the Method section. It may be considered a limitation that the analysis did not fully follow a deductive procedure, but it can also be seen as a strength that the analysis was adapted to the empirical data and went deeper, thus producing categories other than the predetermined ones. A risk regarding confirmability is that the researcher performing the interviews was one of the instrument constructors. Not all respondents were aware of this, but to limit any negative effect the researcher emphasized that "no response is incorrect, and you are free to say what you want about the items." This instruction may have increased credibility, as the respondents could feel free to express their opinions. Connecting the results to the four concepts of Tourangeau (1984) and to motivational aspects of survey responding (Wenemark et al., 2010) in the Discussion facilitates a holistic picture of the respondents' opinions on the SHIS and WEMS instruments. This could enhance the transferability of the results.

Conclusions

This study, using cognitive interviewing among healthcare professionals, has evaluated two work-health-related instruments, SHIS and WEMS. The results showed strengths and weaknesses of the instruments, which have been discussed in relation to several validity aspects: face validity, content validity, and user validity (Bowling, 2003; MacIver et al., 2014). The two categories, interpreting and responding, as well as their subcategories difficulty, essence, direction, keywords, strategy, and alternatives, were discussed on the basis of the respondent-action concepts of comprehension, retrieval, judgment, and response (Tourangeau, 1984), as well as motivational aspects such as cognitive burden, time burden, effort, and commitment (Wenemark et al., 2010). The results showed that SHIS and WEMS have high validity from a qualitative perspective, and previous studies have also shown good psychometric results (Bringsén et al., 2009; Nilsson et al., 2010, 2013). To obtain a holistic picture of an instrument, psychometric tests and qualitative investigation of the respondents' perspectives are complementary and both necessary for practical usefulness. When an instrument is also evaluated qualitatively, there is a good chance of improving its practical function and meaningfulness in the target group. If a survey is experienced as understandable and meaningful, effort and commitment to participate can lead to high data quality, as the respondents are motivated to complete the survey. High data quality, in turn, serves as a useful basis for general statistical compilations and local discussions at work.

One limitation of the study, that it included only a healthcare setting, needs to be addressed by conducting future studies in other settings and with other professions. Our recommendation for future research is, therefore, to test SHIS and WEMS in other settings and among other professions, such as in private for-profit companies and other public administrations, to ensure validity and usability.

The practical contribution of the study is to add another aspect to the comprehensive evaluation of SHIS and WEMS as two useful instruments for WHP based on a salutogenic approach. A second contribution is to demonstrate the usefulness of cognitive interviews as a method for evaluating instruments qualitatively, for increased practical use and quality.

References

  • Bauer, G., & Gregor, J. J. (2013). Salutogenic organizations and change: The concepts behind organizational health intervention research. Dordrecht: Springer.
  • Bauer, G., Davies, J. K., & Pelikan, J. (2006). The EUHPID Health development model for the classification of public health indicators. Health Promotion International, 21(2), 153–159.
  • Beatty, P. C., & Willis, G. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311.
  • Belli, R. F., Conrad, F. G., & Wright, D. B. (2007). Cognitive psychology and survey methodology: Nurturing the continuing dialogue between disciplines. Applied Cognitive Psychology, 21(2), 141–144.
  • Belli, R. F., Shay, W. L., & Stafford, F. P. (2001). Event history calendars and question list surveys: A direct comparison of interviewing methods. Public Opinion Quarterly, 65, 45–74.
  • Bowling, A. (2003). Measuring health. A review of quality of life measurement scales (2nd ed.). Buckingham: Open University Press.
  • Bringsén, Å. (2010). Taking care of others - what's in it for us? Exploring workplace-related health from a salutogenic perspective in a nursing context. (Doctoral Dissertation Series 2010:130). Lund University, Lund, Sweden.
  • Bringsén, Å., Andersson, H. I., & Ejlertsson, G. (2009). Development and quality analysis of the Salutogenic Health Indicator Scale (SHIS). Scandinavian Journal of Public Health, 37(1), 13–20.
  • Conrad, F., & Blair, J. (1996). From impressions to data: Increasing the objectivity of cognitive interviews. Alexandria, VA: American Statistical Association.
  • Drennan, J. (2003). Cognitive interviewing: Verbal data in the design and pretesting of questionnaires. Journal of Advanced Nursing, 42(1), 57–63.
  • Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62(1), 107–115.
  • Garcia, P., & McCarthy, M. (1996). Measuring health. A step in the development of city health policies. Copenhagen: World Health Organization Regional Office for Europe.
  • Garrosa, E., Rainho, C., Moreno-Jimenez, B., & Monteiro, M. J. (2010). The relationship between job stressors, hardy personality, coping resources, and burnout in a sample of nurses: A correlational study at two time points. International Journal of Nursing Studies, 47(2), 205–215.
  • Hagelin, J., Nilstun, T., Hau, J., & Carlsson, H. E. (2004). Surveys on attitudes towards legalisation of euthanasia: Importance of question phrasing. Journal of Medical Ethics, 30(6), 521–523.
  • Hertting, A. (2003). The health care sector: A challenging or draining work environment. Stockholm, Sweden: Karolinska Institute.
  • Jenny, G. J., Bauer, G. F., Vinje, H. F., Vogt, K., & Torp, S. (2017). The application of salutogenesis to work. In M. B. Mittelmark, S. Sagy, M. Eriksson, G. F. Bauer, J. M. Pelikan, B. Lindström, & G. A. Espnes (Eds.), The handbook of salutogenesis (pp. 197–210). Cham: Springer International Publishing. Retrieved from: https://link.springer.com/book/10.1007/978-3-319-04600-6#about
  • Kaplan, R. M., & Saccuzzo, D. P. (2013). Psychological testing: Principles, applications, and issues. Belmont, CA: Wadsworth, Cengage Learning.
  • Kira, M., & Forslin, J. (2008). Seeking regenerative work in the post-bureaucratic transition. Journal of Organizational Change Management, 21(1), 76–91.
  • Lee, S., Blake, H., & Lloyd, S. (2010). The price is right: Making workplace wellness financially sustainable. International Journal of Workplace Health Management, 3(1), 58–69.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications Inc.
  • MacIver, R., Anderson, N., Costa, A-C., & Evers, A. (2014). Validity of Interpretation: A user validity perspective beyond the test score. International Journal of Selection and Assessment, 22(2), 149–164.
  • Maslach, C., Schaufeli, W. B., & Leiter, M. P. (2001). Job burnout. Annual Review of Psychology, 52(1), 397–422.
  • McColl, E., Jacoby, A., Thomas, L., Soutter, J., Bamford, C., & Steen, N. (2001). Design and use of questionnaires: A review of best practice applicable to surveys of health service staff and patients. Health Technology Assessment, 5(31), 1–256.
  • McColl, E., Meadows, K., & Barofsky, I. (2003). Cognitive aspects of survey methodology and quality of life assessment. Quality of Life Research, 12(3), 217–218.
  • Miller, K. (2003). Conducting cognitive interviews to understand question-response limitations among poorer and less educated respondents. American Journal of Health Behavior, 27(S3), 264–272.
  • Miller, K. (2011). Cognitive interviewing. In J. Madans, K. Miller, A. Maitland, & G. Willis (Eds.), Question evaluation methods: Contributing to the science of data quality (pp. 51–75). Hoboken, NJ: John Wiley & Sons, Inc.
  • Nilsson, P. (2010). Enhance your workplace! A dialogue tool for workplace health promotion with a salutogenic approach. (Doctoral Dissertation Series 2010:112). Lund University, Lund, Sweden.
  • Nilsson, P., & Blomqvist, K. (2017). Survey process quality is a question of healthcare manager approach. International Journal of Health Care Quality Assurance, 30(7), 591–602.
  • Nilsson, P., Andersson, H. I., & Ejlertsson, G. (2013). The Work Experience Measurement Scale (WEMS), a usable tool in workplace health promotion. WORK: A Journal of Prevention, Assessment & Rehabilitation, 45(3), 379–387.
  • Nilsson, P., Andersson, H. I., Ejlertsson, G., & Blomqvist, K. (2011). How to make a workplace health promotion questionnaire process applicable, meaningful, and sustainable. Journal of Nursing Management, 19, 906–914.
  • Nilsson, P., Andersson, H. I., Ejlertsson, G., & Troein, M. (2012). Workplace health resources based on Sense of coherence theory. International Journal of Workplace Health Management, 5(3), 156–167.
  • Nilsson, P., Bringsén, Å., Andersson, H. I., & Ejlertsson, G. (2010). Development and quality analysis of the Work Experience Measurement Scale (WEMS). WORK: A Journal of Prevention, Assessment & Rehabilitation, 35(2), 153–161.
  • Sanchez, P. M. (2007). The employee survey: More than asking questions. Journal of Business Strategy, 28(2), 48–56.
  • Streiner, D. L., & Norman, G. R. (2003). Health measurement scales: A practical guide to their development and use. (3rd ed.). Oxford: Oxford University Press.
  • Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
  • Tourangeau, R. (1984). Cognitive sciences and survey methods. In T. Jabine, M. Straf, J. Tanur, & R. Tourangeau. (Eds.), Cognitive aspects of survey methodology: Building a bridge between the disciplines (pp. 73–100). Washington, DC: National Academy Press.
  • Tourangeau, R., Rips, L., & Rasinski, K. (2000). The psychology of survey response. New York, NY: Cambridge University Press.
  • Wenemark, M. (2010). The respondent’s perspective in health-related surveys. The role of motivation. No. 1193. Linköping, Sweden: Linköping University.
  • Wenemark, M., Hollman Frisman, G., Svensson, T., & Kristenson, M. (2010). Respondent satisfaction and respondent burden among differently motivated participants in a health-related survey. Field Methods, 22(4), 378–390.
  • Willis, G. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage.