
Impacts of cultural factors and mode of administration on item nonresponse for political questions in the European context

Ondřej Klíma, Martin Lakomý & Ekaterina Volevach
Pages 389-400 | Received 26 Jan 2022, Accepted 30 Jan 2023, Published online: 19 Feb 2023

ABSTRACT

We tested the impacts of Hofstede’s cultural factors and mode of administration on item nonresponse (INR) for political questions in the European Values Study (EVS). We worked with the integrated EVS dataset, using descriptive analysis and multilevel binary logistic regression models. We concluded that (1) modes of administration with an interviewer lead to lower INR than modes without an interviewer. In terms of Hofstede’s cultural factors, we concluded that (2) the higher the power distance index in the country, the lower the INR, (3) the higher the rate of individualism in the country, the higher the INR, and (4) the higher the uncertainty avoidance index, the lower the INR. These findings stress the importance of careful work with cross-national data, especially with the increasingly abundant data from mixed-mode surveys.

Introduction

An increasing number of cross-national surveys in the European context contain questions on various sensitive topics (e.g. the Generations and Gender Survey, the Survey of Health, Ageing and Retirement in Europe, and the European Values Study). These surveys include questions that can be intrusive, potentially threatening to respondent anonymity, unpleasant, offensive, or influenced by strong social expectations or norms. Such questions may be answered by respondents in several different ways, one of which is not providing a valid answer (item nonresponse; INR). This reaction may be the result of a reluctance to invest cognitive effort in unpleasant questions or a rational choice not to answer given the interview situation (Riphahn & Serfling, 2005; Tourangeau et al., 2000).

The sensitivity of research topics differs across individual and contextual factors; the contextual factors have been largely under-researched. The focus on the macro-context is based on the assumption that a failure to respond can be associated with different culturally related issues across countries and other geographical contexts. This macro-context of cultural meaning and social norms may cause a specific error in response styles and INR patterns, limiting the possibility of cross-cultural comparisons. Political questions are one question type that can elicit ambiguous reactions depending on the socio-cultural context. The willingness to answer political questions largely depends on attitudes towards politics and the current political regime in the country (Frye et al., 2017; Ratigan & Rabin, 2020).

Another increasingly important factor in survey data quality is the mode of administration. The number of cross-national surveys using multiple modes within or among countries has been increasing steeply (E. D. de Leeuw, 2018), a trend further accelerated by the COVID-19 pandemic. This trend threatens data quality and the comparability of results across countries, as the modes of administration may elicit – especially for sensitive questions – different answers (Callegaro et al., 2015, p. 38). Moreover, the question of data comparability may become even more complex if question sensitivity (and thus the mode effect) depends on the cultural context.

Existing studies find the concept of Hofstede’s cultural dimensions valuable for the cross-cultural comparison of survey errors. The form and perception of the survey setting, including the presence of an interviewer or (more generally) the response mode, also offer a useful framework for cross-cultural comparisons. Our goal is to examine the influence of Hofstede’s cultural dimensions and the effect of mode of administration on INR for political questions. Using data from the European Values Study (EVS), we illustrate that three of Hofstede’s cultural dimensions and the mode of data collection explain, to some extent, the level of INR for political questions.

Theory

Sensitive questions

Tourangeau and Yan (2007) see sensitivity as a broader term that encompasses three distinct meanings:

  • intrusive questions – questions about taboo topics for which the act of asking about them is sensitive in itself (e.g. masturbation)

  • the threat of disclosure – a respondent might fear potential consequences should the responses become known to third parties (e.g. illegal behavior)

  • social desirability – the respondent seeks to fulfill societal expectations or avoid appearing undesirable, either knowingly or unknowingly (e.g. voting habits).

Questions that may be seen as falling into one of the described categories have potential consequences for the respondent. These consequences can be external (fear of punishment for illegal or socially unacceptable behavior) or internal (a topic that causes a person to feel unpleasant sensations, fear, or resentment) (Lensvelt-Mulders et al., 2005). Hence, sensitive topics include substance use, sexuality, delinquency, victimization, health, income, and voting habits (Gnambs & Kaspar, 2015; Lensvelt-Mulders et al., 2005; Tourangeau et al., 2000, p. 260). The appearance of such topics in a questionnaire may lead to several types of bias in the respondent’s behavior, such as formulating socially desirable responses, avoiding disclosure of sensitive data, and refusing to respond (Shoemaker et al., 2002; Tourangeau et al., 2000, p. 261).

Item nonresponse in social surveys

INR can be defined as a failure to obtain a valid answer to a question. INR threatens data quality through an increased occurrence of nonresponse errors and reduced sample sizes (Roberts et al., 2019; Struminskaya et al., 2015). Moreover, this phenomenon raises both ethical (Denscombe, 2009) and technical issues (Durrant & Steele, 2009) in treating respondents with some INR in the analysis. We can define the answers ‘do not know’, ‘no opinion’, and ‘refused’ as types of INR with distinct meanings, which should be treated differently. While the ‘do not know’ answer has a legitimate meaning, for instance in political questions (E. de Leeuw et al., 2003), refusal to provide any answer is related – in contrast to the ‘do not know’ option – to question sensitivity (Shoemaker et al., 2002). Therefore, we take this distinction into account in this study and (in line with previous papers; see Denscombe, 2009; E. de Leeuw et al., 2003) operationalize INR as refusal to answer.

Regarding the variation of INR across a questionnaire, the proportion of invalid answers increases with lower respondent engagement (Clifford & Jerit, 2015), question sensitivity (Meitinger & Johnson, 2020), question difficulty (Shoemaker et al., 2002), and self-administered modes (Al Baghal & Lynn, 2015). According to Little and Rubin, ‘it is generally accepted that when INR on a variable is higher than five percent, the missing answers are most likely not random’ and should be further analyzed (quoted in Callens & Loosveldt, 2018, p. 3).
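To make this screening rule concrete, the following is a minimal sketch of how items could be flagged against the five-percent threshold. It assumes a pandas DataFrame of raw answers and hypothetical negative missing-value codes (the actual EVS codes should be taken from the codebook); only refusals count as INR, matching the operationalization above.

```python
import pandas as pd

# Hypothetical missing-value codes; the real codes are documented in the
# EVS codebook and may differ.
REFUSED = -2     # 'no answer' / refusal -> counted as INR in this paper
DONT_KNOW = -1   # 'do not know' -> a distinct, sometimes legitimate category

def refusal_rates(df: pd.DataFrame, items: list[str]) -> pd.Series:
    """Share of refusals per item, following the paper's definition of INR."""
    return df[items].apply(lambda col: (col == REFUSED).mean())

def flag_items(rates: pd.Series, threshold: float = 0.05) -> pd.Series:
    """Items whose refusal rate reaches the 5% rule of thumb quoted above."""
    return rates[rates >= threshold]

# Toy example: two items, five respondents.
toy = pd.DataFrame({"q31": [3, REFUSED, 7, REFUSED, DONT_KNOW],
                    "q49": [1, 2, REFUSED, 4, 5]})
print(flag_items(refusal_rates(toy, ["q31", "q49"])))
```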

Regarding the individual variation of INR, older, less educated, female, and unemployed respondents tend to have more missing answers (E. de Leeuw et al., 2003; Kupek, 1998; Lee et al., 2017; Pickery et al., 2001; Riphahn & Serfling, 2005; Serfling, 2005; Yan et al., 2010). High INR is also associated with questionnaire characteristics such as length, question intelligibility, sensitivity, and mode of administration, as described in the derivation of hypotheses (E. de Leeuw et al., 2003; Kupek, 1998; Rässler & Riphahn, 2006; Vandeschrick & Sanderson, 2012). Krosnick et al. (1996) provide reasons for INR (and other threats to data quality) as part of satisficing theory. These reasons include task difficulty, ability to answer, and respondent motivation, with such motivation being connected both with the presence of an interviewer and with a sense of privacy. Building on this evidence, tools reducing INR in web mode, such as motivational prompts, are especially important in mixed-mode surveys (Al Baghal & Lynn, 2015).

INR has been investigated using various national and cross-national data, collected mainly in Western countries. Failure to respond can stem from different cultural processes, and this failure may, in turn, cause a specific error in the representativeness of the sample and an inability to perform comparative analyses (Koch & Blohm, 2009). These differences have been confirmed by methodological reports and documentation of various cross-national studies showing that the numbers of missing answers differ among countries (e.g. Beullens et al., 2014; Koch & Blohm, 2009; Malter & Börsch-Supan, 2015; Publications Office of the European Union, 2020).

Political questions are one type of sensitive question connected with the cultural context (Callens & Loosveldt, 2018), and differences in political regimes can partly account for the variation. In non-democratic regimes, refusal to respond to sensitive questions can be a means of self-preservation (Frye et al., 2017). Some results from China support this idea, indicating that increased political confidence in the authorities may appear precisely because marginalized groups with minority political opinions are afraid to answer political questions in surveys (Ratigan & Rabin, 2020). However, some evidence suggests that political regimes can only slightly explain cross-national differences in INR to political questions (Popp, 2018). Therefore, exploring international differences in INR to political questions requires a different theoretical framework.

Cultural framework for analyzing INR

Most research projects studying survey errors use Hofstede’s framework of cultural orientation. Despite the existing criticism of this framework, its structure is considered reliable and has remained relatively unchanged across a large amount of research on the topic. In addition, this cultural framework has been applied to a wide range of studies on various issues and has proved effective (Meitinger & Johnson, 2020; Silber & Johnson, 2020). Hofstede’s concept of cultural values is based on a survey among IBM employees in 71 countries from 1967–1970. Based on these data, Hofstede identified four cultural dimensions that varied systematically across countries, labeled individualism vs. collectivism, power distance, uncertainty avoidance, and masculine vs. feminine culture (Hofstede, 1980). Later, based on World Values Survey data, the author added the indulgence vs. restraint and long-term vs. short-term orientation dimensions (Hofstede & Minkov, 2010). Each of the six dimensions presents a scale with two extreme categories. A more detailed description of the dimensions is shown in Table 1.

Table 1. The definitions of Hofstede’s cultural dimensions.

To the best of our knowledge, only two articles have analyzed cross-national differences in INR, both applying Hofstede’s cultural value framework. Lee et al. (2017) studied the influence of Hofstede’s dimensions of long-term orientation and uncertainty avoidance on INR to questions about financial situations and about respondents’ estimates of their own life expectancy, using data from the Survey of Health, Ageing and Retirement in Europe and several harmonized studies. The authors used scores for country-level cultural indicators from the book Cultures and Organizations (Hofstede & Minkov, 2010). They concluded that the higher the uncertainty avoidance in a country, the higher the INR rate on questions about financial situations. Long-term orientation in the country was not associated with INR rates on either financial or life expectancy questions.

In their recent research, Meitinger and Johnson (2020) associated INR in the International Social Survey Programme (Role of Government module) with several dimensions of Hofstede’s framework. They analyzed all questions with more than 5% missing answers in relation to country-level Hofstede scores (Hofstede & Minkov, 2010). Of the analyzed dimensions – individualism vs. collectivism, power distance, and uncertainty avoidance – only uncertainty avoidance was a significant predictor of the INR rate: respondents from countries with lower uncertainty avoidance have a higher rate of INR. Of course, the cultural dimensions do not explain all cross-national differences in INR; we therefore also address a technical-cultural aspect (the mode of administration), which has become both increasingly important and increasingly available for evaluation in survey data.

Mode of administration and INR

A specific survey characteristic connected with INR, and potentially accounting for some of the cross-national differences, is the mode of administration, which is increasingly important to study due to the sharp increase in mixed-mode data collection (E. D. de Leeuw, 2018). The most frequent survey modes are computer-assisted personal interviewing (CAPI), pen-and-paper interviewing (PAPI), computer-assisted web interviewing (CAWI), and computer-assisted telephone interviewing (CATI). A combination of modes within a survey can address the drawbacks of specific modes or the inability to collect all the data via one mode, but it also brings the threat of different answers across modes (Callegaro et al., 2015).

The key grouping characteristic of modes is the presence or absence of an interviewer. Overall, INR is higher in self-administered surveys because of the absence of control over the answering process (Heerwegh & Loosveldt, 2008; Meitinger & Johnson, 2020; Rässler & Riphahn, 2006; Tourangeau et al., 2000). However, INR on sensitive questions is lower under the same conditions due to a greater sense of privacy (Corkrey & Parkinson, 2002; Essig & Winter, 2009; Kreuter et al., 2008). We can attribute this contrast to the higher motivation induced by the interviewer, a factor that is outweighed by reduced privacy and trust in the case of sensitive questions. Another, less consistent distinction indicates that computer-based self-administered questionnaires have lower INR than pen-and-paper ones (Callens & Loosveldt, 2018; Gnambs & Kaspar, 2015; Tourangeau & Yan, 2007), but only for open-ended questions (Denscombe, 2009), with Denniston et al. (2010) arriving at the opposite conclusion.

Although survey mode has already been examined in relation to overall INR in cross-national surveys (Callens & Loosveldt, 2018; Meitinger & Johnson, 2020), we found only one study considering INR for a specific type of sensitive question: Antoni et al. (2019) compared CATI with CAPI and found higher INR for income questions in CATI. We compare INR for political questions across five modes of data collection. Additionally, we suggest that it is important to examine the mode effect for sensitive questions together with an adequate cultural framework, because the framework can reflect the level of sensitivity of the studied topic.

Derivation of hypotheses

Based on the book Cultures and Organizations (Hofstede & Minkov, 2010), in which the authors describe the connections between cultural dimensions and all spheres of life, we found evidence that INR on political questions can be influenced by three of the six dimensions: power distance (PDI), individualism vs. collectivism (IDV), and uncertainty avoidance (UAI). Due to the limited scope of the paper, we include only these three dimensions, which have both a theoretical and an empirical relationship to INR.

In countries with a substantial PDI, people are highly dependent on the more powerful segment of the population. They are likely to agree or consult with the authorities. In countries with a low PDI, people are more willing to disagree with the executors of power or to disobey (Hofstede & Minkov, 2010). Therefore, people in countries with a high PDI may leave political questions unanswered more often if they fear punishment for abandoning a socially accepted opinion. On the other hand, given a high level of PDI, refusal to answer may also be considered unacceptable. We therefore propose the following hypothesis.

H1:

The higher the power distance in the country, the lower the INR on political questions.

People in collective cultures are not used to having personal space, and the opinions they express usually align with the majority opinion, even if they do not actually share it. They also feel ashamed if a group member breaks any rules. By contrast, the central theme of individualistic societies is privacy. We can assume that, in collectivistic societies, people tend to agree with generally accepted opinions rather than refuse to answer a question. In individualistic societies, on the other hand, political issues can be a private topic that respondents do not want to disclose.

H2:

The higher the rate of individualism in the country, the higher the INR.

Residents of countries with strong UAI are less interested in politics and do not believe that they can influence political affairs (Hofstede & Minkov, 2010). Thus, we can assume that countries with high UAI have higher INR to political questions because people are uninterested in the issue and do not hold personal opinions about it. On the other hand, failing to answer a question is itself an ambiguous act, so we may instead assume that respondents in countries with high UAI avoid ambiguous answers, reducing INR. The UAI can thus lower or increase INR according to the previous empirical studies outlined above, which leads us to the hypothesis:

H3:

The higher the rate of uncertainty avoidance, the lower the INR.

The mode of administration can also affect INR, since answering in more private conditions leads to fewer missing answers on sensitive questions (Joinson et al., 2008; Kays et al., 2012). In conditions of increased privacy, respondents may feel less threatened that they will disclose private information to a third person or that their answer will be socially unacceptable. Thus, we assume the following:

H4:

Modes without an interviewer have lower INR than modes with an interviewer.
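For later reference, hypotheses H1–H4 can be condensed into expected signs of regression coefficients. The following sketch (variable names are our own, not taken from the EVS dataset) simply records these expectations so that fitted coefficients can be checked against them mechanically:

```python
# Expected coefficient signs implied by H1-H4 (negative = lowers INR).
# Variable names are illustrative, not EVS dataset names.
EXPECTED_SIGNS = {
    "pdi": -1,          # H1: higher power distance -> lower INR
    "idv": +1,          # H2: higher individualism -> higher INR
    "uai": -1,          # H3: higher uncertainty avoidance -> lower INR
    "interviewer": +1,  # H4: interviewer presence -> higher INR on sensitive items
}

def matches_hypothesis(name: str, coefficient: float) -> bool:
    """True if a fitted coefficient points in the hypothesized direction."""
    return coefficient * EXPECTED_SIGNS[name] > 0
```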

Methodology

We worked with the integrated European Values Study 2017 dataset (EVS, 2020). The EVS is a cross-national longitudinal survey project covering topics such as family, religion, work, society, gender, and politics. This wave contains data from 35 countries; only Greece was dropped from our analysis, as its data did not contain questions Q31 (left-right scale) or Q49 (voter’s choice). This left us with 34 countries with a total of 50,514 (Q49) and 48,546 (Q31) respondents. The European Values project is managed by the Council of Program Directors, who discuss the general outlines of the project and approve the final questionnaire and survey method. All daily responsibilities are delegated to the Executive Committee. The questionnaire is developed by the Theory Group, and the quality of the project is overseen by the Methodology Group. Funding is provided by universities and research institutes in the participating countries (see Note 1), which conduct the data collection according to the standards provided by the EVS committee.

Six countries decided to test a combination of an interviewer-administered mode (F2F) and self-administered modes (CAWI supplemented by MAIL) in the 2017 wave. Five countries (Denmark, Iceland, Finland, Germany, and Switzerland) drew separate individual random samples for personal and online interviewing, while the Netherlands used an existing probability online panel for the recruitment of online respondents (Luijkx et al., 2021). Respondents not participating online were provided with a paper questionnaire either together with a reminder (Denmark, Germany, Switzerland), only on request (Iceland, Finland), or not at all (the Netherlands).

In line with the literature, we focused on cases where no answer was provided and identified only three questions with INR (refusal/no answer) of 5% or more: (1) a question about political views measured on the left vs. right scale, (2) a question about which political party a respondent voted for, and (3) a question about the household’s weekly income. We selected the first two questions (Q31 and Q49 in the questionnaire), as we are interested in the impact of cultural factors (Hofstede values; see Note 2) on INR for political questions. The exact wording of Q31 is ‘In political matters, people talk of “the left” and “the right.” How would you place your views on this scale, generally speaking? (options 1 = the left to 10 = the right)’ and the wording of Q49 is ‘Which (political) party appeals to you most?’ These dependent variables were coded as binary with 0 = valid answer and 1 = INR.

The presented analysis sets up multilevel regression models with respondents nested in countries. We control for sex, age, and level of education (lower, middle, or higher) while including the mode of administration (CATI, MAIL, CAWI, PAPI, or CAPI) as the essential individual-level variable of our paper. We created four multilevel binary logistic regression models for each political question using the SPSS integrated function GENLINMIXED with a LOGIT link. First, we computed the basic model with individual-level variables, followed by three expanding models, each testing a different country-level variable: PDI, UAI, and IDV, respectively. The respective values of the cultural country-level variables are in Appendix B. We also pay close attention to the mode of administration and how it affects INR. The possibility of identifying dependence between the effects of cultural variables and mode of administration is limited due to low mutual variation, but we outline this option in the conclusion (and Appendix C). We do not attempt to compare the models, focusing instead on each model and how the country-level variables affect INR in political questions.
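The models are estimated with SPSS’s GENLINMIXED. As a rough open-source analogue – an approximation of, not a replica of, the SPSS estimator – the sketch below fits a binary logistic model with a random intercept for country using statsmodels’ variational-Bayes mixed GLM on synthetic stand-in data; all column names and values are our own assumptions, not EVS variables.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Synthetic stand-in data; all column names are illustrative assumptions.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "country": rng.integers(0, 10, n),        # country id for the random intercept
    "female": rng.integers(0, 2, n),          # 1 = female
    "age": rng.integers(18, 90, n),           # years
    "edu": rng.choice(["low", "mid", "high"], n),
    "mode": rng.choice(["CAPI", "CAWI"], n),  # two of the five modes, for brevity
})
df["pdi"] = df["country"].map(lambda c: 10 + 9 * c)  # illustrative 0-100 PDI scores
df["inr"] = rng.binomial(1, 0.05, n)                 # 1 = refusal (INR)

# Random-intercept binary logistic model, roughly mirroring model M1
# (individual-level variables plus PDI); fitted by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "inr ~ C(mode) + female + age + C(edu) + pdi",
    {"country": "0 + C(country)"},
    df,
)
result = model.fit_vb()
print(result.summary())
```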

Analysis

We start the analysis of each dependent variable with the base model containing the individual-level variables: the mode of administration and the control variables sex, age, and education. The dependent variable is the INR for the specific question – a binary variable where 0 = a validly answered question and 1 = a refused answer (INR). This coding means that positive effects increase the likelihood of an answer being refused (INR), while negative effects decrease the likelihood of INR. The three Hofstede dimensions selected as our country-level variables are measured on a scale from 0 to 100.

First, we look at question Q31, the political views of the respondent measured on the left-right scale, in Table 2. Starting with the mode of administration in M0, the presence of an interviewer decreases the likelihood of INR: exponentiating the coefficient of −2.360 yields an odds ratio of 0.094, so a question asked in a mode of administration with an interviewer has much lower odds of remaining without a valid answer (of being INR). When it comes to personal characteristics, females (−0.293) are less likely to refuse answers (INR) than males. Age has a weak positive effect on INR (each additional year of a respondent’s age increases the likelihood of INR by a logit of 0.008 to 0.006 across models M0 to M3). Finally, respondents with higher education (0.740, an odds ratio of 2.09) and mid-level education (0.324, an odds ratio of 1.38) are more likely to refuse answers (INR) than those with lower education.

Table 2. Effect of mode and cultural variables on the INR of Q31: Political views of the respondent measured on the left-right scale; N = 48,546.

The effects of country-level variables may appear small, but the scale for each dimension runs from 0 to 100, so a ten-point change on the scale corresponds to a logit change of −0.18 for PDI in our model. Our first hypothesis (H1) was: The higher the power distance in the country, the lower the INR on political questions. The hypothesis holds: with increasing PDI in model M1, the likelihood of respondents refusing to answer decreases by a logit of −0.018 (an odds ratio of 0.98) for each point on the 0 to 100 scale. The same goes for our second hypothesis (H2): The higher the rate of individualism in the country, the higher the INR. IDV, according to model M3, increases the likelihood of INR: with each one-point increase on the scale from 0 to 100, the likelihood of respondents refusing to answer (INR) increases by a logit of 0.019 (an odds ratio of 1.02). As for UAI, we confirmed hypothesis H3 in model M2: The higher the rate of uncertainty avoidance, the lower the INR. With each one-point increase in UAI, the likelihood is lowered by a logit of −0.022, an odds ratio of 0.98.
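The odds ratios quoted in this paragraph follow directly from exponentiating the logit coefficients; a short check using the values reported above:

```python
import math

def odds_ratio(logit: float, scale: float = 1.0) -> float:
    """Odds ratio implied by a logit coefficient over `scale` units of the predictor."""
    return math.exp(logit * scale)

print(round(odds_ratio(-2.360), 3))       # interviewer-administered mode: ~0.094
print(round(odds_ratio(0.740), 2))        # higher education: ~2.1
print(round(odds_ratio(-0.018), 2))       # PDI, per scale point: ~0.98
print(round(odds_ratio(-0.018, 10), 2))   # PDI, per ten points: ~0.84
```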

Table 3 shows the results of the same analytical approach for question Q49: Which political party appeals the most? Regarding the individual-level variables in model M4, the effect of an interviewer remains negative, meaning that political questions asked in modes of administration with an interviewer have lower odds of being INR. Each additional year of a respondent’s age increases the likelihood of INR by a logit of 0.005 to 0.003 (models M4 to M7). The effects of education and sex were not statistically significant for the question about the appeal of political parties. With each one-point increase in PDI and UAI, the odds of respondents refusing to answer are multiplied by 0.99 and 0.97 (logits of −0.013 and −0.034), respectively. Increasing IDV by one point means 1.02 times higher odds (a logit of 0.018) that the respondent refuses to answer. Thus, our hypotheses H1, H2, and H3 hold.

Table 3. Effect of mode and cultural variables on the INR of Q49: Which political party appeals to the respondent the most; N = 50,514.

Conclusion

This study examines the role of Hofstede’s cultural factors and mode of administration in item nonresponse (INR) in the European Values Study (EVS). Regarding the impact of cultural factors, we confirmed the three cultural factor-based hypotheses H1, H2, and H3. Concerning the dispute in the (small) pool of literature as to whether the uncertainty avoidance index (UAI) increases or lowers INR, our analysis indicates that the likelihood of respondents refusing to answer decreases with higher UAI. The higher the country’s power distance index (PDI), the lower the likelihood that the respondent refuses to answer. This finding corresponds to Hofstede’s (1980) definitions of PDI and UAI: in countries where people respect authorities and are more likely to abide by the rules, respondents seem more likely to answer the question, especially when asked by trained personnel (see Tables 6 and 7 in Appendix C). The same can be said about our conclusion regarding the effect of individualism (IDV): with rising IDV, the likelihood that respondents refuse to answer rises as well. Respondents from countries with higher levels of IDV try to protect their interests and are less likely to share their opinions with others (Hofstede & Minkov, 2010), hence the increase in INR.

A bivariate description (see Appendix D) shows that INR is higher for modes with an interviewer. However, the effect of the interviewer in both our models was negative, meaning that, when controlling for sex, education, and cultural factors, using a mode of administration with an interviewer lowers the odds of INR. Considering the multivariate analysis more relevant, we reject our H4 hypothesis that modes without an interviewer have lower INR than modes with an interviewer. Still, we cannot fully separate the mode effect from coverage, nonresponse, and measurement error in the cross-sectional data (Callegaro et al., 2015), so the relationship needs to be further tested. We can also see that, with increasing PDI and UAI, the chance of respondents refusing to answer decreases (by 0.079 and 0.050, respectively) when using modes of administration without an interviewer. This finding shows that the PDI and UAI cultural variables greatly impact INR, as they shift the positive effect of the mode of administration without an interviewer (see Appendix C, Tables 6 and 7). Nevertheless, the interactions in these models are not very robust due to the low variation of cultural variables across modes; we therefore place them only in the appendix.

We found that using MAIL or CAWI (self-administered modes) results in higher INR than using CAPI or PAPI, which supports the importance of tools reducing INR in web mode (e.g. motivational prompts) in mixed-mode surveys (Al Baghal & Lynn, 2015). Higher INR otherwise leads to increased nonresponse error, a reduced effective sample size, and lower equivalence of modes. In this context, we expect that the recent shift to self-administered modes in many cross-national surveys could increase INR, or even raise the risk of measurement non-invariance, as the mode of administration becomes more heterogeneous across countries, within countries, and across time. Particular attention should be paid to mode both as a tool for improving the data collection process and as a risk of creating a new source of measurement error (E. D. de Leeuw & Toepoel, 2018).

Our indicator of data quality and comparability – providing no answer to sensitive political questions – is only one of several ways of approaching the topic. Nevertheless, combining our findings with other studies (for an overview, see Silber & Johnson, 2020), we argue that the results show the limited comparability of cross-national data in several respects. This issue cannot be fully resolved during survey preparation and fielding; it must therefore be considered when analyzing and interpreting the data. Cross-country research always needs to reflect the various meanings of the same topic (e.g. by robustness checks) and, more recently, the various modes of administration used in large surveys (E. D. de Leeuw, 2018).

We conclude that INR is not much of a problem in EVS 2017, with only three questions having INR of 5% or more and only seven questions with more than 1,000 INR responses (out of 56,491 total respondents). Applying our methodology to different datasets with higher INR, or focusing more on specific countries with elevated levels of INR, would thus help us better understand the role of the mode of administration, its relation to INR, and the importance of cultural factors. The lack of CATI (only one country in our dataset used this mode) and the small number of MAIL respondents limit the extent of our evaluation of each specific mode of administration.


Acknowledgments

This work was supported by the Technology Agency of the Czech Republic (TAČR) under Grant TL02000152 (Vývoj multimode sběru dat a zavádění tohoto typu dotazování v oblasti populačního, sociologického a marketingového výzkumu [Development of multimode data collection and the introduction of this type of interviewing in population, sociological, and marketing research]).

Disclosure statement

No potential conflict of interest was reported by the authors.

Supplemental data

Supplemental data for this article can be accessed online at https://doi.org/10.1080/13645579.2023.2175921.

Additional information

Notes on contributors

Ondřej Klíma

Ondřej Klíma is a Ph.D. student and researcher of Sociology at Masaryk University in Brno. His main research interests are video games and their meanings within civil society. He is currently working on a concept of the “gaming sphere” to explain the meanings of microtransactions in video games and other social problems such as the Blitzchung controversy or the Lootbox controversy.

Martin Lakomý

Martin Lakomý is a researcher in the field of sociology, social policies, and population studies at the Faculty of Social Studies of Masaryk University and the Faculty of Business and Economics of Mendel University in Brno. He uses quantitative methods to examine the social consequences of population ageing in family and labour market behaviour, and their connection to individual value changes and quality of life.

Ekaterina Volevach

Ekaterina Volevach is a researcher in the field of sociology, population studies, and public health at the Faculty of Social Studies and the Faculty of Medicine of Masaryk University. She is a Ph.D. student of Neuroscience at Masaryk University. She is currently working on design and implementation of evaluation research in the field of primary prevention.

Notes

1. For mode distribution and sample size of specific countries, please see Appendix A.

2. We used the original values presented in Hofstede, G., Hofstede, G. J., & Minkov, M. (2010). Cultures and organizations: Software of the mind (3rd ed.). McGraw-Hill Professional, and the values calculated by the Hofstede Insights (2021) team of researchers (https://www.hofstede-insights.com/).

References

  • Al Baghal, T., & Lynn, P. (2015). Using motivational statements in web-instrument design to reduce item-missing rates in a mixed-mode context. Public Opinion Quarterly, 79(2), 568–579. https://doi.org/10.1093/poq/nfv023
  • Antoni, M., Bela, D., & Vicari, B. (2019). Validating earnings in the German National Educational Panel Study: Determinants of measurement accuracy of survey questions on earnings. Methods, Data, Analyses, 13(1), 32. https://doi.org/10.23889/ijpds.v1i1.308
  • Beullens, K., Matsuo, H., Loosveldt, G., & Vandenplas, C. (2014). Quality report for the European Social Survey, Round 6.
  • Callegaro, M., Manfreda, K. L., & Vehovar, V. (2015). Web survey methodology (1st ed.). SAGE Publications Ltd.
  • Callens, M., & Loosveldt, G. (2018). ‘Don’t know’ responses to survey items on trust in police and criminal courts: A word of caution. Survey Methods: Insights from the Field. FORS/GESIS. https://doi.org/10.13094/SMIF-2018-00002
  • Clifford, S., & Jerit, J. (2015). Do attempts to improve respondent attention increase social desirability bias? Public Opinion Quarterly, 79(3), 790–802. https://doi.org/10.1093/poq/nfv027
  • Corkrey, R., & Parkinson, L. (2002). A comparison of four computer-based telephone interviewing methods: Getting answers to sensitive questions. Behavior Research Methods, Instruments and Computers, 34(3), 354–363. https://doi.org/10.3758/BF03195463
  • de Leeuw, E. D. (2018). Mixed-mode: Past, present, and future. Survey Research Methods, 12(2), Article 2. https://doi.org/10.18148/srm/2018.v12i2.7402
  • de Leeuw, E., Hox, J., & Huisman, M. (2003). Prevention and treatment of item nonresponse. Journal of Official Statistics, 19(2), 153–176.
  • de Leeuw, E. D., & Toepoel, V. (2018). Mixed-mode and mixed-device surveys. In D. L. Vannette & J. A. Krosnick (Eds.), The Palgrave handbook of survey research (pp. 51–61). Springer International Publishing. https://doi.org/10.1007/978-3-319-54395-6_8
  • Denniston, M., Brener, N., Kann, L., Eaton, D., McManus, T., Kyle, T., Roberts, A., Flint, K., & Ross, J. (2010). Comparison of paper-and-pencil versus web administration of the Youth Risk Behavior Survey (YRBS): Participation, data quality, and perceived privacy and anonymity. Computers in Human Behavior, 26(5), 1054–1060. https://doi.org/10.1016/j.chb.2010.03.006
  • Denscombe, M. (2009). Forskningshandboken: För småskaliga forskningsprojekt inom samhällsvetenskaperna [The research handbook: For small-scale research projects in the social sciences]. Studentlitteratur AB.
  • Durrant, G. B., & Steele, F. (2009). Multilevel modelling of refusal and non-contact in household surveys: Evidence from six UK government surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society), 172(2), 361–381. https://doi.org/10.1111/j.1467-985X.2008.00565.x
  • Essig, L., & Winter, J. K. (2009). Item non-response to financial questions in household surveys: An experimental study of interviewer and mode effects. Fiscal Studies, 30(3–4), 367–390. https://doi.org/10.1111/j.1475-5890.2009.00100.x
  • EVS. (2020). European Values Study 2017: Integrated dataset (EVS 2017) (ZA7500 Data file Version 4.0.0). GESIS Data Archive. https://doi.org/10.4232/1.13560
  • Frye, T., Gehlbach, S., Marquardt, K. L., & Reuter, O. J. (2017). Is Putin's popularity real? Post-Soviet Affairs, 33(1), 1–15. https://doi.org/10.1080/1060586X.2016.1144334
  • Gnambs, T., & Kaspar, K. (2015). Disclosure of sensitive behaviors across self-administered survey modes: A meta-analysis. Behavior Research Methods, 47(4), 1237–1259. https://doi.org/10.3758/s13428-014-0533-4
  • Heerwegh, D., & Loosveldt, G. (2008). Face-to-face versus web surveying in a high-internet-coverage population: Differences in response quality. Public Opinion Quarterly, 72(5), 836–846. https://doi.org/10.1093/poq/nfn045
  • Hofstede, G. (1980). Culture’s consequences: International differences in work-related values. Sage.
  • Hofstede Insights. (2021). Hofstede insights organisational culture consulting. Hofstede Insights. https://www.hofstede-insights.com/
  • Hofstede, G., & Minkov, M. (2010). Cultures and organizations: Software of the mind (3rd ed.). McGraw-Hill Professional.
  • Joinson, A., Schofield, C. P., Buchanan, T., & Reips, U.-D. (2008). Measuring self-disclosure online: Blurring and non-response to sensitive items in web-based surveys. Computers in Human Behavior, 24(5), 2158–2171. https://doi.org/10.1016/j.chb.2007.10.005
  • Kays, K., Gathercoal, K., & Buhrow, W. (2012). Does survey format influence self-disclosure on sensitive question items? Computers in Human Behavior, 28(1), 251–256. http://doi.org/10.1016/j.chb.2011.09.007
  • Koch, A., & Blohm, M. (2009). Item non-response in the European Social Survey. http://hdl.handle.net/1811/69564
  • Kreuter, F., Presser, S., & Tourangeau, R. (2008). Social desirability bias in CATI, IVR, and web surveys: The effects of mode and question sensitivity. Public Opinion Quarterly, 72(5), 847–865. https://doi.org/10.1093/poq/nfn063
  • Krosnick, J. A., Narayan, S., & Smith, W. R. (1996). Satisficing in surveys: Initial evidence. New Directions for Evaluation, 1996(70), 29–44. https://doi.org/10.1002/ev.1033
  • Kupek, E. (1998). Determinants of item nonresponse in a large national sex survey. Archives of Sexual Behavior, 27(6), 581–594. https://doi.org/10.1023/a:1018721100903
  • Lee, S., Liu, M., & Hu, M. (2017). Relationship between future time orientation and item nonresponse on subjective probability questions: A cross-cultural analysis. Journal of Cross-Cultural Psychology, 48(5), 698–717. https://doi.org/10.1177/0022022117698572
  • Lensvelt-Mulders, G. J. L. M., Hox, J. J., van der Heijden, P. G. M., & Maas, C. J. M. (2005). Meta-analysis of randomized response research: Thirty-five years of validation. Sociological Methods & Research, 33(3), 319–348. https://doi.org/10.1177/0049124104268664
  • Luijkx, R., Jónsdóttir, G. A., Gummer, T., Ernst Stähli, M., Frederiksen, M., Ketola, K., Reeskens, T., Brislinger, E., Christmann, P., Gunnarsson, S. Þ., Hjaltason, Á. B., Joye, D., Lomazzi, V., Maineri, A. M., Milbert, P., Ochsner, M., Pollien, A., Sapin, M., Solanes, I. … Wolf, C. (2021). The European Values Study 2017: On the way to the future using mixed-modes. European Sociological Review, 37(2), 330–346. https://doi.org/10.1093/esr/jcaa049
  • Malter, F., & Börsch-Supan, A. (2015). SHARE Wave 5: Innovations and methodology. http://www.share-project.org/fileadmin/pdf_documentation/Method_vol5_31March2015.pdf
  • Meitinger, K. M., & Johnson, T. P. (2020). Power, culture and item nonresponse in social surveys. In P. S. Brenner (Ed.), Understanding survey methodology: Sociological theory and applications (pp. 169–191). Springer. https://doi.org/10.1007/978-3-030-47256-6_8
  • Pickery, J., Loosveldt, G., & Carton, A. (2001). The effects of interviewer and respondent characteristics on response behavior in panel surveys. Sociological Methods & Research, 29(4), 509–523. https://doi.org/10.1177/0049124101029004004
  • Popp, L. (2018). Non-response to politically-sensitive questions across political regimes [Master’s thesis]. Central European University, Department of Political Science.
  • Publications Office of the European Union. (2020). Quality report of the European Union Labour Force Survey 2018 (2020 edition).
  • Rässler, S., & Riphahn, R. T. (2006). Survey item nonresponse and its treatment. Allgemeines statistisches Archiv, 90(1), 217–232. https://doi.org/10.1007/s10182-006-0231-3
  • Ratigan, K., & Rabin, L. (2020). Re-evaluating political trust: The impact of survey nonresponse in rural China. The China Quarterly, 243, 823–838. https://doi.org/10.1017/s0305741019001231
  • Riphahn, R. T., & Serfling, O. (2005). Item non-response on income and wealth questions. Empirical Economics, 30(2), 521–538. https://doi.org/10.1007/s00181-005-0247-7
  • Roberts, C., Gilbert, E., Allum, N., & Eisner, L. (2019). Research synthesis: Satisficing in surveys: A systematic review of the literature. Public Opinion Quarterly, 83(3), 598–626. https://doi.org/10.1093/poq/nfz035
  • Serfling, O. (2005). The interaction between item, questionnaire and unit nonresponse in the German SOEP. Schmollers Jahrbuch: Journal of Applied Social Science Studies / Zeitschrift für Wirtschafts- und Sozialwissenschaften, 125(1), 195–205.
  • Shoemaker, P., Eichholz, M., & Skewes, E. A. (2002). Item non-response: Distinguishing between don’t know and refuse. International Journal of Public Opinion Research, 14(2), 193–201. https://doi.org/10.1093/ijpor/14.2.193
  • Silber, H., & Johnson, T. P. (2020). Culture and response behavior: An overview of cultural mechanisms explaining survey error. In P. S. Brenner (Ed.), Understanding survey methodology: Sociological theory and applications (pp. 67–86). Springer International Publishing. https://doi.org/10.1007/978-3-030-47256-6_4
  • Struminskaya, B., Weyandt, K., & Bosnjak, M. (2015). The effects of questionnaire completion using mobile devices on data quality. Evidence from a probability-based general population panel. Methods, Data, Analyses, 9(2), 32. https://doi.org/10.12758/mda.2015.014
  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge University Press. https://doi.org/10.1017/CBO9780511819322
  • Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883. https://doi.org/10.1037/0033-2909.133.5.859
  • Vandeschrick, C., & Sanderson, J.-P. (2012). GGS Wave 1 item non-response (GGP Belgium Paper Series No. 5).
  • Yan, T., Curtin, R., & Jans, M. (2010). Trends in income nonresponse over two decades. Journal of Official Statistics, 26(1), 145–164.