Advances in Research on Survey Interview Interaction

Clarifying question meaning in standardized interviews can improve data quality even though wording may change: a review of the evidence

Frederick G. Conrad & Michael F. Schober
Pages 203-226 | Published online: 06 Dec 2020
 

ABSTRACT

Survey interviews are conducted to produce objective, accurate information, which requires that interviewers ask questions as worded and that their discretionary speech is carefully managed. To limit interviewer influence over answers and reduce between-interviewer variance, Standardized Interviewing (SI) requires interviewers to administer nondirective probes – scripted utterances designed to elicit acceptable answers without leading respondents. To promote the intended interpretation of questions, and thus response accuracy, Conversational Interviewing (CI) authorizes interviewers to clarify questions when they suspect respondents have misunderstood. This article reviews evidence from 12 studies about the effectiveness of these two approaches. Findings consistently show that CI leads to considerably more accurate question interpretation and greater response accuracy than SI across different samples, modes, and languages, and does not increase interviewer variance. CI generally leads to longer interviews than SI, requiring practitioners to weigh increased response accuracy against longer interview duration. Several online implementations of CI have produced initially promising results.

Disclosure statement

No potential conflict of interest was reported by the authors.

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1. Transcription conventions: # (hashtag) indicates an audible keyboard click; (x) (a number in parentheses) indicates silence, with the duration in seconds enclosed.

2. One cannot be sure on the basis of only the transcript that the report ‘Just reading glasses’ is actually intended to indicate ‘no.’ The question asks about the respondent’s ability to read without glasses or contact lenses, i.e., with negation in the ability being described, which may be confusing. It is thus possible that the report was an indirect question to the interviewer: ‘I can read with reading glasses – is that what you’re asking me?’ The implicit double negative (‘no’ would mean ‘I cannot read without glasses’) may simply have confused the respondent. The important point is that the report is not one of the authorized response categories and thus warrants a neutral probe.

3. It is possible that some misunderstandings will not lead to the wrong answer. For example, if asked ‘Have you smoked at least 100 cigarettes in your entire life?’ a respondent who misinterprets ‘cigarettes’ as including non-tobacco cigarettes but has not smoked 100 cigarettes of any kind can correctly respond ‘no’ despite the misconception (Schober et al., 2018). We return to this topic in the final section.

4. In formulating this question, Schober and Conrad (1997) actually used the phrase ‘Flexible Interviewing’ to describe what has since been referred to as ‘Conversational Interviewing.’ While there is certainly more to a conversation than providing clarification, the authors use the ‘conversational’ label to reflect the centrality of conversational grounding in the approach.

5. The PONS, probably the most widely accepted measure of trait nonverbal sensitivity across a variety of fields, has been used in a large number of studies (e.g., Hall et al., 2009; Lee et al., 1980) since its publication (Rosenthal et al., 1979).

6. To the extent that non-paradigmatic question-answer sequences reflect mapping ambiguity, Dijkstra and Ongena (2006) observed paradigmatic patterns in over 95% of sequences in a very large corpus. This would suggest that conversational interviews should not be much longer than standardized interviews in practice. However, not all paradigmatic interactions reflect grounded meaning.

7. This type of confusion could potentially be eliminated with fully labelled scales, but they are difficult for telephone respondents to keep in mind; thus, to the extent interviewers can help clarify the mapping for respondents exhibiting this kind of confusion, they can improve response quality.

8. Inclusive definitions consist of more criteria for what should be counted (e.g., ‘Count a room as a bedroom that was designed as a den but is being used as a bedroom’) than for what should not be counted. The opposite is true for exclusive definitions (e.g., ‘Do not include expenses for food or lodging in the cost of moving’).

9. A definition was included in the re-interview questionnaire for only half of the respondents. Here we focus on only that half.

10. All of the interviewers in these comparisons worked at the same US Census Bureau call center and asked the same questions via telephone to respondents in the same government laboratory; respondents answered on the basis of fictional scenarios, allowing the researchers to calculate response accuracy.

11. Providing definitions to online survey respondents should not be confused with pretesting online questionnaires by inserting probes after particular questions (‘online probing’), e.g., Behr et al. (2012). The Conrad et al. (2007) method communicates the designers’ intended meaning for specific questions and thus should reduce measurement error, i.e., an online implementation of CI. The Behr et al. (2012) method is used to explore respondents’ reasons for selecting particular answers to evaluate how well the questions work so that they can be revised prior to use in production surveys; respondents answer the probe question, e.g., ‘Please explain why you have chosen “agree,”’ in their own words.

12. The speech technology to more fully automate spoken survey interviews currently exists (e.g., Johnston et al., 2013) and is clearly a next step in deploying not just CI systems but speech-based interviewing systems generally.

Additional information

Funding

National Science Foundation grants IIS-0081550, SES-0551294, SES-1025645, SES-1324689, SES-1323636, and SBR-9730140. National Institutes of Health (National Cancer Institute) grant #R01 CA172283. Additional funding from the US Bureau of Labor Statistics; an ASA/NSF/BLS Senior Research Fellowship to Michael Schober; the Survey Research Center, University of Michigan; VU University Amsterdam; the Institute for Employment Research (IAB) (Nuremberg, Germany); and infas (Bonn, Germany).

Notes on contributors

Frederick G. Conrad

Frederick G. Conrad is Research Professor and Director, Program in Survey Methodology, University of Michigan. His research concerns improving survey data quality (e.g., via texting, two-way video, and virtual interviewers) as well as new data sources (e.g., social media posts and sensor measurement) for possible use in social and behavioral research.

Michael F. Schober

Michael F. Schober is Professor of Psychology and Vice Provost for Research at The New School. He studies shared understanding—and misunderstanding—in survey interviews, collaborative music-making, and everyday conversation, and how new communication technologies (e.g., text messaging, video chat, automated speech dialogue systems, social media posts) are affecting interaction dynamics.
