Research Note

Drivers of Data Quality in Advertising Research: Differences across MTurk and Professional Panel Samples

Christopher Berry, Jeremy Kees & Scot Burton
Pages 515-529 | Received 28 Jan 2022, Accepted 12 May 2022, Published online: 27 Jun 2022
 

Abstract

Crowdsourcing has emerged as a preferred data collection methodology for advertising and social science researchers because crowdsourced samples avoid the higher costs associated with professional panel data. Yet there are ongoing concerns about the data quality of online sources. This research examines differences in data quality for an advertising experiment across five popular online data sources, including professional panels and crowdsourced platforms. Effects of underlying mechanisms that influence data quality, including response satisficing, multitasking, and effort, are examined. As proposed, a serial mediation model shows that data source is both directly and indirectly related to these antecedents of data quality. Satisficing is positively related to multitasking and negatively related to effort, and both mediators, operating in parallel, in turn predict data quality; the indirect effects of data source on data quality through these mediating variables are significant. In general, a vetted MTurk sample (i.e., CloudResearch Approved) produces higher-quality data than the other four sources. Regardless of the data source, researchers should employ safeguards to ensure data quality. Safeguards and other strategies for obtaining high-quality data from online samples are offered.
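To make the model structure concrete, the serial mediation described in the abstract can be written as a set of path equations. This is an illustrative specification in the style of serial-parallel mediation models (e.g., Hayes's PROCESS); the coefficient labels are ours, not taken from the paper:

$$
\begin{aligned}
M_{\text{sat}} &= i_1 + a_1 X + \varepsilon_1 && \text{(satisficing)}\\
M_{\text{multi}} &= i_2 + a_2 X + d_1 M_{\text{sat}} + \varepsilon_2 && \text{(multitasking)}\\
M_{\text{eff}} &= i_3 + a_3 X + d_2 M_{\text{sat}} + \varepsilon_3 && \text{(effort)}\\
Y &= i_4 + c' X + b_1 M_{\text{sat}} + b_2 M_{\text{multi}} + b_3 M_{\text{eff}} + \varepsilon_4 && \text{(data quality)}
\end{aligned}
$$

where $X$ is the data source. Under this specification, the serial indirect effects the abstract refers to correspond to the products $a_1 d_1 b_2$ (through satisficing and then multitasking) and $a_1 d_2 b_3$ (through satisficing and then effort).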

Notes

1 For this vetted sample, we used the MTurk screening tool from CloudResearch (i.e., CloudResearch Approved). We also examine an MTurk sample without any screening criteria in place (a standard MTurk sample), and we anticipate it will perform more poorly in terms of data quality and its antecedents. We address these differences further in the Method section.

2 Survey response “quality” can be defined and assessed in different ways. In this study we use an index composed of attention-check questions (e.g., Goodman, Cryder, and Cheema 2013; Smith et al. 2016) embedded in the survey as our primary measure of data quality. We acknowledge that a number of alternative measures of quality are possible.
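As an illustration of how such an index might be computed, the sketch below scores each respondent by the proportion of embedded attention checks passed. This is a hypothetical example, not the authors' code: the column names, expected answers, and pandas-based scoring are all assumptions.

```python
# Hypothetical sketch of an attention-check quality index.
# EXPECTED maps each attention-check column to the instructed answer;
# both the column names and answers are illustrative placeholders.
import pandas as pd

EXPECTED = {"ac1": "strongly agree", "ac2": 4, "ac3": "none of the above"}

def quality_index(df: pd.DataFrame) -> pd.Series:
    """Proportion of embedded attention checks each respondent passed."""
    passed = pd.DataFrame({col: df[col] == ans for col, ans in EXPECTED.items()})
    return passed.mean(axis=1)  # row-wise mean of booleans = pass rate

# Toy usage: respondent 0 passes all checks (1.0), respondent 1 passes one (0.33).
responses = pd.DataFrame({
    "ac1": ["strongly agree", "disagree"],
    "ac2": [4, 4],
    "ac3": ["none of the above", "all of the above"],
})
responses["data_quality"] = quality_index(responses)
print(responses)
```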

3 Both the Qualtrics and Kantar data were solicited as “general population” samples with no special criteria or quotas.

4 Although not the focus here, the dependent measures were assessed using the same items as in the original study (see Berry, Burton, and Howlett 2017). The reliabilities for these measures were acceptable for the overall sample and for each sample independently (see Supplemental Online Appendix C). The overall experimental effects replicated those reported in the original paper, and effects were generally consistent across samples (i.e., the sample did not moderate natural claim and/or disclosure effects).

5 Reliabilities for the more parsimonious three-item versions remain acceptable (see footnote in Appendix A).

6 Note that the significant findings occur despite floor and ceiling effects for some measures.

7 It is important to recognize that the professional panel companies (i.e., Qualtrics and Kantar) also have measures in place to help ensure data quality. For instance, Qualtrics has automated checks (such as bot detection, built-in attention checks, and speed traps) and post hoc data scrubbing to detect straightliners and nonsensical open-ended responses. Kantar has protections against high survey completion rates and ensures that respondents are real people with verifiable data. These checks clearly improve quality relative to the standard MTurk sample.

Additional information

Notes on contributors

Christopher Berry

Christopher Berry (PhD, University of Arkansas) is Assistant Professor, Colorado State University.

Jeremy Kees

Jeremy Kees (PhD, University of Arkansas) is Professor and Richard J. and Barbara Naclerio Endowed Chair in Business, Villanova University.

Scot Burton

Scot Burton (PhD, University of Houston) is Distinguished Professor and Tyson Graduate Research Chair, University of Arkansas.

