ABSTRACT
While crowdsourced data are often used in marketing research, concerns remain about the validity of data sourced from crowdsourcing platforms such as Amazon’s Mechanical Turk (MTurk). Across two studies using different data sources and respondent screening strategies, this research examines how an explicit measure of participant response satisficing can adversely affect theory-driven research findings and the applied conclusions drawn from them. Findings demonstrate that the level of respondent satisficing, and the degree of its effects, pose a significant threat to the integrity and validity of marketing and consumer research. Implications are offered concerning managerial decisions, tests of theory, and consumer research and well-being.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Supplementary material
Supplemental data for this article can be accessed online at https://doi.org/10.1080/10696679.2024.2385374
Notes
1 In both the pilot study and main experiment, we address these effects after accounting for various demographic variables that may be associated with objective knowledge levels.
2 As noted previously, we examine differences in objective knowledge effects in H1-H4 while including demographic variables that may be related to objective knowledge as covariates (e.g. age and gender may be related to nutrition knowledge; age and education may be related to financial investment knowledge).
3 Based on prior literature, the two dependent variables examined in H1 and H2 (perceived product healthfulness and perceived disease risk from consuming the product on a regular basis) should be affected by the nutrition profile manipulation shown in the Nutrition Facts label. Effects are consistent with prior consumer-based experiments (e.g. Burton et al., 2015; Newman et al., 2018).
4 The effects of the demographics varied across the 12 domains of objective knowledge, but age had the most consistent impact with a significant effect (p < .05) on nine of the 12 measures. Older respondents were more accurate (i.e. positive correlations with the objective measures) in their performance on these knowledge indices.
5 The professionally managed data panel provider used in this study was Rep Data. Rep Data provides full-service data collection solutions for market researchers and has assurances of “strict data quality checks at each phase of a project to ensure that high quality respondents are entering and completing the survey, low quality responses are removed, and feedback is incorporated and relayed back to partners at the close of fieldwork” (Rep Data, 2024).
6 Because we are interested in the potential for using satisficing as a screener in research and sought to compare it directly to the three sample sources, we examined participant groupings that would yield three groups, parallel to our three sample sources, as shown in . However, we also performed a Model 3 analysis in PROCESS using the quantitative measure of response satisficing. The three-way interaction is significant (F = 7.84; p < .01), and follow-up results for those with low satisficing and high knowledge mirror those shown in , Panel B.
7 We also performed basic analyses that replicated the pilot study results and provided an initial account of why results differ by sample source, examining variation across sample sources in objective nutrition knowledge, satisficing and multitasking, and effort.