Manipulated vs. Measured: Using an Experimental Benchmark to Investigate the Performance of Self-Reported Media Exposure

Pages 99-114 | Published online: 20 Apr 2016
 

ABSTRACT

Media exposure is one of the most important concepts in the social sciences, and yet scholars have struggled with how to operationalize it for decades. Some researchers have focused on the effects of variously worded self-report measures. Others advocate the use of aggregate and/or behavioral data that does not rely on a person’s ability to accurately recall exposure. Our study illustrates how an experimental design can be used to improve measures of exposure. In particular, we show how an experimental benchmark can be employed to (1) compare actual (i.e., manipulated) and self-reported values of news exposure; (2) assess how closely the self-reported items approximate the performance of “true” exposure in an empirical application; and (3) investigate whether a variation in question wording improves the accuracy of self-reported exposure measures.

Acknowledgments

We thank Yanna Krupnikov and Scott Clifford for helpful comments on previous versions of this article.

Notes

1 Convergent validity refers to the fit between independent measures of the same construct, while predictive validity refers to the ability of a construct to predict a criterion variable (e.g., political knowledge, in the case of media exposure; see Dilliplane et al., Citation2013).

2 Nielsen ratings represent another alternative to self-reported exposure measures (Prior, Citation2009b), but they are an aggregate-level measure (i.e., the ratings represent the total audience for a particular program). Additionally, the ratings may contain error because (historically) they have been based on “people-meters,” which require viewers to push a button to indicate the beginning and ending of their viewing.

3 We view the general vs. specific distinction as one of many potential variations in question wording that may be used to facilitate recall (see Van Elsas, Lubbe, Van Der Meer, & Van Der Brug, Citation2014; or Timotijevic, Barnett, Shepherd, & Senior, Citation2009 for other applications).

4 Subjects were recruited to participate in exchange for extra credit and instructed to sign up for the study through an online appointment system. The studies were approved by the Institutional Review Board at Stony Brook University (Application #: 580472). See Druckman and Kam (Citation2011) on the use of student samples in experimental research.

5 There were slight imbalances across conditions on partisanship, political knowledge, risk aversion, and presidential approval (see ); however, a joint test reveals no significant difference in the overall composition of our experimental groups (p = .48).

6 In the original study, 66% of the respondents from Time 1 completed the follow-up. The re-contact rates for the first and second replications were 80% and 55%, respectively. Attrition was not significantly related to treatment assignment (see ).

7 The treatment was an edited version (338 words) of an actual USA Today story that appeared on the newspaper’s website approximately 1 year before our study. We selected this article because it was geared toward young job seekers, and thus the topic should have had some appeal for our subjects.

8 To the extent that there are “errors” in self-reported measures of news exposure, these items will allow us to examine the characteristics of people who over- or under-report their exposure.

9 When combined with the initial (dichotomous) recall item, this resulted in a six-point measure ranging from “Yes, absolutely certain” to “No, absolutely certain.”
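The construction described in note 9 can be sketched as follows. This is an illustrative sketch, not the authors' code: it assumes three certainty levels per branch (the intermediate labels are hypothetical; only "absolutely certain" is named in the note) and maps the recall/certainty pair onto a 1–6 scale with 6 = "Yes, absolutely certain" and 1 = "No, absolutely certain".

```python
# Illustrative sketch of the six-point measure in note 9.
# Assumption: three certainty levels per recall branch; the middle
# and lowest labels below are hypothetical placeholders.
CERTAINTY_LEVELS = ["absolutely certain", "pretty certain", "not certain"]

def six_point(recall: str, certainty: str) -> int:
    """Combine dichotomous recall with a certainty follow-up.

    Returns 1..6, where 6 = 'Yes, absolutely certain' and
    1 = 'No, absolutely certain'.
    """
    rank = CERTAINTY_LEVELS.index(certainty)  # 0 = most certain
    if recall == "Yes":
        return 6 - rank   # Yes branch occupies 6, 5, 4
    if recall == "No":
        return 1 + rank   # No branch occupies 1, 2, 3
    raise ValueError("recall must be 'Yes' or 'No'")

print(six_point("Yes", "absolutely certain"))  # 6
print(six_point("No", "absolutely certain"))   # 1
```

Folding certainty into the recall item this way preserves the direction of the original dichotomous response while letting uncertain answers fall toward the middle of the scale.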

10 Respondents received a general or specific version of this question, depending on treatment assignment, along with the certainty follow-up. We confirmed via media content analysis that there had been no media coverage of human cloning around the time of the study.

11 All statistical tests are two-tailed unless otherwise noted.

12 For the treatment indicator, the marginal effect is .11. This model includes indicators for the first and second replications, along with the pretreatment measure of respondent attention. Excluding the latter (only including controls for the replications) results in a p-value of .14 for the treatment indicator (marginal effect = .11).

13 The sign and significance of the coefficient on the treatment indicator remains unchanged in a model that excludes the pretreatment measure of attention.

14 67% of treated respondents accurately report exposure, which is slightly higher than the analogous figure reported in Ansolabehere and Iyengar (Citation1995). Notably, there is no difference in the amount of time spent viewing the story according to self-reported recall status (i.e., answering “yes” or “no” to the recall question; p = .95).

15 We ignore the variation in question wording and collapse across the general and specific wording conditions (the effect of question wording is examined in the next subsection).

16 The results remain unchanged when we include a measure of self-reported attention to politics, which itself is insignificant (coeff = .06; s.e. = .12).

17 provides a tabular presentation of the data from . We estimated the model from separately for respondents who accurately report their exposure and for those who were inaccurate (over- or under-reporting). For accurate respondents, there is a positive but non-significant relationship between self-reported exposure and knowledge. For inaccurate respondents, the relationship is negative and significant (p < .001). This pattern underscores the importance of devising self-reported exposure questions that elicit accurate responses.

18 In explaining why more politically interested people over-report media exposure, Prior states that they “rely too heavily on their generally high political involvement when estimating network news exposure without help. Even when they recall only a few episodes of news exposure, they may infer frequent exposure from their considerable interest in (and knowledge of) politics” (Citation2009a, p. 901).

19 As noted earlier, those who said they recalled seeing something about job growth were asked whether the information pertained to “rates of job creation in different regions of the United States,” “a decline in the number of jobs in the restaurant industry,” or “a new federal program to train people receiving unemployment benefits.” This question appeared in Wave 2 and it represents a more challenging recall task than the previous two items (overall recall and recall certainty). That said, we are not comparing groups formed by random assignment since we analyze a subset of people from the treatment group (i.e., those who recalled exposure).

20 Another challenge facing researchers who use survey-based measures of exposure is the steep decline in response rates (Kohut, Keeter, Doherty, Dimock, & Christian, Citation2012), and the potential for unobserved factors to be correlated with survey participation and self-reported media use (irrespective of how the exposure question is worded).

21 Implicit techniques such as the Implicit Association Test (IAT) or lexical decision tasks are another class of tools for measuring media effects; however, many challenges remain in adapting these measures for communication studies (see Hefner, Rothmund, Klimmt, & Gollwitzer, Citation2011 for discussion).

22 Experimental subjects who have seen a stimulus are asked about the likelihood they would have ordinarily encountered that material in the real world.
