
Model selection in observational media effects research: a systematic review and validation of effects

Susan Banducci, Martijn Schoonvelde, Daniel Stevens, Jason Barabas, Jennifer Jerit & William Pollock
Pages 227-246 | Published online: 18 Dec 2017
 

ABSTRACT

Media effects research has produced mixed findings about the size and direction of the relationship between media consumption and public attitudes. We investigate the extent to which model choices contribute to these inconsistent findings. Taking a comparative approach, we first review the use of different models in contemporary studies and their main findings. To extend and validate this review, we consider the implications for national election studies attempting to measure media effects in election campaigns, and recreate these models with the British Election Study 2005–2010 panel data. We compare the direction and size of the effects of media content on attitude change across three designs: between-subjects, within-elections models, in which the effects of individual-level variance in media exposure and content are assessed; within-subjects, within-elections models, which compare the effects of variance in media content for the same individual; and within-subjects, between-elections models, which allow us to analyse the links between media content and exposure and attitude change over time. Our review shows some notable differences between models in the significance of effects (but not in effect sizes), a finding we corroborate in the British campaign analysis. We conclude that, to check the robustness of claims of media effects in observational data, researchers should where possible compare different model choices.

Disclosure statement

No potential conflict of interest was reported by the authors.

Table A1. Tone and visibility in 2005 newspapers.

Table A2. Tone and visibility in 2010 newspapers.

Notes

1. Citation counts suggest that media effects researchers have focused more on Bartels’ criticisms: According to Google Scholar, the citation numbers for these two articles were 890 versus 89 at the time of writing (November 2017).

2. More generally, panel data also have other advantages (e.g. decomposing variation into a between-subjects and a within-subjects component), but there can be problems of non-random panel attrition (Bartels 1999; Frankel and Hillygus 2013), and they can require more advanced statistical techniques, such as corrections to standard errors to account for the longitudinal nature of the data.
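For reference, the between/within decomposition mentioned here is the standard split of each panel observation into a subject mean and a deviation from it:

\[
x_{it} = \bar{x}_i + (x_{it} - \bar{x}_i),
\]

where \(\bar{x}_i\) is the mean of \(x\) for respondent \(i\) across waves; the first term carries the between-subjects variation and the second term the within-subjects variation.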

3. This is not a meta-analysis, which would examine published and unpublished work across a wider range of journals and a longer time period; our intention here is simply to get a systematic picture of the methods used in contemporary media effects research and whether they appear to provide different results. Our survey of the published results also departs from a meta-analysis in that it does not address factors such as methods of coding, the inclusion of various independent factors as control variables, how researchers deal with ‘no answer’ or ‘undecided’ responses, and whether interaction terms were included in the models. For political science, the journals we used were American Journal of Political Science, American Political Science Review, Journal of Politics, Journal of Common Market Studies, Comparative Political Studies, Journal of European Public Policy, West European Politics, British Journal of Political Science, Annual Review of Political Science and Political Analysis; for communication, they were New Media and Society, Journal of Communication, Public Relations Review, Journal of Computer-Mediated Communication, Journal of Pragmatics, Journalism, International Journal of Communication, Public Opinion Quarterly, Communication Research, and Media, Culture and Society.

4. We exclude papers that report results from experimental designs from this analysis. A majority of these experiments took place in the lab, with only a handful conducted in the real world as field experiments or quasi-experimental designs. Since our emphasis in this paper is on comparing three models for detecting large-scale media effects, we focus on observational studies alone. In order to be able to compare results with those of our UK analysis, we also exclude studies with behavioural or emotional rather than cognitive dependent variables. The effect sizes in the observational studies we report are about half as large as the average effect size (d = 0.21) in the experimental results that we also collected.

5. NB: we only analysed one within-subjects design (Banducci, Giebler and Kritzinger 2017) with no time component. Because of that, we grouped it with the within-subjects over-time models. ‘Over time’ models refer to panel, time-series or rolling cross-sectional designs in this analysis.

6. Although we do not report it here, the analysis of media effects articles also shows that a majority of studies are conducted in a single country, with only a very small minority of papers estimating cross-country models, even though there is reason to believe that media systems matter when it comes to media effects (Pollock et al. 2015; Schoonvelde 2014).

7. That is, d is estimated as twice the t-statistic divided by the square root of the degrees of freedom of the model; it thereby takes into account the number of observations on which the model is based.
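In equation form, this is the standard conversion from a t-statistic to Cohen’s d:

\[
d = \frac{2t}{\sqrt{\mathrm{df}}},
\]

where df is the degrees of freedom of the model.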

8. Since we considered knowledge, attitudes and evaluations as outcomes, we also broke down our effect sizes for each of these outcomes separately. Because we did not find any systematic differences across outcomes, we present them together in our results.

9. This could be disputed on the grounds that the press in Britain is partisan, readers select outlets that accord with their partisan predispositions and what one observes is therefore selection effects rather than media effects. However, recent studies, for example Stevens et al. (2011), suggest that this account is simplistic and that there are short-term media effects on partisans in British elections. Research on partisan media in the USA also suggests effects that are not just an artefact of self-selection (Dilliplane 2014; Smith and Searles 2014).

10. For a discussion of self-reported items in media effects research, see Jerit et al. (2016).

11. To keep the regression output clear and concise, we did not include parameter estimates for the control variables in the table. The table reports the results of regressing the change in party evaluations on party visibility, party tone, party identification, a medium-education dummy, a low-education dummy, gender, income and race. All regression tables were constructed using the stargazer package in R (Hlavac 2013).
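As an illustration, suppressing the control-variable rows can be done with stargazer’s omit argument. The sketch below is not the authors’ actual code; the variable and data-frame names (delta_evaluation, bes_panel and so on) are hypothetical stand-ins for the measures described in this note:

    # Minimal sketch (hypothetical names): regress the change in party
    # evaluations on visibility, tone and the controls listed above,
    # then tabulate with stargazer, suppressing the control rows.
    library(stargazer)

    fit <- lm(delta_evaluation ~ visibility + tone + party_id +
                educ_medium + educ_low + gender + income + race,
              data = bes_panel)

    stargazer(fit,
              omit = c("party_id", "educ_medium", "educ_low",
                       "gender", "income", "race"),  # drop control estimates
              type = "text")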

12. The lowest value (−1) for visibility and tone now denotes a situation where visibility (tone) is larger (more positive) in 2005 than in 2010, whereas the highest value (+1) denotes a situation where visibility (tone) is larger (more positive) in 2010 than in 2005.

Additional information

Funding

This work was supported by the Economic and Social Research Council [ES/K004395/1].

Notes on contributors

Susan Banducci

Susan Banducci is a Professor in Politics at the University of Exeter. She worked on the 1996 and 1998 New Zealand Election Studies and the 2004 and 2009 European Election Studies. Her current research focuses on media and elections in the UK.

Martijn Schoonvelde

Martijn Schoonvelde is a postdoctoral researcher in political science at the Vrije Universiteit Amsterdam, where he works on EUENGAGE, a project exploring the relationship between elites and the public in the European Union. His research interests include automated text analysis, media effects, political behavior and EU politics.

Daniel Stevens

Daniel Stevens is a Professor of Politics at the University of Exeter. His research focuses on political behaviour and political communication.

Jason Barabas

Dr. Jason Barabas is Professor of Political Science at Stony Brook University. He has previously held postdoctoral fellowships at Harvard University and Princeton University. Professor Barabas currently directs the M.A. Program in Public Policy at Stony Brook and has published articles on public opinion, political communication, and methodology in a wide range of journals.

Jennifer Jerit

Dr. Jennifer Jerit is Professor of Political Science at Stony Brook University. She has published articles on public opinion, political rhetoric, and experimental methods. She serves on a variety of editorial boards and received the Erik Erikson Early Career Award for Excellence and Creativity in the field of Political Psychology.

William Pollock

William Pollock is a doctoral candidate at Stony Brook University. His research focuses on the behavior of political candidates, political knowledge, and the media.
