
The Role of Perceptions of Media Bias in General and Issue-Specific Political Participation

Shirley S. Ho, Andrew R. Binder, Amy B. Becker, Patricia Moy, Dietram A. Scheufele, Dominique Brossard & Albert C. Gunther
Pages 343-374 | Published online: 12 May 2011
 

Abstract

Despite a large body of literature documenting factors that influence general political participation, research has lagged in understanding what motivates participation on specific issues. Our research fills this gap by examining the combined influence of perceptions of media bias, trust in government, and political efficacy on individuals' levels of general and issue-specific political participation. Using survey data with indicators of general political participation, we demonstrate that perceptions of media bias are negatively related to general political participation. Moreover, this relationship is indirect, mediated by trust in government and political efficacy. Using survey data with indicators of issue-specific political participation in the context of stem cell research, we show that, contrary to the relationship found for general political participation, perceptions of media bias are directly and positively associated with issue-specific participation. Implications for political participation and media bias theories are discussed.

ACKNOWLEDGMENTS

This work was supported by Worldwide Universities Network as well as the U.S. Department of Agriculture (grant number NYC-131421).

Notes

1. Past research has excluded voting from analyses of participation for a variety of reasons, most notably because voting is tied to social norms and requires neither the effort nor the resources needed to perform more specific political behaviors (see Eveland & Scheufele, 2000, for a more detailed discussion).

2. Of the total sample (N = 508), 148 respondents were excluded because they had not been asked the “political discussion” item due to a programming error in the computer-assisted telephone interviewing instrument. Rather than use regression imputation, which may introduce systematic biases into our results, we excluded these 148 missing cases from our analyses. To determine whether the missing cases influenced our results, we conducted two tests comparing the full sample with the reduced sample. First, independent-samples t tests show that those who responded to the questions on political discussion were significantly younger and higher in SES than those who did not. There were no significant differences between the two groups on the other variables analyzed in this study. Second, to test the relationships among our variables of interest (perceived bias, trust, efficacy, and participation), we calculated partial correlations between these variables, controlling for sex, age, SES, ideology, and ideological strength. The magnitudes of these coefficients differed slightly between the reduced sample and the original sample (differences ranged from .00 to .07), as would be expected with the loss of variance associated with excluding a large number of cases. With one exception, the direction and significance of the relationships remained unchanged. (The exception was the partial correlation between trust and participation, which was significant in the full sample but nonsignificant in the reduced sample. Even so, this relationship was significant in our final structural equation modeling analysis, and its magnitude was similar to the partial correlation between the two variables in the full sample.) These comparisons suggest that excluding these cases, which was necessary to use the measure of political discussion, did not introduce systematic biases into our results.
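The note does not say how the partial correlations were computed, so the following is only a minimal sketch of the standard recipe: residualize both variables on the controls and correlate the residuals. The column names are hypothetical placeholders, not the survey item names.

```python
import numpy as np
import pandas as pd

def partial_corr(df: pd.DataFrame, x: str, y: str, controls: list[str]) -> float:
    """Correlation of x and y after regressing out the control variables."""
    # Design matrix of controls plus an intercept column.
    Z = np.column_stack([np.ones(len(df))] + [df[c].to_numpy(float) for c in controls])

    def residualize(v: np.ndarray) -> np.ndarray:
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta

    rx = residualize(df[x].to_numpy(float))
    ry = residualize(df[y].to_numpy(float))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical usage; column names do not come from the article.
# df = pd.read_csv("survey1.csv")
# r = partial_corr(df, "perceived_bias", "participation",
#                  ["sex", "age", "ses", "ideology", "ideological_strength"])
```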

3. There are some similarities and differences between the two surveys. Both used random digit dialing for telephone sampling. With regard to sampling frame, Survey 1 consisted of residents of a single county in a northeastern state, whereas Survey 2 consisted of residents of an entire Midwestern state. Both surveys were fielded by universities' computer-assisted survey teams. Survey 1 was a topical survey in which specific questions were asked about national political issues and general political participation. Survey 2 was part of a representative, biannual omnibus survey of that Midwestern state's residents in which all questions on stem cell research and participation were developed by the authors. Response rates for both surveys were calculated using AAPOR Formula 3; Survey 1 had a response rate of 42% and Survey 2 a response rate of 24.3%. Because the survey methodology was similar across the two surveys, the difference in response rate may be due to the 6-year gap between the two data collections: Survey 1 was conducted prior to the 2000 U.S. presidential election, whereas Survey 2 was conducted just before the 2006 midterm elections. This difference is not surprising, as studies have shown that response rates for phone surveys have been declining over the past few decades, with an even more precipitous drop-off in recent years (Curtin, Presser, & Singer, 2005). Although this overall trend produces response rates at or below 30%, its impact on researchers' ability to calculate unbiased estimates of population parameters appears to be minimal (Keeter, Kennedy, Dimock, Best, & Craighill, 2006; Keeter, Miller, Kohut, Groves, & Presser, 2000).
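AAPOR Formula 3 is cited but not reproduced in the note. For reference, Response Rate 3 counts complete interviews against all known-eligible cases plus an estimated share e of cases whose eligibility is unknown. The sketch below states the formula generically; the disposition counts for these two surveys are not reported in the article, so no numbers are filled in.

```python
def aapor_rr3(I: int, P: int, R: int, NC: int, O: int,
              UH: int, UO: int, e: float) -> float:
    """AAPOR Response Rate 3.

    I  = complete interviews          P  = partial interviews
    R  = refusals and break-offs      NC = non-contacts
    O  = other eligible non-interviews
    UH = unknown if household/occupied housing unit
    UO = unknown eligibility, other
    e  = estimated proportion of unknown-eligibility cases that are eligible
    """
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# The article reports only the resulting rates (42% and 24.3%), not the
# underlying disposition counts, so any numeric call here would be illustrative.
```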

4. This is a prospective measure of issue-specific participation, in contrast to our retrospective measure of general participation in the first data set. The question wording and scaling of the two variables therefore differ; we address this concern in footnote 5.

Note. N = 435. All coefficients are at least 1.96 times as large as their standard errors. Coefficients in the first row are direct effects, coefficients in the second row are indirect effects, and coefficients in the third row are total effects. Direct and indirect effects may not always sum to the total effects due to rounding error and nonsignificant pathways.

Note. N = 360. All coefficients are at least 1.96 times as large as their standard errors. Coefficients in the first row are direct effects, coefficients in the second row are indirect effects, and coefficients in the third row are total effects. Direct and indirect effects may not always sum to the total effects due to rounding error and nonsignificant pathways.
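For readers unfamiliar with how the three rows relate, the table notes rest on the standard path-analytic decomposition of effects and the usual critical-ratio test. The summary below is a generic reminder under that assumption, not notation taken from the article: for a predictor X, an outcome Y, and mediators M_j,

```latex
\text{total effect}
  \;=\; \underbrace{c'}_{\text{direct } X \to Y}
  \;+\; \underbrace{\textstyle\sum_{j} a_j b_j}_{\text{indirect, via } M_j},
\qquad
\frac{|\hat{\theta}|}{SE(\hat{\theta})} \;\ge\; 1.96
  \;\Longleftrightarrow\; p \le .05 \ \text{(two-tailed)}.
```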

5. The fit indices were as follows: χ²(32, N = 360) = 57.22, p = .004 (NFI = .90, CFI = .95, GFI = .97, AGFI = .94).
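The reported p value follows directly from the chi-square statistic and its degrees of freedom; a quick check with SciPy, using the values in this note:

```python
from scipy import stats

chi2_stat, df = 57.22, 32
p = stats.chi2.sf(chi2_stat, df)   # survival function: P(X^2 >= 57.22 | df = 32)
print(round(p, 3))                 # ~0.004, consistent with the reported p = .004
```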

6. We reestimated the new model by first freeing paths from all exogenous variables to the endogenous variables (and subsequently removing nonsignificant paths). We then allowed paths from each antecedent endogenous variable to the other endogenous variables in the model (again removing nonsignificant paths). Finally, we freed paths among the consequent endogenous variables based on the modification indices and removed any remaining nonsignificant paths among exogenous and antecedent endogenous variables.
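The pruning rule implied here, and by the table notes, is that a freed path is retained only when its coefficient is at least 1.96 times its standard error. A minimal illustration of that criterion follows; the path names and estimates are invented for the example and are not results from the article.

```python
def retain(coef: float, se: float, critical: float = 1.96) -> bool:
    """Keep a freed path only if |coef / SE| meets the critical ratio (p < .05, two-tailed)."""
    return abs(coef / se) >= critical

# Hypothetical (coefficient, standard error) pairs, purely for illustration.
candidate_paths = {
    "ideology -> trust":      (0.21, 0.08),
    "age -> efficacy":        (0.03, 0.05),
    "trust -> participation": (0.15, 0.06),
}

kept = {path: est for path, est in candidate_paths.items() if retain(*est)}
print(kept)   # paths surviving the pruning step
```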

7. Regarding possibility (a), we tested two separate reverse-causation path models with perceptions of media bias as the consequent endogenous variable. For the general case, the reverse-causation model did not fit the data, and general political participation did not significantly predict perceptions of media bias. For the issue-specific case, the reverse-causation model did not fit the data, even though issue-specific participation significantly predicted perceptions of media bias. For possibility (b), we tested two separate path models that reversed the causal direction between trust in government and perceptions of media bias, keeping the rest of the variables and relationships in place. For both general and issue-specific political participation, the reverse causal models fit the data according to the chi-square likelihood ratio test, with trust in government significantly predicting perceived media bias. Given the good fit of both models, we used the Akaike Information Criterion (AIC) to compare their relative predictive accuracy, where a lower value indicates better accuracy (Kaplan, 2009). Our original general participation model exhibited better predictive accuracy (AIC = 130.25) than the reverse-causation general participation model (AIC = 131.15). Likewise, our original issue-specific participation model had better predictive accuracy (AIC = 130.37) than the reverse-causation issue-specific model (AIC = 133.21).
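One way to make the AIC comparison concrete is to convert the reported values into Akaike weights, the relative support each model receives within the pair. This is an illustrative extension, not an analysis reported in the article; the AIC values are the ones given in this note.

```python
import math

def akaike_weights(aics: list[float]) -> list[float]:
    """Akaike weights: relative support for each model in the candidate set."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Each pair: (original model, reverse-causation model), AICs as reported above.
general  = akaike_weights([130.25, 131.15])
specific = akaike_weights([130.37, 133.21])
print(general)   # original general model carries roughly 61% of the weight
print(specific)  # original issue-specific model carries roughly 80% of the weight
```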

8. The differences in item wording and measurement scales between the two data sets are particularly notable for the measures of trust, efficacy, and political participation (especially as measured prospectively vs. retrospectively). In spite of these differences, the stability or instability of relationships among these measures and their interitem correlations should indicate whether they tap the same underlying constructs (Chaffee, 1991). To explore this possibility, we compared the zero-order correlations between participation and our key variables of interest (perceptions of media bias, trust in government, and efficacy) and concluded that the relationships were similar and stable across the two data sets. One exception, the correlation between trust in government and participation, differed in magnitude and significance (though not direction); this is not entirely surprising given the instability of these relationships in past research, as noted in our overview of the existing literature.
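The note describes this cross-sample comparison qualitatively. One conventional way to formalize it, which the authors do not claim to have used, is Fisher's r-to-z test for correlations estimated in independent samples; the correlations below are hypothetical placeholders, with sample sizes taken from the table notes.

```python
import math
from scipy import stats

def compare_independent_correlations(r1: float, n1: int, r2: float, n2: int):
    """Fisher r-to-z test for two correlations from independent samples."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transformation
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                # two-tailed p value
    return z, p

# Hypothetical correlations, purely for illustration.
z, p = compare_independent_correlations(r1=0.12, n1=360, r2=0.18, n2=435)
print(round(z, 2), round(p, 3))
```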

Additional information

Notes on contributors

Shirley S. Ho

Shirley S. Ho (Ph.D., University of Wisconsin-Madison, 2008) is an Assistant Professor in the Wee Kim Wee School of Communication and Information at Nanyang Technological University, Singapore. Her research focuses on media and public opinion in the context of controversial science and public health.

Andrew R. Binder

Andrew R. Binder (Ph.D., University of Wisconsin-Madison, 2010) is an Assistant Professor in the Department of Communication and Associate Director of the Project on Public Communication of Science and Technology (PCOST) at North Carolina State University. His research focuses on public opinion, the interplay between science and politics, and risk communication.

Amy B. Becker

Amy B. Becker (Ph.D., University of Wisconsin-Madison, 2010) is an Assistant Professor in the Department of Mass Communication and Communication Studies at Towson University. Her research focuses on public opinion, citizen participation, and the political effects of exposure and attention to political entertainment.

Patricia Moy

Patricia Moy (Ph.D., University of Wisconsin-Madison, 1998) is the Christy Cressey Professor of Communication at the University of Washington. Her research focuses on how mass media and interpersonal communication shape public opinion and political behavior.

Dietram A. Scheufele

Dietram A. Scheufele (Ph.D., University of Wisconsin-Madison, 1999) is the John E. Ross Chaired Professor and Director of Graduate Studies in the Department of Life Sciences Communication at the University of Wisconsin-Madison. His research deals with public opinion on emerging technologies and the political effects of mass communication.

Dominique Brossard

Dominique Brossard (Ph.D., Cornell University, 2002) is an Associate Professor and Director of Undergraduate Studies in the Department of Life Sciences Communication at the University of Wisconsin-Madison. Her research focuses on strategic communication and public opinion, particularly in the context of controversial science.

Albert C. Gunther

Albert C. Gunther (Ph.D., Stanford University, 1987) is a Professor in the Department of Life Sciences Communication and the School of Journalism and Mass Communication at the University of Wisconsin-Madison. His research focuses on the psychology of the mass media audience, often in the context of scientific controversies or public health.
