
Electronic vs. Paper Voter Guides and Citizen Knowledge: A Randomized Controlled Trial

Pages 291-303 | Published online: 30 May 2020
 

ABSTRACT

Scholars are generally aware of a lack of citizen knowledge about politics in the United States. One commonly suggested remedy is the production of nonpartisan voter guides, which give voters a simple, low-cost source of information. One of the more significant variations across states is whether the voter guide is provided in hard copy or distributed on the internet. This paper presents the results of a randomized controlled trial comparing a group of registered voters in Utah who received a hard-copy paper voter guide against a group of registered voters who received a postcard inviting them to view their voter guide online. Results show few statistically significant differences in knowledge and turnout between the paper guide group and the postcard group, and those that are observed slightly favor the postcard/online group.

Notes

1. While those perceptions are of state-produced voter guides generally, Gill (2014) raises important concerns about implicit racial and gender biases in the judicial performance evaluations sometimes included in voter guides for the evaluation of candidates in judicial retention elections.

2. Note that this study was sponsored and run by Ballotpedia staff; it is not a Wikipedia-type crowdsourced information source.

3. This figure is based on personal communication with the Utah Elections Division.

4. We acknowledge, though, that a contemporary digital divide is developing between those with high-speed, high-bandwidth connections and those whose internet experience is markedly different due to slow speeds.

5. A brief word is in order about our selection criteria. Potential participants were selected at random from the Utah Voter File as it was constituted on August 28, 2012. The file contains “inactive voters” who haven’t voted in the last several election cycles and have failed to respond to a query from the Lieutenant Governor’s office. Many of these inactive voters have likely moved. Additionally, not all voters give a phone number when they register. We randomly sampled the 6,955 potential participants from active voters with phone numbers. Excluding inactive voters and voters without phone numbers in the voter file has some implications for generalizability (indeed, it is even possible that inactive voters or voters who are harder to contact could benefit more from voter guides than the general population would). However, the composition of the two groups is still comparable because both are drawn at random from the same population: we are comparing active voters with valid phone numbers who received the postcard treatment with active voters with valid phone numbers who received the old paper guide treatment. Because of the difficulty of distinguishing between mailing addresses and physical addresses, residents who live in ZIP codes where all mail delivery is through PO Boxes (generally very rural areas) were excluded from the sampling frame.

6. The telephone survey was conducted after the election because a preelection phone call could well have caught many voters before they had reviewed their voter guide. The only way to guarantee that everyone sampled who intended to review the guide before voting had actually done so was to wait until after the election. The downside of this approach is that people could have forgotten the information they learned from the voter guide by the time they took our survey. It was for this reason that we worked to field the survey quickly after the election. As a robustness check to ensure that voter forgetfulness is not an issue, we performed a z-test for difference of proportions on each of our knowledge items, comparing people who responded to the survey within 3 days of Election Day with those who responded later. Our results show no statistically significant differences between those who responded very close to Election Day and those who responded later. We repeated this with the split at 5 days and again found no statistically significant differences between the groups. This suggests that levels of knowledge did not decay in a meaningful way across the 10-day time span. What’s more, even if such effects were occurring, they wouldn’t affect our core conclusions about the relative effectiveness of online versus paper voter guides unless memory decayed more rapidly for people who consumed their voter guide in one format than in the other (something there is no reason to believe would occur).
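To make the procedure concrete, a minimal sketch of that two-proportion z-test follows, using statsmodels in Python; the response counts shown are hypothetical placeholders, not the study’s data.

```python
# Two-proportion z-test, as in this note's robustness check.
# The counts below are hypothetical placeholders, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

correct = [312, 298]  # correct answers: within 3 days of Election Day vs. later
totals = [430, 420]   # total respondents in each timing group

z_stat, p_value = proportions_ztest(count=correct, nobs=totals,
                                    alternative='two-sided')
print(f"z = {z_stat:.3f}, p = {p_value:.3f}")
# A p-value above .05 is consistent with no meaningful knowledge decay
# between early and late respondents.
```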

7. A comprehensive look at the representativeness of the sample is not feasible because we only have extensive data on those who responded to the survey. However, we can evaluate representativeness for age and previous voting. For age, our sample had a mean of 51.3 years, relative to an average of 46.5 in the full voter file and 48.9 for active voters with phone numbers in the file; our sample is slightly older than those benchmark populations. For previous voting, we found that 65.8% of our sample voted in the previous election, compared with only 43.9% of all registered voters and 54.9% of active voters with phone numbers. The low participation in the overall file is attributable in large part to extremely low participation among “inactive” voters; the figure for active voters with phone numbers is a much better benchmark, but it still lags behind participation in our sample. This is likely due to the tendency of the more politically active to agree to participate in surveys. While these differences are important to note, the high internal validity of the experimental design means we can still be confident in our results within the sample we are studying. The external validity of experiments can be somewhat lower depending on the magnitude of the difference in effects for individuals not represented or underrepresented in the experimental sample.

8. Specifically, this power analysis was calculated using the G*Power software (Faul, Erdfelder, Lang, & Buchner, 2007) and assumes a two-tailed test with a 95% level of confidence and 80% power, with one of the two groups having a percentage at 70% (or, equivalently, 30%), the other being 7.5 percentage points higher (or, for the 30% case, 6.8 points lower), and 522 subjects in each group. The 70% threshold is selected because all of our outcome measures except one (the percentage recognizing the state treasurer on the ballot) are at or beyond that range and thus achieve that level of power. Percentages over 70% (or under 30%) have slightly more statistical power, while percentages between 30% and 70% have somewhat lower statistical power.
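For readers without G*Power, the calculation can be approximated in Python with statsmodels. The sketch below uses this note’s scenario (70% vs. 7.5 points higher, 522 per group); because it relies on the arcsine effect-size approximation rather than G*Power’s exact formula, the result differs slightly from the reported 80%.

```python
# Power for detecting a 7.5-point difference from a 70% baseline,
# 522 subjects per group, two-tailed test, alpha = .05 (note 8's scenario).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.775, 0.70)  # Cohen's h (arcsine transform)
power = NormalIndPower().power(effect_size=effect, nobs1=522,
                               alpha=0.05, ratio=1.0,
                               alternative='two-sided')
print(f"power = {power:.2f}")  # about .79 under this approximation
```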

9. More specifically, G*Power indicates that 80% power to detect effects as small as .2 standard deviations can be achieved using a two-tailed test with a 95% level of confidence for sample sizes as small as n = 400 in each group.
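A companion check of this claim, again using statsmodels as a stand-in for G*Power:

```python
# Power for a standardized effect of .2 SD with 400 subjects per group,
# two-tailed test, alpha = .05 (the scenario described in note 9).
from statsmodels.stats.power import NormalIndPower

power = NormalIndPower().power(effect_size=0.2, nobs1=400,
                               alpha=0.05, ratio=1.0,
                               alternative='two-sided')
print(f"power = {power:.2f}")  # about .81, just above the 80% target
```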

10. The reader may wonder if, in addition to these objective measures of knowledge, we asked whether the voter actually read the voter guide. We did ask such a question but have very serious reservations about it. First, respondents face a serious social desirability pressure to say that they did read the guide, so it is all but certain that self-reports overestimate actual reading. This becomes more problematic when one considers a second issue: someone who received the paper guide might feel inclined to say they “read” the voter guide if they browsed a page or two at some point while it was sitting in their kitchen or while they walked from the mailbox to the trash can. A respondent in the postcard group has no such “easy out”; those respondents could only say they “read” the voter guide if they actually made the effort to go to the website. Because respondents’ propensity to respond accurately thus affects the paper and postcard groups asymmetrically, there is a high risk of bias in estimates of this quantity. With those limitations in mind, 57.3% of paper guide respondents indicated that they read their voter guide, while only 35.8% of respondents in the postcard group did so (n = 1,066). Nevertheless, as a robustness check, we ran a set of z-tests for difference of proportions comparing just self-reported paper guide readers and self-reported postcard guide readers. When comparing only self-reported readers from the two groups, we find no statistically significant differences, with the exception of recalling a gubernatorial candidate, where recall is 7.6 percentage points higher among the postcard/online group than among the paper group (a result identified in the full data as well). These results reinforce the core finding of the paper: there are few differences between the postcard/online group and the paper guide group, and the differences that are identified tend to favor the postcard/online group.

11. We obtained the state voter file on Nov. 27, 2012, after the state’s official canvass had been completed.

12. We performed several robustness tests on our data as well. Specifically, we replicated the analyses below with probit models that also controlled for education and gender and found no substantive difference between those results and the results presented here. We also replicated the analyses below using only respondents with complete data on all of the questionnaire items. The only statistically significant relationships we found indicated that recalling a gubernatorial candidate’s name was more likely for postcard group members (as was the case in the full data set) and that recognizing that a state house race was on the ballot was slightly more common for the postcard/online group. These robustness checks reinforce our findings from the original data.

13. The vast majority of citizens recalled the name of the incumbent governor, Gary Herbert, who sought reelection in 2012.

14. We also replicated these assessments using scores from a factor analysis of the 8 items (and of the 7-item test as well) instead of the additive index approach. Both the 7- and 8-item factor analyses produced one-factor solutions by the Kaiser and scree criteria. We found no substantively meaningful difference in the outcomes.
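As an illustration of this check, the sketch below applies the Kaiser criterion and extracts one-factor scores on simulated stand-ins for the 8 binary knowledge items; the data are synthetic, not the study’s.

```python
# Kaiser criterion and one-factor scores on simulated stand-ins
# for the 8 binary knowledge items (synthetic data, not the study's).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ability = rng.normal(size=1000)  # latent political knowledge
items = (ability[:, None] + rng.normal(size=(1000, 8)) > 0).astype(float)

# Kaiser criterion: retain factors whose correlation-matrix
# eigenvalues exceed 1; a one-factor structure yields one such value.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
print("factors retained (Kaiser):", int((eigvals > 1).sum()))

# Factor scores from a one-factor model, an alternative to the
# additive (sum-the-items) index.
scores = FactorAnalysis(n_components=1).fit_transform(items)
print("first few factor scores:", scores[:3].ravel())
```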

Additional information

Funding

This work was supported by the Utah Lieutenant Governor’s Office, Elections Division.

Notes on contributors

Damon M. Cann

Damon M. Cann is Professor of Political Science at Utah State University. He conducts research and teaches on themes relating to representation and elections.

