ABSTRACT
An important problem voters face is that they frequently encounter unfamiliar candidates and policies during elections. The Internet provides a solution to this problem by allowing voters to access vast amounts of information using communication technologies like laptops and smartphones. However, the online environment is “noisy,” containing information both relevant and irrelevant to any given query. Existing research has not examined whether voters are able to discriminate between relevant and irrelevant political information during online search and how this discrimination ability influences voting decisions. We conducted a preregistered experimental study (N = 128; 64 younger participants and 64 older participants) in which we created our own search engine and webpages about political candidates to examine people’s discrimination ability during search. We found that people’s ability to discriminate between relevant and irrelevant facts during search increased the likelihood that their later vote choices were influenced by relevant (instead of irrelevant) information. In addition, older and younger adults’ discrimination abilities did not differ between searches on smartphones or laptops. Our findings demonstrate a new way to integrate theories of political behavior and communication technology and highlight information search in “noisy” online environments as an important problem faced by voters in democracies.
Disclosure statement
No potential conflict of interest was reported by the authors.
Supplementary material
Supplemental data for this article can be accessed online at https://doi.org/10.1080/19331681.2023.2194881
Notes
1. Some elections will of course feature more information about candidates (both relevant and irrelevant) than others. Low-information elections will likely offer less total information about candidates, but among the results people encounter when searching online for candidates in those races, some information will be relevant (about the candidate searched for) and some will be irrelevant, even if little overall information is available compared with higher-profile elections.
2. Beyond a particular voting decision, this account also suggests that irrelevant information can influence people’s broader opinions about candidates that can persist beyond the period of a campaign.
3. Prior to data collection, we preregistered the hypothesis, design, and analytic strategy of the study (https://osf.io/uwmm8/).
4. Our preregistration document stated a target of 120 participants; we intended to state 128 in order to have an equal number of participants across our counterbalancing conditions.
5. Participants recruited as part of the group “50 years of age or older” are considered older adults for the purposes of these analyses while those not in this group are considered younger adults. See the Supplementary Information for more information about the sampling frame.
6. More detailed information on the procedure is contained in the Supplementary Information.
7. In addition to the positive and negative articles about candidates, participants also encountered pages associated with neither positive nor negative information in the search engine’s results. These “neutral” results resembled popular online directory sites (e.g., WhitePages, Spokeo) that are likely to appear in search results for first and last names and contained no information that could aid voters in their vote choices.
8. Data and materials available on request.
9. To calculate discrimination ability, we extracted the amount of time participants spent on each webpage while searching for information in a given voting trial directly from the backend of the Sagamore search engine we created. A participant was considered to have spent time on a webpage if they clicked on the result representing the page in the search results and were taken to the corresponding webpage. Times spent on visited pages were captured in milliseconds.
10. The results are substantively similar if we use the median amount of time participants spent on the target and misleading candidates’ websites (see Table S1, Model 2).
Additional information
Notes on contributors
Ryan C. Moore
Ryan C. Moore is a PhD candidate in the Department of Communication at Stanford University.
Jason C. Coronel
Jason C. Coronel is an Associate Professor at The Ohio State University's School of Communication.
Olivia M. Bullock
Olivia M. Bullock, PhD (Ohio State University), is an Assistant Professor in the Department of Organizational Sciences and Communication at The George Washington University.
Samuel Lerner
Samuel Lerner is a cybersecurity researcher at Red Balloon Security. He received his BS and MS in computer science at The Ohio State University.
Michael P. Sheehan
Michael P. Sheehan, MD, is a second-year general surgery resident at Summa Health.