Discussion Paper

Unraveling 2016: Comments on Gelman and Azari's 19 Things

1. Introduction

Scholars, pundits, and wonks will be studying the 2016 election for a long time. The sheer number of unprecedented elements of the 2016 U.S. elections produced some shock fatigue and left even seasoned election watchers scratching their heads (Fallows 2017). Drawing on insights from data science, statistics, and political science, Julia Azari and Andrew Gelman identify an impressive 19 potentially productive threads to pull on in our attempt to unravel the mysteries of 2016.

There are so many features of the 2016 election that strayed from the status quo that, like a spoiled experimental design, it is challenging for scholars to explain exactly why the election turned in the surprising ways it did. To name just a few, 2016 included the first female major party candidate, the first modern election with evidence of undue foreign influence, the first election with a nominee who had no government or military experience of any kind, and the list goes on.

While some may find the Gelman–Azari treatment dissatisfying for being too shallow on any individual point, too contrived, or simply too long a list, I submit that their holistic approach to breaking down the oddities of 2016 is necessary given the circumstances. Here, I focus on four of the items on their list: two that I find worth underscoring and strongly deserving of further exploration, and two that are perhaps too complex to pursue, even if perfectly valid.

2. Threads Worth Tugging

Two of the 19 Things (#6 and #9) seem particularly worthy of emphasis, if only to pile on and offer encouragement for deeper investigation.

In noting that news is presented and consumed in silos, the authors point to work by Pablo Barberá, who observes that when one's social network is made up of “weak ties” (e.g., social acquaintances, former friends, or people whom you know, but not very well), one is more likely to be exposed to new information (Granovetter 1973; Barberá 2015a, 2015b). In theory, then, social media participation should expose people to a greater breadth of political viewpoints and information, and make them less likely to encounter only extreme information. Where else but in your social media feed are you likely to interact with that girl you went on a date with sophomore year, or your second cousin from Modesto? The influx of material from these weak social ties is precisely what should expose us to new information.
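The weak-ties intuition can be made concrete with a toy network. The sketch below is purely illustrative (hypothetical people and edges, using the networkx library, and not drawn from Barberá's data): two tightly knit clusters of strong ties are joined by a single acquaintance-level tie, and any information passing between the clusters has to cross that one weak bridge.

```python
# Toy illustration of Granovetter-style weak ties (hypothetical network).
import networkx as nx

G = nx.Graph()

# Two dense clusters of strong ties (close friends know each other).
cluster_a = ["Ann", "Bo", "Cal", "Dee"]
cluster_b = ["Eli", "Fay", "Gus", "Hal"]
for group in (cluster_a, cluster_b):
    for i, u in enumerate(group):
        for v in group[i + 1:]:
            G.add_edge(u, v)  # strong, within-cluster tie

# One weak tie bridging the clusters (the acquaintance from Modesto).
G.add_edge("Dee", "Eli")

# Bridges are edges whose removal disconnects the graph. Here the weak tie
# is the only channel through which new information can cross clusters.
print(list(nx.bridges(G)))  # -> the single Dee-Eli edge
```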

However, as the authors allude, social network platforms like Facebook, Twitter, and Instagram may not operate as unconstrained networks. In other words, if there were no exogenous restrictions, limits, or rules about how ties form in these networks, then the theoretical properties that Granovetter, Barberá, and others hypothesized would be present. But the companies that run these platforms are profit-seeking. They use complicated and proprietary algorithms to generate feeds and to encourage, or discourage, particular connections. Given what we know now about the presence of bots, fake accounts, and contrived promotions on these platforms, we should absolutely not expect these networks to behave as they would in theory (Leonnig, Hamburger, and Helderman 2017; Howard et al. 2017). The artificial restrictions placed on social media exposure by network managers, whose interests may conflict with protecting natural network properties (e.g., Mark Zuckerberg, Vladimir Putin), mean that social media consumption is highly structured and contrived in ways that benefit those who control the levers, not the users. This aspect of the election is still under investigation, and we do not yet know how much it affected outcomes; we do know, however, that social media exposure affects voter turnout and vote choice (Bond et al. 2012). Keep pulling this thread.

The second thread worth emphasizing is the authors' point that most elections can be accurately forecast months before Election Day by studying the so-called “fundamentals” (#9). In 2016, the fundamentals models pointed to the likelihood of a narrow Republican win because (1) the incumbent party was seeking a third term in office, (2) the Democratic incumbent had mediocre approval ratings in the run-up to the election, and (3) economic indicators suggested the aggregate economy was limping. These three features are known to predict election outcomes with relative accuracy, and they are entirely independent of candidates, campaigns, gaffes, or hot mics. The models based solely on the fundamentals suggested a tight race with a narrow Republican edge (Campbell 2016).
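At its core, a fundamentals-only forecast is a small regression of this kind. The sketch below is not Campbell's or any other published model; it fits a hypothetical linear model of the incumbent party's two-party vote share on presidential approval and economic growth (the numbers are illustrative, not historical data), then produces a point forecast with an interval, which is all the fundamentals approach requires.

```python
# Minimal sketch of a fundamentals-only forecast.
# Hypothetical, illustrative data; not any published model or real elections.
import numpy as np
import statsmodels.api as sm

# Past "elections": incumbent-party approval (%), GDP growth (%),
# and incumbent-party share of the two-party vote (%).
approval = np.array([57, 40, 55, 49, 45, 52, 48])
growth = np.array([3.5, 0.8, 2.9, 1.6, 1.2, 2.4, 1.9])
vote = np.array([54.7, 46.5, 53.8, 51.2, 48.8, 52.0, 51.1])

X = sm.add_constant(np.column_stack([approval, growth]))
model = sm.OLS(vote, X).fit()

# A 2016-style scenario: mediocre approval, sluggish growth.
x_new = sm.add_constant(np.array([[48, 1.5]]), has_constant="add")
pred = model.get_prediction(x_new).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
```

Nothing in this sketch depends on the candidates themselves, which is exactly the point: the inputs are fixed well before the conventions.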

However, given a Republican nominee so unusual, and such a complete political novice, few took the fundamentals models at face value. Even as political science prognosticators looking at the fundamentals saw the likelihood of a Republican victory, they appeared to walk these expectations back after Trump became the nominee: “These early signs and readings of the context in which the campaigns will be run would suggest that 2016 was shaping up with a tilt to the Republicans until they nominated the bombastic Donald Trump” (Campbell 2016). Experts appeared to be flabbergasted by the prospect of a Trump win, not because of some ideological or partisan preference, but because no candidate like Trump had ever won (see note 1). This “motivated reasoning” among pundits and scholars led to a sort of denial about the full range of possible outcomes on election night. We did not believe many of our own models. The natural human cognitive bias to explain away disconfirming information when we feel we have good reason to dismiss it hit experts particularly hard this past election season. One lesson is to develop greater awareness of our susceptibility to these cognitive errors. Another, as the authors suggest in #3, is to present election forecasts with simpler estimates and confidence intervals, or with more compelling visualizations.
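One reason simpler estimates with intervals help is that a modest vote-share margin with honest uncertainty can translate into a deceptively confident headline win probability. A small sketch (the forecast and standard errors below are illustrative assumptions, not any published 2016 forecast):

```python
# How a modest vote-share forecast becomes an (over)confident win probability.
# Illustrative numbers only.
from scipy.stats import norm

forecast_share = 52.0  # forecast two-party vote share (%)
for se in (1.0, 2.0, 3.0):  # alternative standard errors on that forecast
    p_win = norm.cdf((forecast_share - 50.0) / se)
    print(f"52% +/- {2 * se:.0f} points: win probability {p_win:.0%}")
```

The same 52% point estimate yields anything from roughly a 75% to a 98% chance of winning depending on the assumed uncertainty, which is why reporting the estimate and its interval is more honest than reporting the probability alone.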

3. Knotty Threads

Two of the 19 Things are particularly provocative and enticing, but they pose significant obstacles to anyone trying to untangle them.

Azari and Gelman point out that the “ground game” may have been overrated in 2016; however, establishing how campaign tactics at this level shaped the outcome is fraught with causal difficulty. While many expected the 2016 election to turn more on the candidates' ground games (meaning field offices, armies of volunteer canvassers, and sophisticated voter targeting backed by data science) than on the all-out air war (e.g., TV ads) of previous campaigns, in the end the candidate with superior resources in both categories lost the election. While data-driven campaigning is clearly changing the field, the hardest part of knowing which elements “mattered” is our inability to test reasonable counterfactuals in history. Would Clinton have won if she had had more field organizers or more ads in Wisconsin? It is difficult to draw such inferences, and our inability to test these counterfactuals adequately is confounded by the great number of factors in play. Since we cannot run experiments in the past, teasing out the effects of strategic campaign choices remains more art than science. And while the social and data sciences can, and have, contributed a great deal to efficient campaign strategy, candidates have very strong incentives to raise money and spend it on the latest trends promoted by the campaign consulting world. Even if social science suggests that campaign tactics may not matter much in determining the outcome, no one is going to run a campaign that ignores them (nor should they).

Second, Gelman and Azari are quick to note that the too-clever idea of forecasting elections by asking people who they think will win, rather than whom they would vote for, did not improve prediction (#4). Drawing on recent advances in political science, the authors suggest that prognosticators would do well to consider voters' social networks when trying to determine their likelihood of voting and their vote choice. This expectation is well grounded in recent evidence about voter behavior (Nickerson 2008; Sinclair 2012; Rolfe 2013; Rolfe and Chan 2017; Santoro and Beck 2017).

Voting is a socially determined activity. If we study voters only at the individual level, we are likely to draw mistaken inferences about their behavior. Voters exist in a social context, and we must include that context if we seek to understand their choices (Rolfe 2013). In practice, this means that if your friends vote, you are more likely to vote. Therefore, if I want to understand the probability that you will vote, my inferences will be stronger if I also know something about the likelihood that your friends will vote. It is easy to see why no one does forecasting this way: observations are endogenously dependent, and generating samples of voters to study poses nontrivial problems (see note 2). The complexity of gathering such data makes it prohibitively costly. While the authors are on solid ground in referencing these recent advances in our understanding of voting behavior, it is not yet clear how practitioners can incorporate this knowledge into practice.
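The practical difficulty shows up even in a toy model of socially dependent turnout. In the sketch below (a purely illustrative simulation with assumed friendship ties and an assumed peer-effect size, not an estimate from any study), each person's probability of voting shifts with how many of their friends vote, so no individual can be simulated, or sampled, in isolation from the rest of the network.

```python
# Toy simulation of socially dependent turnout (illustrative parameters only).
import math
import random

random.seed(0)

# A small hypothetical friendship network: who is tied to whom.
friends = {
    "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"],
    "D": ["B", "E"], "E": ["D"],
}

base = -0.2  # baseline log-odds of voting (assumed)
peer = 0.8   # boost in log-odds per friend who votes (assumed effect size)

def simulate(rounds=50):
    """Repeatedly update one person's turnout given friends' current choices."""
    votes = {person: random.random() < 0.5 for person in friends}
    for _ in range(rounds):
        person = random.choice(list(friends))
        voting_friends = sum(votes[f] for f in friends[person])
        p = 1 / (1 + math.exp(-(base + peer * voting_friends)))
        votes[person] = random.random() < p
    return votes

# Because each probability depends on friends' realized choices, the whole
# network has to be simulated (or observed) together; there is no way to
# forecast one voter independently of the others.
turnout = [sum(simulate().values()) / len(friends) for _ in range(200)]
print(sum(turnout) / len(turnout))
```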

4. Conclusion

Gelman and Azari have drawn together some of the most creative and important observations about the highly unusual 2016 election cycle. Scholars will continue to investigate the oddities of that election for some time, but how much insight it yields depends in part on what happens from here. The 2016 election was unusual in many ways, but what if unusual is the new normal? If the next couple of election cycles also feature weakened party structures, outsider candidates, and successful counterintuitive campaign tactics, then we will stop studying 2016 as an anomaly and start treating it as the beginning of something new.

Notes

1 Author not excluded.

2 There are significant mathematical challenges with drawing samples from networks (Gross and Jansa 2017).

References