ABSTRACT

This study investigates the interaction and messaging tactics of political Twitter bots before an election. We analyzed the strategies of influential bots seeking to affect the immigration debate before the 2018 U.S. midterm elections. Our findings reveal that the 10 most influential bots in our dataset all presented an anti-immigration viewpoint, and both posted original tweets and retweeted other bot accounts’ tweets to give a false sense of authenticity and anti-immigration consensus. Bots’ messages relied heavily on negative emotional appeals, spreading harassing language and disinformation likely intended to evoke fear toward immigrants. These accounts also employed polarizing language to entrench political group identity and provoke partisanship. Our findings clarify the interaction and messaging tactics employed by political bots and suggest potential strategies to counter their effectiveness.

Introduction

While social media platforms hold great promise for promoting an inclusive public sphere, they are simultaneously susceptible to nefarious manipulation, including rampant harassment and echo chambers that silence political debates and amplify the spread of disinformation (Akhtar & Morrison, 2019; Choi, Chun, Oh, Han, & Elbe-Bürger, 2020; Ferrara, Chang, Chen, Muric, & Patel, 2020). Political bots – fully or semi-automated accounts that produce content and engage with humans on political issues – take advantage of these weaknesses to polarize, confuse, amplify, and silence participants in political debates on Twitter (Luceri, Deb, Badawy, & Ferrara, 2019; Stella, Ferrara, & De Domenico, 2018). Political bots have been found to spread misleading, false, and hyper-partisan content intended to influence public opinion of candidates and political issues in U.S. elections, including the 2010 midterm election (Ratkiewicz et al., 2011), the 2016 presidential election (Bessi & Ferrara, 2016), the 2018 midterm election (Luceri et al., 2019), and the 2020 presidential election (Ferrara et al., 2020). Within these elections, political bots have intentionally targeted contentious political issues, including the vaccine debate (Broniatowski et al., 2018), women’s reproductive rights (Nonnecke, Martin, Singh, Wu, & Crittenden, 2019), racial equity (Arif, Stewart, & Starbird, 2018), and the COVID-19 pandemic (Ferrara et al., 2020).

While the prevalence of political bots is well documented, their interaction strategies (e.g., tweeting, liking, sharing) and messaging tactics (e.g., disinformation narratives or use of tropes to polarize voters) targeting politically contentious issues remain less understood (Grover, Bayraktaroglu, Mark, & Rho, 2019). Our research investigates the prevalence of political bots targeting the immigration debate before the 2018 U.S. midterm election and provides a deeper understanding of political bots’ interaction and messaging tactics.

Political Bots, Trolls, & Troops

The spectrum of social bots ranges from simple bots that perform single automated engagements (e.g., liking posts), to partially human-controlled bots (i.e., cyborgs) that incorporate automated elements (e.g., human posting of content with automated liking), to more advanced bots that utilize artificial intelligence to craft messages and engagement tactics (Assenmacher et al., 2020; Gorwa & Guilbeault, 2020). Political bots, a subcategory of social bots, share content and engage users specifically on political issues (Woolley & Howard, 2016). While many of a bot’s features may be automated (e.g., retweeting and liking posts), other features, such as posting content or replies, may be completed manually by a human. In doing so, these accounts can avoid bot-detection algorithms that may flag fully automated behavior as ‘malicious’ (Gorwa & Guilbeault, 2020).

Social media trolls – individuals that “misrepresent their identities with the intention of promoting discord” – have also been found to spread disinformation intended to influence political behavior (Broniatowski et al., 2018, p. 1378). Most definitions of trolling share a common underlying framing of “behavior intended to provoke a reaction in another user” (Ortiz, 2020, p. 2). Ortiz expands upon this definition through an analysis of how social media users, including self-identified trolls, define trolling. Taking a more antagonistic framing, Ortiz defines trolling as “a form of harassment with the malicious intent to provoke another user” (Ortiz, 2020, p. 6).

Bradshaw and Howard (2017) explore the prevalence of “cyber troops” composed of bots and trolls that seek to harass and manipulate public opinion for political gain. These “troops” are often deployed to flood social media or target specific individuals with harassment campaigns. Such campaigns have been documented in Mexico, where journalists have been targeted by pro-government cyber troops (O’Carroll, 2017); in Azerbaijan, where harassment campaigns deployed by IRELI Youth have had a documented chilling effect on political discourse online (Pearce & Kendzior, 2012); and in South Korea, where smear campaigns targeted opposition parties before the 2012 presidential election (Sang-Hun, 2013).

Computational Propaganda

Political bots are a frequent feature of computational propaganda campaigns – the use of algorithms, automation, and human curation to purposefully distribute misleading information (Ferrara, Varol, Davis, Menczer, & Flammini, 2016; Howard, Woolley, & Calo, 2018; Woolley & Howard, 2018). These campaigns are often “developed and deployed in sensitive political moments when public opinion is polarized” (Kollanyi, Howard, & Woolley, 2016, p. 1). Political bots are adeptly deployed to target this vulnerability, spreading misleading and vitriolic content to sow fear, confuse readers, silence opponents, and artificially amplify or suppress narratives on politically contentious topics (Bello & Heckel, 2019; Nonnecke et al., 2019; Woolley & Howard, 2018).

In the lead-up to the 2016 U.S. presidential election, it was estimated that over 1.4 million Twitter users interacted with Russian-controlled political bots by “retweeting, quoting, replying to, mentioning, or liking” their tweets (Twitter, 2018). A rich body of research has emerged on political bots’ targeting of contentious political issues in order to polarize voters along ideological fault lines, including in the vaccine debate (Broniatowski et al., 2018), women’s reproductive rights (Nonnecke et al., 2019), racial equity (Arif et al., 2018), and the COVID-19 pandemic (Ferrara et al., 2020).

Computational propaganda campaigns often deploy political bots that target both sides of a contentious political issue in order to deepen partisanship. In their analysis of Russian-controlled bots’ targeting of the vaccine debate in the U.S., Broniatowski et al. (2018) identified both pro- and anti-vaccine bot accounts. Nonnecke et al. (2019) found both pro-life and pro-choice bots targeted the women’s reproductive rights debate in the months before the 2018 U.S. midterm election. In both studies, bots employed strategic tactics to mobilize and polarize voters.

Theoretical Framework

Political Bots & the Public Sphere

Online public spheres, such as social media platforms, are increasingly playing a central role in political discourse and deliberation (Keller & Klinger, 2019). However, the presence of political bots jeopardizes their value (Mitter, Wagner, & Strohmaier, 2014). Drawing upon Ferree, Gamson, Gerhards, and Rucht’s (2002) four normative models of the public sphere in democracies, Keller and Klinger (2019) explore key problems bots pose for each normative model. In the (1) representative-liberal model, the public sphere is “an elite-dominated, free, and transparent forum” where there is a “recurring exchange of political elites in elections” (ibid., p. 173). In this model, bots undermine the value of the online public sphere by creating a false sense of popularity for certain candidates or issues. In the (2) participatory-liberal model, where the public sphere is envisioned as a space for “public discourse that seeks to achieve maximum popular inclusion” (ibid., p. 174), bots can inauthentically popularize or suppress certain viewpoints. In the (3) discursive model, where conflict resolution and decisions are made through rigorous debate, bots threaten to undermine the deliberative process because they “have no intention to understand or consider others’ opinions” (ibid., p. 174). Last, in the (4) constructionist model, a well-functioning public sphere seeks to “give voice to the marginalized” (ibid., p. 174). Bots threaten this by creating inauthentic participants in the online public sphere, especially when those inauthentic participants represent marginalized groups.

Keller and Klinger (2019) note that bots’ tactics in the public sphere can be passive, where bots follow each other to give a false sense of popularity without engaging with or contributing to content, and active, where bots like, retweet, or create their own content through comments and posts. We expand on this work to further define and understand political bots’ active tactics by differentiating interaction tactics (e.g., tweeting, liking, retweeting) from messaging tactics (e.g., use of disinformation narratives or harassing language to polarize voters). We approach our analysis from the viewpoint of the participatory-liberal model and seek to understand how bots’ active tactics (i.e., their interaction and messaging tactics) are deployed to popularize or suppress viewpoints. In doing so, we seek to deepen understanding of how bots’ interaction and messaging tactics manifest in a politically contentious issue (i.e., immigration) and the potential effects of these tactics on the online public sphere.

Literature Review

Political Bots’ Interaction & Messaging Tactics

Political bots use a variety of interaction and messaging tactics intended to influence the public sphere. Bots have been found to exploit social media platforms’ interaction capabilities (i.e., posting, liking, sharing) to make themselves appear more human-like and authentic, including by strategically tweeting coordinated message campaigns and liking and retweeting certain tweets to give a false sense of ‘grassroots’ consensus (Luceri et al., 2019; Lukito et al., 2019). Political bots also propagate messages containing disinformation, harassment, and divisiveness to polarize and stymie healthy political debate (Kollanyi et al., 2016).

To identify and catalog Twitter political bots’ interaction tactics, Luceri et al. (2019) monitored the activity of almost 245,000 Twitter accounts engaged in political discussion during the 2016 U.S. presidential election and the 2018 U.S. midterm election. Using the Botometer API, they identified that approximately 13% of the accounts studied, roughly 31,000 accounts, were bots. Through a retweet network analysis, Luceri et al. (2019) found that conservative and liberal political bots embed themselves within their political factions and seldom interact with accounts on the opposite end of the political spectrum. Conservative bots were more likely than liberal bots to retweet other bots, and conservative human accounts were more likely to interact with conservative bots than liberal human accounts were with liberal bots (ibid.).

In their comparison of political bots’ interaction tactics between the 2016 U.S. presidential election and the 2018 U.S. midterm election, Luceri et al. (2019) found that bots decreased their retweeting and increased their direct tweeting of duplicated or slightly modified content (i.e., multiple bots tweeted the same or very similar messages). Both tactics were likely employed to make bots’ actions appear more like grassroots efforts and to evade bot detection tools (ibid.).

By tweeting duplicated or slightly modified content, bots can amplify certain narratives by flooding a platform with posts, or they can strategically retweet certain content to give a false sense of grassroots consensus. This practice, referred to as ‘astroturfing’ – the “practice of masking the sponsors of a political message or events to make it appear as though it originates from or is supported by grassroots participants” – can reinforce and normalize fringe political ideologies and disinformation by giving an impression of widespread consensus that encourages individuals to coalesce around a shared narrative (Al-Rawi & Rahman, 2020; Bay & Fredheim, 2019; Bello & Heckel, 2019).

Bots’ use of astroturfing can successfully convince authentic users that a given perspective has widespread consensus and, in turn, influence authentic users to aid its spread. The deceptive virality and apparent authenticity of tweets lead authentic users to amplify the effectiveness of astroturfing campaigns by rebroadcasting tweets that are sensational or have high “like” and “retweet” counts, inadvertently assuming that an account’s passionate perspective on an issue or high retweet counts indicate authenticity and public support (Bay & Fredheim, 2019; Lukito et al., 2019). Astroturfing can solidify authentic users’ positions and encourage individuals to support inauthentic political campaigns that they perceive – because of their viral nature on social media – to be a dominant perspective in society (Chung, 2019; Schmitt‐Beck, 2015). This phenomenon, referred to as the ‘bandwagon effect,’ is common on social media (Lee, Ha, Lee, & Kim, 2018).

Political bots have been found to employ various messaging tactics intended to take advantage of the ‘bandwagon effect’ to influence public opinion and polarize voters. Primary among them is the spreading of disinformation, divisiveness, and harassment in an attempt to “manipulate public opinion, choke off debate, and muddy political issues” (Kollanyi et al., 2016, p. 1). In their analysis of Russian-backed Internet Research Agency (IRA) Twitter bots, Freelon and Lokot (2020) identified bot messaging tactics to entrench political group identity and provoke partisanship by spreading language “vilifying political and social adversaries” (p. 2). Stewart, Arif, and Starbird (2018) confirmed the role of political IRA bots in spreading polarizing messaging on Twitter in a likely attempt to “accentuate disagreement and foster division” (p. 5).

Bots’ Targeting of the Immigration Debate

Harassment and hate speech targeting immigration on Twitter are well documented (Pitropakis et al., 2020); research investigating the role of political bots in spreading these messages is less common. Recent work (De Saint Laurent, Glaveanu, & Chaudet, 2020; Grover et al., 2019) has identified Twitter political bots’ messaging tactics in the immigration debate. In their study of pro- and anti-immigration debates on Twitter, Grover et al. (2019) identified a greater prevalence of suspected anti-immigration bots. In their analysis of 56,258 tweets, anti-immigration tweets used derogatory titles to describe immigrants, such as “aliens” and “illegals,” and expressed more negative emotion than pro-immigration tweets (ibid.). In their analysis of anti-immigration messaging on Twitter, De Saint Laurent et al. (2020) found that anti-immigration bot accounts used messaging that portrayed immigrants as dangerous and/or criminal, evoked patriotism and support for President Donald Trump, and employed threats and insults. Anti-immigration accounts were also more likely than pro-immigration accounts to use Twitter’s interaction features effectively to gain followers and obtain high like and retweet counts on their posts, for example through creative anti-immigration hashtags whose “linguistic creativity” encouraged retweets, likes, and followers by “offending specific social targets and signaling a particular identity online” (De Saint Laurent et al., 2020, p. 73).

Considering these findings, we pose the following research question:

RQ: What are the interaction and messaging tactics of Twitter political bots targeting the immigration debate before the 2018 U.S. midterm election?

Methods & Data

With the Twitter Search and Stream APIs, we used 22 trending hashtags focused on the migrant caravan and immigration (see Table 1) to capture 674,603 tweets from 146,460 unique handles between Oct. 22 and Nov. 2, 2018, a timeframe within which a nationwide debate was occurring on the movement of an immigrant caravan toward the United States (Jordan, 2018). This timeframe was also in close proximity to the U.S. midterm elections held on Nov. 6, 2018. Twitter’s Stream API was used to collect tweets in real time, and Twitter’s Search API was used to collect historical tweets within our timeframe of interest that we were unable to capture live from the Stream API.

Table 1. Trending hashtags on the migrant caravan and immigration used to collect tweets.
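The paper does not name the tooling used to query the Search and Stream APIs. As a rough illustration of the back-fill step only, the sketch below uses the Tweepy client (assumed here, version 4; not named by the authors) to page through historical search results; the credentials, the three hashtags, and the record fields are hypothetical placeholders, and the full hashtag list appears in Table 1.

```python
import datetime as dt
import tweepy  # assumed client library; the authors do not name their tooling

# Hypothetical credentials.
auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Illustrative hashtags drawn from tweets quoted later in the paper;
# the actual 22 trending hashtags are listed in Table 1.
HASHTAGS = ["#TheCaravan", "#StopTheCaravan", "#BuildTheWall"]
query = " OR ".join(HASHTAGS)

START = dt.datetime(2018, 10, 22)
END = dt.datetime(2018, 11, 2, 23, 59, 59)

# The standard Search API only reaches back about a week, so this sketch
# back-fills recent history missed by the real-time stream, as described above.
tweets = []
for status in tweepy.Cursor(api.search_tweets, q=query,
                            tweet_mode="extended").items(10_000):
    created = status.created_at.replace(tzinfo=None)
    if START <= created <= END:
        tweets.append({
            "id": status.id,
            "handle": status.user.screen_name,
            "text": status.full_text,
            # Who was retweeted, if anyone, for the retweet network later on.
            "retweeted_user": (status.retweeted_status.user.screen_name
                               if hasattr(status, "retweeted_status") else None),
        })
```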

This research used a combination of social network analysis, bot detection, and qualitative coding of tweets. Rather than analyzing all 674,603 tweets collected, we prioritized analyzing tweets from the most influential bot accounts in the network. To identify these accounts, we used the “betweenness centrality” score of each node in the retweet network, which captures accounts that have a high degree of influence over the information flow in the network (Schuchard, Crooks, Stefanidis, & Croitoru, 2019). Nodes with a high betweenness centrality score appear at the center of the network, indicating that they are frequently retweeted and influence the flow of information in the network.
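A minimal sketch of this ranking step, assuming NetworkX and tweet records shaped like those in the collection sketch above; the example records, sampling parameter, and candidate cutoff are illustrative rather than the authors’ settings.

```python
import networkx as nx

# Hypothetical tweet records; in practice these come from the collected dataset.
tweets = [
    {"handle": "account_a", "retweeted_user": "account_b"},
    {"handle": "account_c", "retweeted_user": "account_b"},
    {"handle": "account_b", "retweeted_user": None},
]

# Directed retweet network: an edge A -> B means account A retweeted account B.
G = nx.DiGraph()
for t in tweets:
    if t["retweeted_user"]:
        G.add_edge(t["handle"], t["retweeted_user"])

# Betweenness centrality scores a node by how often it lies on shortest paths
# between other nodes, a proxy for influence over information flow.
# Sampling k source nodes keeps the computation tractable on large graphs.
centrality = nx.betweenness_centrality(G, k=min(1000, len(G)))

# Rank accounts so the most central ones can be passed to bot detection.
ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
top_candidates = [handle for handle, _ in ranked[:100]]
```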

Accounts with a high betweenness centrality score were run through Botometer, a bot detection tool created by the Observatory on Social Media at Indiana University, to identify the 10 most influential bot accounts in the network (Davis, Varol, Ferrara, Flammini, & Menczer, 2016). We selected the first 10 accounts that had a high betweenness centrality score and a score of 0.6 or higher on Botometer, which calculates the likelihood (from 0 to 1) that an account is controlled entirely or in part by software through the analysis of more than 1,000 features grouped within six feature classes: content; sentiment; timing of tweets; friends, including number of followers and followees; network characteristics, such as interactions with followers; and user metadata, such as when the account joined Twitter (ibid.). However, identifying bots and their origins is not straightforward (Benkler, Faris, & Roberts, 2018, p. 267). Social media researchers have come to rely on Botometer, but recent research has shown that its model is prone to false positive and false negative errors, a likely outcome of the arms race between bot creators and detectors (Rauchfleisch & Kaiser, 2020; Yang et al., 2019). Using the 0.6 threshold can therefore produce both false positives and false negatives. In light of this, each of the handles flagged at the 0.6 threshold was double-checked by our research team to make a final determination on whether the account exhibited bot-like behavior.
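The scoring step could look roughly like the sketch below, assuming the botometer Python client published by the Observatory on Social Media; the credentials and handles are placeholders, and the result field layout differs across Botometer API versions, so this should be read as illustrative rather than as the authors’ pipeline.

```python
import botometer  # Python client for the Botometer API (OSoMe, Indiana University)

# Hypothetical credentials; Botometer is accessed through RapidAPI.
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="RAPIDAPI_KEY",
    consumer_key="API_KEY",
    consumer_secret="API_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_SECRET",
)

THRESHOLD = 0.6
top_candidates = ["@example_handle_1", "@example_handle_2"]  # high-centrality accounts

flagged = []
for handle, result in bom.check_accounts_in(top_candidates):
    if "error" in result:  # deleted, suspended, or protected accounts return errors
        continue
    # Overall automation likelihood in [0, 1]; the exact field layout varies
    # across Botometer API versions ('scores' shown here follows v3).
    score = result.get("scores", {}).get("universal")
    if score is not None and score >= THRESHOLD:
        flagged.append((handle, score))

# As described above, flagged handles still require manual review before a final
# determination that an account exhibits bot-like behavior.
```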

In alignment with prior research on political bots’ messaging tactics (see De Saint Laurent et al., 2020; Freelon & Lokot, 2020; Grover et al., 2019; Stewart et al., 2018), we investigated the prevalence of harassment, disinformation, and politically divisive language. The Twitter Abusive Behavior and Hateful Conduct policies (Twitter, 2020) and the research literature on coding harassing language on Twitter (Kennedy et al., 2018; Sharma, Agrawal, & Shrivastava, 2018) guided our coding of tweets for whether they contained harassment. Tweets exhibiting offensive, vulgar, or aggressive insults intended to demean, humiliate, or embarrass were coded as expressing harassment. Following the UNESCO Handbook for Journalism Education and Training, we identified instances of disinformation by cross-checking claims with verification sites such as PolitiFact.com and Snopes.com (Ireton & Posetti, 2018). Politically divisive language was identified as speech intended to strengthen party cohesion and vilify the opposing political party. The 10 most influential bot accounts distributed 1,004 tweets in the retweet network. Of these, 578 were unique tweets, which were coded by individuals trained in qualitative social media analysis at the CITRIS Policy Lab, Human Rights Center, and the Graduate School of Journalism at UC Berkeley. Each tweet was coded by two individuals; where discrepancies existed, the coders discussed the coding to reach agreement.
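The coding itself was manual, but the reconciliation step (two coders per tweet, with discrepancies discussed to agreement) can be illustrated with a small sketch; the file names, column layout, and binary labels below are hypothetical assumptions, using pandas.

```python
import pandas as pd

# Hypothetical double-coded label files: one row per unique tweet, one binary
# column per category from the coding scheme described above.
CATEGORIES = ["harassment", "disinformation", "divisive_language"]

coder_a = pd.read_csv("coder_a_labels.csv", index_col="tweet_id")  # hypothetical files
coder_b = pd.read_csv("coder_b_labels.csv", index_col="tweet_id")

# Tweets where the two coders disagree on any category are routed back for
# discussion until agreement is reached, as described in the Methods.
disagree = (coder_a[CATEGORIES] != coder_b[CATEGORIES]).any(axis=1)
to_discuss = coder_a.index[disagree].tolist()

print(f"{len(to_discuss)} of {len(coder_a)} tweets need adjudication")
```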

Results

The 10 most influential bots in our dataset, identified as bot accounts with a high betweenness centrality score in the retweet network, posted 1,004 tweets, of which a little over half (n = 578) were unique. This indicates that the bots were engaging in the interaction tactic of retweeting each other’s tweets and those of shared contacts, in a likely attempt to give a false sense of popularity. In contrast to established research showing the deployment of political bots to target both sides of politically contentious issues (Broniatowski et al., 2018; Nonnecke et al., 2019), all tweets distributed by the 10 most influential bot accounts in our dataset presented an anti-immigration stance. This could be due to the greater prevalence of anti-immigration trending hashtags used to collect our dataset. However, prior research has shown that both sides of a politically contentious debate will hijack the opposition’s hashtags, so we expected to see both sides of the debate represented (see Mousavi & Ouyang, 2021).
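As an illustration of how the unique-tweet share can be computed, the sketch below deduplicates lightly normalized tweet text; the records and the normalization rule are assumptions for illustration, not the authors’ procedure.

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical copies match."""
    return re.sub(r"\s+", " ", text).strip().lower()

# Hypothetical records for tweets distributed by the top bot accounts.
bot_tweets = [
    {"handle": "bot_1", "text": "The caravan is coming #StopTheCaravan"},
    {"handle": "bot_2", "text": "The caravan is coming  #StopTheCaravan"},
    {"handle": "bot_3", "text": "Vote straight-ticket! #BuildTheWall"},
]

counts = Counter(normalize(t["text"]) for t in bot_tweets)
unique_share = len(counts) / len(bot_tweets)
print(f"{len(bot_tweets)} tweets, {len(counts)} unique ({unique_share:.0%})")

# Messages appearing more than once indicate coordinated reposting or retweeting.
repeated = [text for text, n in counts.most_common() if n > 1]
```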

Analyzing the tweets distributed by the 10 most influential bot accounts in our dataset, we identified coordinated messaging tactics to spread harassment and fear, disinformation, and politically divisive language targeting immigration as a means to polarize Democrats and Republicans and influence voting behavior in the 2018 U.S. midterm election. We next present tweets that demonstrate these three messaging tactics. Because of the potential for false positive errors in bot detection, we have anonymized the tweets presented (see Webb et al., 2017 on the ethical challenges of publishing Twitter data in research).

Prevalence of harassment & fear to polarize

Over half of the unique tweets (n = 347) contained harassing language targeting undocumented immigrants (i.e., aggressive insults that demean, humiliate, or embarrass; see Methods and Table 2 for example tweets). These tweets contained prejudiced and racist language, using pejoratives against undocumented immigrants (e.g., criminals, criminal mob, illegals, disease carriers) and making claims that members of the caravan were “murderers and rapists;” for example, “The same democrats telling you there aren’t murderers, rapist or terrorist in the invader caravan are exact same democrats who let this take place in the cities they run.”

Table 2. Examples of bot-distributed tweets expressing harassing language.

Claims intended to stoke fears that the caravan included violent criminals were rampant. An image of a bloodied Mexican police officer was widely shared in an attempt to bolster claims of the caravan’s violence. However, the New York Times debunked the image, confirming the photo was taken during a student protest in Mexico in 2012 (Roose, 2018). Numerous tweets claimed the caravan was trafficking children: “ICYM- Guatemalan officials rescued 7 unaccompanied minors smuggled inside #TheCaravan. Human traffickers & gang members are benefiting by transporting children and paying clients illegally into the US” and “Murder, $, child trafficking, & drug mules will get you over the border from Mexico … #StoptheCaravan.” The validity of the Guatemalan child trafficking rescue was not confirmed (Palma & Evon, 2018). In addition, seven tweets with over 8,000 retweets distributed statements alleging undocumented immigrants committed murder in the United States; for example, “Charged w/ attempted murder, deported felon, marching back to the US to get a lawyer & work here. Isn’t that lovely! The invaders are on their way to spend our taxes & put our citizens in danger.”

Disinformation campaigns target Democrats

Bots employed disinformation messaging tactics targeting Democrats in their tweets (see Table 3 for example tweets). The most widely retweeted disinformation campaign in our analysis claimed that Democrats and George Soros, the billionaire founder of Open Society Foundations, were funding the caravan and registering members to vote (see Figure 1). Twenty-three tweets expressed these claims and were retweeted over 9,000 times. For example, “Of course the Democrats are funding the #Caravan, @realDonaldTrump. It’s most likely #Soros behind it!” Another tweet claimed that Democrats had registered 7,000 caravan members to vote in the 2018 U.S. midterm election (see Figure 1). Claims that Democrats and George Soros were funding the caravan were quickly debunked (Qiu, 2018). However, these claims may have been bolstered by comments made by President Trump during a rally in Montana on Oct. 18, 2018, where he insinuated Democrats were paying the caravan to come to the United States to vote for their party: “A lot of money’s been passing to people to come up and try and get to the border by Election Day, because they think that’s a negative for us … They also figure everybody coming in is going to vote Democrat” (Bump, 2018).

Table 3. Example bot-distributed tweets expressing disinformation targeting Democrats.

Figure 1. Disinformation and polarizing tweets retweeted by bot accounts in our dataset.

The tweets claimed the caravan was funded by Democrats and George Soros and that Democrats were registering caravan members to vote in the 2018 U.S. midterm election.

Appeals to intergroup enmity to polarize & influence voting

Over half of the unique tweets (n = 304) contained polarizing language vilifying Democrats (see Table 4 for example tweets), such as “#INVASION! Dirty #Democrats manufactured this #Caravan crisis! Here’s the proof, watch this video! #Trump is right again!” and “WE DON’T WANT … People Winning Elections Base On Mob Rule & Violence! Do Not Give Democrats The Power in Congress Way Too Dangerous For The American People VOTE THEM OUT At MidTerms! VOTE Straight-Ticket REPUBLICAN #BuildTheWall #StopTheCaravan.” In addition, many of these polarizing tweets also included polarizing hashtags, for example: #DemocratsHateAmerica, #VoteDemsOut, #VoteDemsOut2018, and #VoteRedtoSaveAmerica. This finding aligns with De Saint Laurent et al.’s (2020) identification of bots’ use of unique hashtags to encourage retweets, likes, and followers by “offending specific social targets and signaling a particular identity online” (p. 73).

Table 4. Example bot-distributed tweets expressing polarizing language.

Discussion

Politically contentious issues will continue to be targeted by political bots on social media in an attempt to influence public opinion before elections. Our results provide a deeper understanding of the interaction and messaging tactics these bots employ. In alignment with previous findings (Luceri et al., 2019), bots’ interaction tactics appear to rely on ‘astroturfing’ behaviors, including posting original tweets and retweeting the same tweets to increase virality. We find that political bots’ messaging tactics rely heavily on negative emotional appeals, with over half of all tweets containing harassing language and fear mongering to strengthen opposition to immigration. Bots’ spread of disinformation targeting Democrats’ involvement in immigration issues and the upcoming election was likely intended to strengthen intergroup enmity.

Our findings provide helpful insights into possible counter-tactics social media platforms could implement to mitigate bots’ effectiveness. First, Twitter’s interface and incentive structure – where automated and semi-automated bot accounts are not easily detected and ‘retweet’ and ‘like’ counts feed the ‘bandwagon effect’ – enable political bots to surreptitiously spread vitriolic, misleading, and divisive information to sow discord. This dynamic is exacerbated when political bots amplify verified and/or influential politicians’ tweets by retweeting or reproducing their messages in ways that give a false sense of widespread support (Caldarelli, De Nicola, Del Vigna, Petrocchi, & Saracco, 2020; Nyst & Monaco, 2018). To counteract this, Twitter should label automated and semi-automated bot accounts and include information on how automation is determined to support transparency and accountability.

Twitter has recently implemented a policy to flag tweets that contain misleading information and disputed or unverified claims with contextual warnings. In addition to these warnings, Twitter is implementing technical strategies to suppress the spread of harmful tweets, including making such tweets viewable only if clicked on, hiding ‘like’ and ‘retweet’ counts, and blocking the ability for users to ‘like’ or reply (Roth & Pickles, 2020). In its initial application, Twitter included contextual warnings on tweets sent by influential, verified politicians’ accounts (e.g., President Donald Trump’s account) that spread false or misleading claims on the COVID-19 pandemic (Conger, 2020a) and on election integrity (Conger, 2020b). While promising, research has demonstrated that President Donald Trump’s tweets expressing election-related misinformation that received warning labels spread further and for longer than his unlabeled tweets (Sanderson, Brown, Bonneau, Nagler, & Tucker, 2021). Further research is needed to evaluate whether the combination of a warning label and the disabling of interaction features is an effective mechanism for mitigating the amplification of harmful content.

There are several limitations to this work. First, bot detection is not foolproof. It is well documented that Botometer is prone to false positive and false negative errors (see Rauchfleisch & Kaiser, 2020). Thus, some of the accounts in our study could be authentic accounts belonging to individuals who are very passionate and frequently engage in anti-immigration messaging on Twitter. Second, the trending hashtags we used to collect tweets on the migrant caravan and immigration may have been more likely to be used by accounts with an anti-immigration stance. While prior research has shown that both sides of a politically contentious debate will hijack the opposition’s hashtags (see Mousavi & Ouyang, 2021), our use of trending hashtags on immigration and the caravan may have over-represented one side of the debate (i.e., anti-immigration) rather than capturing both sides.

Conclusion

The targeting of politically contentious issues to polarize voters before an election is not new, but the computational scale and precision of that targeting are. Computational propaganda campaigns perpetuated by political bots on Twitter can be scaled and targeted at speeds and with a precision unthinkable a decade ago. The combination of algorithms, automation, and human curation enables these campaigns to craft and target messages to those most susceptible to their manipulative appeal.

This research demonstrated political bots’ interaction tactics, such as their use of ‘astroturfing’ by both posting original tweets and retweeting other bot accounts’ tweets to give a false sense of authenticity and anti-immigration consensus, as well as the messaging tactics bots used to manipulate the online public sphere. Bots’ messaging tactics relied heavily on negative emotional appeals, spreading harassing language and disinformation intended to evoke fear toward immigrants and to mislead the public on Democrats’ stance on immigration (e.g., claims that Democrats were paying the caravan to enter the United States) and on election integrity (e.g., claims that Democrats registered members of the caravan to vote in the 2018 U.S. midterm election). Bots appealed to intergroup enmity by spreading polarizing tweets and urging individuals to vote Republican. These findings provide a deeper understanding of the interaction and messaging tactics political bots employed to target the immigration debate before the 2018 U.S. midterm election. They also point to the need for further research on intervention mechanisms to mitigate political bots’ tactics, such as labeling automated and semi-automated accounts to promote transparency and accountability, labeling misleading and harmful messaging to raise public awareness, and disabling interaction features (e.g., liking and sharing) on misleading and harmful content to suppress its virality and spread.

Acknowledgments

We are grateful for the invaluable feedback and qualitative coding completed by the following individuals, many of whom were affiliated with the Human Rights Center at UC Berkeley: Gurshaant Bassi, Rachael Cornejo, Jennifer Cortez, Maria Di Franco Quinonez, Antonio Flores, Niusha Hajikhodaverdikhan, Christina Haley, Eliza Hollingsworth, Edward Kang, Ravleen Kaur, Maryam Khan, Kellie Levine, Vyoma Raman, Samantha Rubinstein, Samiha Shaheed, Gurbir Singh, Anish Vankayalapati, Michaela Vatcheva, Levi Vonk, and Andrew Wang.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the University of California Institute for Mexico and the United States (UC MEXUS) and the Consejo Nacional de Ciencia y Tecnologia de Mexico (CONACYT).

Notes on contributors

Brandie Nonnecke

Brandie Nonnecke (PhD, The Pennsylvania State University) is the director of the CITRIS Policy Lab at UC Berkeley.

Gisela Perez de Acha (MS, UC Berkeley) is a senior reporter at the Investigative Reporting Program, Berkeley.

Annette Choi (MS, UC Berkeley) is a graphics reporter at POLITICO Pro.

Camille Crittenden (PhD, Duke University) is the executive director of CITRIS and the Banatao Institute, University of California.

Fernando Ignacio Gutiérrez Cortés (PhD, Universidad Autonoma Metropolitana) is a professor in the School of Humanities and Education at Tecnológico de Monterrey.

Alejandro Martin del Campo (PhD, Tecnológico de Monterrey) is a professor of media and digital culture at Tecnologico de Monterrey.

Oscar Mario Miranda-Villanueva (PhD, Tecnológico de Monterrey) is a professor in media communications at Tecnologico de Monterrey.

References

  • Akhtar, S., & Morrison, C. M. (2019). The prevalence and impact of online trolling of UK members of parliament. Computers in Human Behavior, 99, 322–327. doi:10.1016/j.chb.2019.05.015
  • Al-Rawi, A., & Rahman, A. (2020). Manufacturing rage: The Russian Internet Research Agency’s political astroturfing on social media. First Monday, 25(9). Retrieved from https://journals.uic.edu/ojs/index.php/fm/article/download/10801/9723
  • Arif, A., Stewart, L. G., & Starbird, K. (2018). Acting the part: Examining information operations within #BlackLivesMatter discourse. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–27.
  • Assenmacher, D., Clever, L., Frischlich, L., Quandt, T., Trautmann, H., & Grimme, C. (2020). Demystifying Social Bots: On the Intelligence of Automated Social Media Actors. Social Media + Society, 6(3), 2056305120939264. doi:10.1177/2056305120939264
  • Bay, S., & Fredheim, R. (2019). Falling Behind: How Social Media Companies are Failing to Combat Inauthentic Behaviour Online. Latvia: NATO StratCom COE.
  • Bello, B. S., & Heckel, R. (2019). Analyzing the behaviour of Twitter bots in post Brexit politics. In 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), Granada, Spain (pp. 61–66).
  • Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. New York, NY: Oxford University Press.
  • Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion. First Monday, 21(11).
  • Bradshaw, S., & Howard, P. N. (2017). Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation. Computational Propaganda Project: Working Paper Series. Retrieved from https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdf
  • Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., … Dredze, M. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378–1384. doi:10.2105/AJPH.2018.304567
  • Bump, P. (2018, Oct. 19). Trump’s GOTV pitch: Democrats are paying immigrants to come vote for democrats. The Washington Post. Retrieved from https://www.washingtonpost.com/politics/2018/10/19/trumps-gotv-pitch-democrats-are-paying-immigrants-come-vote-democrats/
  • Caldarelli, G., De Nicola, R., Del Vigna, F., Petrocchi, M., & Saracco, F. (2020). The role of bot squads in the political propaganda on Twitter. Communications Physics, 3(81). Retrieved from https://www.nature.com/articles/s42005-020-0340-4
  • Choi, D., Chun, S., Oh, H., Han, J., & Elbe-Bürger, A. (2020). Rumor propagation is amplified by echo chambers in social media. Scientific Reports, 10(1), 1–10. doi:10.1038/s41598-019-56847-4
  • Chung, M. (2019). The message influences me more than others: How and why social media metrics affect first person perception and behavioral intentions. Computers in Human Behavior, 91, 271–278. doi:10.1016/j.chb.2018.10.011
  • Conger, K. (2020a, May 30). Twitter had been drawing a line for months when Trump crossed it. The New York Times. Retrieved from https://www.nytimes.com/2020/05/30/technology/twitter-trump-dorsey.html?referringSource=articleShare.
  • Conger, K. (2020b, Nov. 5). Twitter has labeled 38% of Trump’s tweets since Tuesday. The New York Times. Retrieved from https://www.nytimes.com/2020/11/05/technology/donald-trump-twitter.html
  • Davis, C. A., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. (2016, April). BotOrNot: A system to evaluate social bots. In Proceedings of the 25th international conference companion on world wide web Montréal, Canada (pp. 273–274).
  • De Saint Laurent, C., Glaveanu, V., & Chaudet, C. (2020). Malevolent creativity and social media: Creating anti-immigration communities on Twitter. Creativity Research Journal, 32(1), 66–80. doi:10.1080/10400419.2020.1712164
  • Ferrara, E., Chang, H., Chen, E., Muric, G., & Patel, J. (2020). Characterizing social media manipulation in the 2020 US presidential election. First Monday.
  • Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. doi:10.1145/2818717
  • Ferree, M., Gamson, W., Gerhards, J., & Rucht, D. (2002). Four models of the public sphere in modern democracies. Theory and Society, 31(3), 289–324. doi:10.1023/A:1016284431021
  • Freelon, D., & Lokot, T. (2020). Russian Twitter disinformation campaigns reach across the American political spectrum. Harvard Kennedy School Misinformation Review, 1, 1.
  • Gorwa, R., & Guilbeault, D. (2020). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet, 12(2), 225–248. doi:10.1002/poi3.184
  • Grover, T., Bayraktaroglu, E., Mark, G., & Rho, E. H. R. (2019). Moral and affective differences in us immigration policy debate on twitter. Computer Supported Cooperative Work (CSCW), 28(3–4), 317–355. doi:10.1007/s10606-019-09357-w
  • Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. doi:10.1080/19331681.2018.1448735
  • Ireton, C., & Posetti, J. (Eds.) (2018). Journalism, ‘fake news’ & disinformation: Handbook for journalism education and training. United Nations Educational, Scientific and Cultural Organization (UNESCO) Series on Journalism Education. Retrieved from https://en.unesco.org/fightfakenews.
  • Jordan, M. (2018, Oct. 23). This isn’t the first migrant caravan to approach the U.S. What happened to the last one? The New York Times. Retrieved from https://www.nytimes.com/2018/10/23/us/migrant-caravan-border.html.
  • Keller, T. R., & Klinger, U. (2019). Social bots in election campaigns: Theoretical, empirical, and methodological implications. Political Communication, 36(1), 171–189. doi:10.1080/10584609.2018.1526238
  • Kennedy, B., Kogon, D., Coombs, K., Hoover, J., Park, C., Portillo-Wightman, G., … Dehghani, M. (2018). A typology and coding manual for the study of hate-based rhetoric. Retrieved from https://www.researchgate.net/publication/326488294_A_Typology_and_Coding_Manual_for_the_Study_of_Hate-based_Rhetoric.
  • Kollanyi, B., Howard, P. N., & Woolley, S. (2016). Bots and Automation over Twitter during the US Election. Computational Propaganda Project: Working Paper Series. Retrieved from http://geography.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/11/Data-Memo-US-Election.pdf.
  • Lee, S., Ha, T., Lee, D., & Kim, J. H. (2018). Understanding the majority opinion formation process in online environments: An exploratory approach to Facebook. Information Processing & Management, 54(6), 1115–1128. doi:10.1016/j.ipm.2018.08.002
  • Luceri, L., Deb, A., Badawy, A., & Ferrara, E. (2019, May). Red bots do it better: Comparative analysis of social bot partisan behavior. In Companion Proceedings of the 2019 World Wide Web Conference San Francisco, CA, USA (pp. 1007–1012).
  • Lukito, J., Suk, J., Zhang, Y., Doroshenko, L., Kim, S., Su, M., … Wells, C. (2019). The wolves in sheep’s clothing: How Russia’s internet research agency tweets appeared in US News as Vox Populi. The International Journal of Press/Politics, 25(2), 196–216. doi:10.1177/1940161219895215
  • Mitter, S., Wagner, C., & Strohmaier, M. (2014). Understanding the impact of socialbot attacks in online social networks. arXiv preprint arXiv:1402.6289.
  • Mousavi, P., & Ouyang, J. (2021). Detecting hashtag hijacking for hashtag activism. In Proceedings of the 1st Workshop on NLP for Positive Impact. Bangkok, Thailand. (pp. 82–92).
  • Nonnecke, B., Martin, D. C., Singh, A., Wu, W. S., & Crittenden, C. (2019). Women’s reproductive rights: Computational propaganda in the United States. Institute for the Future. Retrieved from https://www.iftf.org/fileadmin/user_upload/downloads/ourwork/IFTF_WomenReproductiveRights_comp.prop_W_05.07.19.pdf.
  • Nyst, C., & Monaco, N. (2018). State-sponsored trolling: How governments are deploying disinformation as part of broader digital harassment campaigns. Institute for the Future. Retrieved from https://www.iftf.org/fileadmin/user_upload/images/DigIntel/IFTF_State_sponsored_trolling_report.pdf.
  • O’Carroll, T. (2017, January 24). Mexico’s misinformation wars. Medium. Retrieved from https://medium.com/amnesty‐insights/mexico‐s‐misinformation‐wars‐cb748ecb32e9#.n8pi52hot
  • Ortiz, S. M. (2020). Trolling as a collective form of harassment: An inductive study of how online users understand trolling. Social Media + Society, 6(2), 2056305120928512.
  • Palma, B., & Evon, D. (2018, Nov. 2). Did Guatemalan authorities rescue a group of minors from caravan smugglers? Snopes. Retrieved from https://www.snopes.com/fact-check/guatemala-smugglers-children/.
  • Pearce, K. E., & Kendzior, S. (2012). Networked authoritarianism and social media in Azerbaijan. Journal of Communication, 62(2), 283–298. doi:10.1111/j.1460-2466.2012.01633.x
  • Pitropakis, N., Kokot, K., Gkatzia, D., Ludwiniak, R., Mylonas, A., & Kandias, M. (2020). Monitoring Users’ Behavior: Anti-Immigration Speech Detection on Twitter. Machine Learning and Knowledge Extraction, 2(3), 192–215. doi:10.3390/make2030011
  • Qiu, L. (2018, Oct. 20). Did democrats, or George Soros, fund migrant caravan? Despite Republican claims, no. The New York Times. Retrieved from https://www.nytimes.com/2018/10/20/world/americas/migrant-caravan-video-trump.html.
  • Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., & Menczer, F. (2011, March). Truthy: Mapping the spread of astroturf in microblog streams. In Proceedings of the 20th International Conference Companion on World Wide Web Hyderabad, India (pp. 249–252).
  • Rauchfleisch, A., & Kaiser, J. (2020). The False positive problem of automatic bot detection in social science research. Berkman Klein Center Research Publication, (2020–2023).
  • Roose, K. (2018, Oct. 24). Debunking 5 viral images of the migrant caravan. The New York Times. Retrieved from https://www.nytimes.com/2018/10/24/world/americas/migrant-caravan-fake-images-news.htm.
  • Roth, Y., & Pickles, N. (2020, May 11). Updating our approach to misleading information. Twitter. Retrieved from https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html
  • Sanderson, Z., Brown, M., Bonneau, R., Nagler, J., & Tucker, J. (2021). Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform. The Misinformation Review.
  • Sang-Hun, C. (2013, June 14). South Korean intelligence agents accused of tarring opposition online before election. The New York Times. Retrieved from https://www.nytimes.com/2013/06/15/world/asia/south-korean-agents-accused-of-tarring-opposition-before-election.html.
  • Schmitt‐Beck, R. (2015). Bandwagon effect. In The International Encyclopedia of Political Communication (pp. 1–5). Hoboken, NJ: John Wiley & Sons.
  • Schuchard, R., Crooks, A. T., Stefanidis, A., & Croitoru, A. (2019). Bot stamina: Examining the influence and staying power of bots in online social networks. Applied Network Science, 4(1), 55. doi:10.1007/s41109-019-0164-x
  • Sharma, S., Agrawal, S., & Shrivastava, M. (2018). Degree based classification of harmful speech using Twitter data. arXiv Preprint, arXiv:1806.04197, 1–5.
  • Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences of the United States of America, 115(49), 12435–12440.
  • Stewart, L. G., Arif, A., & Starbird, K. (2018, February). Examining trolls and polarization with a retweet network. In Proceedings of the ACM WSDM Workshop on Misinformation and Misbehavior Mining on the Web, Los Angeles, CA.
  • Twitter (2018, January 19). Update on Twitter’s review of the 2016 US election. Retrieved from https://blog.twitter.com/en_us/topics/company/2018/2016-election-update.html.
  • Twitter (2020). Rules and Policies. Retrieved from https://help.twitter.com/en/rules-and-policies#twitter-rules.
  • Webb, H., Jirotka, M., Stahl, B. C., Housley, W., Edwards, A., Williams, M., … Burnap, P. (2017, June). The ethical challenges of publishing Twitter data for research dissemination. In Proceedings of the 2017 ACM on Web Science Conference Troy, New York, USA (pp. 339–348).
  • Woolley, S. C., & Howard, P. N. (Eds.). (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford, United Kingdom: Oxford University Press.
  • Woolley, S. C., & Howard, P. N. (2016). Political communication, computational propaganda, and autonomous agents: Introduction. International Journal of Communication 10, 4882–4890.
  • Yang, K. C., Varol, O., Davis, C. A., Ferrara, E., Flammini, A., & Menczer, F. (2019). Arming the public with artificial intelligence to counter social bots. Human Behavior and Emerging Technologies, 1(1), 48–61. doi:10.1002/hbe2.115