References

  • Akhtar, S., & Morrison, C. M. (2019). The prevalence and impact of online trolling of UK members of parliament. Computers in Human Behavior, 99, 322–327. doi:10.1016/j.chb.2019.05.015
  • Al-Rawi, A., & Rahman, A. (2020). Manufacturing rage: The Russian Internet Research Agency’s political astroturfing on social media. First Monday, 25(9). Retrieved from https://journals.uic.edu/ojs/index.php/fm/article/download/10801/9723
  • Arif, A., Stewart, L. G., & Starbird, K. (2018). Acting the part: Examining information operations within #BlackLivesMatter discourse. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–27.
  • Assenmacher, D., Clever, L., Frischlich, L., Quandt, T., Trautmann, H., & Grimme, C. (2020). Demystifying Social Bots: On the Intelligence of Automated Social Media Actors. Social Media + Society, 6(3), 2056305120939264. doi:10.1177/2056305120939264
  • Bay, S., & Fredheim, R. (2019). Falling Behind: How Social Media Companies are Failing to Combat Inauthentic Behaviour Online. Latvia: NATO StratCom COE.
  • Bello, B. S., & Heckel, R. (2019). Analyzing the Behaviour of Twitter Bots in Post Brexit Politics. In 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), Granada, Spain (pp. 61–66).
  • Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. New York, NY: Oxford University Press.
  • Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion. First Monday, 21(11).
  • Bradshaw, S., & Howard, P. N. (2017). Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation. Computational Propaganda Project: Working Paper Series. Retrieved from https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdf
  • Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., … Dredze, M. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378–1384. doi:10.2105/AJPH.2018.304567
  • Bump, P. (2018, Oct. 19). Trump’s GOTV pitch: Democrats are paying immigrants to come vote for Democrats. The Washington Post. Retrieved from https://www.washingtonpost.com/politics/2018/10/19/trumps-gotv-pitch-democrats-are-paying-immigrants-come-vote-democrats/
  • Caldarelli, G., De Nicola, R., Del Vigna, F., Petrocchi, M., & Saracco, F. (2020). The role of bot squads in the political propaganda on Twitter. Communications Physics, 3(81). Retrieved from https://www.nature.com/articles/s42005-020-0340-4
  • Choi, D., Chun, S., Oh, H., Han, J., & Elbe-Bürger, A. (2020). Rumor propagation is amplified by echo chambers in social media. Scientific Reports, 10(1), 1–10. doi:10.1038/s41598-019-56847-4
  • Chung, M. (2019). The message influences me more than others: How and why social media metrics affect first person perception and behavioral intentions. Computers in Human Behavior, 91, 271–278. doi:10.1016/j.chb.2018.10.011
  • Conger, K. (2020a, May 30). Twitter had been drawing a line for months when Trump crossed it. The New York Times. Retrieved from https://www.nytimes.com/2020/05/30/technology/twitter-trump-dorsey.html?referringSource=articleShare.
  • Conger, K. (2020b, Nov. 5). Twitter has labeled 38% of Trump’s tweets since Tuesday. The New York Times. Retrieved from https://www.nytimes.com/2020/11/05/technology/donald-trump-twitter.html
  • Davis, C. A., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. (2016, April). BotOrNot: A system to evaluate social bots. In Proceedings of the 25th International Conference Companion on World Wide Web, Montréal, Canada (pp. 273–274).
  • De Saint Laurent, C., Glaveanu, V., & Chaudet, C. (2020). Malevolent creativity and social media: Creating anti-immigration communities on Twitter. Creativity Research Journal, 32(1), 66–80. doi:10.1080/10400419.2020.1712164
  • Ferrara, E., Chang, H., Chen, E., Muric, G., & Patel, J. (2020). Characterizing social media manipulation in the 2020 US presidential election. First Monday.
  • Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. doi:10.1145/2818717
  • Ferree, M., Gamson, W., Gerhards, J., & Rucht, D. (2002). Four models of the public sphere in modern democracies. Theory and Society, 31(3), 289–324. doi:10.1023/A:1016284431021
  • Freelon, D., & Lokot, T. (2020). Russian Twitter disinformation campaigns reach across the American political spectrum. Harvard Kennedy School Misinformation Review, 1(1).
  • Gorwa, R., & Guilbeault, D. (2020). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet, 12(2), 225–248. doi:10.1002/poi3.184
  • Grover, T., Bayraktaroglu, E., Mark, G., & Rho, E. H. R. (2019). Moral and affective differences in us immigration policy debate on twitter. Computer Supported Cooperative Work (CSCW), 28(3–4), 317–355. doi:10.1007/s10606-019-09357-w
  • Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. doi:10.1080/19331681.2018.1448735
  • Ireton, C., & Posetti, J. (Eds.) (2018). Journalism, ‘fake news’ & disinformation: Handbook for journalism education and training. United Nations Educational, Scientific and Cultural Organization (UNESCO) Series on Journalism Education. Retrieved from https://en.unesco.org/fightfakenews.
  • Jordan, M. (2018, Oct. 23). This isn’t the first migrant caravan to approach the U.S. What happened to the last one? The New York Times. Retrieved from https://www.nytimes.com/2018/10/23/us/migrant-caravan-border.html.
  • Keller, T. R., & Klinger, U. (2019). Social bots in election campaigns: Theoretical, empirical, and methodological implications. Political Communication, 36(1), 171–189. doi:10.1080/10584609.2018.1526238
  • Kennedy, B., Kogon, D., Coombs, K., Hoover, J., Park, C., Portillo-Wightman, G., … Dehghani, M. (2018). A typology and coding manual for the study of hate-based rhetoric. Retrieved from https://www.researchgate.net/publication/326488294_A_Typology_and_Coding_Manual_for_the_Study_of_Hate-based_Rhetoric.
  • Kollanyi, B., Howard, P. N., & Woolley, S. (2016). Bots and Automation over Twitter during the US Election. Computational Propaganda Project: Working Paper Series. Retrieved from http://geography.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/11/Data-Memo-US-Election.pdf.
  • Lee, S., Ha, T., Lee, D., & Kim, J. H. (2018). Understanding the majority opinion formation process in online environments: An exploratory approach to Facebook. Information Processing & Management, 54(6), 1115–1128. doi:10.1016/j.ipm.2018.08.002
  • Luceri, L., Deb, A., Badawy, A., & Ferrara, E. (2019, May). Red bots do it better: Comparative analysis of social bot partisan behavior. In Companion Proceedings of the 2019 World Wide Web Conference San Francisco, CA, USA (pp. 1007–1012).
  • Lukito, J., Suk, J., Zhang, Y., Doroshenko, L., Kim, S., Su, M., … Wells, C. (2019). The wolves in sheep’s clothing: How Russia’s internet research agency tweets appeared in US News as Vox Populi. The International Journal of Press/Politics, 25(2), 196–216. doi:10.1177/1940161219895215
  • Mitter, S., Wagner, C., & Strohmaier, M. (2014). Understanding the impact of socialbot attacks in online social networks. arXiv preprint arXiv:1402.6289.
  • Mousavi, P., & Ouyang, J. (2021). Detecting hashtag hijacking for hashtag activism. In Proceedings of the 1st Workshop on NLP for Positive Impact, Bangkok, Thailand (pp. 82–92).
  • Nonnecke, B., Martin, D. C., Singh, A., Wu, W. S., & Crittenden, C. (2019). Women’s reproductive rights: Computational propaganda in the United States. Institute for the Future. Retrieved from https://www.iftf.org/fileadmin/user_upload/downloads/ourwork/IFTF_WomenReproductiveRights_comp.prop_W_05.07.19.pdf.
  • Nyst, C., & Monaco, N. (2018). State-sponsored trolling: How governments are deploying disinformation as part of broader digital harassment campaigns. Institute for the Future. Retrieved from https://www.iftf.org/fileadmin/user_upload/images/DigIntel/IFTF_State_sponsored_trolling_report.pdf.
  • O’Carroll, T. (2017, January 24). Mexico’s misinformation wars. Medium. Retrieved from https://medium.com/amnesty-insights/mexico-s-misinformation-wars-cb748ecb32e9#.n8pi52hot
  • Ortiz, S. M. (2020). Trolling as a collective form of harassment: An inductive study of how online users understand trolling. Social Media + Society, 6(2), 2056305120928512. doi:10.1177/2056305120928512
  • Palma, B., & Evon, D. (2018, Nov. 2). Did Guatemalan authorities rescue a group of minors from caravan smugglers? Snopes. Retrieved from https://www.snopes.com/fact-check/guatemala-smugglers-children/.
  • Pearce, K. E., & Kendzior, S. (2012). Networked authoritarianism and social media in Azerbaijan. Journal of Communication, 62(2), 283–298. doi:10.1111/j.1460-2466.2012.01633.x
  • Pitropakis, N., Kokot, K., Gkatzia, D., Ludwiniak, R., Mylonas, A., & Kandias, M. (2020). Monitoring Users’ Behavior: Anti-Immigration Speech Detection on Twitter. Machine Learning and Knowledge Extraction, 2(3), 192–215. doi:10.3390/make2030011
  • Qiu, L. (2018, Oct. 20). Did Democrats, or George Soros, fund migrant caravan? Despite Republican claims, no. The New York Times. Retrieved from https://www.nytimes.com/2018/10/20/world/americas/migrant-caravan-video-trump.html.
  • Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., & Menczer, F. (2011, March). Truthy: Mapping the spread of astroturf in microblog streams. In Proceedings of the 20th International Conference Companion on World Wide Web Hyderabad, India (pp. 249–252).
  • Rauchfleisch, A., & Kaiser, J. (2020). The false positive problem of automatic bot detection in social science research. Berkman Klein Center Research Publication No. 2020-3.
  • Roose, K. (2018, Oct. 24). Debunking 5 viral images of the migrant caravan. The New York Times. Retrieved from https://www.nytimes.com/2018/10/24/world/americas/migrant-caravan-fake-images-news.html.
  • Roth, Y., & Pickles, N. (2020, May 11). Updating our approach to misleading information. Twitter. Retrieved from https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html
  • Sanderson, Z., Brown, M., Bonneau, R., Nagler, J., & Tucker, J. (2021). Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform. Harvard Kennedy School Misinformation Review.
  • Sang-Hun, C. (2013, June 14). South Korean intelligence agents accused of tarring opposition online before election. The New York Times. Retrieved from https://www.nytimes.com/2013/06/15/world/asia/south-korean-agents-accused-of-tarring-opposition-before-election.html.
  • Schmitt-Beck, R. (2015). Bandwagon effect. In The International Encyclopedia of Political Communication (pp. 1–5). Hoboken, NJ: John Wiley & Sons.
  • Schuchard, R., Crooks, A. T., Stefanidis, A., & Croitoru, A. (2019). Bot stamina: Examining the influence and staying power of bots in online social networks. Applied Network Science, 4(1), 55. doi:10.1007/s41109-019-0164-x
  • Sharma, S., Agrawal, S., & Shrivastava, M. (2018). Degree based classification of harmful speech using Twitter data. arXiv Preprint, arXiv:1806.04197, 1–5.
  • Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences of the United States of America, 115(49), 12435–12440.
  • Stewart, L. G., Arif, A., & Starbird, K. (2018, February). Examining trolls and polarization with a retweet network. In Proceedings of the ACM WSDM Workshop on Misinformation and Misbehavior Mining on the Web, Los Angeles, CA.
  • Twitter (2018, January 19). Update on Twitter’s review of the 2016 US election. Retrieved from https://blog.twitter.com/en_us/topics/company/2018/2016-election-update.html.
  • Twitter (2020). Rules and Policies. Retrieved from https://help.twitter.com/en/rules-and-policies#twitter-rules.
  • Webb, H., Jirotka, M., Stahl, B. C., Housley, W., Edwards, A., Williams, M., … Burnap, P. (2017, June). The ethical challenges of publishing Twitter data for research dissemination. In Proceedings of the 2017 ACM on Web Science Conference Troy, New York, USA (pp. 339–348).
  • Woolley, S. C., & Howard, P. N. (Eds.). (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford, United Kingdom: Oxford University Press.
  • Woolley, S. C., & Howard, P. N. (2016). Political communication, computational propaganda, and autonomous agents: Introduction. International Journal of Communication, 10, 4882–4890.
  • Yang, K. C., Varol, O., Davis, C. A., Ferrara, E., Flammini, A., & Menczer, F. (2019). Arming the public with artificial intelligence to counter social bots. Human Behavior and Emerging Technologies, 1(1), 48–61. doi:10.1002/hbe2.115