Forum

Farewell to Big Data? Studying Misinformation in Mobile Messaging Applications

Patrícia Rossini

Concerns about problematic information circulating on digital media platforms have dominated discussions among the public, politicians, and academics alike. Scholars have discussed the conceptual nuances behind different types of false or misleading content, which vary in their degree of falsehood as well as in the motivation and intent of the sender. For simplicity, I mainly refer to misinformation, i.e., false (or misleading) content shared accidentally, and disinformation, i.e., falsehoods intentionally shared to mislead. Problematic content can also include malinformation – real information used to inflict harm – as well as lies or bullshit, the latter referring to attempts to persuade listeners without any regard for the truth (Carmi et al., 2020; MacKenzie & Bhatt, 2020).

The spread of different kinds of falsehoods during crucial events like the coronavirus pandemic and elections in various countries threatens the normative imperative of a well-informed public as a pillar of democratic citizenship. Messaging applications, however, have been less scrutinized than social media, despite their increasing popularity for news and political engagement. Here, I argue that messaging applications demand an important shift in this research agenda, as several of the methods used to analyze social media are inapplicable to them. More broadly, these changes impact research focused on dissonant and divisive digital public spheres.

The role of digital platforms in facilitating the spread of false information has sparked global debates around the duties and responsibilities of these companies and the possibilities for regulation (Tromble & McGregor, 2019). Importantly, however, these debates are to some extent taking place in the dark, as the challenges of data collection have prevented scholars from fully diagnosing the true extent of misinformation and disinformation on these platforms, understanding their drivers, and explaining their effects.

Messaging applications have received considerably less attention, both in academia and in the regulatory arena. This could be partially due to their timid presence in the United States – where much of the internationally visible research activity is concentrated – compared with their prominence in countries in the Global South. However, evidence that the January 6th, 2021, insurrection at the U.S. Capitol was organized through Facebook groups and Telegram (Hatmaker, 2021) is likely to intensify interest in the political uses of mobile messaging applications by politicians, activists, and citizens. A second factor that should spark – or renew – attention to private communication is the move by social media platforms toward encryption, broadly advocated by Mark Zuckerberg, which may introduce new challenges to data access for research.

Messaging applications challenge political communication research agendas in three fundamental ways: first, they change how we should study and understand dynamics of information spread; second, they demand new approaches to mitigate the threats of misinformation; and third, the scarcity of data sources in the context of encrypted private communication compels scholars to rethink the computational focus that largely influenced digital media research. Scholars need to shift from the big-data mentality that has shaped the past two decades of digital media research and think creatively about research methodology – by either relying on conventional social scientific methods, such as focus groups, surveys, interviews, and experiments, or developing innovative approaches.

People, Not Algorithms

Messaging applications challenge scholars to rethink how they understand the spread of problematic content. Research focused on social media highlights the role of algorithmic amplification in the spread of low-quality, malicious, or false content. In private messaging apps, however, algorithms are not the drivers of amplification: people are. Despite the emergence of media, government, and other business accounts, content sharing in a private messaging environment is largely driven by personal chats and groups. This explains why WhatsApp’s main actions to fight mis- and disinformation create friction for sharing and add cues for users to recognize, and potentially question, “viral” content or suspicious groups (i.e., groups they are added to by non-contacts).

Research investigating potential drivers of spread and virality on messaging apps has primarily focused on groups (Banaji et al., 2019; Resende et al., 2019). Messaging platforms typically enable users to create groups with two or more contacts – up to 1,024 on WhatsApp (since November 2022, doubling an earlier increase to 512 in March of the same year) and a whopping 200,000 on Telegram – which can be public (i.e., joined by anyone with a link), making it easier for researchers to find them and access their content. However, this approach is limited to investigating the content of public groups. The reach, let alone the effects, of such content remains unknown.

Moreover, participation in public groups is the exception, not the norm, on these apps: according to WhatsApp, most users are in groups with a handful of contacts, and nine in ten messages are sent in one-to-one chats (Rossini et al., 2020). While groups play a role in disseminating misinformation and, perhaps more importantly, in enabling disinformation campaigns and malicious actors to coordinate downstream dissemination (i.e., mobilizing users to forward content to their personal contacts), research examining the content of such groups provides limited insight into how most people use these platforms, become exposed to false content, or contribute to its spread. Similar limitations apply to methods that leverage data from "tiplines" – i.e., WhatsApp accounts run by fact-checkers – to investigate different types of falsehoods circulating on these apps (Kazemi et al., 2021).

In the absence of comprehensive data on content, information flows, or networks, how can researchers understand virality and dissemination on messaging apps? Research analyzing and mapping content in public groups might shed light on topics and messages shared across those spaces, but any claims about downstream effects – that is, messages traveling from these large groups to the more private conversations that reflect the uses of the general population – are tentative. Survey-based research provides some insight into how the public uses these apps: people often share falsehoods inadvertently (and, less frequently, on purpose) on WhatsApp and perceive themselves to be frequently exposed to false information – suggesting that it is, indeed, a problem that affects users in general, beyond large groups (Rossini et al., 2020).

If people are the main drivers of the spread of information, and absent data about how content flows within the network, a potential way forward lies in understanding individual behaviors and attitudes – using conventional social scientific methodologies and combining quantitative and qualitative approaches to grasp different aspects of how users engage with mis- and disinformation, including how people establish the credibility of information, what motivates sharing, and the role of sociability and social ties in influencing these dynamics.

Changing Behaviors, Not Content

The second challenge to this research agenda concerns mitigating the damaging effects of exposure to false, malicious, or misleading information. On social media, scholars and platforms alike have focused on content-oriented interventions, such as content labels and independent fact-checking. However, these approaches are less applicable to messaging applications because end-to-end encryption prevents these platforms from automatically reviewing and flagging content. Thus, understanding and mitigating the detrimental effects of exposure to mis- and disinformation requires a sharper focus on user-level interventions, the role of social ties, and content-agnostic nudges.

At the user level, a limited body of scholarship has experimented with digital literacy interventions to help users identify false information, with mixed findings. For instance, a field experiment in India found that hour-long digital literacy sessions did not improve participants' ability to identify misinformation on WhatsApp – and even backfired among partisans (Badrinathan, 2021) – but an in-game experiment in four European countries found some evidence that an "inoculation" intervention, i.e., teaching strategies to identify falsehoods, enabled participants to better spot them immediately after playing the game (Maertens et al., 2020). These studies point to a complex problem: even if literacy interventions may work (Guess et al., 2020), there is little certainty that their effects last (Maertens et al., 2020), and such efforts may be undermined in politically polarized contexts – which are precisely where mis- and disinformation have the greatest potential to disrupt democracy. Moreover, strategies that may work in the Global North, where overall levels of education and literacy are high, may not apply to the Global South. Comparative research that includes a more diverse pool of countries is needed to fill these gaps in our knowledge.

Another aspect that needs to be considered is the social dynamics of messaging applications. Information, true or false, circulates in the form of images, videos, and plain text – often with no verifiable source or link. The implications are two-fold: on the one hand, perceptions of credibility might be intertwined with levels of trust among social ties; on the other hand, verification is somewhat more cumbersome, as users need to leave the messaging app to probe any information. In this context, one of the ways people may find out about falsehoods is through social corrections – i.e., being warned by their peers. My own research suggests that users routinely witness, perform, or receive social corrections on WhatsApp at a higher rate than on Facebook – providing some relief that people are generally aware of, and attentive to, false information. Research on public social media suggests social corrections may work (Bode & Vraga, 2018), although people may be reluctant to engage in these behaviors in public or semi-public digital settings (Cohen et al., 2020) – a problem that might be less prominent in messaging apps. Unlike more public social media, however, assessing the effectiveness of these "social" corrections in messaging apps requires considering the weight that different social ties are likely to carry, as well as the specific dimensions that characterize groups (e.g., tie strength, topic, purpose, and ideological congruence). For these reasons, vignette and survey experiments might not be externally valid – requiring some creativity in experimental design (e.g., Vermeer et al., 2020). Moreover, considering the important and hitherto unexplored differences between platforms, exploratory research using qualitative methods should be the starting point, as it can provide valuable insight into how users experience and negotiate corrections, as well as how they navigate misinformation in different situations and social networks.

The Land of Data Scarcity – Or, Where Do We Go from Here?

The two challenges outlined so far lead to the third and perhaps most significant shift posed by private messaging: how to approach data collection in a context of (big) data scarcity. Understanding the underlying dynamics of spread, as well as studying potential interventions to mitigate the effects of mis- and disinformation, is challenging on messaging applications because of the lack of representative data about what circulates in personal and group chats, the absence of data to examine information flows, and the fact that communication is decentralized across multiple channels and audiences. A first wave of research on messaging apps has largely focused on content (primarily from public groups), trying to replicate computational approaches and circumvent the limits of data availability by adapting to private messaging applications some of the methods developed for social media platforms. Given the limited scrutiny messaging applications have received – and the centrality of private communication on them – it is unlikely that researchers will gain more access to data, and studies focused on content will remain limited by the scarcity of data sources and their lack of representativeness. Hence, to understand the role of messaging applications, research needs to move away from the (big data) methodologies that have dominated social media research.

Messaging applications are likely to continue growing, becoming important gateways for political conversation and engagement. While they represent a novel challenge for scholars, we must also remember that studying private and small-group communication is not new in political communication, and that conventional social scientific methods can provide both quantitative and qualitative insight into the use of messaging apps (Kligler-Vilenchik, 2019; Rossini et al., 2020; Vermeer et al., 2020). In the absence of digital trace data, political communication scholars must turn their attention to people as primary data sources to understand mis- and disinformation, focusing on users' perceptions, practices, and behaviors. If the past decades have been marked by the proliferation of computational methods and big data in political communication (Theocharis & Jungherr, 2021), the move toward privacy and the centrality of messaging apps should represent a significant shift in how we study mis- and disinformation moving forward, renewing our focus on individuals as data sources.

Acknowledgments

The author would like to thank the guest editors, Karolina Koc-Michalska, Ulrike Klinger, Lance Bennett, and Andrea Römmele, for the invitation to contribute a forum piece, and Cristian Vaccari for feedback on earlier versions of this article.

Disclosure Statement

No potential conflict of interest was reported by the author.

Notes on contributors

Patrícia Rossini

Patrícia Rossini (PhD, Federal University of Minas Gerais) is a Senior Lecturer in Communication, Media & Democracy at the University of Glasgow, UK.

References

  • Badrinathan, S. (2021). Educative interventions to combat misinformation: Evidence from a field experiment in India. The American Political Science Review, 1–17. https://doi.org/10.1017/S0003055421000459
  • Banaji, S., Bhat, R., Agarwal, A., Passanha, N., & Pravin, M. S. (2019). WhatsApp vigilantes: An exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India. London School of Economics.
  • Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33(9), 1131–1140. https://doi.org/10.1080/10410236.2017.1331312
  • Carmi, E., Yates, S. J., Lockley, E., & Pawluczuk, A. (2020). Data citizenship: Rethinking data literacy in the age of disinformation, misinformation, and malinformation. Internet Policy Review, 9(2). https://doi.org/10.14763/2020.2.1481
  • Cohen, E. L., Seate, A. A., Kromka, S. M., Sutherland, A., Thomas, M., Skerda, K., & Nicholson, A. (2020). To correct or not to correct? Social identity threats increase willingness to denounce fake news through presumed media influence and hostile media perceptions. Communication Research Reports, 37(5), 263–275. https://doi.org/10.1080/08824096.2020.1841622
  • Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117
  • Hatmaker, T. (2021, January 13). Telegram blocks ‘dozens’ of hardcore hate channels threatening violence. TechCrunch. https://social.techcrunch.com/2021/01/13/telegram-channels-banned-violent-threats-capitol/
  • Kazemi, A., Garimella, K., Shahi, G. K., Gaffney, D., & Hale, S. A. (2021). Tiplines to combat misinformation on encrypted platforms: A case study of the 2019 Indian election on WhatsApp. arXiv:2106.04726 [cs]. https://doi.org/10.37016/mr-2020-91
  • Kligler-Vilenchik, N. (2019). Friendship and politics don’t mix? The role of sociability for online political talk. Information, Communication & Society, 24(1), 1–16. https://doi.org/10.1080/1369118X.2019.1635185
  • MacKenzie, A., & Bhatt, I. (2020). Lies, bullshit and fake news: Some epistemological concerns. Postdigital Science and Education, 2(1), 9–13. https://doi.org/10.1007/s42438-018-0025-4
  • Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2020). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1–16. https://doi.org/10.1037/xap0000315
  • Resende, G., Melo, P., Sousa, H., Messias, J., Vasconcelos, M., Almeida, J., & Benevenuto, F. (2019). (Mis)information dissemination in WhatsApp: Gathering, analyzing and countermeasures. The World Wide Web Conference (WWW '19), 818–828. https://doi.org/10.1145/3308558.3313688
  • Rossini, P., Stromer-Galley, J., Baptista, E. A., & Oliveira, V. V. D. (2020). Dysfunctional information sharing on WhatsApp and Facebook: The role of political talk, cross-cutting exposure and social corrections. New Media & Society, 23(8), 2430–2451. https://doi.org/10.1177/1461444820928059
  • Theocharis, Y., & Jungherr, A. (2021). Computational social science and the study of political communication. Political Communication, 38(1–2), 1–22. https://doi.org/10.1080/10584609.2020.1833121
  • Tromble, R., & McGregor, S. C. (2019). You break it, you buy it: The naiveté of social engineering in tech – and how to fix it. Political Communication, 36(2), 324–332. https://doi.org/10.1080/10584609.2019.1609860
  • Vermeer, S. A. M., Kruikemeier, S., Trilling, D., & de Vreese, C. H. (2020). WhatsApp with politics?!: Examining the effects of interpersonal political discussion in instant messaging apps. The International Journal of Press/Politics, 26(2), 410–437. https://doi.org/10.1177/1940161220925020