Research Article

What Explains the Spread of Misinformation in Online Personal Messaging Networks? Exploring the Role of Conflict Avoidance

Abstract

Online personal messaging platforms such as WhatsApp and Facebook Messenger are now hugely popular around the world. Yet their role in the spread and social correction of misinformation remains under-researched. We carried out in-depth, semi-structured interviews with the UK public (N = 102) to explore how social relationships and technological design interact and foster norms regulating how people respond when their everyday social ties share misinformation on these platforms. Conceptualizing messaging as hybrid public-interpersonal communication, we develop a framework that situates online political talk in the context of everyday social interaction. We show that, among our participants, a norm of conflict avoidance is particularly powerful on these platforms and makes people reluctant to speak out. Conflict avoidance should therefore be taken seriously as a contributor to the diffusion of misinformation in everyday life. Policymakers and other stakeholders, including news organizations, should explore new, tailored ways to empower citizens to challenge misinformation in these important online spaces, where automated and algorithmic interventions are impossible.

There is growing evidence that online personal messaging platforms such as WhatsApp and Facebook Messenger play an important role in the spread and correction of misinformation (Kligler-Vilenchik 2022; Malhotra and Pearce 2022; Pearce and Malhotra 2022; Rossini et al. 2021). In this study, we provide some new concepts and evidence to understand the diffusion of misleading content in these increasingly important but often hidden worlds. Our approach situates political talk in the context of everyday life and social norms. Norms shape the sharing and correction of misinformation (Bode and Vraga 2021; Chadwick, Vaccari, and O’Loughlin 2018; Malhotra and Pearce 2022; Tandoc, Lim, and Ling 2020). They also condition citizens’ reception of how news organizations fight online falsehoods (Andı and Akesson 2021). Our exploratory approach allows us to show how a norm of conflict avoidance in the specific context of personal messaging can make it difficult for people to engage in the social correction of misinformation.

We set out a relational-constructivist framework. By this we mean that people continually adjust their everyday interactive behavior and expression according to how they perceive others’ behavior and expression and how they themselves want to be perceived. This is especially important when people are anxious to prevent political disagreement from damaging social relationships (Eliasoph 1998; Eliasoph 2000; Eveland, Morey, and Hutchens 2011). To develop our framework, we synthesize themes from prior research and conceptualize personal messaging as hybrid public-interpersonal communication. As we explain, much personal messaging involves communication between close ties, but it also features rapid and subtle switching between interpersonal and semi-public contexts in an overarching context of privacy and curation of audiences. These technological factors converge with social relationship factors and can foster norms that make it difficult for people to respond when social ties share false and misleading information. Our findings highlight the importance of conflict avoidance as a key social norm.

We conducted in-depth semi-structured interviews with members of the public (N = 102) who reflect the diversity of the UK population on age, gender, ethnicity, educational attainment, and a basic indicator of digital literacy. Our interpretive thematic analysis reveals that the norm of conflict avoidance can reduce people’s relational capacity and the social license necessary to challenge and correct misinformation. Relational capacity in this context is the degree to which people feel empowered to engage in correction when they recognize that interdependent social relationships make it relatively difficult to challenge others. In using the term social license we refer to how becoming empowered to correct misinformation derives, in part, from the norms signaled by others in a communication setting (Legros and Cislaghi 2020). If others signal a norm of conflict avoidance, it further reinforces that norm’s impact and makes it even more difficult to route around. Individuals perceive that others have not granted “permission” to correct misinformation.

Because a norm of conflict avoidance makes people reluctant to speak out, we argue that, subject to further empirical inquiry to test these ideas at the general population level, it should be considered a contributor to the spread of false and misleading information through personal messaging. On a more positive note, in contrast with public social media, the strong interpersonal trust that animates some (though not all) communication on messaging potentially makes these settings useful for reducing the spread of misinformation if people are empowered to overcome conflict avoidance. Our findings suggest that conflict avoidance could potentially become a new object of intervention for journalists. Anti-misinformation work that boosts citizens’ power to correct misinformation could situate itself in everyday social interactions and new meso news-spaces (Tenenboim and Kligler-Vilenchik 2020) created by news organizations.

Online Personal Messaging as a Misinformation Problem

Messaging platforms have grown rapidly in recent years to become hugely popular around the world. Globally, WhatsApp has more than two billion users (WhatsApp 2020). In the UK, it has 31.4 million users aged 18+—about 60% of the entire adult population, which is more than any of the public social media platforms. Facebook Messenger has 18.2 million UK users (OFCOM 2021). A study in Argentina highlights WhatsApp’s centrality as “a highly versatile, all-encompassing space of encounter, meaning-making, and coordination where entrance barriers are low and exit costs are high” (Matassi, Boczkowski, and Mitchelstein 2019, p. 13). Yet despite personal messaging having been condemned in popular commentary as awash with conspiracy theories and fake remedies, its role in the spread of misinformation is poorly understood. The lack of attention is partly explained by the difficulties of researching these platforms (Rossini et al. 2021; Swart, Peters, and Broersma 2018). They do not have public archives, and the most popular ones encrypt communication, so it is impossible to comprehensively identify and measure what circulates. Yet the lack of attention also derives from a further factor: misinformation on personal messaging is a deeply social problem for which there are no quick technological “fixes.” Automated interventions used on public social media, such as injecting fact checks into feeds or algorithmically down-ranking posts and accounts, have no relevance.

Conceptual Framework

Online Personal Messaging as Hybrid Public-Interpersonal Communication

We argue that personal messaging functions as hybrid public-interpersonal communication and this has implications for social norms of misinformation correction. Messaging weaves emotionally intimate digital connection into the fabric of everyday life. It is mainly used for interpersonal communication among strong-tie networks of family, friends, parents, co-workers, and local community members (Masip et al. 2021; Swart, Peters, and Broersma 2018; Vermeer et al. 2021; WhatsApp 2021). In these contexts, people rely on personal experiences and the emotional bonds of kinship and friendship as bases for what they think about others’ sharing habits, and what they feel able to say in response. Experiences are shaped by the iterative, mobile, and socially networked context of smartphone use and perpetual, if sometimes ephemeral, connection (Ling 2012). Yet often the (mis)information shared via these platforms originates in the more remote public worlds of news, politics, and entertainment (Tenenboim and Kligler-Vilenchik 2020). It is then switched into one-to-one and group settings, and sometimes loses markers of provenance, such as cues about its source, purpose, and timing, along the way. Provenance of information is often difficult to establish on public platforms (Bimber and Gil de Zúñiga 2020) but on personal messaging it can be an even greater challenge. For example, with the WhatsApp “forward” design feature there are no meaningful metadata, such as a time stamp or even the identity of the original poster of the message. Yet the forward can be a driver of incidental exposure to messages of unclear provenance when people use the feature to switch public information into private networks (Masip et al. 2021; Valenzuela, Bachmann, and Bargsted 2021).

People may or may not use public sources to bolster their interpersonal reassurances and warnings about misinformation, exposing others to them. Links may or may not be forwarded to different, larger groups on the platform, in a further switching process—a move from an interpersonal audience to a broader, semi-public audience. For example, a message might be switched into a different personal messaging group chat with more members, or a user might switch from a one-to-one chat to a small group and then to a larger group to pick up information. Communication in interpersonal settings can be relayed to semi-public settings, for example if it is forwarded into larger messaging groups, such as those focused on work, neighborhoods, or school parents, that are popular on these platforms and may comprise dozens and sometimes (if rarely) hundreds of people. So, privacy and intimacy are important on personal messaging (Swart, Peters, and Broersma 2018) but they are not what makes it unique. Rather, it is that personal messaging involves rapid and subtle switching between engagement in interpersonal and semi-public communication, in an overarching context of relative privacy and sender control over the audiences for messages (Vermeer et al. 2021). The public world intervenes in personal messaging, but fully public reception settings do not exist there. The world of news and misinformation drifts across its contexts; people must reckon with how this can affect their interpersonal relationships.

Hybrid Public-Interpersonal Communication and Meso News-Spaces

Since messaging interactions are never fully private and never entirely separated from the public world of news making, personal messaging can nurture “meso news-spaces.” As Tenenboim and Kligler-Vilenchik (2020, p. 265) wrote when introducing the concept:

sustained reciprocity can be accomplished through a continuous conversation between a journalist and audience members in a non-public online space. This kind of sustained reciprocity allows, in turn, a continuous co-construction of journalistic knowledge across the news-production process. All this takes place in what we term here a meso news-space: an online space, occurring between the private and public realms, where a group of people are involved in news-related processes.

Larger personal messaging groups, such as the WhatsApp group run by Israeli journalist Tal Schneider and analyzed by Kligler-Vilenchik and Tenenboim (2020) in their case study, can fuse longstanding attractions of online communities—sociability, creative expression, and mutual exchange—with opportunities for journalists to engage audiences in the co-production of news. Meso news-spaces can blur the distinction between public and private: they operate in a backstage zone where group participants are given restricted access to journalists but participants know that what they do there will have implications for the public frontstage of news production and publication (Kligler-Vilenchik and Tenenboim 2020; Tenenboim and Kligler-Vilenchik 2020).

In our study, the focus is on personal messaging contexts where news organizations and journalists were not directly present. This is the most common experience on personal messaging around the world (WhatsApp 2021). However, the case study of the meso news-space reveals some of the hybrid public-interpersonal communication we theorize is crucial for the development of social norms that regulate the correction of misinformation on personal messaging. The two concepts differ in that hybrid public-interpersonal communication does not primarily involve direct engagement with the news, but entails everyday, informal exchanges with close ties, where the news layers into these contexts (Swart, Peters, and Broersma 2018; Swart, Peters, and Broersma 2019). These can sometimes develop into meso news-spaces, either temporarily, when people connected on these platforms for other reasons come to intensively discuss news for a period of time, or more permanently, when users participate in groups specifically created to discuss and exchange news. However, our focus is on contexts where news and public information layer into communication but where engagement is much less purposive, structured, and connected with professional news production than occurs in meso news-spaces. That being said, the meso news-space also offers a useful metaphor for thinking about how news organizations might adapt their practice to make a difference to the spread of misinformation in personal messaging. We return to this point in our conclusion.

The Power of Social Norms

Hybrid public-interpersonal communication generates, and is regulated by, norms of behavior. Norms are relational and socially constructed. They spring from, but also shape, routine social interactions. People collectively create norms, but also adapt to fit in with them (Legros and Cislaghi 2020). And because personal messaging provides ongoing connection to others, unconstrained by the need for physical presence, it is well suited to the habitual interactions and practices (Masip et al. 2021) from which norms spring. When people respond to misinformation shared on personal messaging they also signal norms to others in their networks. This grants people social license to behave in particular ways, as more people perceive that adhering to a norm is less cumbersome, more personally beneficial, or more likely to help them fit in or enhance their status in some way (e.g., Bikhchandani, Hirshleifer, and Welch 1998; Goffman 1955).

Importantly, norms on personal messaging may differ from those in fully public online spaces. This, too, can shape whether misinformation is amplified. Misinformation can originate in obviously public domains, such as news, or public platforms such as Facebook, but then be switched into more private, interpersonal communication networks where different norms of sharing and challenging might apply and affect misinformation’s spread or correction. Or misinformation can originate in rumors, gossip, and misunderstandings in one-to-one or small-group personal messaging interactions before it is then switched into other, larger personal messaging groups. This might then grant misinformation a semi-public character that makes people feel they need different skills and capacities to challenge it (Pearce and Malhotra 2022). Norms of challenging misinformation differ across these contexts and can boost or limit people’s relational capacity to correct others. As personal messaging mainly involves the maintenance of social relationships, the norm that it is “not the done thing” to challenge false information—perhaps even if the individual privately believes it is “the right thing” to do—may mean misinformation flows and reaches more people.

Trust, Homophily, and Conflict Avoidance

Hybrid public-interpersonal communication also shapes how trust operates in relation to misinformation. Rossini and colleagues’ study of internet users in Brazil found people are more likely to correct misinformation on WhatsApp than on public platforms (Rossini et al. 2021). They suggest that people tend to use WhatsApp to connect with close ties and perceive it is “safer” to correct misinformation there. However, people can also defer to specific family and community members, such as parents or the elderly, and are less likely to correct them, as revealed by Malhotra and Pearce’s (2022) study of WhatsApp use among young people in Delhi. These conflicting findings suggest the need to dig deeper into the obstacles to peer correction in personal messaging.

Most, though not all, discussion on personal messaging happens in small groups of people who know each other. According to WhatsApp’s analysis, “90% of messages sent on WhatsApp are between two people, and the average group size is fewer than 10 people” (WhatsApp 2021). These networks are often animated by strong interpersonal trust and broadly shared worldviews that derive from shared experience over time. The social science research on trust is large and variegated, but here we draw on the important distinction between the “altruistic” trust (Mansbridge 1999) that is more likely to operate in close, interpersonal networks and the “social” trust (Putnam, Leonardi, and Nanetti 1993; Uslaner 2002) that is more prevalent among networks featuring weaker ties.

Social trust helps strangers co-operate; close interpersonal relationships rely on it less. Communication in the context of high interpersonal trust and low social trust might decrease, not increase, challenges to misinformation. This is because people are more likely to implicitly trust the information shared by their close ties and not require it to be explicitly reinforced by the credibility cues and signals of trustworthiness that are more important in communication environments comprising strangers (Masip et al. 2021). As Uslaner argues, social trust is what binds individuals to a larger community beyond the lived experiences they share with family and interpersonal ties. But as Putnam, Leonardi, and Nanetti (1993) have shown, social trust is built through reciprocity and information sharing, particularly in social domains where co-operation is required to achieve goals. The problem here is that, among people with deep, shared, lived experiences, reciprocity and sharing “good quality” information are less likely to be valorized as the principal way to build and maintain trust. In the lexicon of Mansbridge’s model of “altruistic trust,” trust relationships with close ties have gift-like qualities. As she writes, “In altruistic trust, one trusts the other more than is warranted by the available evidence, as a gift, for the good of both the other and the community” (1999, p. 290, emphasis added). This kind of trust is almost taken for granted among close ties but most people deem it less suitable for social interactions among people they know less well, where a more “strategic” use of trust as reciprocity and information sharing is usually prevalent (Putnam, Leonardi, and Nanetti 1993). Misinformation could therefore be less likely to be challenged when it switches from the public world into interpersonal messaging contexts where altruistic trust operates.

Moreover, as the research on interpersonal political discussion has shown, most people prefer to avoid conflict with close and trusted social ties, especially over political issues (Eliasoph 1998; Eliasoph 2000; Eveland, Morey, and Hutchens 2011; Mutz 2002). Family ties, because they are mostly not based on conscious choice, are famously more complicated than friendship. Yet families still tend to enjoy high altruistic trust and are important sources of social support that usually involve an expectation that the relationships will continue long into the future. Taken together, these factors mean family interactions are also likely to encourage conflict avoidance. People find ways to minimize conflict, to resolve the emotional contradictions between the intimacy of private and interpersonal relationships and the perceived “rationality” of public, political debate (Eliasoph 2000). Most societies also have a prevailing norm of avoiding embarrassing oneself while offering others opportunities to do the same (Goffman 1955; Malhotra and Pearce 2022). Silences and changing the topic are also key behaviors in this “face work” (Goffman 1955). When a friend shares false and misleading information, silences and changing the topic provide routes to conflict avoidance. This maintains the social relationship but at a cost: it also makes the social correction of misinformation less likely.

Again, however, on personal messaging people often switch between sharing or encountering personal information and public information. This can make conflict avoidance even more important for coping with misinformation. When a close social tie switches misinformation into your personal messaging group, it may have originated in the public world but it arrives in the context of ongoing personal relationships based on altruistic trust that you do not want to see contaminated by accusations that people are spreading falsehoods. This can set up tensions between the world of the interpersonal and private and the world of broader collective public values and discussion. Conflict avoidance can provide a cordon sanitaire around interactions with close social ties (Eliasoph 2000) and help avoid dragging those social relationships into politics and controversy. Yet this again carries a cost: the misinformation will spread.

Research Questions

Based on the conceptual framework we have mapped, we ask the following exploratory research questions. First, among our participants, does a norm of conflict avoidance make it more difficult to challenge misinformation in personal messaging and, if so, how and why? Second, among our participants, what are the origins of conflict avoidance in the social relationships and technological design factors typical on messaging platforms?

Research Design, Data, and Method

We chose a qualitative and interpretive method that enabled us to capture social meanings and interactions that are inaccessible through digital trace data and not easily revealed by standardized survey questions. We conducted in-depth semi-structured interviews with members of the UK public who use messaging (N = 102). We hired Opinium Research to recruit participants. Opinium maintains a panel of 40,000 people who agree to participate in surveys and market research. Those who met our criteria based on a short screening questionnaire were invited to participate in one-to-one interviews with us. Interviewees provided informed consent and were compensated with a £35 voucher. Loughborough University granted ethical approval.

Sampling and Recruitment

Participants used at least one of the following at least a few times a week: WhatsApp, Facebook Messenger, iMessages, Android Messages, Snapchat, Telegram, Signal. Figure S1 in the Supplementary Information (SI) shows the platforms our participants use: WhatsApp and Facebook Messenger are clear leaders. Screening recruitment ensured the full participant cohort roughly reflected the diversity of the UK population across gender, age, ethnicity, educational attainment, and basic digital literacy, though we note that average educational attainment was higher among this cohort than in the UK population. Figures S1 and S2 in the SI display all distributions. Quota matching is impossible and of little value in a qualitative study with 102 participants, but our method guaranteed we avoided over-recruiting from a narrow range of social groups, which is particularly important when using online panels. We also wanted to avoid focusing on one minority, such as heavy news consumers or conspiracy theory believers. The sample size allowed us to balance in-depth, interpretive analysis with covering the experiences of a reasonably representative group of people from a wide variety of backgrounds and walks of life. A separate project aim was to examine neighborhood factors, so we recruited participants who resided in three distinct regions: London, the East Midlands, and the North East. To ensure balance on demographics and digital literacy in the final sample we employed an iterative strategy with six recruitment and interview rounds on a rolling schedule from April to November 2021.

Procedure

As the Covid-19 pandemic restricted movement and in-person interactions, all interviews were held online on Zoom. They were semi-structured and guided by indicative themes we derived from our theoretical framework and pre-fieldwork pilot interviews. The interview guide covered the following topics: news habits, verification practices, trusted sources, the design of personal messaging platforms, the differences between personal messaging and public platforms, experiences of responding or not responding when others shared misinformation, and usage patterns and habits in different contexts such as family, friendships, workplace, local community, and diasporic relations. Understandably, due to the timing of fieldwork, misinformation about the pandemic loomed large, but other topics arose and we took care to tease out general patterns from testimony about specific topics.

Our relational-constructivist approach takes inspiration from Geertz’s classic argument that people are “suspended in webs of significance” they themselves have spun (Geertz 1973, p. 5). We asked participants to put their testimony in the context of what their social ties were doing and how they themselves interpreted the meaning of their own and others’ practices. The aim was to capture relationships, experiences, routines, how participants made sense of their use of personal messaging, and how it layered into the texture of everyday life as well as connections with others and with public and political events. To avoid the risk that participants would withdraw or withhold information if they felt they were being judged for sharing misinformation, we adopted an active listening method while avoiding both direct challenge and insincere feigned agreement (Hall 2022). We encouraged participants to check their smartphones during the interviews as an aide memoire but were at all times conscious of privacy. On average, each interview lasted about an hour and five minutes. Interviews were video-recorded and fully transcribed.

We did not ask to join participants’ messaging chats and groups. Aside from the ethical issues—in contrast with public social media, messaging users have a clear expectation of privacy that they must voluntarily and temporarily suspend at their own discretion—studying everyday sharing of misinformation requires great care. We doubt that joining chats would have been effective in encouraging people to talk about their experiences (cf. Vermeer et al. 2021). The Supplementary Information file contains further detail on the choices we made to ensure data integrity.

Analysis

Using NVivo, we first conducted emergent and open interpretive coding of the transcripts (Corbin and Strauss 1990). We were guided in part by the concepts we outlined in our conceptual framework and pilot interviews but at all times remained open to themes that emerged in the data. The project’s principal investigator (first author) created the initial coding scheme. This was checked, discussed, and augmented by the other team members. Weekly team meetings were held over a period of about 12 weeks to discuss coding consistency and to add and refine codes. We then moved to axial coding (Corbin and Strauss 1990, p. 13) to explore intersections between themes. Finally, we moved to the selective coding phase by focusing on conflict avoidance as the central category and exploring its linkage with other themes in the data set (Corbin and Strauss 1990, p. 14) and our conceptual framework. All team members participated in the coding and agreed on the final coding scheme.

Our writing strategy is to report the social experiences of some exemplary participants in a detailed way that enables readers to get a rich, contextual understanding of how the norm of conflict avoidance plays out. This narrative approach fits with our focus on social relationships and everyday interactions. However, we also include briefer testimony from other participants to further substantiate key points. Interview material has been anonymized by removing or replacing potentially identifying details. All names are pseudonyms assigned by us.

Findings

The Strains of Hybrid Public-Interpersonal Communication

Bella is in her early thirties and lives in the East Midlands. She has young children in the local primary school and belongs to a school parents’ WhatsApp group. Recalling the time one of the members posted something she says was “really anti-vaccine” into the group, she explained that she and the other parents decided not to challenge it. Instead, Bella said, “That was just met with radio silence. […] there was just silence, just no-one said anything, and then the topic was changed.”

Bella described her actions and those of the other group members as “cowardly” but explained that “there are, like, 30 other mums in there and I didn’t want to […] call it out in front of 30 other school parents.” Bella lacks confidence to speak out in the presence of others in the parents’ WhatsApp group because she fears she will be judged harshly for provoking conflict among its members, especially when the stakes are high and there is the semi-public context of this larger group to consider.

Bella’s story reveals how the hybrid public-interpersonal nature of communication on personal messaging can impose constraints on people’s ability to challenge misinformation. A WhatsApp group of 30 is not large. Yet it is large enough to mean that Bella does not personally know the views, and cannot anticipate the reactions, of many of its members. This is a reason not to speak out. She sees the parents’ WhatsApp group as a semi-public context and is reluctant to make the shift from casual, interpersonal chat to a political debate about misinformation in the face of disagreement. A similar theme emerged with Christine, who is 59 and lives in the East Midlands and who said “I’m quite happy to debate with someone, but what I’m not happy to do is have some sort of slanging match online that is visible to other people.”

Julia is in her forties and lives in south London. She belongs to several WhatsApp groups, mostly with family, friends, and the parents of her children’s classmates at the local school. She also spends time messaging with a good friend who follows what Julia describes as “medical trends,” by which she means scientifically unsupported nutrition advice, such as fruit juices her friend says prevent cancer. Julia does not hold these views but over the years has become used to avoiding confrontation when her friend posts them on WhatsApp. Importantly, Julia says her friend’s posts do not damage their personal friendship.

While preparing for her child’s birthday party, Julia set up a new WhatsApp group to help coordinate the gathering. She added her friend to the group alongside family members and other parents. The following day Julia noticed her friend had posted misinformation into the birthday group. As she explained:

So I put her in the group of the birthday party and she started to share the things about ‘do not get vaccinated’ and, you know, those threads. Okay, I mean, I respect it, but, for example, that was not the right place and time to do that, but still, I mean, we’re all different and we all have our strengths and weaknesses, and the others did not appreciate it, I did not appreciate it, but still, everyone just let her talk and say that because she’s nice in other ways. So yeah, it’s a matter of our co-existing, I guess.

When, in the interview, Julia was asked if others in the WhatsApp group replied to her friend’s posts, she laughed slightly nervously and explained what happened next: “No, we all ignored that… I mean we simply know her, we know how good she is, but we know that she’s also very bad with this, so we just ignored that, and we just kept on talking about what time the birthday party was and what we could [do], if, ‘shall we bring some wine?,’ that’s it.”

Julia explained that she avoids confronting misinformation on WhatsApp because there is “something more important at the base” of the relationships she has there. She also finds it easier to deal with misinformation in personal messaging interactions than she does face-to-face. The low-stakes, constant connection makes it easier to avoid conflict, let conversation “fade” and cool, and move onto safer topics without addressing the problem overtly. As she put it:

You ignore it for a while and then you respond later or just don’t respond and respond with “Oh, by the way,” and something else. Of course, you cannot do that in person, but of course, at that point, the interest in the topic becomes a lot less hot. So it’s easier. [It] sort of fades away after a while, so you can find your way out and you can wriggle out of this situation much easier.

A similar approach was adopted by Eve, who is 43 and lives in the North East. She described how she turns off message notifications on her phone when some of her friends become “obsessive” (her term) about the unfounded belief that the Covid pandemic was an excuse for “the government knowing all your information.” By ignoring these messages, she avoids conflict and retains the friendships. Adrian, 51 and living in the North East, avoided correcting misinformation because, as he phrased it, “you’re sort of belittling that person” in front of other group members. Diana, 64 and in the East Midlands, said she is careful to avoid calling people out because “If it’s there for everybody else […] it looks harder. It looks like you’ve taken a much harder line than you perhaps would have done.”

These themes were revealed vividly by a participant experiencing extreme pressure from family. Jenny, a retiree in her 60s living in north London, told us she particularly enjoys the photos of her grandchildren posted in a WhatsApp group for her extended family. Yet her nephews and nieces often posted into the same family group their warnings, based on unfounded conspiracy theories, about the harmful effects of 5G mobile internet technology. Jenny explained how these WhatsApp messages contained links to content from public platforms: unwanted public contamination of her family group. Jenny’s way of coping with these difficult situations is to protect herself but also avoid provoking conflict:

I think you can get carried away in the moment when you read some of the stuff, so, yeah, I just tend to sort of let it go over until I hear it going out live on the TV, on the actual news or something like that. […] They [nieces and nephews] keep sending links—’you’ve got to read this, you’ve got to read that’ […] D’you know what, it is really hard. […] They go on and on about it on the group and they’ll send us, they send us links and then they keep saying ‘you read it yet? Have you read it yet? You’ve really got to read it, you’ve really got to read it.’ To be honest, I’ve got to the stage now where I don’t because I just think ‘oh.’ I’ve stopped reading a lot of it.

Jenny’s anxiety about her relatives’ constant attempts at reinforcement highlights what, in less intimate contexts, may be a violation of conversational norms. Yet some members of this family group feel sufficiently at ease to impose themselves on others, leading Jenny to avoid conflict by further retreating from engaging with and challenging these messages. This theme also emerged with Grace, a 26-year-old in the East Midlands, who described how her aunt posted Covid misinformation into small group family chats with her immediate family members: “I’ll just sort of leave them to it… I don’t really want to get into an argument about it,” she said.

The Problem with Letting Misinformation “Go Over”

For Jenny, avoiding conflict with family members expanded into a broader approach of avoiding the misinformation itself, by letting it “go over”—her phrase in the quotation above for how the posts disappear when she quickly scrolls on her phone. Yet this is based in her recognition that, because she is already aware of the misinformation contained in her nephews’ and nieces’ posts, she can avoid engaging with it. There is a paradox here: to be able to avoid reading these posts, Jenny needs to know in advance that the posts’ content is repetitive. As she said of her relatives, “they go on and on.” The posts continue to appear in the family group, Jenny sees them, knows what they say, but avoids speaking up against them. So, a 5G conspiracy theory receives no obvious challenge despite it being constantly repeated, and, paradoxically, at least in part precisely because it is being constantly repeated. This, in turn, makes it more likely that other members of the group will feel less constrained if they want to share these posts in other groups, or less compelled to challenge them if they see them elsewhere. Tacit acceptance in the family group can inadvertently contribute to the misinformation’s legitimacy and further diffusion.

The challenges of coping with these awkward interactions meant that some participants simply decided to avoid conflict under all circumstances. Will, aged 27 and from the North East, said that when he sees what he called “fake news,” “I try to close up the app or in some cases I do leave the channel.” Josephine, from the North East and in her early fifties, explained her approach: “I don’t put my opinions on people or anything like that. […] You’ve just gotta be very, very careful on what you send out there, don’t you, and think before you send it. [I] don’t do it to anybody.” Harry, a 45-year-old in the East Midlands, spoke of how he mutes messaging chats and decides “I’m now staying away from my phone” and will “let it skip by.” Jack, 24 and in the North East, said that when he sees a friend sharing what he thinks is false information he will often open links “just out of curiosity.” But, he added, “if it’s just rubbish I’ll not put a response, I’ll not challenge them on it, I’ll just sort of leave it.” And Dominique, in her late 30s and from the East Midlands, told us that her “trick” (her word) is to preview the first few lines of a message on her Apple Watch but not read the whole message. That way, she can avoid letting the poster see that she has seen the message and decided to disregard it, an act she worries will cause conflict. This is an even more deliberate form of conflict avoidance, as Dominique is not just afraid to challenge her interlocutors but also acts out of fear they would challenge her for not responding to explicitly endorse what they posted.

Trust, Homophily, and Curating out Conflict

Other participants distinguished between how they behave on personal messaging and on public platforms. Weaker ties to Facebook “friends” meant that less altruistic trust (Mansbridge 1999) operates, and this, in turn, makes conflict more likely. Even though Facebook’s News Feed algorithm may present posts from close and regular contacts more prominently, some saw Facebook as a less personalized environment than the one they could curate on personal messaging.

However, applying this trust-based distinction can also reinforce the norm of conflict avoidance. Deliberately restricting personal messaging to those who hold similar attitudes can pre-structure homophily so that it “naturally” leads to less conflict than on public social media. James, 38 and from the North East, described how he developed a distinction between Facebook and Facebook Messenger. “On wider Facebook I’ve seen quite a bit of [misinformation], but not so much on Messenger. Again, I think that’s where, if you’re gunna message it to someone direct, you’d probably suspect that they agree with your viewpoint before you started.” Julia said she draws distinctions between what she discusses in her larger friendship groups and in interactions with those she considers very close personal ties, such as her mother, whose views are similar to hers. Misinformation posted in a group can be easier to deal with if there is an expectation of only communicating about the posts in a different messaging context—in smaller scale interactions and with those with whom one already agrees. This approach is useful for generating social solidarity, but it may mean that misinformation goes unchallenged when and where the challenges are most needed. Anthony, in his early 60s and from London, avoids challenging posts on Facebook Messenger chats because he thinks it “becomes a bit counterproductive on that public forum” with multiple others, even if the group is relatively small and of course not fully public—it is personal messaging.

Drawing Boundaries between the Political and the Interpersonal

In an elaboration on this approach, Richard, who is in his fifties and from London, told us that when he encounters misinformation from colleagues at work his approach is “keep your mouth shut.” He explained that he distinguishes between exposure to false information in the political world—what he called “political” and media “hype”—and personal messaging interactions between his friends and work colleagues. He believes that, on personal messaging, he will not be able to effectively counteract the impact of his friends’ exposure to pandemic misinformation from public sources, and, even if he could, it would not be appropriate to do so: “People have gone on the media and gone ‘oh it’s gunna kill ya, there’s little microchips in it [the Covid vaccine]’ it’s like ‘oh God.’ Nah. It’s just, I don’t wanna get into that. I’ve just said, ‘just get me a beer’—bit easier [laughs].”

Richard made it clear that he avoids conflict over misinformation on personal messaging because he does not see a legitimate role for himself in overcoming what he perceives as more powerful public and political influences shaping the views of his friends and workmates. This kind of conflict avoidance draws boundaries between a supposedly formal, public political sphere, where misinformation is perceived as spreading and where the norm is that it is legitimate to challenge it, and the interpersonal world of personal messaging, where misinformation circulates but, in Richard’s view, should go unchallenged. This is based on the presumption that misinformation is “out there,” created and spread by elites and organized political actors “on the media,” and not an accepted part of the interpersonal world for which Richard thinks personal messaging should be reserved. For Richard, this applies even if his friends do switch misinformation circulating in public, such as on social media or mainstream news, into personal messaging. This boundary drawing also involves a weary resignation that what he says on personal messaging will make no difference when compared with stronger influences on people’s attitudes, such as politicians. It results in Richard not correcting even his closest friends and colleagues, on whom he could conceivably have a positive influence.

Other participants revealed similar boundary drawing. Carina, 42 and living in London, said “if someone’s shared some fake news with me it’s definitely not worth, you know, getting into an argument about it […] I don’t think I can, and it’s not my place to, you know, try to change someone fundamentally.” Simon, 51 and in the North East, said he “would rather be reserved and not have the confrontation” because “WhatsApp in the groups is all about feeling you’ve got a bit of belonging, banter, craic, you know, and things like that, but not for confrontational (sic).”

Routes around Conflict Avoidance and Their Unclear Consequences

The interviews also revealed that participants employed routes around conflict avoidance. One involved opening up communication flows to share criticisms of misinformation in encounters that were less risky because altruistic trust was stronger. This strategy enabled participants to avoid overt conflict in larger groups, but also to learn about the experiences of others and exchange reliable information. Messaging between those with close personal ties can also provide a context for directly challenging misinformation. However, these routes around conflict avoidance have their limits when it comes to reducing the spread of misinformation, as we now explain.

Lydia is in her late twenties and works in a white-collar job in London. She was surprised when one of her friends shared what she felt was misinformation about Brexit in a small WhatsApp group with close friends. After that, Lydia and her friends quickly switched to one-to-one conversations to decide on the best way to mobilize news sources in a challenge to their friend’s post: “A few of us, like, sent some messages on the side, like ‘what the fuck?,’ you know, this, this is unexpected, like, what should we do, and then we’re, like, ‘no just send, send this, send this!’” Lydia and her friends responded collectively in the group with a number of sources to counter their friend’s post.

We term this particular form of switching “scaling and gauging.” To make sense of problematic information, one can switch between larger and smaller groups, or between groups and one-to-one messaging and then back to the group. The opposite is also possible: insights from larger groups can be used to make sense of misinformation in one-to-one and small group interactions with closer friends and family. The information shared varies depending on the context. Switching to messaging “on the side,” as Lydia labelled it, helps close ties make sense of misinformation they see in groups with larger memberships. Another participant, Luke, in his early forties, spoke of how he and colleagues used smaller chats to discuss “Covid denier” misinformation posted in their main WhatsApp work-related group.

However, it is unclear if scaling and gauging prevent the spread of misinformation. Consider the example of Georgios, who lives in London and is in his late forties. He belongs to a WhatsApp group of old university friends. He explained how he and his friends scale and gauge to avoid conflict. Rather than confronting old friends who share misinformation in the group, Georgios and his friends switch to smaller groups to discuss it, before scaling their collective approach back up to the larger group:

We, outside the group, we do talk privately between us as well. […] So, we’re sort of highlighting there’s some sort of attention needed from someone and give them space. […] So, we share this type of information, when we know about someone else, in a more private way outside the [main] WhatsApp group.

Scaling and gauging can be helpful to break the silence imposed by conflict avoidance in other interactions. Georgios and his friends know each other well enough to be aware that directly confronting misinformation in the larger group might provoke a negative response. But this approach has limited impact on the spread of misinformation in the larger group. It removes the topic from conversation in that group, placing it off limits, such that Georgios and his friends end up with fewer opportunities to correct it when they switch back to the larger group. This pattern was also illustrated by Kurt, an East Midlands resident in his mid-forties. He and some of his friends had confronted a friend who had spread conspiracy theories about the pandemic. But as a result of the challenge, their friend said “‘Oh whatever,’ and kind of, the conversation kind of died down,” as Kurt recalled. Similarly, Sandra, 64 and in the North East, said she does not challenge misinformation in a group but would send a one-to-one message to the person who posted it. Adam, 60 and in the North East, thought that WhatsApp was a “good way of putting your arguments down […] in a sort of plain way” without emotion. However, when he did this with a friend who had shared misinformation, they ended up having to “agree to disagree.” “I capped it at that point,” he said.

Confrontation’s Unintended Consequences

Later in the interview with Georgios, it became clear he had tried to directly confront misinformation on personal messaging. He recounted the story of a friend who contacted him on WhatsApp to elaborate on the widely circulated false conspiracy theory that Covid was a plot to insert microchips into people’s bodies. Georgios detailed how a particular variation of the conspiracy theory came into the orbit of his friend via a personal anecdote. In the story, a man attending a vaccination centre becomes lost and asks a nurse for directions. According to the story, the nurse already knows the name of the man and calls him by his name. The man did not give his name, so how could the nurse possibly know it? In the conspiracy theory narrative, this supposedly reveals the man is being tracked by a microchip in the vaccine.

At first, Georgios thought his friend was relaying the story to him as a joke. Then he realised it was serious. Georgios confronted his friend: “I said, ‘how can you rely on this type of, you know, hearsay?’ […] I was like, ‘you—you’re out of your mind. You’ve lost it!’” Things did not go well. Georgios’ friend made a quick move to avoid further conflict by saying ‘Oh, I’ve got to go now,’ and immediately cut off the discussion.

Conflict avoidance can work both ways. It can discourage people from addressing conspiracy theories, but it can also enable those being corrected to exit and limit further engagement. Georgios explained that he remains close to this friend and they communicate often on personal messaging, but some subjects are now firmly off limits: “we still chat a lot on WhatsApp, yes […] I still do, I still do—not about the vaccines,” he said.

Interactions on personal messaging are never fully private, nor are they fully public. Misinformation, such as Georgios’ friend’s conspiracy theory story, can arrive from spaces beyond the interpersonal—in the form of links, memes, and anecdotes. As users bracket out this reality by placing discussion of misinformation out of bounds to avoid conflict, opportunities to engage in dialogue are lost. Yet direct confrontation also carries risks. The close bonds of friendship might persist, but only on the basis that the misinformation is not up for discussion.

Conclusion

We based our study on two exploratory research questions. First, does a norm of conflict avoidance make it more difficult for citizens to challenge misinformation in personal messaging and, if so, how and why? Second, what are the origins of conflict avoidance in the social relationships and technological design factors that are typical on messaging platforms? In this conclusion, we summarize how our findings above answer these questions. Then we close by discussing some broader implications of this research.

Addressing our first research question, we have shown that participants want to avoid political disagreements over misinformation tarnishing their social relationships (Eliasoph 1998; Eliasoph 2000). The process is shaped by what we conceptualize as the hybrid public-interpersonal character of the messaging context, where misinformation can flow from the public world into the interpersonal world and where its reception is shaped by what Mansbridge (1999) has termed “altruistic trust.” Since personal messaging layers misinformation into the intimacies of everyday relationships more heavily than public social media (Matassi, Boczkowski, and Mitchelstein 2019; Swart, Peters, and Broersma 2018), the need for conflict avoidance is more keenly felt. We have shown how, among our participants, conflict avoidance is a key mechanism that obstructs correcting misinformation shared on personal messaging.

Turning to our second research question—the origins of conflict avoidance in the social relationships and technological design typical of messaging platforms—we have identified how specific convergences of social and technological factors contribute to maintaining and reinforcing the norm of conflict avoidance. As we have shown, there are various behavioral patterns through which this mechanism operates: assessing group size and the strength of the social relationships in the group; various disengagement strategies including silence (“letting it fade and cool”), deliberate inattention such as letting misinformation “skip by,” muting, scrolling quickly, and silencing notifications; curating out potential misinformation-based conflict by deliberately restricting personal messaging to those who hold similar attitudes; and drawing metaphorical boundaries between the public, political world where it is acceptable to challenge misinformation and the interpersonal world where it is deemed unacceptable because personal messaging interactions are not perceived as the “normal,” appropriate arena to address the problem. These behaviors can leave misinformation unchallenged and uncorrected. They can disempower people and make it difficult to speak out against false information shared even by those to whom they are personally close—precisely because they are personally close to them or others who may witness the uncomfortable exchange in a small group. The semi-public character of discourse in larger messaging groups can also lead to a fear of standing out, of being seen to undermine group cohesion, and to anxiety that challenging misinformation in larger groups has higher social stakes.

In some cases, these social and technological convergences can spur participants to try to route around conflict avoidance, for example by quickly switching information from one group to another, scaling up and down between different groups, and gauging the norms signaled in other groups. This can create opportunities for learning and for solidarity, but mainly with those who are like-minded, and this ends up reinforcing conflict avoidance. Being personally close makes it more likely people will know in advance they share the same views. In those contexts, people can share criticisms of misinformation without worrying about provoking conflict outside that context. However, scaling down to smaller groups of the like-minded also means opportunities to open dialogue in the larger group recede. And, when confrontation replaces conflict avoidance, difficult topics can be placed off limits, leaving messaging interactions to continue, but only about “safer” topics.

In presenting these findings we add new dimensions to the emerging research into these important online platforms (Kligler-Vilenchik 2022; Malhotra and Pearce 2022; Masip et al. 2021; Matassi, Boczkowski, and Mitchelstein 2019; Pearce and Malhotra 2022; Rossini et al. 2021; Swart, Peters, and Broersma 2018; Swart, Peters, and Broersma 2019). Subject to further research, our findings could have broader implications. There could be value in promoting, rather than closing down, constructive and sensitive dialogue in these spaces (Kligler-Vilenchik and Tenenboim 2020). This is particularly relevant to larger groups, where greater numbers are potentially exposed to false and misleading information in one setting. But people will also need to find ways to balance social relationships with friends, family members, and the other communities to which they belong with the need to challenge falsehoods that can mislead and cause harm.

Encryption and the lack of public archives on personal messaging create serious obstacles to fact checking (Rossini et al. 2021). This suggests there is a need to explore different kinds of anti-misinformation work. For most people, even if misinformation flows across its networks, online personal messaging is a domain with fleeting and tenuous connections to the world of news and journalism (Swart, Peters, and Broersma 2018). Helping citizens speak out against misinformation therefore involves providing support and advice to build people’s relational capacity to deal with uncomfortable social interactions when challenging falsehoods shared in intimate spaces. The strong bonds of interpersonal trust and solidarity that animate personal messaging may impose limits on correction but they might also provide a fertile context for discursive engagement (Vermeer et al. 2021). Journalists and fact checkers could experiment with meso news-spaces (Kligler-Vilenchik and Tenenboim 2020) dedicated to combating misinformation in these settings by going with the grain of what we have identified. They could encourage people to switch from the high-trust, one-to-one, and small group interactions to the larger groups, where they could work together collectively in dialogue and avoid standing out as lone individuals. Equally, they could encourage people to scale down, switching the information and lessons learned in the larger groups into the one-to-one and small-group interpersonal interactions that our findings have also shown are important.

Accurate information is a necessary precondition for this, but information alone means little without the capacity to mobilize it. To be relevant on personal messaging, measures to combat misinformation should take seriously communication contexts away from formal news production and assign a more positive role to dialogue and peer endorsement: what ordinary people, in everyday settings, say or do not say to each other. This might involve stepping back from direct intervention in online information flows in favor of indirect approaches focused on developing people’s relational capacities to challenge misinformation. Alternatively, it could involve reorganizing some aspects of news work so that journalists can more directly inform everyday interactions, by building meso news-spaces that are more connected to everyday life (Kligler-Vilenchik and Tenenboim Citation2020). Our study has uncovered emotional contradictions and subtle silencing effects that citizens experience when members of their interpersonal networks share misinformation, and citizens need support to overcome them. This could be achieved without encouraging antagonism or blind confidence in one’s own opinions. Norms are sticky, and we cannot expect citizens to disregard interpersonal relationships. Overcoming the norm of conflict avoidance in the context of misinformation could be a collective endeavor grounded in mutual support, rather than behavior expected of lone, “heroic” individuals or delegated to automated responses. It ought to involve promoting an empathetic orientation. This approach may help crystallize new social norms of dialogic civic responsibility, maintained by people in their everyday interactions on personal messaging. It will also rest on recognizing the unique role of personal messaging as a hybrid public-interpersonal communication environment that generates subtle vulnerabilities to misinformation.

We close with some caveats. Fully exploring the broader implications of our findings requires further research, and we want to highlight some limitations of this study. This is an exploratory study, and our findings cannot be generalized to the UK population. We deliberately used qualitative and interpretive methods to provide theoretical insights and a platform for further research. We conducted our interviews online rather than in person, which constrained our ability to introduce physical props and to fully read participants’ body language, though it also allowed us to see facial expressions unobscured by the masks that in-person meetings would have required during the pandemic, assuming participants had agreed to attend them at all. We rely on people’s self-reports of their experiences, which may sometimes be inaccurate. That said, an advantage of the semi-structured in-depth interview is that it allows for follow-up questions, which can enable a participant to explain their personal experience in all of its variegated richness.

Supplemental material


Acknowledgements

We thank the project’s participants, the scholars who commented on earlier ideas we presented at the International Communication Association Annual Conference in Paris in May 2022, and the Leverhulme Trust for its financial support for this ongoing project.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research is supported by a Leverhulme Trust Research Project Grant (RPG-2020-019).

References

  • Andı, S., and J. Akesson. 2021. “Nudging Away False News: Evidence from a Social Norms Experiment.” Digital Journalism 9 (1): 106–125.
  • Bikhchandani, S., D. Hirshleifer, and I. Welch. 1998. “Learning from the Behavior of Others: Conformity, Fads, and Informational Cascades.” Journal of Economic Perspectives 12 (3): 151–170.
  • Bimber, B., and H. Gil de Zúñiga. 2020. “The Unedited Public Sphere.” New Media & Society 22 (4): 700–715.
  • Bode, L., and E. K. Vraga. 2021. “Correction Experiences on Social Media during Covid-19.” Social Media + Society 7 (2): 205630512110088.
  • Chadwick, A., C. Vaccari, and B. O’Loughlin. 2018. “Do Tabloids Poison the Well of Social Media? Explaining Democratically Dysfunctional News Sharing.” New Media & Society 20 (11): 4255–4274.
  • Corbin, J., and A. Strauss. 1990. “Grounded Theory Research: Procedures, Canons, and Evaluative Criteria.” Qualitative Sociology 13 (1): 3–21.
  • Eliasoph, N. 1998. Avoiding Politics: How Americans Produce Apathy in Everyday Life. Cambridge: Cambridge University Press.
  • Eliasoph, N. 2000. “Where Can Americans Talk Politics: Civil Society, Intimacy, and the Case for Deep Citizenship.” The Communication Review 4 (1): 65–94.
  • Eveland, W. P., A. C. Morey, and M. Hutchens. 2011. “Beyond Deliberation: New Directions for the Study of Informal Political Conversation from a Communication Perspective.” Journal of Communication 61 (6): 1082–1103.
  • Geertz, C. 1973. The Interpretation of Cultures: Selected Essays by Clifford Geertz. New York: Basic Books.
  • Goffman, E. 1955. “On Face-Work: An Analysis of Ritual Elements in Social Interaction.” Psychiatry 18 (3): 213–231.
  • Hall, N.-A. 2022. “Trajectories towards Political Engagement on Facebook around Brexit: Beyond Affordances for Understanding Racist and Right-Wing Populist Mobilisations Online.” Sociology. Advance online publication. https://doi.org/10.1177/00380385221104012.
  • Kligler-Vilenchik, N. 2022. “Collective Social Correction: Addressing Misinformation through Group Practices of Information Verification on WhatsApp.” Digital Journalism 10 (2): 300–318.
  • Kligler-Vilenchik, N., and O. Tenenboim. 2020. “Sustained Journalist-Audience Reciprocity in a Meso News-Space: The Case of a Journalistic WhatsApp Group.” New Media & Society 22 (2): 264–282.
  • Legros, S., and B. Cislaghi. 2020. “Mapping the Social-Norms Literature: An Overview of Reviews.” Perspectives on Psychological Science 15 (1): 62–80.
  • Ling, R. 2012. Taken for Grantedness: The Embedding of Mobile Communication into Society. Cambridge, MA: MIT Press.
  • Malhotra, P., and K. Pearce. 2022. “Facing Falsehoods: Strategies for Polite Misinformation Correction.” International Journal of Communication 16: 2303–2324.
  • Mansbridge, J. 1999. “Altruistic Trust.” In Democracy and Trust, edited by M. E. Warren, 290–309. Cambridge: Cambridge University Press.
  • Masip, P., J. Suau, C. Ruiz-Caballero, P. Capilla, and K. Zilles. 2021. “News Engagement on Closed Platforms. Human Factors and Technological Affordances Influencing Exposure to News on WhatsApp.” Digital Journalism 9 (8): 1062–1084.
  • Matassi, M., P. J. Boczkowski, and E. Mitchelstein. 2019. “Domesticating WhatsApp: Family, Friends, Work, and Study in Everyday Communication.” New Media & Society 21 (10): 2183–2200.
  • Mutz, D. C. 2002. “The Consequences of Cross-Cutting Networks for Political Participation.” American Journal of Political Science 46 (4): 838–855.
  • OFCOM. 2021. Online Nation Annual Report. https://www.ofcom.org.uk/__data/assets/pdf_file/0013/220414/online-nation-2021-report.pdf
  • Pearce, K., and P. Malhotra. 2022. “Inaccuracies and Izzat: Channel Affordances for the Consideration of Face in Misinformation Correction.” Journal of Computer-Mediated Communication 27 (2): 1–19.
  • Putnam, R. D., R. Leonardi, and R. Y. Nanetti. 1993. Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University Press.
  • Rossini, P., J. Stromer-Galley, E. A. Baptista, and V. Veiga de Oliveira. 2021. “Dysfunctional Information Sharing on WhatsApp and Facebook: The Role of Political Talk, Cross-Cutting Exposure and Social Corrections.” New Media & Society 23 (8): 2430–2451.
  • Swart, J., C. Peters, and M. Broersma. 2018. “Shedding Light on the Dark Social: The Connective Role of News and Journalism in Social Media Communities.” New Media & Society 20 (11): 4329–4345.
  • Swart, J., C. Peters, and M. Broersma. 2019. “Sharing and Discussing News in Private Social Media Groups.” Digital Journalism 7 (2): 187–205.
  • Tandoc, E., D. Lim, and R. Ling. 2020. “Diffusion of Disinformation: How Social Media Users Respond to Fake News and Why.” Journalism 21 (3): 381–398.
  • Tenenboim, O., and N. Kligler-Vilenchik. 2020. “The Meso News-Space: Engaging with the News between the Public and Private Domains.” Digital Journalism 8 (5): 576–585.
  • Uslaner, E. 2002. The Moral Foundations of Trust. Cambridge: Cambridge University Press.
  • Valenzuela, S., I. Bachmann, and M. Bargsted. 2021. “The Personal is the Political? What Do WhatsApp Users Share and How It Matters for News Knowledge, Polarization and Participation in Chile.” Digital Journalism 9 (2): 155–175.
  • Vermeer, S. A. M., S. Kruikemeier, D. Trilling, and C. H. de Vreese. 2021. “WhatsApp with Politics?! Examining the Effects of Interpersonal Political Discussion in Instant Messaging Apps.” The International Journal of Press/Politics 26 (2): 410–437.
  • WhatsApp. 2020. Two Billion Users. https://blog.whatsapp.com/two-billion-users-connecting-the-world-privately
  • WhatsApp. 2021. How WhatsApp Helps Fight Child Exploitation. https://faq.whatsapp.com/154956905959033/?locale=en_US