Research Article

Misinformation and professional news on largely unmoderated platforms: the case of Telegram

Aliaksandr Herasimenka, Jonathan Bright, Aleksi Knuutila & Philip N. Howard

ABSTRACT

To date, there is little research to measure the scale of misinformation and understand how it spreads on largely unmoderated platforms. Our analysis of 200,000 Telegram posts demonstrates that links to known sources of misleading information are shared more often than links to professional news content, but the former stays confined to relatively few channels. We conclude that, contrary to popular received wisdom, the audience for misinformation is not a general one, but a small and active community of users. Our study strengthens an empirical consensus regarding the spread of misinformation and expands it for the case of Telegram.

Introduction

The quality of news and information that individuals encounter online remains a source of critical contemporary concern. It is by now well documented that digital platforms facilitate the discovery of news content of varying quality, ranging from professional trusted news outlets to those promoting misinformation or outright propaganda (Lazer et al., Citation2018; Lewandowsky, Ecker, Seifert, Schwarz, & Cook, Citation2012). Such misinformation has potentially toxic implications for trust in public institutions, science and indeed democracy. In many countries, digital misleading information1 has been linked to intensified political conflict and heightened ethnic tensions, and has contributed to crises while weakening confidence in democratic institutions and electoral outcomes (Bradshaw & Howard, Citation2018). Misinformation may also have the power to set the wider news agenda and hence shape public discourse as a whole (Vargo, Guo, & Amazeen, Citation2018). The fact that misleading information sometimes seems to outperform facts on social channels (Vosoughi, Roy, & Aral, Citation2018) has led many to call for platforms to intervene in the news environment to limit the spread of misleading information (Donovan, Citation2020).

One of the major responses to the spread of misleading information by the companies that own mainstream digital platforms has been to increase their efforts to regulate and moderate users’ content. Moderation of user content such as posts and comments, whether on the websites of news organizations or on social media platforms, has become one of the prominent areas of debate within digital journalism (Bechmann, Citation2020; Masullo & Kim, Citation2020). However, the wide variety of what might be described as “fringe” or “alt-tech” equivalents of mainstream social media (Freelon, Marwick, & Kreiss, Citation2020; Zelenkauskaite, Toivanen, Huhtamäki, & Valaskivi, Citation2021) – platforms such as Gab, Minds, Parler or Telegram – have less developed and less active systems of content moderation.

One of the most common assumptions is that such largely unmoderated platforms have few community norms or are run by technology companies that put little effort into content moderation, and as such are most likely to see a flourishing of misinformation. Consequently, these platforms can be viewed as places where mainstream platforms’ norms and regulations can be “evaded” (Vasu, Ang, Terri-Anne-Teo Jayakumar, Faizal, & Ahuja, Citation2018). Indeed, the producers of misleading content may deliberately migrate to such places when they are removed from more mainstream platforms as a result of moderation decisions, thus fragmenting their presence online (Rogers, Citation2020). This creates critical challenges for effective rebuttal of misinformation: as Lewandowsky and colleagues (2012) have argued, the fragmentation of society on the internet is one of the factors that make misinformation so resilient. Under-moderated platforms have also become a growing concern for professional media organizations as journalists partly give up their gatekeeping function to platforms (Ferrucci & David Wolfgang, Citation2021) that are not necessarily ready (or sometimes willing) to pick up this responsibility.

Consequently, research attention is starting to turn to these largely unmoderated platforms. Previous studies of misleading information on largely unmoderated platforms have primarily focused on WhatsApp, an app used chiefly as a messaging service. These studies have commonly examined a relatively small number of public groups used by political actors (Narayanan et al., Citation2018; Treré, Citation2020). For instance, Garimella and Eckles (Citation2020) found that 10% of shared images in Indian public groups on WhatsApp during the 2019 general election contained misinformation (see also Reis, Melo, Garimella, & Benevenuto, Citation2020). Gab, a platform with an audience smaller than WhatsApp and considered largely unmoderated, was found to be dominated by extremist political voices (Zhou, Dredze, Broniatowski, & Adler, Citation2019).

In this article, we investigate the extent to which misinformation genuinely flourishes on largely unmoderated platforms by exploring perhaps the largest of them in terms of audience (at least among those that enable public access to pages or groups so that everyone can follow them) – Telegram. While arguably a largely unmoderated platform (Rogers, Citation2020), it is nevertheless widely used: Telegram reported its global audience as more than 500 million users at the time of writing (Durov’s Channel, Citation2021). This is a larger audience than that of Twitter, which has around 350 million users (Kemp, Citation2020). While Telegram is perhaps best known as a messaging service that is similar in its functionality to WhatsApp, it is an important venue for news consumption due to its one-to-many broadcasting affordances. In some countries, such as Singapore, it has become one of the most popular digital platforms for news consumption (Lou, Tandoc, Hong, Pong, & Sng, Citation2021).

In addition to being a place where people consume news, Telegram has often been highlighted as a venue where misinformation spreads easily, partly driven by its relatively lax approach to content moderation (Luk et al., Citation2020; Ng & Loke, Citation2020; P & Ma, Citation2020). Indeed, some researchers have argued that misleading content, hate speech and radical content are allowed to exist there “without opposition from alternative viewpoints” (Freelon et al., Citation2020). Others have suggested that “the absence of moderation” on Telegram increases the “likelihood of users’ radicalization” (Urman & Katz, Citation2020), with some describing this platform as one of the “dark corners of the internet” (Rogers, Citation2020) or even an “enemy of democracy” (Owen, Citation2019; Treré, Citation2020).

There are concerns that Telegram’s role in the media system of mature democracies and its moderation policy have enabled extreme groups to flourish on the platform. For example, previous research showed that Telegram had become a refuge for UK and US-based extreme political celebrities and commentators (Knuutila, Herasimenka, Bright, Nielsen, & Howard, Citation2020). Articles from sources linked to British far-right activist Tommy Robinson, for example, have been shown to receive more than twice as many views as articles from the Daily Mail, a popular mainstream English-language tabloid newspaper (Knuutila et al., Citation2020). Other studies have also demonstrated that far-right activists from Germany, the UK and Sweden used Telegram to organize and disseminate information (Davey & Davey, Citation2020). Rogers (Citation2020) argued that extreme political activists use Telegram for broadcasting rather than recruiting.

However, while such actors undoubtedly exist on the platform, what is thus far absent from the literature is a systematic investigation of how big the problem of misleading information is on Telegram compared to the prevalence of professional news. This is the gap we seek to fill in this article. Our research question is simple:

RQ: Are misleading information producers more successful than professional news producers on Telegram?

Our work is structured as follows. In the theory section, we review existing work on the distribution of misleading content and elaborate hypotheses about why this content might be more successful than professional news – and what “successful” means. Following this, we describe our methods and dataset, which consists of around 200,000 posts published on Telegram in 2018 and 2019. We then present our results. Rather than finding that Telegram is simply awash with misinformation, our analysis suggests a more nuanced picture, showing how a largely unmoderated platform has been integrated into professional media ecologies in which leading media organizations appear able to compete with misleading sources for wider audiences, and to win this competition. We show that, even when moderation is minimal, trusted professional news content can dominate political information compared to sources that occasionally spread misleading content.

We argue that the established view of largely unmoderated platforms as vast breeding grounds for misinformation actors is unsustainable; instead, each platform and media system should be evaluated on a case-by-case basis. Misinformation is certainly present on unmoderated platforms. However, requirements for the regulation and moderation of user content should be based on an evaluation of each platform’s affordances for spreading misinformation, and students of misinformation should pay greater attention to the agency of the users who spread it.

Theorizing distribution of misleading content

Misinformation comes in many forms. However, incorrect information presented online in a news-like format is among those currently causing the most concern (Lazer et al., Citation2018; Neudert, Howard, & Kollanyi, Citation2019). This type of misinformation has the appearance of professional news, with references to credible sources as well as headlines written in a news tone with time and location stamps. However, while professional news outlets “adhere to the standards and best practices of professional journalism, with known fact-checking operations and credible standards of production including clear information about real authors, editors, publishers, and owners” (Neudert et al., Citation2019, p. 6), misleading sources often contain “deceptive and incorrect propaganda purporting to be real news” (Neudert et al., Citation2019). Neudert and colleagues defined this type of misleading information as “junk news.” Indeed, much contemporary misinformation has also been described as “fake news” – though this term often appears to function more as a political accusation than a well-operationalized concept. In this way, contemporary misinformation is semi-parasitic on the existing news environment: mimicking its form and profiting from its established set of values and credibility (Waisbord, Citation2018).

In this paper, we address the question of whether misleading content producers are more successful than professional news producers on Telegram. There are several different ways to operationalize the idea of “success,” such as analysis of how often duplicates – repeated content – appear in online communities (Zelenkauskaite et al., Citation2021). A combination of platform affordances specific to Telegram – the ability to collect view counts for each piece of content in channels, as well as the ability to construct large and more comprehensive datasets – prompted us to focus on three criteria for success: whether links to a source are viewed more often, whether these links are shared more often, and whether the communities they are shared in are more active as content contributors. We discuss each of these in turn below.

On a platform, a view occurs whenever a piece of content appears on screen for a user. While not all views will be associated with the user noticing and internalizing the content, we know that the user cannot be affected by the content if they do not view it. There are some reasons to expect that misleading sources ought to be viewed more than professional news. First, misleading content is often packaged in sensational, simplistic and emotional formats which are designed to capture attention (Mustafaraj & Metaxas, Citation2017; Neudert et al., Citation2019; Yeo & McKasy, Citation2021). Misleading content producers are not constrained by fact-checking and hence can design news stories and headlines to be as eye-catching and appealing as possible. The emotive nature of many misleading stories is especially important because content emphasizing emotions (often negative) is often more widely viewed on social media (Berryman & Kavka, Citation2018). This may help explain why they appear to spread more widely. In addition, misleading sources often adopt a “clickbait” style of presentation (Mustafaraj & Metaxas, Citation2017), wherein the article’s title does not reveal the full facts of a story but instead encourages people to click on it (Chen, Conroy, & Rubin, Citation2015). Again, such a style can help boost the views of a particular piece. These lines of thinking lead us to develop our first hypothesis:

H1: If posts contain a link to known sources of misleading information, they will attract more Telegram views than posts with links to professional news sources.

In addition to operationalizing success through views, we can also consider the act of content sharing – when a user decides to actively redistribute the information by reposting it in their own channel(s). News consumption and news sharing are different activities, and a diversity of processes may drive them (Bright, Citation2016; Trilling, Tolochko, & Burscher, Citation2017). While consumption is a mainly private activity, news sharing is a public one that links an individual to the ideas and values being expressed in the news.

There are reasons to think that the act of sharing might be more likely to occur when misleading sources are involved. First, audiences can perceive misleading sources as more novel than professional news, and novelty has been shown to drive sharing behavior (Bright, Citation2016). Second, as noted above, misleading content is often sensationalist and has a highly emotional tone. In addition to driving news reading, such a style has also been shown to drive news sharing behavior (Kilgo, Lough, & Riedl, Citation2020). Third, misleading content is often hyper-partisan, clearly favoring one side. Existing work has shown that strong partisan appeals can drive news sharing behavior, as this allows people to demonstrate their group identity (Wischnewski, Bruns, & Keller, Citation2021). Finally, those producing misleading content may be more likely to incorporate networks of trolls or bots to boost the number of times it is shared (Howard, Citation2020; Shao et al., Citation2018). This line of thinking leads us to develop our second hypothesis:

H2: If posts make use of known sources of misinformation, they have a higher probability of being shared on Telegram than posts making use of professional news sources.

Another way to consider content producers as successful is to compare the numbers of online communities where users actively share their content. A wide variety of work on other platforms has shown that misleading content sharing is concentrated in specific communities of individuals rather than spread more widely (Grinberg, Joseph, Friedland, Swire-Thompson, & Lazer, Citation2019), and that these individuals cluster together into communities (Bessi et al., Citation2015). While this might mean that it remains a minority pursuit, research has suggested that the exposed individuals might be more deeply affected and more polarized from the rest of society (Del Vicario et al., Citation2016). In these communities, individuals may actively contribute to the problem of misinformation by deliberately sharing news that they know to be false or misleading just to please others or advance their own political agenda (Chadwick, Vaccari, & O’Loughlin, Citation2018). Again, this may be motivated by partisan considerations (Osmundsen, Bor, Vahlstrup, Bechmann, & Petersen, Citation2021). In this way, misleading content can be seen as a kind of “collaborative work” (Starbird, Arif, & Wilson, Citation2019). This collaborative work results in signaling the importance of misleading content to a wider social network, and this network might appear to pay more attention to this content than to information shared by a credible source (Bakshy, Messing, & Adamic, Citation2015).

Channels are key public venues through which information can be disseminated and networks of users are linked to each other on Telegram. Channels enable a broadcasting mode of communication in which their administrators share posts that can be viewed but not interacted with by their audience (in 2020, a few interactive options such as comments were added to Telegram channels). The reliance on channels as a key public broadcasting venue restricts the virality of information shared on this platform because users cannot see what their “friends” have publicly shared (Urman, Ho, & Katz, Citation2020). In contrast to other social media platforms with Facebook-style friend-focused timelines, the followers of Telegram channels are more likely to see individual posts. Therefore, the work of a few individuals involved in spreading misleading messages through Telegram channels can have a potentially higher impact compared to similar activities on other platforms with Facebook-style timelines. This can further motivate them to remain active contributors of misleading content. The above reasoning leads us to develop our third hypothesis:

H3: If a channel has a high proportion of misleading sources shared by users, the channel will be highly active, with a greater reach of its posts.

Just a few active sources of misleading content can contribute to the majority of all misleading information spread on a popular social platform (Hindman & Barash, Citation2018). For example, in one large-scale study in the US, misinformation sources “received about 13% as many Twitter links as a comparison set of national news outlets did, and 37% as many as a set of regional newspapers” (Hindman & Barash, Citation2018). This suggests that clusters where misinformation is shared could be less numerous than clusters where predominantly professional news sources or other credible information circulate.

In addition, these misinformation clusters are often divided into smaller communities along ideological and national lines. For example, Urman and Katz (Citation2020) have shown that the structure of a far-right network on Telegram – groups that were often involved in sharing misleading content – was divided into several distinct communities, which replicated their structures on other platforms (Froio & Ganesh, Citation2019). Moreover, these clusters normally occupy a relatively small proportion of a platform network-at-large (Cinelli, Cresci, Galeazzi, Quattrociocchi, & Tesconi, Citation2020). The reduced size of misinformation clusters can be explained by the processes of “motivated reasoning,” which can lead individuals to seek out and accept information compatible with their beliefs, thus making them more susceptible to disinformation which appeals to their position (Flynn, Nyhan, & Reifler, Citation2017; Lodge & Taber, Citation2013). Such information often appears in smaller restricted social media communities. Hence, the audiences of misinformation clusters can represent relatively closed communities with fewer channels present. This leads us to our fourth hypothesis:

H4: Misleading sources are confined to a smaller set of channels than professional news sources.

Methodology

The study is based on an open-access dataset that includes 317 million Telegram messages sent to 28,000 public Telegram channels between 2015 and 2019 (Baumgartner, Zannettou, Squire, & Blackburn, Citation2020). Telegram does not have a central directory of all channels. Hence, the researchers who created the dataset used a snowball method: they started with a list of 250 English-language channels, some of which united users who preferred discussing politics, while others covered topics such as local news or cryptocurrencies. They then identified further channels and groups by looking at those from which posts had been shared.

Out of this dataset, we focused on messages covering the most recent year-long period available – from October 1, 2018, to September 30, 2019. This period is most relevant for our research aims. Before it, Telegram’s English-speaking audience was smaller and the problem of misinformation less prevalent, with many actors spreading misleading content having not yet been “purged” from mainstream social media platforms (Rogers, Citation2020) and thus not yet focusing their attention on largely unmoderated alternatives like Telegram. The dataset contained 24.7 million messages from this period. For our analysis, we extracted all messages that contained hyperlinks to any website – 6.8 million messages. We focused only on content that was posted in channels rather than groups. Using these posts, we can assess the distribution of both misleading and professional news sources.

We excluded from the analysis content that contained shortened Uniform Resource Locators (URLs) such as bit.ly links. The five most popular link shorteners accounted for about 7% of all links on Telegram channels in our sample. This does not include branded short URLs used by the professional sources, such as bbc.in, which we included in the analysis. We analyzed a random sample of 10,000 bit.ly URLs – the most prominent link shortener – and found that very few of them resolved to domains in our source lists. The only such domain was infowars.com, which was linked 83 times. Previous research has found that the set of the most popular websites pointed to by short URLs is likely to remain stable over time (Antoniades et al., Citation2011). In addition, other studies have demonstrated that shortened links may expire, start redirecting to a new location, or stop working when the shortener service ceases to function (Walker & Agarwal, Citation2016). Hence, we believe that attempting to expand the short URLs would not substantially improve the validity of our data.

For the lists of professional news and misleading sources, we relied on curated lists also used by Pierri and colleagues (Citation2020). See Appendices A1 and A2 for the full lists and their descriptive statistics. These lists have been maintained since 2016, when they were composed to study misleading information during the US election, and they contain websites that have also featured in several other studies (e.g. Grinberg et al., Citation2019; Shao et al., Citation2018). Within these lists, professional or mainstream sources are defined as reliable news sources providing “factual, objective and credible information” (Pierri et al., Citation2020), while misleading sources are defined as ones that contain “misleading content, false and/or hyper-partisan news as well as hoaxes, conspiracy theories, click-bait and satire.” We reviewed the list of misleading sources and removed websites dedicated to humor and parody which had originally been included. This resulted in a list of 94 misleading sources (Claim Sources, Citation2020).
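
To illustrate how posts can be reduced to classified links in this way, the following is a minimal sketch in Python. It extracts domains from post text, drops links using known shorteners, and labels the remaining domains against the two curated lists. The file name, column names, shortener set, and the example domains standing in for the curated lists are illustrative assumptions, not the exact materials used in the study.

import re
from urllib.parse import urlparse

import pandas as pd

URL_PATTERN = re.compile(r"https?://[^\s]+")
SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "ow.ly", "t.co"}   # illustrative shortener list
PROFESSIONAL = {"bbc.com", "bbc.in", "nytimes.com"}                 # stand-ins for the Pew-based list
MISLEADING = {"infowars.com", "breitbart.com"}                      # stand-ins for the curated misleading list

def extract_domains(text):
    """Return the domains of all URLs found in a post's text."""
    domains = []
    for url in URL_PATTERN.findall(text or ""):
        netloc = urlparse(url).netloc.lower()
        domains.append(netloc[4:] if netloc.startswith("www.") else netloc)
    return domains

def classify(domain):
    """Label a domain as shortener, professional, misleading, or other."""
    if domain in SHORTENERS:
        return "shortener"       # dropped from the analysis
    if domain in PROFESSIONAL:
        return "professional"
    if domain in MISLEADING:
        return "misleading"
    return "other"               # the large majority of links point elsewhere

posts = pd.read_json("telegram_channel_posts.jsonl", lines=True)    # hypothetical export of channel posts
posts["domains"] = posts["message"].apply(extract_domains)
links = posts.explode("domains").dropna(subset=["domains"])
links["source_type"] = links["domains"].map(classify)
links = links[links["source_type"] != "shortener"]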

The professional news sources list consisted of 14 “US most trusted news sources” compiled by the Pew Research Center (Mitchell, Gottfried, Kiley, & Matsa, Citation2014) on the basis of surveys of US audiences. It features prominent domestic media outlets and international sources visible in the US, such as the BBC. We selected only those sources maintained by media organizations that the US audience viewed as highly trustworthy: according to the survey, at least twice as many people trusted as distrusted them. This ensured that we could contrast the selected professional sources with sources disseminating false or incorrect narratives. Most of these sources were also featured in multiple other relevant lists, such as the most visited news sources among the US population in the yearly Reuters Institute Digital News Reports.

The list of professional sources is shorter than the list of misleading sources. Yet we chose to adopt this approach because it is a widely used collection featured in other studies of misleading online information (Grinberg et al., Citation2019; Pierri et al., Citation2020). Moreover, we could expect that the scale of the news organizations that maintain professional news sources gives them an advantage, due to the volume of URLs they produce every day, the recognition of their brands, and their reach across the internet. For example, almost all of the professional sources in our sample reached between 9% and 19% of the US audience, while the most prominent misleading source according to the Reuters Institute’s study (Newman, Fletcher, Kalogeropoulos, & Kleis Nielsen, Citation2019), Breitbart, reached only 7%, with many of the other misleading sources reaching an even smaller share of the US audience. Hence, while it is always possible to think of an alternative configuration of the professional sources list, such an exercise would likely capture a very similar range of trusted news organizations.

There are several principal measures in our study. First, we analyzed views – the approximate number of users who saw a post from any device. Multiple views by the same user are grouped as a single view provided that they are within a four-day period. If a user sees a post again after four days, Telegram counts this as another view (Telegram.org, n.d.). When a post is shared from one channel to another, views from all channels are added to the post’s count of views.

We also examined whether a post had been shared from one channel to another. The Telegram Application Programming Interface (API) does not offer a share count for individual posts. However, each post that has been shared includes metadata containing a unique identifier for the original post. Hence, we were able to calculate the number of shares each post received within the channels in the dataset. We also recorded the number of participants in each channel: the number of people who have subscribed to receive updates. Like the share data, this data point comes from the Telegram API. For every channel, we also calculated an indicator of “reach,” defined as the number of views per subscriber, to describe how actively subscribers were following the channel’s content.
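
As a rough illustration of these measures, the sketch below (continuing the hypothetical data frames from the previous sketch, plus an assumed channels frame with one row of metadata per channel) counts shares by matching each post’s forwarded-from metadata back to the original post, and computes reach as views per subscriber at the channel level. The column names channel_id, post_id, fwd_from_channel_id, fwd_from_post_id, views and participants_count are illustrative assumptions, not the actual field names used in the study.

# Shares: each forwarded post carries the identifier of the original post, so
# counting forwards that point back to a post approximates its share count
# within the dataset (the API exposes no per-post share counter).
forwards = posts.dropna(subset=["fwd_from_channel_id", "fwd_from_post_id"])
share_counts = (
    forwards.groupby(["fwd_from_channel_id", "fwd_from_post_id"])
            .size()
            .rename("shares")
)
posts = posts.merge(
    share_counts,
    left_on=["channel_id", "post_id"],
    right_index=True,
    how="left",
)
posts["shares"] = posts["shares"].fillna(0)

# Reach: views per subscriber, used as a channel-level activity measure.
channel_views = posts.groupby("channel_id")["views"].sum().rename("total_views")
channel_stats = channels.set_index("channel_id").join(channel_views)
channel_stats["reach"] = channel_stats["total_views"] / channel_stats["participants_count"]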

Descriptive statistics for the dataset can be found in Table 1. Only 3% of the posts with hyperlinks in them contained links to either the professional news or the misleading sources in our study: the majority linked to other websites, mostly in other languages such as Farsi, Russian and Arabic. The average post with a link to a professional news source in our dataset was viewed over 180 times, and a post with a misleading source was viewed 1,729 times, though this variable is highly skewed. Few of the posts we studied were ever shared. The average channel had a little over 11,000 participants.

Table 1. Descriptive statistics.

Results

Our analysis involves a series of linear and logistic regressions, reported in Tables 2 and 3. Each regression has fixed effects for the year, month, and day of the week of the post, as many of our dependent variables are likely to be sensitive to temporal patterns in online activity. For all regressions, we checked the normality of the dependent variable, variance inflation factors for evidence of multicollinearity, plots of residuals versus fitted values, and the presence of outliers in the data. The diagnostics suggested using robust standard errors, which were employed throughout (we used HC1 robust errors), together with a log transformation of all numeric variables, which produced a good approximation of normality (variables were incremented by one before transformation).
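
A minimal sketch of how the two post-level models could be specified with Python’s statsmodels, continuing the hypothetical links frame from the sketches above: a log(x + 1) transformation of views, fixed effects for year, month and weekday entered as categorical terms, and HC1 robust standard errors. The variable names (including an assumed date column) and the exact covariate set are illustrative and do not reproduce the authors’ specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

matched = links[links["source_type"].isin(["professional", "misleading"])].copy()
matched["date"] = pd.to_datetime(matched["date"])
matched["log_views"] = np.log(matched["views"] + 1)        # log(x + 1) transformation
matched["year"] = matched["date"].dt.year
matched["month"] = matched["date"].dt.month
matched["weekday"] = matched["date"].dt.dayofweek
matched["shared"] = (matched["shares"] > 0).astype(int)    # ever forwarded to another channel

# M1: linear model of logged views per post, professional sources as the reference level.
m1 = smf.ols(
    "log_views ~ C(source_type, Treatment(reference='professional'))"
    " + C(year) + C(month) + C(weekday)",
    data=matched,
).fit(cov_type="HC1")                                      # HC1 robust standard errors

# M2: logistic model of whether the post was shared at least once.
m2 = smf.logit(
    "shared ~ C(source_type, Treatment(reference='professional'))"
    " + C(year) + C(month) + C(weekday)",
    data=matched,
).fit(cov_type="HC1")

print(m1.summary())
print(m2.summary())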

Table 2. Linear model of views per post (M1) and logistic model of whether the post was shared (M2).

Table 3. Linear model of reach for different types of Telegram channel.

We begin our analysis by addressing Hypothesis 1, that posts containing a link to known sources of misleading information will attract more Telegram views than posts with links to professional news sources. Table 2 reports a linear regression where the dependent variable is the log of the number of views of an individual post on Telegram (Model 1). The key independent variable is Source Type: Professional (the reference level) or Misleading. We can see that misleading sources typically received fewer views per post than professional ones (14% fewer). Hence, Hypothesis 1 is not supported.

We now move to Hypothesis 2, that posts that make use of known sources of misinformation have a higher probability of being shared on Telegram than posts making use of professional news sources. This is also addressed in Table 2. As the outcome variable in this model is categorical, we use logistic regression and ask whether the chance of being shared differed between professional and misleading sources (Model 2). Notably, posts that contained a link to a misleading source were on average much more likely to be shared than posts linking to their professional counterparts (over four times more likely). Hence, Hypothesis 2 is supported.

We now move on to our third Hypothesis, which is that if a channel has a high proportion of misleading sources shared by users, the channel will be highly active. We address this in Table 3, which presents two further linear regression models. Each one looks at the reach of sources posted in the Telegram channels in our study. Reach is defined as the number of views of posts in a channel divided by the number of participants. It offers a good measure of how active these channels were.

Model 3.1 considers the reach of channels that had misleading sources shared within them compared to other channels. We can see that channels with misleading sources were on average about 18% more active than other channels. If we examine the proportion of sources in the channel (Model 3.2), we can see that as the proportion of misleading sources in a channel goes up, so does the channel’s activity level: a channel with 100% of its sources being misleading would be about 34% more active on average than a channel without any misleading sources. Both models offer support for Hypothesis 3.
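
A sketch of the channel-level models under the same assumptions, continuing the frames from the earlier sketches: each channel’s reach is regressed on an indicator for whether any misleading source was shared there (Model 3.1) and on the proportion of misleading links among its matched sources (Model 3.2). The covariates are simplified relative to the reported models, and all names are illustrative.

# Per-channel counts of matched links, split by source type.
counts = (
    matched.groupby("channel_id")["source_type"]
           .value_counts()
           .unstack(fill_value=0)
           .reindex(columns=["professional", "misleading"], fill_value=0)
)
counts["has_misleading"] = (counts["misleading"] > 0).astype(int)
counts["prop_misleading"] = counts["misleading"] / (counts["misleading"] + counts["professional"])

channel_models = channel_stats.join(counts, how="inner")
channel_models["log_reach"] = np.log(channel_models["reach"] + 1)

# Model 3.1: channels that shared any misleading source vs. other channels.
m31 = smf.ols("log_reach ~ has_misleading", data=channel_models).fit(cov_type="HC1")

# Model 3.2: reach as a function of the share of misleading links in the channel.
m32 = smf.ols("log_reach ~ prop_misleading", data=channel_models).fit(cov_type="HC1")

print(m31.summary())
print(m32.summary())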

We now move on to our fourth and final Hypothesis, that misleading sources are confined to a smaller set of channels than professional news sources. This is addressed in Table 4, which presents a breakdown of statistics for different types of posts according to the sources contained within them. We can see that professional sources appeared in approximately three times as many Telegram channels as misleading sources. Nevertheless, if a channel shared links to misleading sources, such sources were likely to appear in that channel again and again, meaning that they had a higher rate of shared posts per channel. Hence, Hypothesis 4 is also supported.

Table 4. Number of distinct Telegram channels for different types of post.

Conclusion

This study addressed a key area of misinformation research – the proliferation of misleading sources on digital platforms compared to professional news sources (Freelon & Wells, Citation2020; Howard, Citation2020). We focused on Telegram – one of the fastest-growing platforms, which combines the affordances of messengers and of broadcasting social media like YouTube and Twitter but has historically exercised relatively little regulation over its content.

We found that although links to known sources of misleading information were shared more often than links to professional news sources, misleading content did not attract more Telegram views than posts with links to professional news. We also found that Telegram channels with a high proportion of misleading sources were more active, with a greater reach of their posts, than those sharing links to professional news. However, misleading sources were confined to a smaller set of channels than professional news sources overall.

Several important conclusions can be derived from our findings. First, contrary to a widespread image constructed from non-systematic observations, our research demonstrates that not all largely unmoderated platforms have become toxic environments where misinformation outperforms professional news. We showed that on Telegram, the audience of US-focused professional news sources was potentially more extensive than the audience of US-focused sources that shared misleading content, though misinformation activity might still outperform professional news in terms of content shares. This clarifies the emerging literature on Telegram, which has indeed shown that misinformation is present on this platform (Rogers, Citation2020; Urman & Katz, Citation2020). However, the scale and reach of this information seem to be less dramatic than is frequently portrayed by pundits and commentators. This point also has important theoretical implications. Previous studies have argued that misleading information has the potential to reach a larger audience due to its sensationalized nature. The data presented here challenge this theory, showing that, while misleading information can enjoy success, high-quality news seems to be more successful even on a largely unmoderated platform. Future work may consider how other ‘news values’ (such as authority and reputation) still carry weight in the contemporary information society.

Second, platforms without one of the key preconditions for content virality – algorithmically curated timelines – can still see misinformation disseminated virally. We found that misleading sources were shared on Telegram more often than professional news, and that communities with links to misleading information were more active content contributors than communities sharing trusted information. These results are also consistent with previous research on user engagement with misleading information compared to professional news on other platforms like Facebook or Twitter, which showed that the total audience of misleading sources was typically smaller but more engaged (Au et al., Citation2020). This supports an existing theory which proposes that users consuming misinformation are potentially more deeply affected by the news than their mainstream counterparts (Del Vicario et al., Citation2016).

A closer look at the affordances and the platform’s design helps explain how the communities of active disseminators of misleading content function. Unlike Facebook or YouTube, Telegram offers no algorithmic timeline or recommendations, which could surface content to users who are not subscribed to particular channels. However, despite the absence of the algorithmic curation of content that encourages the viral spread of information, Telegram misinformation communities managed to disseminate content across their network through sharing content with links to misleading sources. One of the most commonly-proposed solutions to misinformation on social media is to curb the power of platform owners to encourage the virality of content on their algorithmically curated timelines (Caplan, Hanson, & Donovan, Citation2018). However, our research shows that misinformation can be virally distributed even on platforms without an algorithmic timeline if active communities are involved in spreading such content.

Third, our findings correspond to previous research suggesting that a few active sources of misleading information can account for the absolute majority of all such information distributed on a platform (Cinelli et al., Citation2020; Hindman & Barash, Citation2018); indeed, similar research has been conducted in other domains of online life, such as comments under news articles (Zelenkauskaite & Balduccini, Citation2017). The processes of motivated reasoning can account for the existence of these normally tiny but active communities (Flynn et al., Citation2017; Lodge & Taber, Citation2013). However, we should be cautious about these active misinformation distributors: previous research has found that information from such communities can spread far beyond their restricted audiences. Urman et al. (Citation2020) found in their study of Telegram protest networks that smaller groups and their leading users can be very influential in linking an information network to the outside world, as well as in helping to build large cohesive communities. Like these protest networks, Telegram communities that build their media ecology around misleading sources can potentially build larger networks or spread information with greater speed. These findings emphasize the role of user agency rather than algorithm-related affordances in the battle between professional and misleading news on digital platforms.

Limitations and Future Research

The dataset we analyzed does not claim to be comprehensive: there were likely many more than 28,000 public Telegram channels and groups in existence at the time. Access to comprehensive and complete datasets is a common limitation for studies based on social media data. The issue of completeness has long been recognized as a significant challenge for researchers studying communication (Lacy, Watson, Riffe, & Lovejoy, Citation2015). Indeed, “obtaining a uniform random sampling may be difficult or impossible when acquiring” this data (Olteanu et al., Citation2016). A common approach for testing completeness is to compare several datasets. However, since the dataset we used was the largest available at the time of our study, we had to limit our analysis to the data at hand.

There are good reasons to be confident that our dataset contains the most important English-language public Telegram channels. By progressively adding more channels and groups through their snowballing method, the creators of the dataset reduced the likelihood that any large venue is missing, as messages from larger venues would eventually be shared to one of the channels and groups already in the list. The dataset also contains many channels that are relatively small: 14.8% have fewer than 100 followers, and 52.6% have fewer than 1,000 followers (Baumgartner et al., Citation2020, p. 844). This indicates that the snowballing approach did not prioritize larger channels to the exclusion of smaller channels. Still, we must stress that our sample is limited to those communities discovered by crawling, and therefore might overlook some smaller channels. In addition, it could also be beneficial to use multiple lists of sources to replicate the analysis, providing a valuable avenue for future research.

Another limitation of our work is that we limited our analysis to exposure to information alone; it lacks other metrics of impact, which could provide a different picture. For example, we are not able to determine whether misinformation is more persuasive than the high-quality sources we study, or whether the misinformation identified in this sample goes on to propagate further on other social platforms beyond our scope. Indeed, while Telegram is sometimes labeled a “fringe” platform, recent research has started to question the extent to which it is really disconnected from the rest of the media environment (Zelenkauskaite & Niezgoda, Citation2017; Zelenkauskaite et al., Citation2021). Hence, when other impact metrics become available, it could be possible to reassess the impact of the misinformation we are studying.

Further inquiries should examine the profile of users engaged in misinformation sharing on largely unmoderated platforms, especially the most productive users described by Graham and Wright (Citation2014) as “superparticipants.” Although Telegram offers few tools for the quantitative analysis of user profiles, additional analysis of this type would be beneficial. Such work could address questions such as whether these users are more loyal audiences of misleading information producers than of professional news, and what drives them. Our research was limited to a single one-year period on one largely unmoderated platform. Further studies should test these results on similar platforms, as well as investigate the role of timing, considering the rapid growth of the audiences of largely unmoderated platforms. Finally, by expanding both the list of professional news sources and the list of misleading sources under investigation, future research can avoid a limited focus on a single country case within a limited period. A study focusing on audiences beyond the US and beyond the English-speaking context could also further clarify and test our results.

Notes

1. There is no consensus on a fixed definition of misleading information (Ecker et al., Citation2022; Pierri et al., Citation2020). Differing definitions are in use, including the term disinformation, which is often used specifically for the subset of misinformation that is spread intentionally, and more research is needed into the effects of differing terminology (Ecker et al., Citation2022). Hence, we do not draw a sharp distinction between different types of inaccurate, false, or deceptive information. We use the term misleading information as an umbrella term referring to any information that turns out to be inaccurate or false.

Acknowledgments

The authors gratefully acknowledge suggestions provided by Kevin Munger, Rasmus Kleis Nielsen and two anonymous reviewers that helped to improve the manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Open Society Foundations [OR2019-63102], The Adessium, Civitates, Craig Newmark Philanthropies, Luminate, Ford Foundations, and the Oxford Martin Programme, University of Oxford.

Notes on contributors

Aliaksandr Herasimenka

Aliaksandr Herasimenka is a postdoctoral researcher at the Programme on Democracy and Technology at the Oxford Internet Institute, University of Oxford. His work investigates.

Jonathan Bright

Jonathan Bright is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute who specialises in computational approaches to the social and political sciences.

Aleksi Knuutila

Aleksi Knuutila is an anthropologist and data scientist who studies new forms of political culture, communities, and communication. He is a postdoctoral researcher at Oxford Internet Institute.

Philip N. Howard

Philip N. Howard is statutory Professor of Internet Studies at Balliol College at the University of Oxford.

References

  • Antoniades, D., Polakis, I., Kontaxis, G., Athanasopoulos, E., Ioannidis, S., Markatos, E. P., & Karagiannis, T. (2011). we.b: The web of short URLs. Proceedings of the 20th International Conference on World Wide Web, 715–724. 10.1145/1963405.1963505
  • Au, H., Bright, J., & Howard, P. N. (2020). Coronavirus Misinformation: Weekly Briefings (Coronavirus Misinformation Weekly Briefings). Computational Propaganda Project. Data Memo. https://comprop.oii.ox.ac.uk/research/coronavirus-weekly-briefings/
  • Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. doi:10.1126/science.aaa1160
  • Baumgartner, J., Zannettou, S., Squire, M., & Blackburn, J. (2020). The Pushshift Telegram Dataset. ArXiv:2001.08438[Cs]. Accessed 24 April, 2020. https://arxiv.org/abs/2001.08438
  • Bechmann, A. (2020). Tackling Disinformation and Infodemics Demands Media Policy Changes. Digital Journalism, 8(6), 855–863. doi:10.1080/21670811.2020.1773887
  • Berryman, R., & Kavka, M. (2018). Crying on YouTube: Vlogs, self-exposure and the productivity of negative affect. Convergence, 24(1), 85–98. doi:10.1177/1354856517736981
  • Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs Conspiracy: Collective Narratives in the Age of Misinformation. PLOS ONE, 10(2), e0118093. doi:10.1371/journal.pone.0118093
  • Bradshaw, S., & Howard, P. N. (2018). The Global Organization of Social Media Disinformation Campaigns. Journal of International Affairs, 71(1.5). Accessed 21 July, 2019. https://jia.sipa.columbia.edu/global-organization-social-media-disinformation-campaigns
  • Bright, J. (2016). The Social News Gap: How News Reading and News Sharing Diverge. Journal of Communication, 66(3), 343–365. doi:10.1111/jcom.12232
  • Caplan, R., Hanson, L., & Donovan, J. (2018). Dead Reckoning. Navigating Content Moderation After “Fake News.” Data & Society Research Institute. https://apo.org.au/sites/default/files/resource-files/2018-02/apo-nid134521.pdf
  • Chadwick, A., Vaccari, C., & O’Loughlin, B. (2018). Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing. New Media & Society, 20(11), 4255–4274. doi:10.1177/1461444818769689
  • Chen, Y., Conroy, N. J., & Rubin, V. L. (2015). Misleading Online Content: Recognizing Clickbait as ‘False News.’ Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection, Seattle, Washington, USA, 15–19. 10.1145/2823465.2823467
  • Cinelli, M., Cresci, S., Galeazzi, A., Quattrociocchi, W., & Tesconi, M. (2020). The limited reach of fake news on Twitter during 2019 European elections. PLOS ONE, 15(6), e0234689. doi:10.1371/journal.pone.0234689
  • Claim Sources. (2020). https://docs.google.com/spreadsheets/d/1S5eDzOUEByRcHSwSNmSqjQMpaKcKXmUzYT6YlRy3UOg/edit?usp=embed_facebook
  • Davey, J., & Davey, J. (2020). A Safe Space to Hate: White Supremacist Mobilisation on Telegram. London, UK: Institute for Strategic Dialogue. Accessed 1 July, 2020. https://www.isdglobal.org/isd-publications/a-safe-space-to-hate-white-supremacist-mobilisation-on-telegram/
  • Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., … Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences of the United States of America, 113(3), 554–559. doi:10.1073/pnas.1517441113
  • Donovan, J. (2020). Social-media companies must flatten the curve of misinformation. Nature. doi:10.1038/d41586-020-01107-z
  • Durov’s Channel. (2021). @durov. https://t.me/durov/147
  • Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., … Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1, 13–29. doi:10.1038/s44159-021-00006-y
  • Ferrucci, P., & David Wolfgang, J. (2021). Inside or out? Perceptions of how Differing Types of Comment Moderation Impact Practice. Journalism Studies, 22(8), 1010–1027. doi:10.1080/1461670X.2021.1913628
  • Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology, 38(S1), 127–150. doi:10.1111/pops.12394
  • Freelon, D., & Wells, C. (2020). Disinformation as Political Communication. Political Communication, 37(2), 145–156. doi:10.1080/10584609.2020.1723755
  • Freelon, D., Marwick, A., & Kreiss, D. (2020). False equivalencies: Online activism from left to right. Science, 369(6508), 1197–1201. doi:10.1126/science.abb2428
  • Froio, C., & Ganesh, B. (2019). The transnationalisation of far right discourse on Twitter. European Societies, 21(4), 513–539. doi:10.1080/14616696.2018.1494295
  • Garimella, K., & Eckles, D. (2020). Images and misinformation in political groups: Evidence from WhatsApp in India. Harvard Kennedy School Misinformation Review. doi:10.37016/mr-2020-030
  • Graham, T., & Wright, S. (2014). Discursive equality and everyday talk online: The impact of “superparticipants.” Journal of Computer-Mediated Communication, 19(3), 625–642. doi:10.1111/jcc4.12016
  • Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378. doi:10.1126/science.aau2706
  • Hindman, M., & Barash, V. (2018). Disinformation, ‘Fake News’ and Influence Campaigns on Twitter. Knight Foundation. Accessed 17 May, 2021. https://knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter/
  • Howard, P. N. (2020). Lie machines: How to save democracy from troll armies, deceitful robots, junk news operations, and political operatives. Yale University Press.
  • Kemp, S. (2020). Social Media Users Pass 4 Billion: Digital 2020 October Statshot Report [Hootsuite]. Social Media Marketing & Management Dashboard. Accessed 14 January, 2021. https://blog.hootsuite.com/social-media-users-pass-4-billion/
  • Kilgo, D. K., Lough, K., & Riedl, M. J. (2020). Emotional appeals and news values as factors of shareworthiness in Ice Bucket Challenge coverage. Digital Journalism, 8(2), 267–286. doi:10.1080/21670811.2017.1387501
  • Knuutila, A., Herasimenka, A., Bright, J., Nielsen, R., & Howard, P. N. (2020). Junk News Distribution on Telegram: The Visibility of English-language News Sources on Public Telegram Channels ( Data Memo 2020.5). Project on Computational Propaganda. https://comprop.oii.ox.ac.uk/research/posts/junk-news-distribution-on-telegram-the-visibility-of-english-language-news-sources-on-public-telegram-channels/
  • Lacy, S., Watson, B. R., Riffe, D., & Lovejoy, J. (2015). Issues and Best Practices in Content Analysis. Journalism & Mass Communication Quarterly, 92(4), 791–811. doi:10.1177/1077699015607338
  • Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., … Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. doi:10.1126/science.aao2998
  • Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13(3), 106–131. doi:10.1177/1529100612451018
  • Lodge, M., & Taber, C. S. (2013). The Rationalizing Voter. Cambridge University Press. doi:10.1017/CBO9781139032490
  • Lou, C., Tandoc, E. C., Jr., Hong, L. X., Pong, X. Y., & Sng, N. G. (2021). When Motivations Meet Affordances. News Consumption on Telegram. Journalism Studies, 22(7), 934–952. doi:10.1080/1461670X.2021.1906299
  • Luk, T. T., Zhao, S., Weng, X., Wong, J. Y.-H., Wu, Y. S., Ho, S. Y., … Wang, M. P. (2020). Exposure to health misinformation about COVID-19 and increased tobacco and alcohol use: A population-based survey in Hong Kong. Tobacco Control. doi:10.1136/tobaccocontrol-2020-055960
  • Masullo, G. M., & Kim, J. (2020). Exploring “Angry” and “Like” Reactions on Uncivil Facebook Comments That Correct Misinformation in the News. Digital Journalism, 1–20. doi:10.1080/21670811.2020.1835512
  • Mitchell, A., Gottfried, J., Kiley, J., & Matsa, K. E. (2014). Political Polarization & Media Habits. Pew Research Center’s Journalism Project. Accessed 17 September, 2020. https://www.journalism.org/2014/10/21/political-polarization-media-habits/
  • Mustafaraj, E., & Metaxas, P. T. (2017). The Fake News Spreading Plague: Was it Preventable? Proceedings of the 2017 ACM on Web Science Conference, 235–239. 10.1145/3091478.3091523
  • Narayanan, V., Kollanyi, B., Hajela, R., Barthwal, A., Marchal, N., & Howard, P. N. (2018). News and Information over Facebook and WhatsApp during the Indian Election Campaign (Data Memo 2019.2). Project on Computational Propaganda. Accessed 17 November, 2019. https://comprop.oii.ox.ac.uk/research/india-election-memo/
  • Neudert, L.-M., Howard, P., & Kollanyi, B. (2019). Sourcing and Automation of Political News and Information During Three European Elections. Social Media + Society, 5(3). doi:10.1177/2056305119863147
  • Newman, N., Fletcher, R., Kalogeropoulos, A., & Kleis Nielsen, R. (2019). Reuters Institute Digital News Report 2019. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-06/DNR_2019_FINAL_0.pdf
  • Ng, H. X. L., & Loke, J. Y. (2020). Analysing Public Opinion and Misinformation in a COVID-19 Telegram Group Chat. IEEE Internet Computing, 11. 10.1109/MIC.2020.3040516
  • Olteanu, A., Varol, O., & Kıcıman, E. (2016). Towards an open-domain framework for distilling the outcomes of personal experiences from social media timelines. Tenth International AAAI Conference on Web and Social Media.
  • Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan Polarization Is the Primary Psychological Motivation behind Political Fake News Sharing on Twitter. American Political Science Review, 115(3), 999–1015. doi:10.1017/S0003055421000290
  • Owen, T. (2019). How Telegram Became White Nationalists’ Go-To Messaging Platform. Vice. Accessed 14 January, 2021. https://www.vice.com/en/article/59nk3a/how-telegram-became-white-nationalists-go-to-messaging-platform
  • P, B., & Ma, B. (2020). COVID-19 Related Misinformation on Social Media: A Qualitative Study from Iran. Journal of Medical Internet Research. doi:10.2196/18932
  • Pierri, F., Piccardi, C., & Ceri, S. (2020). Topology comparison of Twitter diffusion networks effectively reveals misleading information. Scientific Reports, 10(1), 1372. doi:10.1038/s41598-020-58166-5
  • Reis, J. C. S., Melo, P., Garimella, K., & Benevenuto, F. (2020). Can WhatsApp benefit from debunked fact-checked stories to reduce misinformation? Harvard Kennedy School Misinformation Review. doi:10.37016/mr-2020-035
  • Rogers, R. (2020). Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media. European Journal of Communication, 35(3), 213–229. doi:10.1177/0267323120922066
  • Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C., Flammini, A., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 4787. doi:10.1038/s41467-018-06930-7
  • Starbird, K., Arif, A., & Wilson, T. (2019). Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 127:1–127:26. doi:10.1145/3359229
  • Telegram.org. (n.d.). Telegram FAQ. Retrieved 29 June 2020, from https://telegram.org/faq
  • Treré, E. (2020). The banality of WhatsApp: On the everyday politics of backstage activism in Mexico and Spain. First Monday, 25(1). doi:10.5210/fm.v25i12.10404
  • Trilling, D., Tolochko, P., & Burscher, B. (2017). From Newsworthiness to Shareworthiness: How to Predict News Sharing Based on Article Characteristics. Journalism & Mass Communication Quarterly, 94(1), 38–60. doi:10.1177/1077699016654682
  • Urman, A., Ho, J. C., & Katz, S. (2020). “No Central Stage”: Telegram-based activity during the 2019 protests in Hong Kong. SocArXiv. doi:10.31235/osf.io/ueds4
  • Urman, A., & Katz, S. (2020). What they do in the shadows: Examining the far-right networks on Telegram. Information, Communication & Society, 1–20.
  • Vargo, C. J., Guo, L., & Amazeen, M. A. (2018). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5), 2028–2049. doi:10.1177/1461444817712086
  • Vasu, N., Ang, B., Terri-Anne-Teo Jayakumar, S., Faizal, M., & Ahuja, J. (2018). Fake News: National Security in the Post-Truth Era. Nanyang Technological University. https://globalresilience.northeastern.edu/fake-news-national-security-in-the-post-truth-era/
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. doi:10.1126/science.aap9559
  • Waisbord, S. (2018). Truth is What Happens to News. Journalism Studies, 19(13), 1866–1878. doi:10.1080/1461670X.2018.1492881
  • Walker, S., & Agarwal, S. (2016). The Missing Link: A Preliminary Typology for Understanding Link Decay in Social Media. In IConference 2016 Proceedings. doi:10.9776/16590
  • Wischnewski, M., Bruns, A., & Keller, T. (2021). Shareworthiness and Motivated Reasoning in Hyper-Partisan News Sharing Behavior on Twitter. Digital Journalism, 9(5), 549–570. doi:10.1080/21670811.2021.1903960
  • Yeo, S. K., & McKasy, M. (2021). Emotion and humor as misinformation antidotes. Proceedings of the National Academy of Sciences, 118(15). 10.1073/pnas.2002484118
  • Zelenkauskaite, A., & Balduccini, M. (2017). “Information warfare” and online news commenting: Analyzing forces of social influence through location-based commenting user typology. Social Media+ Society, 3(3). doi:10.1177/2056305117733224
  • Zelenkauskaite, A., & Niezgoda, B. (2017). “Stop Kremlin trolls:” Ideological trolling as calling out, rebuttal, and reactions on online news portal commenting. First Monday, 22(5). doi:10.5210/fm.v22i5.7795
  • Zelenkauskaite, A., Toivanen, P., Huhtamäki, J., & Valaskivi, K. (2021). Shades of hatred online: 4chan duplicate circulation surge during hybrid media events. First Monday, 26(1).
  • Zhou, Y., Dredze, M., Broniatowski, D. A., & Adler, W. D. (2019). Elites and foreign actors among the alt-right: The Gab social media platform. First Monday, 24(9). doi:10.5210/fm.v24i9.10062