
‘Do your own research': affordance activation and disinformation spread

Pages 1212-1228 | Received 19 Oct 2022, Accepted 03 Jul 2023, Published online: 30 Aug 2023

ABSTRACT

Affordances are the perception of what a technical artifact can do. They bridge a technologically determinist perspective with social constructivist theory, acknowledging the material aspects of technology while allowing for user agency. Yet most affordance theory separates the engagement process into producers and consumers. On one hand, this lens is essential because it considers how an end user interprets, engages with, and utilizes technology through their social structure. It highlights how engagement is both constrained and enabled by the creator, but also documents how such engagement might differ completely from a creator’s intention(s). On the other hand, this framework does not consider the interactional dimensions of affordance theory. This paper fills that gap, relying on sociotechnical theory to analyze three case studies across three different platforms (Twitter, Google Scholar, and Yandex). In doing so, we explain how pundits, propagandists, and conspiracy theorists ‘activate affordances' to validate their claims. When audiences are primed to ‘do their own research,' disinformation becomes a more entangled, participatory process.

Introduction

In February 2022, The New York Times published an in-depth report on the role DuckDuckGo plays in confirming misleading claims spread by conspiracy theorists (Thompson, Citation2022). The article explained how high-profile conservative personalities were actively endorsing DuckDuckGo as their recommended search engine. By testing search returns between DuckDuckGo and Google, the journalist demonstrated that DuckDuckGo is more likely to amplify inaccurate information. This perspective emphasized how platforms can accidentally become safe havens for disinformation. It also highlighted a pattern observed in our own research, whereby affordances are activated in a way that furthers the spread of problematic content.

Affordances are bound by programmatic capabilities but also relative to the cultural specificities of a technology and rooted in whether users can perceive the intended possibilities (Gibson, Citation1977; Norman, Citation2014). Part of the reason why conservative pundits originally hailed DuckDuckGo was that the company did not seem to downrank conspiratorial content (Thompson, Citation2022). By encouraging their audiences to use DuckDuckGo instead of Google, they activated that affordance, creating a participatory environment whereby audiences could interact with conspiratorial content and legitimize false claims. Because it enabled everyday people to participate in the dissemination and amplification of false claims, this example provides an opportunity to examine how audiences become actively involved in the spread of disinformation. To date, most research tends to emphasize the effects disinformation has on education, information, and democracy by mapping networked infrastructure, studying sociotechnical interactions, or creating educational interventions. This study bridges the gap between affordance theory and disinformation research to explain how technological loopholes are activated through participatory engagement.

Using data from three case studies of vastly different networks and platforms (Twitter, Google Scholar, and Yandex), we explain how a technology’s affordance allows people to engage in participatory disinformation spread, whereby false claims seem verifiably true. This argument builds on research which explains that all technology is shaped by and works in tandem with the social world in which it is used (Anderson, Citation2021; Baym, Citation2015; Boyd, Citation2010; Bucher, Citation2012; Bucher, Citation2017; Bucher & Helmond, Citation2018; Ellison & Vitak, Citation2015; Nagy & Neff, Citation2015; Tufekci, Citation2018). By comparing findings across these three cases, we examine how affordance activation creates an opportunity for disinformation dissemination.

Theoretical background

Affordance theory

The theory of affordances is rooted in the ecological sciences and considers the way inanimate and animate objects are adapted by people to best meet the needs of their environment (Gibson, Citation1977). The concept was later applied to sociology and human–computer interaction to consider how technology is both functional and relational. What we can do with technology is constrained but also co-constructed, sometimes beyond the object’s intended ‘made for’ use (Hutchby, Citation2001; Norman, Citation2014). Researchers have since applied affordance theory to consider how social structures are formed in and through information infrastructure (Baym, Citation2015; Boyd, Citation2010).

People interact with technological systems, so affordances can be more than just perceptible features; sometimes they are hidden, only revealed through active exploration (Gaver, Citation1991). However, most research on this exploratory process tends to emphasize end-user agency, focusing on how people (or communities) put technology to use in their everyday lives (see Bucher & Helmond, Citation2018 for a comprehensive review). By unpacking the ‘platform specificities’ (Bucher & Helmond, Citation2018) of three different technologies (Twitter, Google Scholar, and Yandex), this paper sheds light on how disinformation is embedded in affordance theory and how the spread of false claims is bound by complex, interdependent, sociotechnical processes.

Sociotechnical theory came about in the late 1940s/early 1950s, when Trist and Bamforth (Citation1951) argued that organizations need to optimize both technical processes and the social systems operating within them to function well. By recognizing the interdependence between social and technical systems, Bijker (Citation1995) explained how they are linked together by micro and macro structures. Herta Herzog drew on sociotechnical frameworks to understand how audiences conceptualized Orson Welles’ broadcast, War of the Worlds. While common narratives imply a tricked public, Herzog found that a majority responded to the broadcast by further investigating the claims (Anderson, Citation2021; Cantril, Citation1940). Because listeners were effectively ‘doing their own research,’ Herzog found that the message was mediated by external forces (Anderson, Citation2021).

Given the decline of trust in broadcast media (Rainie et al., Citation2022; Gottfried & Liedke, Citation2022), we sought to better understand how affordance theory and sociotechnical theory intersect with ‘doing your own research’ (Tripodi, Citation2022) and enable participatory disinformation processes. High-speed internet connectivity, mobile devices, and the internet of things have made search an invisible part of everyday life (Haider & Sundin, Citation2019). Its ubiquity makes it easier to exploit Goffman’s (Citation1974) theorization of frames – interpretations or perspectives that help people make sense of the world around them. As our cases make clear, frames shape the kinds of claims certain audiences consider ‘relevant’ and influence the way they engage and interact with technological affordances.

Disinformation

For the last few years, scholars have differentiated between mis/dis/mal information by focusing on intent (Freelon & Wells, Citation2020; Jack, Citation2017). By grounding itself in intent, the field subsequently emphasized effects – network, sociotechnical, and educational. Network studies explained the role systems and platforms play in amplifying false claims. Studies within this realm tend to rely on web scraping and large data sets to understand how networks of actors manipulate platforms to identify and track patterns of disinformation, shed light on where false claims originate, and visualize how problematic content circulates throughout the internet (Benkler et al., Citation2018; Chen et al., Citation2021; Freelon & Wells, Citation2020; Kirdemir et al., Citation2020; Nisbet et al., Citation2021; Ognyanova et al., Citation2020; Uyheng et al., Citation2022).

Sociotechnical researchers try to understand why and how mis/disinformation is believed by highlighting the role epistemology plays in information validation (Anderson, Citation2021; Marwick, Citation2018; Tripodi, Citation2018; Yin et al., Citation2018). These studies find that tactics for spreading disinformation often exploit the human desire to fact-check information and encourage audiences to engage in participatory practices to create alternative facts (Lee et al., Citation2021; Marwick & Partin, Citation2022; Starbird et al., Citation2019; Tripodi, Citation2021; Tripodi, Citation2022). As Roberts (Citation2019) explains, falsities are layered with truth, and are deeply contextual.

Education research focuses on the efficacy of interventions and information evaluation. This line of research builds on the idea of ‘lateral reading,’ a conceptual tool that describes patterns identified in professional fact-checkers who quickly leave a website in question and use other sources online to determine a claim’s validity (Breakstone et al., Citation2021; Brodsky et al., Citation2021; McGrew et al., Citation2019; Wineburg et al., Citation2022). Likewise, scholars have proposed how ‘critical media literacy’ could encourage people to be more reflexive in their consumption of online information (Jiang & Vetter, Citation2020; Kellner & Share, Citation2005).

Although affordance theory is central to contemporary studies of online media, little research has sought to connect it to disinformation research. This paper bridges these fields by examining the ways in which affordances are activated to reinforce and legitimize lies. We use the phrase affordance activation in a deliberate attempt to move the field of disinformation research away from an emphasis on intent. Indeed, our data cannot prove whether the originator of the claims fully understood the capacities of the technologies they relied on to spread their messaging. What our data do document is how disinformation is spread through a participatory process dependent on a technology’s affordance. While we agree that intent is important, intent is nearly impossible to prove (Tannenbaum, Citation2013). Likewise, we align with Tannenbaum’s (Citation2013) argument that emphasizing intent redirects focus away from impact and privileges those already in a position of power. By focusing on how affordances are activated to enable participation, we document how the spread of false claims is dependent on interactive engagement.

Drawing on Marwick’s (Citation2018) ‘sociotechnical model of media effects,’ we argue that ‘affordance activation’ complicates the notion that combating disinformation is about identifying intent or accuracy. Whether or not ideologues are spreading information they know is incorrect or are ‘true believers’ trying to convey ‘facts,’ our cases document how audiences subsequently engage with the affordance, not with the claim. This participatory process gives ‘agency to nonhumans’ whereby the technology further authorizes, allows, and amplifies the spread of inaccuracies online (Latour, Citation2005, p. 72).

Methods

We draw on three cases to examine how various technological affordances furthered the spread of false claims and/or problematic content. Case studies provide context-dependent knowledge which moves beyond statistical generalizability to logical generalizability (Luker, Citation2010; Small, Citation2009). Descriptive case study research is particularly useful when attempting to understand how a contemporary phenomenon works (Yin, Citation2014). Therefore, each author identified an example of affordance activation that had recently surfaced in their respective research. Case One focuses on a tweet sent by the co-founder of The Federalist claiming that Twitter is silencing conservatism. Case Two examines how a pseudo-scientific journal espousing white supremacist ideas leverages Google Scholar to normalize its political discourse. Case Three demonstrates how QAnon participants used Yandex, a Russia-based search engine, to legitimize the conspiracy that Wayfair (an e-commerce company based in the USA that sells furniture and home goods) was trafficking children.

Our model draws on Marwick’s sociotechnical theory of media effects (Marwick, Citation2018) to explain how ‘affordance activation’ is bound to the actors (and the frames they rely on to engage their audiences); the messages (what the disinformation is about and how it is presented); and the affordance (technological capabilities through which disinformation is spread and consumed). In the following case studies, we identify actors, messages, and affordances to understand this complex interplay. Each case study draws on a unique information environment and platform and is connected to networks identified by scholars as key spreaders of disinformation – the right-wing information ecosystem, white supremacists, and conspiracy theorists (Benkler et al., Citation2018; DiMaggio, Citation2022; Lewis, Citation2018; Marwick & Lewis, Citation2017; Tripodi & Ma, Citation2022).

First, we conducted individual analyses to identify the frames being leveraged to spread false claims. Second, we documented how audiences were encouraged to participate in an interactive model of disinformation spread. As a result, we demonstrate how the spread of inaccurate information is not bound to the claim, or its legitimacy, but rather occurs through a participatory verification process - one enabled by an affordance.

Regardless of the actors’ intentions, these cases demonstrate how activated affordances increase a claim’s impact and perceived veracity. Much like original affordance theory, our cases explain how creators utilize the capacities and constraints of social media, scholarly citation, and search engines to enable their audiences to ‘validate’ unsubstantiated claims, white supremacist arguments, and conspiratorial logic.

Case studies

Case one - Twitter

Shortly after the 2020 U.S. Presidential election, thousands of then-President Donald Trump’s supporters traveled to Washington, D.C. to attend a ‘Stop the Steal’ rally. As NPR coverage of the Public Hearing of the United States House Select Committee on the January 6 Attack reveals, Twitter played a central role in the event. Not only did Trump use Twitter to spread the lie that the election was stolen and promote attendance at ‘Stop the Steal,’ he also sent tweets throughout the day on January 6, 2021. Following the attempted insurrection against the government of the United States, many platforms were prompted to remove prominent political figures associated with the day’s events (Delkic, Citation2022). Citing a ‘risk of further incitement of violence,’ Twitter banned Trump on January 8, 2021 (Conger & Isaac, Citation2021).

Prominent figures within the right-wing media ecosystem bemoaned the de-platforming of Trump (Zaru, Citation2021). Rather than contend with Trump’s role in an attempted insurrection against the U.S. government, pundits and politicians doubled down on the years-long claim that Big Tech was silencing conservatism (Vaidhyanathan, Citation2019). On January 9, 2021, the co-founder of The Federalist posted a tweet implying that Twitter’s decision to ban Trump was akin to censoring conservatism. To make the argument, he evoked George Orwell’s 1949 dystopian novel by trying to hashtag ‘1984.’

Hashtags (#) are an important part of creating a common conversation on Twitter. Adding a hashtag to the beginning of a phrase creates a link to all the other tweets that include the same hashtag. This technological affordance gives a conversation longevity and is also connected to what topics ‘trend.’ As such, ‘hashtag activism’ is an important organizing tool for activists trying to galvanize social movements (Jackson et al., Citation2020). By claiming that Twitter was denying conservatives the ability to hashtag 1984, the author alleged that the company was making it more difficult for widespread discussion to occur.

Conservative analogies of media censorship often make references to George Orwell’s novel 1984. However, the fear that Big Tech is ‘watching’ us and controlling speech does not accurately reflect Orwell’s narrative (Orwell, Citation1949/2021). Complaints that Big Tech wields too much control fall more precisely under the reasons why invisible hand economics are ineffective. Today’s networked sphere is much more along the lines of Aldous Huxley’s vision of society in Brave New World, in which people willingly consume the addictive drug soma to avoid dealing with the realities of their daily struggles (Tufekci, Citation2018). Moreover, researchers have testified before Congress that the opposite is true – far from conservatism being stifled online, pundits have an acute understanding of how search engine optimization works and are using affordances to maximize exposure to their content (‘Google and Censorship through Search Engines’, Citation2019; ‘Stifling Free Speech: Technological Censorship and the Public Discourse’, Citation2019).

Nonetheless, conservative pundits drew on their framing of 1984 to sow confusion and distract from an attempted insurrection, arguing that Twitter’s decision to ban Trump after January 6, 2021 was just another example of Big Tech silencing conservatism.

‘Twitter won’t let you hashtag #1984, a dystopian novel about an evil Big Tech government that spies on everyone, censors and manipulates speech, punishes wrong-thought, and tortures dissidents for sport. There’s Orwellian, and then there’s banning references to Orwell Orwellian.’

This tweet activated the hashtag affordance, serving as an invitation for others to test the claim and bear witness to Twitter’s supposed censorship. Followers quickly engaged – retweeting, replying, or quote tweeting the claim that Twitter would not allow people to hashtag 1984. Many of these subsequent tweets served to support the claim that Twitter was silencing conservatism, because they also were unable to activate the hashtag affordance.

Activating the hashtag affordance helped amplify the claim. Shortly after the original post, 1984 began trending on Twitter. When one clicked on the Trending Topics headline, the tweets were either the original tweet or others testing the claims in a reply or quote tweet. The fact that other Twitter users were also unable to hashtag 1984 validated the narrative that Twitter was trying to silence conservatism. Because this narrative then dominated the trending topics, it further substantiated the original claims as factual.

What those who quote tweeted (and perhaps the author of the tweet himself) failed to realize was that Twitter wasn’t banning 1984 from being hashtagged because it was ‘Orwellian.’ Twitter does not allow any series of numbers on its own to be hashtagged (e.g., #2021, #123); you must include a letter to activate the affordance. Eventually someone responded with this information, successfully hashtagging #NineteenEightyFour. While this reply received upwards of 1,000 likes, it paled in comparison to the 20,000 likes and 12,000 retweets garnered by the original tweet. Ironically, fact-checking the claim only added to the perception that information could not be trusted (Mason et al., Citation2018).
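
To make this constraint concrete, here is a minimal sketch (our illustration, not Twitter’s actual parsing code) of the rule just described: a tag becomes a hyperlink only when it contains at least one letter, so digit-only strings like #1984 or #2021 never activate the affordance.

```python
import re

def hashtag_links(tag: str) -> bool:
    """Toy approximation of the rule described above: a hashtag hyperlinks
    only if its body is alphanumeric and contains at least one letter."""
    body = tag.lstrip("#")
    return bool(re.fullmatch(r"\w+", body)) and any(ch.isalpha() for ch in body)

for tag in ["#1984", "#2021", "#NineteenEightyFour", "#Orwell1984"]:
    print(tag, "-> hyperlink" if hashtag_links(tag) else "-> plain text")
# '#1984' and '#2021' stay plain text; the letter-containing tags hyperlink.
```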

Regardless of the original author’s intent or understanding of the affordance, the tweet was shared by tens of thousands of other users within hours. Users engaged with the claim in the comments, demonstrating that they tested it on their own and that #1984 did not work for them either. By interacting with the platform, but not understanding the limitation of the hashtag affordance, people were able to ‘prove’ the argument and validate the frame that Big Tech silences conservatism.

Case two - Google Scholar

Google Scholar is a freely accessible database where people can search for what is widely considered more accurate, empirically tested information. Nonprofit media literacy organizations like Common Sense promote Google Scholar as a credible source of scholarly research for high school students and people without access to academic libraries. When users search for information on Google Scholar, they are relying on both a shared understanding of the legitimacy of academic scholarship and the affordance of search results. However, these technological affordances create an opportune environment for citation gaming - an algorithmic, sociotechnical process that elevates disinformation. As early adopters of digital tools, white supremacist actors mimic the standards and practices of mainstream political organizations via cloaked propaganda websites, online political advocacy groups, and pseudoscientific research organizations (Daniels, Citation2009; Daniels, Citation2018; Garcia, Citation2020; Lyons, Citation2017). This case focuses on how white supremacist publications and organizations leverage the affordance of a publicly accessible academic search engine to normalize and legitimize their rhetoric as scholarly.

The Occidental Quarterly (TOQ) is a quarterly magazine published by the Charles Martel Society. In circulation since 2001, their website describes the publication as a ‘Journal of Western Perspectives on Man, Culture, and Politics.’ TOQ is part of a network of white supremacist organizing led by the Regnery family, a wealthy, white nationalist publishing family active in radical right-wing politics for generations (Hemmer, Citation2016). William H. Regnery II has become known in the last several years as the recluse who funded the alt-right (Daniels, Citation2018; Hawley, Citation2017). Unlike other organizations founded by Regnery II, namely the National Policy Institute, TOQ is rarely mentioned in news media and maintains relative obscurity as a radical right publication.

Google Scholar yields hundreds of TOQ results, and the highest-cited articles contain several instances of explicit racism and conspiracy theories, from rewriting the history of the Civil Rights Movement to promoting Holocaust denial. Google Scholar’s search engine is ripe for manipulation because of the way content is added to the database through an automated web crawler. This is the first way TOQ articles enter the platform. In addition to university repositories and journal publishers, anyone can upload a paper to their website to be picked up. The criteria for inclusion are simple: a PDF file that ends in .pdf, the title in large font on the first page, the authors listed on a separate line, and a bibliography section. Many TOQ articles that appear on Google Scholar are not from the journal, but rather PDFs on personal websites. Google Scholar’s search algorithm is a ‘black box;’ the way search results are picked up, sorted, and ranked is proprietary information the company largely keeps secret. The lack of clarity around this process, or whether there are safeguards in place for content moderation, is part of the affordance of the platform.
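
As a rough illustration of how low that bar is, the sketch below encodes the inclusion cues just described as a toy checklist and applies it to a hypothetical self-hosted PDF. Google Scholar’s actual crawler and ranking logic are proprietary, so this is illustrative only.

```python
def looks_indexable(doc: dict) -> bool:
    """Toy checklist of the inclusion cues described above (not Google's code)."""
    return (
        doc.get("url", "").lower().endswith(".pdf")            # a PDF file ending in .pdf
        and doc.get("title_in_large_font_on_page_one", False)  # title set in large font on page 1
        and doc.get("authors_on_separate_line", False)         # authors listed on their own line
        and doc.get("has_bibliography_section", False)         # a references/bibliography section
    )

# A PDF self-hosted on a personal website (hypothetical URL) can satisfy every
# cue just as easily as one hosted by a university repository or journal publisher.
self_hosted_paper = {
    "url": "https://example.org/~author/paper.pdf",
    "title_in_large_font_on_page_one": True,
    "authors_on_separate_line": True,
    "has_bibliography_section": True,
}
print(looks_indexable(self_hosted_paper))  # True
```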

There is notable overlap between TOQ leaders/writers and two anti-immigration advocacy groups that adopt the aesthetics of neutral research organizations or think tanks: The Center for Immigration Studies (CIS) and the Federation for American Immigration Reform (FAIR). Both promote a vision for a majority white America, and both are designated by the Southern Poverty Law Center as hate groups with known ties to white supremacist eugenicists. The founder of FAIR, John Tanton, also publishes the Social Contract Press (SCP), whose managing editor is Kevin Lamb, former editor and writer for TOQ. Several writers and editors across CIS, FAIR, and TOQ are members of the Charles Martel Society, and their work is interchangeable, with CIS and FAIR citing TOQ and SCP authors. The whitepapers produced by these groups appear in Google Scholar similarly to the articles published by TOQ, and this distillation of bad data across several different authors affiliated with named organizations creates a layered network of purveyors of disinformation. Leaders of these groups appear on bipartisan broadcast news networks as ‘experts,’ and on these programs they often cite their consortium’s illegitimate research to promote ideological talking points to unknowing audiences, who can then search Google Scholar and find ‘data’ to prove their false claims. Because knowledge of the peer-review process is limited to those with experience in academic research, the general public cannot differentiate between legitimate and illegitimate research on the platform.

TOQ authors also engage in citation gaming, a process that takes place when an academic author uses their university affiliation to publish in a mainstream journal and cites TOQ articles. For example, a former professor of psychology wrote articles for peer-reviewed Sage and Elsevier psychology journals in 2013 and 2014 that cite his 2004 TOQ article. Self-citation is a normal practice in academic research, but because of the affordance of the Google Scholar platform, TOQ authors can use this as a strategy to incorporate their pseudoscience into legitimate citation networks. Mentions in legitimate journals boost the legitimacy of TOQ through both the number and quality of citations. Early internet research noted the importance of the politics of search engines and the limitations of market responses (Introna & Nissenbaum, Citation2000), and more recently scholars have contended with the politics of citation practices (Ahmed, Citation2017; Chakravartty et al., Citation2018; Mills, Citation2021). Google Scholar’s algorithmic rankings and the citation networks it produces combine these concerns, creating an easily gameable system that is continuously fortified by user interaction. Through the ease of Google Scholar’s inclusion process and the practice of citation gaming, TOQ articles are more likely to be cited by scholars outside of the organization.

Because TOQ is embedded in Google Scholar citations, unaffiliated authors in peer-reviewed journals bolster TOQ credibility through ‘convenience citations’ (Garcia, Citation2020). Convenience citations happen when scholars rely on Google Scholar to cite a well-known theoretical concept but fail to engage with the full article to see how the concept is being applied. For example, anti-racist papers in the Journal of Diversity in Higher Education and Sociology of Race and Ethnicity cited Richard Lynn’s TOQ article on ‘pigmentocracy.’ Pigmentocracy, the idea that societies are stratified by skin color, is a useful concept found elsewhere in peer-reviewed literature. To these mainstream researchers publishing in peer-reviewed journals, a pigmentocracy is a racial stratification system that exacerbates inequality and harms people of color. In Lynn’s article, however, he uses explicit racial slurs and argues that pigmentocracies are good and necessary due to the laziness and lower IQ of darker skinned people. Similar studies that claim the biological inferiority of non-white races can be found in adjacent racist publications Mankind Quarterly and the Journal of Social, Political, and Economic Studies, where authors manipulate data to promote eugenics (Adams & Pilloud, Citation2021). Once a race science article is cited, a network of citations is created that raises the legitimacy of the original article and its standing in Google Scholar search results. In this way, and often unknowingly, users play a significant role in amplifying this ideology masked as science.
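
A minimal sketch of this citation dynamic, using invented article identifiers: counting inbound citations in a toy network shows how self-citations from mainstream venues and convenience citations both inflate an article’s visible standing. This illustrates the mechanism described above, not Google Scholar’s actual (proprietary) ranking.

```python
from collections import Counter

# Hypothetical citation edges (citing_article, cited_article); identifiers are
# invented purely to illustrate the dynamic described above.
citations = [
    ("sage_psych_2013", "toq_2004"),                   # self-citation from a mainstream journal
    ("elsevier_psych_2014", "toq_2004"),               # second self-citation
    ("div_higher_ed_paper", "toq_pigmentocracy"),      # convenience citation of a concept
    ("soc_race_ethnicity_paper", "toq_pigmentocracy"),
    ("critical_overview_paper", "toq_pigmentocracy"),  # even critical citations count
]

# In this toy model, an article's "standing" is simply its inbound-citation count.
standing = Counter(cited for _, cited in citations)
for article, n in standing.most_common():
    print(f"{article}: {n} citations")
```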

Though some citations consider the groups critically in a larger network of radical right-wing advocacy groups (Canizales & Vallejo, Citation2021; Larsen, Citation2007), using their ‘journals’ as formal citations also boosts their ranking in the algorithm. Unlike TOQ, which largely flies under the radar as an organization, FAIR and CIS utilize these citations to bolster the public image of the organization and the legitimacy of their leaders, with ‘researchers’ from the groups engaging as experts on legacy media platforms. In addition to spreading disinformation at the user search level, their presence on Google Scholar provides radical right-wing organizational leaders and the general public easy access to racist pseudoscience.

Case three - proving Q right

The right-wing conspiracy theory QAnon holds that Donald Trump is battling a network of Democratic elites who are Satan-worshiping pedophiles. While this theory is clearly not true, its tremendous staying power – a recent PRRI poll found that 16% of Americans and 25% of Republicans believed the theory – is partially due to QAnon participants’ ability to generate a vast amount of ‘proof’ of their claims (Marwick & Partin, Citation2022; PRRI Staff, Citation2022). QAnon adherents are eager to proselytize. One such tactic became popular during the summer of 2020, when a QAnon theory spread rapidly on social media claiming that over-priced cabinets sold by the furniture website Wayfair were actually trafficked children. Screenshots and videos showing products selling for more than ten thousand dollars, with product names like Anabel and Samiyah (which supposedly corresponded to the names of missing children), were posted on Twitter, YouTube, and TikTok. People skeptical of this theory were told to use the Russian search engine Yandex to look up product SKUs (stock-keeping unit, a bar code number that identifies a product) from Wayfair, with the keywords ‘SRC USA.’ ‘SRC USA’ appears in a 2016 Instagram photo posted by the actor Tom Hanks, who QAnon participants believe is part of the cabal.

For example, one YouTube video demonstrates a ‘New Baby Photo Album’ on Wayfair selling for $11,199.99, with the SKU w001855399. When you search this numerical value on Yandex, a page titled ‘IMG SRC Girl Heels Images Usseek Com Nude Picture BLueDols’ is returned, which includes a picture of a prepubescent girl in high heels and a short skirt. There are dozens of TikTok and YouTube videos showing people conducting Yandex searches to ‘prove’ the conspiracy theory.

However, all this proves is that Russia has much more lax child pornography laws than the United States. Imgsrc is a Russian website similar to 4chan, where users can upload any image they want to share. While many of the photos are mundane pictures of cats and vacations, the site has long been notorious for hosting ‘soft-core’ child pornography, such as pictures of children in bathing suits (Gilbert, Citation2019). While search results from the site are blocked by Google, Bing, and most global search engines, Yandex, which is based in Russia, prioritizes sites hosted in Russia. In Russia, there is no legal definition of what constitutes child pornography, and it is not illegal to possess. Thus, Yandex allows such ‘soft core’ images to be freely accessed.

Any single string of numbers – in fact, any characters at all – plus ‘SRC’ searched on Yandex would return worrisome pictures of children. The association with Tom Hanks is purely coincidental. Hanks frequently posts quirky pictures of lost gloves on the street on his Instagram feed; in this context, ‘SRC USA’ is a marker spray-painted on the street by city utility workers identifying underground infrastructure (it stands for ‘Standard Requirements Code Underground Service Alert,’ but QAnons believed it referred to ‘Satanic Ritual Abuse’). However, by linking this obscure text string to SKUs and Tom Hanks, QAnon believers generated ‘proof’ that Q’s theories are true. The use of this technique as a persuasive tactic of conversion was discussed on a Reddit support group for people whose friends and family have fallen to QAnon:

… During the wayfair debacle, when qanon accounts both big and small were encouraging the curious to look up the product skus plus the neutral-sounding letter code src on the russian search engine yandex, to ‘reveal’ images of child pornography and thus ‘confirm’ wayfair's involvement with the cabal … getting curious normies to search that term was 100% a trick to win converts through dishonesty and manipulation.

Thus, the affordances of Yandex, a site that Americans rarely use, contribute to the participatory nature of disinformation generated by the QAnon community.

Findings

Our data demonstrate how frames prime an audience to engage in an active exploration process bound by affordances. In Case One, the co-founder of The Federalist primed his audience with a censorship frame and activated the narrative around the hashtag 1984. Due to the affordances of the platform – letters are required to hyperlink a hashtag – engaging with the tweet provided ‘evidence’ that Twitter silences conservatism. In Case Two, the writers and editors of TOQ present their publication as a valid academic source. Since many TOQ articles further the overall aim of modern white supremacists – that there are biological differences between races, and the white race is superior – legitimizing TOQ as a scientific, peer-reviewed publication suggests that the ideas in its pages are also correct (Saini, Citation2019). This furthers the overall ideological goal of the creators, not through directly espousing propaganda but by relying on a shared understanding of the legitimacy of academic publishing. This user-affordance interaction allows audiences to draw conclusions for themselves. Similarly, in Case Three, QAnon supporters spread the Wayfair conspiracy with the overall intent of emphasizing the accuracy of Q’s predictions and the overall validity of the QAnon conspiracy. Understanding the frames that drove this exploration sheds light on how both the creators and consumers of media content define the problems and solutions in society (Goffman, Citation1959). It is for this reason that Gitlin (Citation1980) noted the importance of studying media frames to unearth persistent patterns of cognition and provide insight into the conceptual tools people draw on when processing new information.

Nonetheless, affordance activation is constrained by the temporal nature of news cycles and ‘data voids’ (Golebiewski & boyd, Citation2019). As Golebiewski and boyd (Citation2019) explain, a ‘data void’ is an extremely obscure word or phrase not frequently queried. Since little to no information matches the phrase, it is easy to manipulate the top returns. While the concept is primarily associated with search engines, the strategy is compounded by ‘search-adjacent’ recommendation systems (e.g., Google Scholar or Twitter trending topics). Because they are difficult to detect, data voids run unchecked until they are filled with quality content.

Our cases reveal that activating affordances and taking advantage of data voids legitimize lies because the engagement is less about the veracity of the claim and more about interacting with the affordance. In Case One, the hashtag affordance is activated to ‘prove’ that Twitter is silencing conservatism by rallying followers around the query 1984 – a phrase that signals the deeper frame of government oversight. In Case Two, white supremacists leverage Google Scholar around phrases like ‘pigmentocracy’ to make it seem like their ideas are supported by rigorous academic scholarship. In Case Three, QAnon conspiracy theorists rally around the query SRC USA and encourage their audiences to use a search engine with less rigorous information integrity resources. In doing so, QAnons quickly legitimized a conspiracy that pedophiles are embedded in elite society. Users who are less familiar with Twitter hashtags, Google Scholar rank order, or Yandex search algorithms become easy prey for manipulators spreading disingenuous claims, because the messages are framed as exploratory processes designed to ‘reveal truth.’ In each of these cases, audiences are not directly engaging with the underlying claims, but rather testing a void within an affordance. Since the affordance proves ‘true,’ it further verifies the deeper frame that the claim embodies.

By grounding their claims in existing frames and activating affordances, creators can indirectly espouse incorrect, hateful, or conspiratorial beliefs and potentially convert others to their worldview. In these cases, activating affordances invited audiences to participate in the spread of disinformation while creating plausible deniability. When audiences seem to validate disinformation on their own, claims become further detached from the creator (Phillips & Milner, Citation2021). Such a strategy further complicates the existing emphasis on intent within the field of disinformation studies. For example, it is ‘true’ that one cannot hashtag 1984 on Twitter. When followers tested the claim and spread this information, they were also sharing the (incorrect) claim that Twitter was silencing conservatism.

Taken together, these case studies show how media manipulation is a complex interaction between online creators, their audiences, and technological affordances. By drawing on three different case studies, we demonstrate how unsubstantiated claims, white supremacist beliefs, and conspiratorial logic are all validated via participatory mechanisms. Regardless of the content creators’ intent or audience perceptions, the affordance enabled legitimacy and helped to spread misleading, hateful, or false claims. Such a finding emphasizes the interlocking nature of intent and impact, further complicating the perceived distinctions between mis/dis/mal information (Table 1).

Table 1. Affordance activation across platforms.

Analysis / discussion

The throughline between these cases reflects observations from over 80 years ago. Much like the listeners of War of the Worlds, chronicled in Hadley Cantril and Herta Herzog’s famous Citation1940 study, people in each of our cases are still inclined to check information claims. The difference is that now they are more reliant on systems with which they may be less familiar. Instead of switching the radio station or calling a friend, fact-checking has become an invisible part of everyday life that is reliant on platforms to help in people’s investigations (Haider & Sundin, Citation2019). Conspiracy theorists and ideologues benefit from search as an everyday practice, using a technology’s affordances to spread disinformation.

Tripodi (Citation2022) documents how propagandists spread ideas by inviting audiences to take part in the investigatory process. By encouraging audiences to ‘do their own research,’ this tactic gives people a sense of autonomy and strengthens their belief in false ideas. As Tripodi demonstrates, part of why conservative audiences are more inclined to fact-check the news is that their information systems frame liberals as sheep who are spoon fed lies by the biased mainstream media. By encouraging their audiences to ‘Google it,’ media manipulators empower audiences to believe that they oversee the narrative and are responsible for finding the truth.

Likewise, hashtagging, collective citation practices, and Yandex searches are also forms of ‘participatory disinformation’ (Starbird et al., Citation2019; Starbird et al., Citation2023). Technologies like social media and search engines require action by a user to operate, but as affordance literature notes, the power of the interaction can be wielded in many forms (Norman, Citation2014). As such, affordances are ripe for exploitation. By anticipating the engagement and interaction of their audiences and encouraging them to play with the platforms and networks in which they are embedded, manipulators can easily legitimize false claims and activate audiences to participate in an ideologically linked network.

Leveraging affordances is part of why the call for followers to use DuckDuckGo, which we describe in the introduction, was so effective. Pundits like Ben Shapiro, Joe Rogan, and Dan Bongino took a play from the ‘Propagandists’ Playbook,’ seeding the internet with problematic content and tagging it around a set of curated keywords to exploit search engine optimization (Tripodi, Citation2022). Encouraging their audiences to do their own research on a search engine with less emphasis on search integrity helped to further ensure audiences would find their content. Likewise, when DuckDuckGo announced its decision to downrank disinformation, Breitbart labeled it ‘Diet Google’ and has since encouraged its audience to switch to Yandex (Mak, Citation2022).

Business scholars find that when consumers build their own merchandise, they value the product more than an already assembled item of similar quality (Norton et al., Citation2012). Our case studies reveal that propagandists, pseudoscientists, conspiracy theorists, and ideologues draw on a similar strategy. By providing a tangible do-it-yourself quality to the process of information seeking, this ‘IKEA Effect of misinformation’ empowers audiences with a faux autonomy, making them feel like they are drawing their own conclusions (Tripodi, Citation2022). By activating technological affordances such as hashtags, citation systems, and search engine optimization, the manipulation process encourages audiences to engage with false claims in ways that further confirm their beliefs. Such a strategy is incredibly effective. Problematic content is stickier when false narratives are tied to participatory processes (Golebiewski & boyd, Citation2019; Lee et al., Citation2021; Marwick & Partin, Citation2022; Starbird et al., Citation2019; Starbird et al., Citation2023; Tripodi, Citation2021; Zade et al., Citation2022).

Utilizing Marwick’s (Citation2018) sociotechnical model of media effects, we explain how frames are wrapped in data voids and affordances are leveraged to legitimize false claims. By analyzing the different actors, messages, and affordances in these instances of media manipulation, we explain how messages crafted around information seeking encourage users to engage in a participatory process whereby technological interactions increase the reach, scope, and legitimacy of false claims.

Media manipulation is about manipulating the frame to control the narrative, a participatory process that integrates audience curiosity and is enabled by the technology at hand (Marwick & Lewis, Citation2017). Not unlike Marshall McLuhan’s famous adage that the ‘medium is the message,’ in these contexts the media themselves hold their own communicative capacities. By activating the capabilities of each medium to spread disinformation, media manipulators can shape and control how (dis)information flows.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

Dr. Marwick and Dr. Tripodi’s research is supported by the Center for Information, Technology, and Public Life and its philanthropic supporters, including the John S. and James L. Knight Foundation, Luminate, the William and Flora Hewlett Foundation, and the Carnegie Corporation of New York.

Notes on contributors

Francesca B. Tripodi

Francesca B. Tripodi is an assistant professor in the School of Information and Library Science at UNC–Chapel Hill and a principal investigator at the Center for Information, Technology, and Public Life.

Lauren C. Garcia

Lauren C. Garcia is a former PhD student at the University of Virginia.

Alice E. Marwick

Alice E. Marwick is an Associate Professor in the Department of Communication and a principal researcher at the Center for Information, Technology, and Public Life at the University of North Carolina at Chapel Hill.

References

  • Adams, D. M., & Pilloud, M. A. (2021). Perceptions of race and ancestry in teaching, research, and public engagement in biological anthropology. Human Biology, 93(1), 9. https://doi.org/10.13110/humanbiology.93.1.01
  • Ahmed, S. (2017). Living a feminist life (D. Couling, Trans.). Duke University Press.
  • Anderson, C. W. (2021). Fake news is not a virus: On platforms and their effects. Communication Theory, 31(1), 42–61. https://doi.org/10.1093/ct/qtaa008
  • Baym, N. K. (2015). Personal connections in the digital age. Polity Press.
  • Benkler, Y., Farris, R., & Roberts, H. (2018). Network propaganda (Vol. 1). Oxford University Press.
  • Bijker, W. E. (1995). Of bicycles, bakelites, and bulbs: Toward a theory of sociotechnical change. MIT Press.
  • Boyd, D. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self (pp. 47–66). Routledge.
  • Breakstone, J., Smith, M., Connors, P., Ortega, T., Kerr, D., & Wineburg, S. (2021). Lateral reading: College students learn to critically evaluate internet sources in an online course. Harvard Kennedy School Misinformation Review.
  • Brodsky, J. E., Brooks, P. J., Scimeca, D., Todorova, R., Galati, P., Batson, M., Grosso, R., Matthews, M., Miller, V., & Caulfield, M. (2021). Improving college students’ fact-checking strategies through lateral reading instruction in a general education civics course. Cognitive Research: Principles and Implications, 6(1), 23. https://doi.org/10.1186/s41235-021-00291-4
  • Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. https://doi.org/10.1177/1461444812440159
  • Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086
  • Bucher, T., & Helmond, A. (2018). The affordances of social media platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 233–253). SAGE Publications Ltd.
  • Canizales, S. L., & Vallejo, J. A. (2021). Latinos & racism in the Trump era. Daedalus, 150(2), 150–164. https://doi.org/10.1162/daed_a_01852
  • Cantril, H. (1940). The invasion from Mars; a study in the psychology of panic. Princeton University Press.
  • Chakravartty, P., Kuo, R., Grubbs, V., & McIlwain, C. (2018). #CommunicationSoWhite. Journal of Communication, 68(2), 254–266. https://doi.org/10.1093/joc/jqy003
  • Chen, E., Chang, H., Rao, A., Lerman, K., Cowan, G., & Ferrara, E. (2021). COVID-19 misinformation and the 2020 U.S. presidential election. Harvard Kennedy School Misinformation Review.
  • Conger, K., & Isaac, M. (2021, January 8). Twitter permanently bans Trump, capping online revolt. The New York Times.
  • Daniels, J. (2009). Cloaked websites: Propaganda, cyber-racism and epistemology in the digital era. New Media & Society, 11(5), 659–683. https://doi.org/10.1177/1461444809105345
  • Daniels, J. (2018). The algorithmic rise of the “alt-right”. Contexts, 17(1), 60–65. https://doi.org/10.1177/1536504218766547
  • Delkic, M. (2022, May 10). Trump’s banishment from Facebook and Twitter: A timeline. The New York Times.
  • DiMaggio, A. R. (2022). Conspiracy theories and the manufacture of dissent: QAnon, the ‘big lie’, COVID-19, and the rise of rightwing propaganda. Critical Sociology, 48(6), 1025–1048. https://doi.org/10.1177/08969205211073669
  • Ellison, N. B., & Vitak, J. (2015). Social network site affordances and their relationship to social capital processes. In S. S. Sundar (Ed.), The handbook of the psychology of communication technology (1st ed., pp. 203–227). Wiley.
  • Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
  • Garcia, L. C. (2020). From the margins to the center: Legitimation strategies from an alt-right case study. Virginia Commonwealth University.
  • Gaver, W. W. (1991). Technology affordances. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Reaching Through Technology - CHI '91, 79–84.
  • Gibson, J. J. (1977). The theory of affordances. In R. Shaw, & J. Bransford (Eds.), Perceiving, acting, and knowing: Toward an ecological psychology (pp. 67–82). Erlbaum.
  • Gilbert, B. (2019, September 28). A US soldier working at Mar-a-Lago uploaded photos of an underage girl to a Russian website—a closer look at the site reveals a horrific underworld. Business Insider.
  • Gitlin, T. (1980). The whole world is watching: Mass media in the making & unmaking of the new left. University of California Press.
  • Goffman, E. (1959). The presentation of self in everyday life. Anchor.
  • Goffman, E. (1974). Frame analysis: An essay on the organization of experience. Harper & Row.
  • Golebiewski, M., & boyd, D. (2019). Data voids: Where missing data can easily be exploited. Data & Society Research Institute.
  • Google and Censorship through Search Engines. (2019). Hearing before the Senate Committee on the Judiciary, Subcommittee on the Constitution, 116th Congress.
  • Gottfried, J., & Liedke, J. (2022). U.S. adults under 30 now trust information from social media almost as much as from national news outlets. Pew Research Center. https://www.pewresearch.org/short-reads/2022/10/27/u-s-adults-under-30-now-trust-information-from-social-media-almost-as-much-as-from-national-news-outlets/
  • Haider, J., & Sundin, O. (2019). Invisible search and online search engines: The ubiquity of search in everyday life (1st ed.). Routledge.
  • Hawley, G. (2017). Making sense of the alt-right (p. 232). Columbia University Press.
  • Hemmer, N. (2016). Messengers of the right: Conservative media and the transformation of American politics. University of Pennsylvania Press.
  • Hutchby, I. (2001). Technologies, texts and affordances. Sociology, 35(2), 441–456. https://doi.org/10.1177/S0038038501000219
  • Introna, L. D., & Nissenbaum, H. (2000). Shaping the web: Why the politics of search engines matters. The Information Society, 16(3), 169–185. https://doi.org/10.1080/01972240050133634
  • Jack, C. (2017). Lexicon of lies: Terms for problematic information (p. 22). Data & Society Research Institute.
  • Jackson, S. J., Bailey, M., & Welles, B. F. (2020). #HashtagActivism: Networks of race and gender justice. MIT Press.
  • Jiang, J., & Vetter, M. A. (2020). The good, the bot, and the ugly: Problematic information and critical media literacy in the post digital era. Postdigital Science and Education, 2(1), 78–94. https://doi.org/10.1007/s42438-019-00069-4
  • Kellner, D., & Share, J. (2005). Toward critical media literacy: Core concepts, debates, organizations, and policy. Discourse: Studies in the Cultural Politics of Education, 26(3), 369–386. https://doi.org/10.1080/01596300500200169
  • Kirdemir, B., Agarwal, N., & Chair, M.-E. (2020). Social media, news, polarization, and disinformation in times of crisis: A case study on Turkey. 5.
  • Larsen, S. (2007). The anti-immigration movement: From shovels to suits. NACLA Report on the Americas, 40(3), 14–18. https://doi.org/10.1080/10714839.2007.11722307
  • Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
  • Lee, C., Yang, T., Inchoco, G. D., Jones, G. M., & Satyanarayan, A. (2021). Viral visualizations: How coronavirus skeptics use orthodox data practices to promote unorthodox science online. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18.
  • Lewis, R. (2018). Alternative influence: Broadcasting the reactionary right on YouTube (p. 61). Data & Society Research Institute.
  • Luker, K. (2010). Salsa dancing into the social sciences: Research in an age of info-glut. Harvard University Press.
  • Lyons, M. N. (2017). Ctrl-Alt-Delete: An antifascist report on the alternative right. In K. Kersplebedeb (Ed.), Ctrl-Alt-Delete: An antifascist report on the alternative right. Kersplebedeb Publishing.
  • Mak, A. (2022, March 15). The DuckDuckGo users furious at its response to the war in Ukraine. Slate. https://slate.com/technology/2022/03/duckduckgo-russian-disinformation-downranking.html
  • Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online (p. 106). Data & Society Research Institute.
  • Marwick, A. E. (2018). Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review, 2(2), 474–512.
  • Marwick, A. E., & Partin, W. C. (2022). Constructing alternative facts: Populist expertise and the QAnon conspiracy. New Media & Society, 14614448221090200.
  • Mason, L. E., Krutka, D., & Stoddard, J. (2018). Media literacy, democracy, and the challenge of fake news. Journal of Media Literacy Education, 10(2), 1–10. https://doi.org/10.23860/JMLE-2018-10-2-1
  • McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. (2019). Improving university students’ web savvy: An intervention study. British Journal of Educational Psychology, 89(3), 485–500. https://doi.org/10.1111/bjep.12279
  • Mills, M. (2021). Introduction: “Citation networks as antidiscriminatory practice”. Catalyst: Feminism, Theory, Technoscience, 7(2), 4. https://doi.org/10.28968/cftt.v7i2.37645
  • Nagy, P., & Neff, G. (2015). Imagined affordance: Reconstructing a keyword for communication theory. Social Media + Society, 1(2), 205630511560338. https://doi.org/10.1177/2056305115603385
  • Nisbet, E. C., Mortenson, C., & Li, Q. (2021). The presumed influence of election misinformation on others reduces our own satisfaction with democracy. Harvard Kennedy School Misinformation Review.
  • Norman, D. A. (2014). The design of everyday things, revised and expanded edition. MIT Press.
  • Norton, M. I., Mochon, D., & Ariely, D. (2012). The IKEA effect: When labor leads to love. Journal of Consumer Psychology, 22(3), 453–460. https://doi.org/10.1016/j.jcps.2011.08.002
  • Ognyanova, K., Lazer, D., Robertson, R. E., & Wilson, C. (2020). Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harvard Kennedy School Misinformation Review.
  • Orwell, G. (2021). Nineteen eighty-four. Penguin Classics. (Original work published 1949).
  • Phillips, W., & Milner, R. (2021). You are here: A field guide for navigating polarized speech, conspiracy theories, and our polluted media landscape. The MIT Press.
  • PRRI Staff. (2022). The persistence of QAnon in the post-Trump era: An analysis of who believes the conspiracies (p. 14). Public Religion Research Institute (PRRI).
  • Rainie, L., Keeter, S., & Perrin, A. (2022). Trust and distrust in America. Pew Research Center. (Original work published 2019).
  • Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.
  • Saini, A. (2019). Superior: The return of race science. Beacon Press.
  • Small, M. L. (2009). ‘How many cases do I need?’. Ethnography, 10(1), 5–38. https://doi.org/10.1177/1466138108099586
  • Starbird, K., Arif, A., & Wilson, T. (2019). Disinformation as collaborative work. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–26. https://doi.org/10.1145/3359229
  • Starbird, K., DiResta, R., & DeButts, M. (2023). Influence and Improvisation: Participatory Disinformation during the 2020 US Election. Social Media + Society, 9(2), 205630512311779. https://doi.org/10.1177/20563051231177943
  • Stifling Free Speech: Technological Censorship and the Public Discourse. (2019). Hearing before the Committee on the Judiciary, Subcommittee on the Constitution, 116th Congress.
  • Tannenbaum, M. (2013). “But I didn't mean it!” Why it's so hard to prioritize impacts over intents. Scientific American – Psychological Science in the Public Interest.
  • Thompson, S. A. (2022, February 23). Fed up with Google, conspiracy theorists turn to DuckDuckGo. The New York Times.
  • Tripodi, F. (2018). Searching for alternative facts: Analyzing scriptural inference in conservative news practices. Data & Society Research Institute.
  • Tripodi, F., & Ma, Y. (2022). You’ve got mail: How the Trump administration used legislative communication to frame his last year in office. Information, Communication & Society, 25(5), 669–689. https://doi.org/10.1080/1369118X.2021.2020873
  • Tripodi, F. B. (2021). ReOpen demands as public health threat: A sociotechnical framework for understanding the stickiness of misinformation. Computational and Mathematical Organization Theory, 28(4), 321–334.
  • Tripodi, F. B. (2022). The propagandists’ playbook: How conservative elites manipulate search and threaten democracy. Yale University Press.
  • Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38. https://doi.org/10.1177/001872675100400101
  • Tufekci, Z. (2018). Twitter and tear gas: The power and fragility of networked protest. Yale University Press.
  • Uyheng, J., Bellutta, D., & Carley, K. M. (2022). Bots amplify and redirect hate speech in online discourse about racism during the COVID-19 pandemic. Social Media + Society, 8(3), 205630512211047. https://doi.org/10.1177/20563051221104749
  • Vaidhyanathan, S. (2019, July 28). Why conservatives allege big tech is muzzling them. The Atlantic.
  • Wineburg, S., Breakstone, J., McGrew, S., Smith, M. D., & Ortega, T. (2022). Lateral reading on the open internet: A district-wide field study in high school government classes. Journal of Educational Psychology, 114(5), 893–909. https://doi.org/10.1037/edu0000740
  • Yin, L., Roscher, F., Bonneau, R., Nagler, J., & Tucker, J. A. (2018). Your friendly neighborhood troll: The internet research agency’s use of local and fake news in the 2016 U.S. Presidential campaign [Data Report]. NYU Center for Social Media and Politics.
  • Yin, R. K. (2014). Case study research: Design and methods (5th ed.). SAGE.
  • Zade, H., Wack, M., Zhang, Y., Starbird, K., Calo, R., Young, J., & West, J. D. (2022). Auditing google’s search headlines as a potential gateway to misleading content: Evidence from the 2020 US election. Journal of Online Trust and Safety, 1(4). https://doi.org/10.54501/jots.v1i4.72
  • Zaru, D. (2021, January 13). Trump Twitter ban raises concerns over “unchecked” power of big tech. ABC News.