ABSTRACT
In 2022, the U.S. Supreme Court ended the federal right to abortion access in the United States. The disparate state abortion legislation that followed made pertinent the question of whether and how digital platforms would promote or demote abortion-related content. In this study, we explore folk theories in the domain of abortion-related speech. We investigate users’ conceptualizations and heuristics of how platform governance and shadowbanning – a practice in which content is demoted without the user knowing about it – work. Through in-depth interviews with 19 pro- and anti-abortion activists in the United States, we examined how activists thought platforms adjudicated their content. We found that, while the two ideologically disparate communities of activists developed differing folk theories of platform governance, they overlapped in thinking that they were censored by platforms precisely because of what they believed in and had shared on social media. This feeds into a general assumption that platforms’ decisions are ideologically motivated, a phenomenon we refer to as ideological suspicion. Our results show that when users do not know how platforms render content decisions about abortion-related speech, this can lead to troubling feelings of marginalization, isolation, and censorship, but it can also motivate activists to ensure their beliefs are heard in a hostile platform environment.
Acknowledgements
The authors thank the interview participants for their willingness to speak with us and for providing critical insights. We also want to extend our gratitude to reviewers at this journal for valuable feedback.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 Three interviewees declined to disclose their age.
2 In four cases, interviewees were extremely concerned about privacy and security, and interviews were conducted over Instagram Direct and Signal. In three of these cases, interviews were conducted via text rather than audio recording.
3 We want to thank reviewers for making this salient point.
Additional information
Funding
Notes on contributors
Martin J. Riedl
Martin J. Riedl (Ph.D., University of Texas at Austin) is an assistant professor in the School of Journalism and Media at the University of Tennessee, Knoxville, and an affiliate fellow at the Center for Media Engagement at the University of Texas at Austin. His research investigates platform governance and content moderation, digital journalism, and the spread of false and misleading information on social media [email: [email protected]].
Zelly C. Martin
Zelly C. Martin (M.A., University of Texas at Austin) is a Ph.D. candidate in the School of Journalism and Media at the University of Texas at Austin and a researcher at the Propaganda Research Lab at the Center for Media Engagement. She specializes in the study of mis/disinformation and data surveillance on emerging platforms, especially as they relate to reproductive rights [email: [email protected]].
Samuel C. Woolley
Samuel C. Woolley (Ph.D., University of Washington) is a faculty member in the School of Journalism and Media and fellow of the R.P. Doherty, Sr. Centennial Professorship at the Moody College of Communication at the University of Texas at Austin. His research focuses on how new media tools get used for both freedom and control [email: [email protected]].