ABSTRACT
People often turn to self-help behaviors when formal processes of the state deteriorate, becoming inaccessible or ineffective. This deterioration can include real or alleged inaccuracies in the courts that lower trust and confidence in the judicial system. Increasingly, one potential source of error in the courts is algorithmic, as more facets of the judicial system incorporate actuarial assessments. In this paper, I examine whether trust and confidence, separate from legitimacy, and the source of judicial error (human or algorithmic) matter for declared support of self-help behaviors, such as naming and shaming on social media, protesting, and violent economic protesting. In the experiment, respondents read about identical levels of judicial error made by either a human or an algorithm. They then indicated their attitudes towards the judicial system and self-help behaviors. Respondents who read about algorithmic error had greater odds of supporting some self-help behaviors. In addition, the level of trust in the courts, and not legitimacy, mattered most for support of self-help behaviors. The paper discusses potential mechanisms behind the differences between the human- and algorithmic-error groups, as well as the distinction between trust and legitimacy for self-help behaviors.
Acknowledgments
I would like to thank Eric Jardine, James Hawdon, and Kurt Luther for their comments at various stages of the project. I would also like to thank Micah Roos and the participants of the Power, Knowledge Culture workshop for their suggestions to improve the manuscript, as well as the two anonymous reviewers for their helpful comments. Any remaining errors are my own.
Disclosure statement
No potential conflict of interest was reported by the author.
Additional information
Notes on contributors
Leanna Ireland
Leanna Ireland is a doctoral student in sociology at Virginia Tech. Her research interests lie at the intersection of technology, crime, and society, including the incorporation of biometrics into American K-12 schools and the global use of privacy-enhancing technologies. She held a Deloitte Data Analytics Fellowship in 2019 and currently holds an SSHRC Doctoral Fellowship (2019-2021).