
The logic of the surface: on the epistemology of algorithms in times of big data

Pages 2096-2109 | Received 13 May 2019, Accepted 29 Jan 2020, Published online: 11 Feb 2020

ABSTRACT

The image of big data and algorithms in society is obviously ambivalent. On the one hand, algorithms are seen as a tool of empowerment that allows us, for example, to render society transparent and thus governable, to the extent that the social sciences might even become obsolete. On the other hand, algorithms seem to assume a mysterious agency in the black box of the computer so that their operations are invisible and inscrutable to us: artificial intelligence is seen as something that one day will have the power to dominate us. Beyond these two extreme positions that both overestimate and underestimate how algorithms might change our way of seeing things and being in the world, the present article introduces a third perspective. Algorithms, it holds, indeed follow their own ‘style of reasoning’ and thus create new realities. At the same time, however, they ‘reduce reality’, as they lack access to the world of human sense making. Algorithms have no secrets but deploy a ‘logic of the surface’. As they paint a behaviorist picture of human modes of existence, algorithms and big data might change our self-understanding. Engaging in epistemological questions will help us to capture the ontological implications of algorithmic reasoning.

Acknowledgement

The author would like to thank Rainer Mühlhoff, the anonymous reviewers of this journal and the editor of this volume for their insightful comments on an earlier version of this article.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributor

Susanne Krasmann is Professor of Sociology at Universität Hamburg, Hamburg. Her main research areas are: Security Dispositifs; Law and Its Knowledge; the Future of Algorithms; Vulnerability and the Political.

Notes

1 As Sciortino (Citation2017, p. 262) observes, Ian Hacking’s notion echoes some features of Foucault’s concept of the episteme in that it focuses on how historical modes of thinking, forms of knowledge, related institutions, practices of truth-speaking and ways of seeing are intrinsically interlinked.

2 Speaking of algorithms, what distinguishes today’s digital world from earlier programming is, first, the availability and quality of big data, which have radically changed how algorithms are designed, trained, and executed; second, the pervasiveness and speed of algorithmic decision making; and finally, the exponential rise of self-learning algorithms (Amoore & Piotukh, Citation2016). Throughout this article I will mainly refer to the latter type of self-learning machines, without always explicitly distinguishing between supervised learning, where ‘the algorithms learn from a ground truth model of data labelled by humans’ (Amoore, Citation2019, p. 5), and unsupervised or ‘deep learning’, which starts without an initial theory, hypothesis, model or norm (Johns, Citation2017).
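The distinction drawn in this note can be made concrete with a minimal toy sketch: a supervised learner fits a decision boundary to human-labelled ‘ground truth’, while an unsupervised one partitions the same data without any labels. All data, names, and thresholds below are invented for illustration and bear no relation to any system discussed in the article.

```python
def supervised_threshold(samples, labels):
    """Supervised: learn from human-labelled 'ground truth' -
    here, the midpoint between the two class means."""
    pos = [x for x, y in zip(samples, labels) if y == 1]
    neg = [x for x, y in zip(samples, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def unsupervised_kmeans(samples, iters=10):
    """Unsupervised: no labels, no prior model - the algorithm
    partitions the data into two clusters on its own (1-D k-means)."""
    a, b = min(samples), max(samples)  # start centroids at the extremes
    for _ in range(iters):
        ca = [x for x in samples if abs(x - a) <= abs(x - b)]
        cb = [x for x in samples if abs(x - a) > abs(x - b)]
        a, b = sum(ca) / len(ca), sum(cb) / len(cb)
    return a, b

data   = [1.0, 1.2, 0.8, 4.0, 4.2, 3.8]
labels = [0, 0, 0, 1, 1, 1]

print(supervised_threshold(data, labels))  # boundary near 2.5
print(unsupervised_kmeans(data))           # centroids near 1.0 and 4.0
```

The point of the sketch is only the epistemic contrast: the first function cannot run without a human-made ‘ground truth’, whereas the second imposes a structure of its own on unlabelled data.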

3 Currently, this becomes obvious, for example, when algorithms are expected to remove unwanted political or illegal messages or images from the internet.

4 Typically, this also holds for automated content analysis of communications where particular words or phrases are supposed to indicate suspicious behavior.

5 An infamous example is Amazon’s recruiting engine, which learned to rate candidates for technical posts by observing patterns in resumes submitted to the company over a 10-year period. As it turned out, the system merely echoed the gender bias of the previous decisions, which had generally rated women more poorly. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
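The mechanism behind this note can be illustrated with a hedged toy sketch: a model that simply learns group-wise hiring rates from past decisions will reproduce whatever bias those decisions contained. The data and function below are entirely invented for illustration; they do not represent Amazon’s actual system.

```python
def learn_hiring_rates(history):
    """'Learn' the hiring rate per group from past decisions.
    history: list of (group, hired) pairs with hired in {0, 1}."""
    counts = {}
    for group, hired in history:
        n, h = counts.get(group, (0, 0))
        counts[group] = (n + 1, h + hired)
    return {group: h / n for group, (n, h) in counts.items()}

# Hypothetical past decisions in which women were rated poorly
# regardless of merit: 8/10 men hired, 2/10 women hired.
past = [("men", 1)] * 8 + [("men", 0)] * 2 \
     + [("women", 1)] * 2 + [("women", 0)] * 8

model = learn_hiring_rates(past)
print(model)  # {'men': 0.8, 'women': 0.2} - the bias is echoed, not corrected
```

Nothing in the learning step questions the labels; the historical pattern simply becomes the predictive rule, which is the note’s point.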

6 As Amoore (Citation2019, p. 17) points out, the demand that big companies should unveil the secret code of algorithms is naïve in that it overlooks ‘the obscured unrecognizability of all forms of self’.

7 As Luciana Parisi (Citation2019, pp. 111–2) explains, we may call what algorithms do ‘hypothesis-making’. Abductive reasoning in computing occurs when algorithms learn from incomplete information through hypothetical processing. Data may then be tracked not merely ‘retroactively but also speculatively, by inventing hypothesis that can lead to new rules, axioms, truths’.

8 For example, it is still difficult for self-learning machines of driverless cars to anticipate a pedestrian’s action when that person is hesitating at a crossing.

9 Unlike opacity, secrecy, according to Simmel’s (Citation1906, p. 449) definition, is ‘consciously willed concealment’, and thus depends on intention; yet, presentation is also part of the logic of secrecy. The secret is talkative not about its content but about itself: it tells us that there is a secret (Krasmann, Citation2019).

10 Fazi (Citation2019, p. 21) draws on the work of the logician Kurt Gödel in employing the notion of incompleteness.
