
From API to AI: platforms and their opacities

Pages 1989-2006 | Received 21 Jul 2017, Accepted 08 May 2018, Published online: 11 Jun 2018
 

ABSTRACT

Accounts of social network platforms have often stressed their programmability. The economic, social and technical fabric of social media has been directly associated with the code that modulates their connectivity and constructs opaque forms of property and capitalisation. The programmability of contemporary platforms is shifting in significant respects. By describing some recent shifts in programming practice at Facebook, the paper explores how predictive and machine learning approaches arise from a constitutive opacity present in all platform ensembles. It suggests that the growth in predictive programmability can be understood in terms of an increasingly experimental interplay between processes of platformisation and infrastructuralisation. Understanding these changes in programmability might be useful in analysing the relations between large information-communication ensembles and contemporary forms of life. In particular, the paper conceptualises opacity as a constitutive problem for platforms rather than a proprietary limitation.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Adrian Mackenzie (Professor in Technological Cultures, Department of Sociology, Lancaster University) researches cultural intersections in science, media and technology. His most recent book is Machine Learners: Archaeology of a Data Practice (MIT Press 2017).

Notes

1 The journalist Steven Levy offers a useful sketch of some of the AI developments at Facebook (Levy, Citation2017).

2 The actual programming work carried out at Facebook is difficult to gauge. How many developers will be needed to connect 3 or 6 billion people? Facebook currently employs about 15,000 people and reports 1.9 billion active users (Facebook, Citation2017a).

3 Although it is not my focus here, paying attention to the geography of servers would highlight something of the politics of the ‘live archive’ that Facebook implements in pursuit of stronger connectivity. The demand for ‘everything always on’ feeds directly into the advert-intensive operations of the platform.

4 Again, these experimental arrangements are not unusual. Google Research several years ago reported on something similar [TODO: add Google reference here]. For our purposes, the key point is that the prediction becomes part of a workflow that can then be pipelined into live infrastructure if the experiments work out.
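
As a rough illustration of such a workflow (not Facebook's code; every name and number below is hypothetical), the following sketch trains a candidate predictive model offline, compares it against an assumed baseline score for the model already in production, and only 'promotes' it towards live infrastructure if the experiment works out.

    # Hypothetical sketch: an offline experiment that gates promotion into live serving.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 20))            # stand-in for logged user features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10_000) > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    baseline_auc = 0.70                          # assumed score of the model currently in production
    candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    candidate_auc = roc_auc_score(y_test, candidate.predict_proba(X_test)[:, 1])

    if candidate_auc > baseline_auc:
        # In a platform setting this step would hand the model to live serving
        # infrastructure; here it is only a placeholder print statement.
        print(f"promote candidate (AUC {candidate_auc:.3f} > {baseline_auc:.3f})")
    else:
        print("keep the existing model")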

6 I will not dwell on any of these details here. A more extensive discussion of them can be found in Mackenzie (Citation2017).

7 Hive refers to software originally developed at Facebook for querying large distributed data collections stored in Hadoop using a familiar SQL-style query language.
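
As a loose illustration of what such 'familiar' querying looks like, the sketch below runs an SQL-style HiveQL statement from Python via the open-source PyHive client library; the server address, table and column names are hypothetical, not Facebook's.

    # Hypothetical sketch: querying a Hive table with an SQL-style statement.
    # Requires the PyHive package; the host, table and columns are made up.
    from pyhive import hive

    conn = hive.connect(host="hive.example.org", port=10000, database="default")
    cursor = conn.cursor()

    # HiveQL reads much like ordinary SQL, even though the query is compiled
    # into distributed jobs that run over data stored in Hadoop.
    cursor.execute("""
        SELECT country, COUNT(*) AS n_events
        FROM user_activity_log
        WHERE ds = '2017-07-21'
        GROUP BY country
        ORDER BY n_events DESC
        LIMIT 10
    """)

    for country, n_events in cursor.fetchall():
        print(country, n_events)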

8 In earlier work, I analysed how Facebook models user retention. Much of this predictive analytical work has become routine across social media platforms (see Mackenzie & McNally, Citation2013).
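
To give a sense of how routine such retention modelling has become, the sketch below fits a generic churn/retention classifier to synthetic user-activity features using a standard open-source library. It is an illustrative stand-in under invented feature names, not Facebook's actual modelling pipeline.

    # Hypothetical sketch: a generic user-retention (churn) model on synthetic data.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_users = 5_000
    users = pd.DataFrame({
        "days_since_signup": rng.integers(1, 1000, n_users),
        "posts_last_30d":    rng.poisson(3, n_users),
        "friends_count":     rng.integers(0, 2000, n_users),
        "sessions_last_7d":  rng.poisson(5, n_users),
    })
    # Synthetic label: users who log in often and have many friends count as 'retained'.
    retained = ((users["sessions_last_7d"] > 3) & (users["friends_count"] > 100)).astype(int)

    model = GradientBoostingClassifier()
    scores = cross_val_score(model, users, retained, cv=5, scoring="roc_auc")
    print("cross-validated AUC:", scores.mean().round(3))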

9 Facebook announced it was making the BigSur hardware open source in 2015 (Lee, Citation2015).

10 Note that BigSur is a geographical entity, the ‘greatest meeting of land and water in the world’ according to Wikipedia.

11 The BigSur GPUs derive from video display cards developed by the US company NVIDIA for personal computers during the 1990s.

12 The name DeepMask attests to one of the primary operations undertaken: to mask all those parts of the image that lie outside the boundaries of the object in the image.
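
As a schematic illustration of that masking operation (not the DeepMask model itself), the snippet below zeroes out every pixel of a toy image that falls outside a rectangular object boundary; DeepMask learns irregular, per-object masks, but the underlying array operation is of this kind.

    # Hypothetical sketch: apply a binary object mask to an image array.
    import numpy as np

    image = np.random.rand(8, 8, 3)         # toy RGB image, 8x8 pixels
    mask = np.zeros((8, 8), dtype=bool)     # True where the object is
    mask[2:6, 3:7] = True                   # pretend the object occupies this rectangle

    # Everything outside the object boundary is masked out (set to zero),
    # leaving only the pixels that belong to the segmented object.
    segmented = image * mask[:, :, np.newaxis]

    print("pixels kept:", int(mask.sum()), "of", mask.size)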

13 This level of intensive object recognition entails significant computational processing: ‘our model takes around 5 days to train on a Nvidia Tesla K40m’ (Pinheiro et al., Citation2015, p. 5). After training, the predictive model requires around 1.6s to infer all the objects in an image using a state-of-the-art GPU card (Pinheiro et al., Citation2015, p. 8). Given both the training time (the time taken for the model to stabilise its parameters) and the inference time (the time taken by the model to recognise objects in an image), we can see why ‘BigSur’ might be important: many GPUs will be needed to locate the objects commonly found in these images.
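
A back-of-the-envelope calculation, using the inference time reported above and an assumed daily upload volume (an illustrative round number, not a figure from the article), suggests the scale involved:

    # Rough arithmetic, for illustration only. The 1.6 s inference time comes from
    # Pinheiro et al. (2015); the daily photo volume is an assumed round number.
    seconds_per_image = 1.6
    photos_per_day = 300_000_000            # assumption for illustration

    images_per_gpu_per_day = 24 * 60 * 60 / seconds_per_image   # ~54,000
    gpus_needed = photos_per_day / images_per_gpu_per_day        # ~5,600

    print(f"one GPU handles about {images_per_gpu_per_day:,.0f} images per day")
    print(f"roughly {gpus_needed:,.0f} GPUs to keep up with uploads")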

14 Facebook is interested in expanding, and perhaps needs to expand, its user base; without much further ado it has begun to engineer planetary coverage with satellites, drones or UAVs called ‘Aquila’, and other forms of infrastructure such as millimetre-wavelength microwave links and cellular telephone equipment such as OpenCellular (see Ali, Citation2016).
