
Situating methods in the magic of Big Data and AI

Pages 57-80 | Received 09 Aug 2017, Accepted 18 Aug 2017, Published online: 19 Sep 2017

ABSTRACT

“Big Data” and “artificial intelligence” have captured the public imagination and are profoundly shaping social, economic, and political spheres. Through an interrogation of the histories, perceptions, and practices that shape these technologies, we problematize the myths that animate the supposed “magic” of these systems. In the face of an increasingly widespread blind faith in data-driven technologies, we argue for grounding machine learning-based practices and untethering them from hype and fear cycles. One path forward is to develop a rich methodological framework for addressing the strengths and weaknesses of doing data analysis. Through provocatively reimagining machine learning as computational ethnography, we invite practitioners to prioritize methodological reflection and recognize that all knowledge work is situated practice.

Acknowledgements

We are grateful to Robyn Caplan, Andrew Selbst, and Caroline Jack for their insightful comments on an early draft, and to Kinjal Dave for research assistance. In addition, the attendees of the Data & Society Workshop: Eclectic (2016) provided valuable feedback on thinking through the histories and epistemologies at stake in AI.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. For an expanded list of definitions see Press (Citation2014).

2. In the technology sector, the term “vaporware” refers to publicized hardware or software that does not exist or does not do what is promised. Because developers in the tech industry create software based on platform specifications, false advertising is seen as harmful to the ecosystem. Yet, in a more practical sense, vaporware is duplicitous in much the same way as snake oil.

3. Computer scientists Russell and Norvig (Citation1995) have argued that the history of AI, far from centering on any particular definition of “intelligence,” can be seen as orienting around four interrelated but distinct goals: “systems that think like humans, systems that act like humans, systems that think rationally, systems that act rationally.”

4. This is not to say that all artificial intelligence research assumed (or assumes) the same form during the periods under discussion, and the field of AI research has always been characterized by multi-disciplinary and divergent approaches. See Olazaran (Citation1996) for an analysis of one of the most well-known controversies over methods and techniques in AI.

5. The idea that otherwise inanimate objects might become self-animating can be found in ancient Greek and Chinese texts (Mazlish, Citation1995), and throughout history, especially during periods of social upheaval or rapid technological change, new mythologies have emerged and circulated about the paradise or peril that new technologies would bring. Contemporary American imaginaries about AI are a palimpsest of previous mythologies as well as a particular formulation rooted in the rapidly developing technological cultures of the mid-twentieth century. Contemporary imaginaries of AI and robotics necessarily vary between cultures, and our focus in this paper is particular to Anglo-European cultures and histories. For recent work on the animating imaginaries of robots and other embodied forms of “artificial intelligence” in Japan and Korea, see Jeon (Citation2016) and Robertson (Citation2017).

6. It is also significant, and typical, that Amy is gendered female. While X.ai offers the option to have “Andrew” as an assistant, feminized artificial and robotic agents abound. It is beyond the scope of this paper to analyze these dynamics in detail. For a critical analysis of the gendered aspects of artificial intelligence, see Adam (Citation1998) and Suchman (Citation2011); see also Robertson (Citation2010) for an analysis of gendered robots in Japan.

7. While news media pronounced that a new frontier had been crossed, it should be noted that the formal article published in Nature (Silver et al., Citation2016) was reasonably contained in its extrapolations.

8. In the United States, the history of the Census (Anderson, Citation2015) offers a parallel insight into the challenges and perils of articulating categories.

9. The “uncanny valley” refers to the hypothesis, first originating in robotics design, that people respond negatively to a machine that is very close to being human-like but not an exact representation. This not-quite-perfect likeness creates a sense of the uncanny and produces a valley, a low point, in emotional responses that are otherwise positive toward machines that are either perceptibly poor representations of humans or perfectly accurate ones. See Mori (Citation2012).

10. The intertwined topics of validity, reproducibility, and replicability frequently emerge as subjects of workshops and discussions at top machine learning conferences. See, for example, https://sites.google.com/view/icml-reproducibility-workshop/home.
