Research Article

Practices of Benchmarking, Vulnerability in the Computer Vision Pipeline

Pages 173-189 | Published online: 25 May 2023

Abstract

Computer vision datasets have proved to be key instruments for interpreting visual data. This article concentrates on benchmark datasets, which are used to define a technical problem and provide a common referent against which different solutions can be compared. Through three case studies, ImageNet, the Pilot Parliaments Benchmark dataset and VizWiz, the article analyses how photography is mobilised to conceive interventions in computer vision. The benchmark is a privileged site where photographic curation is tactically performed to change the scale of visual perception, oppose racial and gendered discrimination, or rethink image interpretation for visually impaired users. Through the elaboration of benchmarks, engineers create curatorial pipelines involving long chains of heterogeneous actors and exploiting photographic practices ranging from the amateur snapshot to political portraiture and photography made by blind users. The article contends that the mobilisation of photography in the benchmark goes together with a multifaceted notion of vulnerability. It analyses how various forms of vulnerability and insecurity pertaining to users, software companies, or vision systems are framed, and how benchmarks are conceived in response to them. Following the alliances that form around vulnerabilities, the text explores the potential and limits of the practices of benchmarking in computer vision.

Acknowledgments

This work was supported by the Swiss National Science Foundation as part of the research project Curating Photography in the Networked Image Economy [grant number 183178]. It is also based on doctoral research conducted with the support of London South Bank University and The Photographers’ Gallery.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. Hand, Ubiquitous Photography.

2. As research and production are tightly integrated in computer vision, the word ‘field’ refers to the scientific discipline as well as to the profession and the industry. In the case studies, nearly all the researchers mentioned are also engaged in the professional production of software.

3. Russakovsky et al., “ImageNet Large Scale.”

4. Buolamwini, “Gender Shades.”

5. Gurari et al., “VizWiz Grand Challenge.”

6. Crawford and Paglen, “Excavating AI”; Harvey, “MegaPixels”; Schmitt, “Tunnel Vision”; Malevé, “On the Data Set’s Ruins”; and Hunger, “Why so Many Windows?”

7. Balayn, Kulynych, and Gürses, “Exploring Data Pipelines”; Jaton, “We Get the Algorithms”; Raji et al., “AI and the Everything”; and Gebru et al., “Datasheets for Datasets.”

8. Deng et al., “ImageNet.”

9. Fei-Fei, “How We Teach Computers.”

10. Fei-Fei, “Where Did ImageNet Come From?”

11. Sluis, “The Networked Image after Web 2.0.”

12. GoogleTechTalks, “Large-Scale Image Classification.”

13. Krizhevsky, Sutskever, and Hinton, “ImageNet Classification.”

14. Stengers, Another Science is Possible.

15. The Computer Vision Foundation, “Computer Vision Awards.”

16. Tedone, “From Spectacle to Extraction.”

17. Quach, “Inside the 1TB ImageNet Data Set.”

18. Crawford and Paglen, “Excavating AI.”

19. Buolamwini, “Gender Shades,” 15.

20. Ibid.

21. Ibid., 51.

22. Ibid., 56.

23. Goon et al., “Skin Cancers in Skin Types.”

24. Paresh, “Exclusive.”

25. Ibid.

26. Amaro, “As If.”

27. Raji and Buolamwini, “Actionable Auditing,” 430.

28. Raji and Buolamwini, “Actionable Auditing.”

29. See note 5 above.

30. Lin et al., “Microsoft COCO.”

31. Dognin et al., “Image Captioning as an Assistive Technology.”

32. Grundell, “Rethinking While Redoing.”

33. Bigham et al., “VizWiz.”

34. Gurari et al., “VizWiz Grand Challenge,” 3612.

35. Ibid., 3611.

36. Simons, Gurari, and Fleischmann, “I Hope This Is Helpful.”

37. VizWiz team, “2018 VizWiz Grand Challenge Workshop.”

38. Grundell, “Rethinking While Redoing,” 204.

39. See note 9 above.

Additional information

Funding

This work was supported by the Swiss National Science Foundation [grant number 183178].

Notes on contributors

Nicolas Malevé

Nicolas Malevé is an artist, programmer and data activist living in Brussels. He has recently completed his PhD at London South Bank University, as part of a collaboration with The Photographers’ Gallery. In this context, he initiated the project Variations on a Glance (2015–2018), a series of workshops on the photographic elaboration of computer vision. He is currently a researcher at the Centre for the Study of the Networked Image, London.
