
The Role of HCI in the Age of AI

Pages 1331-1344 | Published online: 27 Jun 2019
 

ABSTRACT

This article examines some of the mystique surrounding AI, including the interrelated notions of explainability and complexity, and argues that these notions suggest that designing human-centered AI is difficult. It explains how, once these are put aside, an HCI perspective can help define interaction between AI and users that enhances rather than substitutes for one important aspect of human life: creativity. Key to developing such creative interactions are abstractions and grammars of action, among other notions; the article explores the history of these in HCI and how they are to be used in the contemporary interaction and design space in relation to AI. The article is programmatic rather than empirical, though its argument uses real-world examples.

Notes

1. The literature on this is immense and enormously varied. I do not seek to offer a literature review of it all but will point towards what seem to be representative contemporary examples at appropriate stages of my argument. For a good introduction to the many points of view that is not partisan, see Kaplan (Citation2016) Artificial Intelligence. In relation to games like Go, see Sadler and Regan, Game Changer (Citation2019).

2. See, for instance, The Glass Cage, Carr (Citation2015).

3. See Markoff’s Machines of Loving Grace (Citation2015).

4. See Kaplan’s Humans Need Not Apply (Citation2015).

5. See Kitano, Artificial Intelligence to Win the Nobel Prize (Citation2016).

6. See for example Husain’s The Sentient Machine (Citation2017). Husain is a computer scientist who not unreasonably wants to make a business out of AI technologies, but his claims extend well beyond computing. But see also Russell & Norvig’s Artificial Intelligence (Citation2016). It is just a textbook, but its introduction suggests that AI is the most important thing ever invented.

7. For a criticism of the AI community’s understanding of such things as ‘concepts’, fundamental to understanding human affairs, see Shanker (Citation1998, pp. 185–249).

8. One philosopher who has sought to be more careful in this regard is Boden (Citation1977, Citation2016).

9. This has been a persistent problem. For example, Stanford University sought to bring clarity to this space with its AI and Life in 2030 (Citation2016) report written in 2015 (published the next year). The muddles it cites are very similar to those I list here, four years later.

10. There are of course many books in the area, but I think the best history on why discretion has been so crucial to HCI is Grudin, J. (Citation2017) From Tool to Partner. That HCI can nestle with the agenda of AI is clearly a central aspiration of this paper. Some commentators think this is not realistic, though. Indeed, some argue that AI and HCI are inimical. See Grudin again: AI & HCI: Two fields divided (Citation2009).

11. There are other perspectives on how to design computer-human systems even within HCI, though for the purposes of this paper I will ignore them. A more important distinction is between HCI and ergonomics which offers different benefits because it has different goals. In the case of ergonomics (or human factors as it is sometimes known) the ambition is to make the overall person-machine symbiosis as efficient as possible, whatever the role of the human. It is not creativity that matters but optimized efficiency. Of course, in certain situations, these concerns are quite close – making the use of hybrid methods appropriate. To be creative with text, for example, presupposes ease of data entry: the ergonomics of a keyboard underscoring the creative affordances of an editing tool. For a third time, Grudin is again good on this: see Bridging HCI Communities (Citation2018).

12. These inadequate interfaces have a pedigree going back some years. A canonical example is the Kinect camera, an AI vision system that was meant to enable natural (body) interaction but instead forced users to move their bodies in peculiar ways. The expectation and disappointment this created was reflected in the high initial sales of the system and the collapse of those sales once users realized how constraining the system was. In my view, this was a missed opportunity – if the technology had been designed from the outset with its affordances treated as a resource for new, ‘peculiar’ forms of action, new grammars if you like, users might have been more delighted by what it could let them do. But HCI researchers had an insufficient role in its development. One reason for this was the mystifying language surrounding the technology, which suggested that HCI would not be needed: after all, the system was ‘able to see’ what the user needed. (See Harper & Mentis, Citation2013; O’Hara, Harper, Mentis, Sellen, & Taylor, Citation2013). But beyond this, and part of the price paid with this language, is the notion that AI and HCI are inimical to each other: if AI succeeds, one won’t need HCI. Some, like myself, see this as misguided. See also Ren, Rethinking the Relationship between Humans and Computers (Citation2016); and Ma, Towards Human-Engaged AI (Citation2018).

13. That this is so affects all sorts of attempts to explore what AI can do. Some of the better studies, from the social perspective for example, have to work their way around these mystifications before they can find out what the technology does in the real world and what its consequences are when seen from the social view. See Neyland, The Everyday Life of an Algorithm (Citation2019).

14. See for example Miller (Citation2019) Explanation in Artificial Intelligence.

15. The use of the term ‘black box’ has become something of a mantra in this field, and not always in ways that are helpful. I have mentioned Neyland in this regard who writes about the mystifying effects of such language. Be that as it may, there are many papers that explore what gets defined as black box AI, distinguishing the sets of techniques deployed in any type, and the approaches to making those techniques ‘explainable’. See, for instance, Guidotti et al., A Survey of Black Box Methods, (Citation2019).

16. This is a point I take from Wittgenstein: Philosophical Investigations, (Citation1953).

17. Ribeiro et al. (Citation2016) ‘Why Should I trust You?’: Explaining the Predictions of Any Classifier.

18. One might note that to see, in this view, is not to know that it is Harry or Sandra or whoever; recognition is not familiarity, a cue to say ‘Hello!’; on the contrary, it is to behave like a Go player making one play rather than another; there is no interest in what is seen or why it is seen. The goal is to win, where, in this case, to win is to recognize the right face.

19. This was originally formulated by John von Neumann but has been popularised by Kurzweil (Citation2005). But see Stanislaw (Citation1958).

20. So, from this view, while we might think of ourselves as singular – that is to say, you and I might like to think of ourselves as such, that our minds are ours and ours alone – in fact, if one believes this view, our consciousness is the outcome of millions of little acts, little calculations and stratagems at the cellular (and system) level that produce this sense of self. Our sense of that self is thus seen to be illusory. This is the view that Dennett argued for in his Consciousness Explained (Citation1991).

21. This is most eloquently expressed by the physicist, R. Jones, in his (Citation2004) Soft Machines – a much better book than Dennett’s in my view, since it explores the consequence of this important distinction – the one between description of activities and action that is governed by self-awareness. Those interested in exploring this line of argument should go back to Anscombe’s Intention (Citation1957), which explains how motives distinguish human action. In this view, a machine cannot have a motive, though it might have ‘reasons for doing what it does’ – such as probabilistic reasons. But for an introduction see Harper et al., Choice (op cit).

22. This is of course an argument that derives from the ordinary language philosophers, Wittgenstein (op cit) being the most regarded, if not the easiest to read.

23. At that time, the suffix ‘man’ was meant to encompass all humankind, though whether that assumed all humans were equal is another matter, needless to say.

24. This is of course a massive simplification of a complex interweaving of ideas and trends; for an excellent overview related to the notion of the individual and the self, see Heehs (Citation2013) Writing the Self.

25. Op cit, (Neyland, Citation2019).

26. See Harper et al. (Citation2013) What is a File?

27. Thereska and Harper (Citation2012) Multi-structured redundancy.

28. When seen thus, in terms of what abstractions in Turing Theoretic machines do, it becomes clear that many of the social science critiques of AI that focus on the distance between abstraction and complexity miss the mark. Papers like Selbst et al.’s Fairness and Abstraction (Citation2018) say more about the social studies of science and technology (STS) than they do about computing, despite the authors’ claim otherwise.

29. See Smith (Citation1982).

30. It is associated with the later Wittgenstein, for example (Citation1953), but probably the most important exploration of this concept was by Kenneth Burke in his A Grammar of Motives (Citation1945).

31. And finding out the role of this word was a task set me by Mark Weiser and William Newman at PARC. This led to the study of the world’s first organization to have a complete network of WIMP machines: the International Monetary Fund, in Washington DC. See Harper, Inside the IMF (Citation1998).

32. See Odom, Harper, Sellen, and Thereska (Citation2012) Lost in Translation.

33. Another way is of course to identify closeness between files themselves. See Harper et al. (CitationForthcoming) Breaching the PC Data Store.

34. A point made by Farooq and Grudin in their Human-Computer Integration (Citation2016).

35. For explorations of this apparent dilemma – the contrast between the elaborate complexity of AI tools and the desire for everyday or ‘intuitive’ understanding – see Selbst & Barocas, The Intuitive Appeal of Explainable Machines (Citation2018).

36. Some of the more interesting and thoughtful work here can be found in the research of M. Hildebrandt. See for example Profiling: From Data to Knowledge (Citation2006); and Smart technologies and their ends (Citation2015).

37. The canonical case is of course the EU’s General Data Protection Regulation (GDPR), which came into effect in 2018.

38. See Goodman & Flaxman, (Citation2017) European Union regulation on algorithmic decision-making.

39. As it happens, greedy algorithms are more often associated with decision tree methods, where it becomes difficult for the process to return to a prior junction in the tree structure of analysis, leaving it stuck in a line of interrogation from which it cannot withdraw. But the term greedy is also metaphorical, and that is how I am using it here.
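The sense of ‘greedy’ invoked in note 39 – committing to whichever choice looks best at each step and never withdrawing from it – can be sketched with a small, illustrative example. This is not drawn from the article; the change-making task and the denominations are my own hypothetical choices, picked because they show a greedy process getting stuck with a worse answer than one that could reconsider an earlier junction.

```python
def greedy_coin_change(denominations, amount):
    """Make change greedily: always take the largest coin that still fits.

    Each pick is committed to immediately; the loop never returns to
    reconsider an earlier choice - the trait the note describes.
    """
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

# With denominations {4, 3, 1} and amount 6, the greedy process takes
# the 4 first and is then stuck: it returns [4, 1, 1] (three coins),
# though [3, 3] (two coins) is better. No step withdraws the first pick.
print(greedy_coin_change([4, 3, 1], 6))  # → [4, 1, 1]
```

The same structure appears in greedy decision-tree induction: each split is the locally best one, and the algorithm does not revisit a split once it has been made.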

40. This is in fact a very oft-used example and is selected because it seems uncontentious. But see Caruana et al. (Citation2015) on the greedy algorithm problem in health-care situations where the consequences are more worrisome.

41. It might simply be a register of likelihood that delivers this ‘correction’. There might be no backpropagation.

42. I am not referring to the new vocabulary created by users, one that is at once playful and refined, despite its apparent abuse of good grammar. See Harper (Citation2005). I am thinking of the anacoluthia and plain loss of sense that users struggle with when interacting with their SMS tools, struggles they laugh about and mock. Saying wolf when you meant husky is but the least of their troubles. The analogies with AI translation tools are obvious, but there at least the AI provides plausible meaning. With messaging, meaning is often lost altogether.

43. I alluded to some of these issues many years ago. See Texture (Citation2010).

44. It is worth noting that when the algorithm was patented it was not labeled AI; it was simply described as a technique. The current fashion for AI has meant that today it is often renamed as AI; the parent company of Google, Alphabet, is rather fond of saying all it does is ‘AI’. For them, AI is ABC, so to speak.

45. The question of how one might enquire into the real world, into natural action for want of a phrase, has been a major concern in HCI since the turn to the social, with the emergence of CSCW and similar (see Randall, Harper, & Rouncefield, Citation2007). It is certainly something I have spent much time on, a key concern in my research being to distinguish such research in the wild done for the purposes of HCI from that done for social scientific reasons, for anthropology or sociology. These purposes are not the same and should not be muddled. See Harper, Randall & Rouncefield (Citation2005).

Additional information

Notes on contributors

Richard H. R. Harper

Richard H. R. Harper has written 14 books and collections, including The Myth of the Paperless Office (2003), Texture: Human Expression in the Age of Communications Overload (2010) and Skyping the Family (2019). He is concerned with all aspects of HCI – from GUI design to systems architecture. He is Co-Director of the Institute for Social Futures (ISF) at Lancaster University.
