Research Article

Engineered wisdom for learning machines

Pages 257-272 | Received 15 Jan 2021, Accepted 16 Jun 2022, Published online: 23 Jun 2022
 

ABSTRACT

We argue that the concept of practical wisdom is particularly useful for organising, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into its development. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioural scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as well as providing sandboxes or workspaces that help various stakeholders build practical wisdom in systems realistic enough to support the transfer of learned skills to real-world use. The latter are needed to design the exercises and methods of evaluation within these workspaces, as well as ways of empirically assessing the transfer of wisdom from workspace to world. Systematic interaction between these three disciplines (and others) is the best approach to engineering wisdom for the machine age.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. A detailed report about this particular incident has been published by the National Transportation Safety Board (2019).

2. More precisely, the Uber vehicle is classed as a level 3 semi-autonomous vehicle (out of a possible 5) by the NTSB.

3. For just some of the many proposals and discussions available, see Annas (2011), Coope (2012), Stichter (2016), Swartwood (2020), Swartwood and Tiberius (2019), and (especially relevant for our purposes) Vallor (2016).

4. For example, in Chamberland et al. (2015), expert self-explanation failed to improve medical students' exam performance beyond what was achieved by self-generated explanations or by explanations provided by skilled teachers.

5. We set aside even more abstract notions of theoretical wisdom in terms of a kind of sophia or divine wisdom, which, though independently interesting, are far removed from the purposes of the conceptual engineering approach to human-machine interactions that we adopt here.

6. Much groundwork in the psychology of wisdom involves coming to a better grasp of our 'folk' understanding of wisdom across cultures. This is often called developing a theory of 'implicit' wisdom (e.g., Weststrate et al., 2016). The end goal of this research, however, is to develop an 'explicit' theory of wisdom that has decomposable, measurable components and can be seen as a successor concept to our folk understanding of wisdom. We view our project as a philosophical attempt to help fill out this 'explicit' theory.

7. See especially Glück (2018). Some psychologists (like Grossmann, cited above) want to distinguish sharply between crystallised IQ measures, which capture breadth of knowledge (including breadth of factual knowledge and vocabulary), on the one hand, and measures of metacognitive awareness on the other, with only the latter able to measure wisdom proper. We are happy to allow that what we call breadth of knowledge here is captured (somewhat imperfectly) by measures of crystallised intelligence, while metacognitive awareness must be measured in other ways. Whether one thinks of these as measures of two components of wisdom, on the one hand, or as a measure of wisdom and of its precondition, on the other, will ultimately not make much practical difference (see above). Thanks are due to a reviewer for pushing us on the measurability of different proposed components of wisdom.

8. The standard line is that adversarial examples are bugs that reveal deep learning to be far less robust than once thought, and that it would be ideal to design networks that are robust to adversarial attacks (as in Gu & Rigazio, 2014). Others have pushed back on this, however, arguing that adversarial outcomes represent features of deep neural networks, not bugs (Ilyas et al., 2019).
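
To make the phenomenon concrete, here is a minimal PyTorch sketch of one standard attack, the fast gradient sign method; `model`, `image`, and `label` are hypothetical stand-ins for a trained classifier and a correctly classified input, and the sketch illustrates the general technique rather than the specific constructions discussed in the works cited.

```python
# Minimal sketch of the fast gradient sign method (FGSM).
# `model` is assumed to be a trained PyTorch classifier; `image` and
# `label` are a correctly classified input and its true class index.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Perturb `image` in the direction that maximises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A perturbation this small is typically imperceptible to humans,
    # yet it can flip the network's prediction entirely.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

That so slight a perturbation can reliably change the output is what motivates both readings in the note: a bug to be patched, or a feature of how such networks carve up input space.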

9. This is a problem even for state-of-the-art language models such as GPT-3 (Brown et al., 2020).

10. See the evidence collected by, among others, Kross and Grossmann (2012), Grossmann et al. (2016), and especially Grossmann et al. (2021), which explores the knock-on effects of awareness for other aspects of metacognition.

11. One important aspect of rational strategy selection is its ability to offer solutions to problems encountered in an open decision environment, where an agent may be uncertain about which action her evidence supports (or even what evidence she has). A key component of wisdom is the ability to adapt one's behaviour to new environments, something not captured by earlier models of decision-making that required a closed environment to generate accurate recommendations.
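
As a toy illustration of strategy selection (our own construction with made-up numbers, not a model drawn from the literature), an agent can choose among decision strategies by weighing expected accuracy against deliberation cost, re-running the selection as the stakes of the environment change:

```python
# Toy illustration of rational strategy selection: pick the decision
# strategy whose expected payoff, net of deliberation cost, is highest.
# The strategies and numbers are hypothetical, for illustration only.

STRATEGIES = {
    # name: (expected_accuracy, deliberation_cost)
    "fast heuristic":    (0.70, 0.01),
    "take-the-best":     (0.80, 0.05),
    "full deliberation": (0.95, 0.40),
}

def select_strategy(stakes: float) -> str:
    """Return the strategy maximising accuracy * stakes - cost."""
    return max(STRATEGIES, key=lambda name:
               STRATEGIES[name][0] * stakes - STRATEGIES[name][1])

# In a low-stakes environment a cheap heuristic wins; as stakes rise,
# costly deliberation becomes the rational choice.
print(select_strategy(stakes=0.1))  # -> "fast heuristic"
print(select_strategy(stakes=5.0))  # -> "full deliberation"
```

The point of the sketch is only that which strategy is rational depends on features of the environment, which is why an open environment demands re-selection rather than a fixed rule.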

12. ‘Opacity’ in this context also sometimes refers to the fact that the actual mechanisms of commercially-proven AI systems are proprietary and hidden from both application developers and end users behind an API (application programming interface); see Rudin (2019).

13. It is a separate, though important, question whether networks must be black-boxed in this way (Rudin, 2019). Our point is simply that they often are, and that human decision-makers need to be able to deal with opaque algorithms in a wise way.

14. This is particularly important because other strategies for avoiding discriminatory decisions in algorithmic decision-making have a less-than-stellar track record. One might think that a solution to this problem, one not based on cultivating practical wisdom, should come at the level of prohibition: programmers should not allow the network to consider certain factors (e.g., race) when considering an application. But the massive amount of data that deep networks work with makes this policy ineffective: the network will often have enough information to build a proxy of the applicant's race, even if such explicit information is disallowed. The prohibition thus fails to have its intended effect. See Kearns and Roth (2019, ch. 3).
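
The proxy worry admits of a simple empirical check, sketched below with scikit-learn; the file and column names are hypothetical. If the 'permitted' features predict the prohibited attribute well above chance, the network can reconstruct that attribute even after its column is removed.

```python
# Sketch of a proxy check: can the remaining features predict the
# prohibited attribute? If so, simply dropping that attribute's column
# does not prevent a model from reconstructing it.
# `applications.csv` and the column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applications.csv")
features = df.drop(columns=["race", "loan_approved"])  # 'race' removed
proxy_model = LogisticRegression(max_iter=1000)

# High accuracy here means the 'permitted' features encode a proxy
# for race, so prohibition alone fails to have its intended effect.
scores = cross_val_score(proxy_model, features, df["race"], cv=5)
print(f"Proxy accuracy for prohibited attribute: {scores.mean():.2f}")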

15. Though we focus primarily on the three stages below, we note there are many other stages and stakeholders to consider (e.g., the regulatory stage). We will have more to say about these stages in future work.

16. For a report, see Vincent (2016). The story is slightly more complicated than was reported in the media at the time, because many of the most offensive tweets occurred after users asked the chatbot to repeat their own virulent messages back to them. But the chatbot also produced some offensive (if odd) material of its own, such as ‘Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.’

17. The design stage is also the stage where moral rules might be built into AIs themselves, creating so-called ‘artificial moral agents’ (Allen et al., 2000).

18. Gillespie and Lovelace (2016), for instance, list at least five different, non-overlapping notions of efficient coding.

19. There are good mathematical reasons for this: simple methods of categorisation, for instance, will often approximate the performance of more complicated methods, without the difficulties that come from dealing with opaque and brittle algorithms (Hand, 2006).
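
The point can be illustrated with a comparison of the kind below, a sketch using scikit-learn's bundled breast-cancer dataset (exact scores will vary with data and tuning):

```python
# Sketch comparing a simple classifier with a more complex one.
# On many tabular problems the simple model comes close (Hand, 2006).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
complex_model = GradientBoostingClassifier()

for name, model in [("logistic regression", simple),
                    ("gradient boosting", complex_model)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
# Typically both land in the mid-to-high 0.9s: the interpretable
# model gives up little accuracy while avoiding opacity.
```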

20. The authors wish to extend their gratitude to participants in two iterations of the Machine Wisdom Workshop at the University of Pittsburgh (especially Shannon Vallor, Igor Grossmann, Matt Stichter, and Sina Fazelpour, who participated in both), as well as to participants in Colin Allen’s fall 2020 course on the philosophy of artificial intelligence, for conversations and comments that significantly improved the paper. Particular thanks are due to Chris Davison for his collaboration on the broader Machine Wisdom Project, and to an anonymous reviewer for several sets of comments that improved the paper a great deal.

Additional information

Funding

This work was supported by the Templeton World Charity Foundation award #0467: Practical Wisdom and Intelligent Machines.
