Abstract
One candidate approach to creating artificial general intelligence (AGI) is to imitate the essential computations of human cognition. This process is sometimes called ‘reverse-engineering the brain,’ and the end product is sometimes called ‘neuromorphic.’ We argue that, unlike with other approaches to AGI, anthropomorphic reasoning about behaviour and safety concerns is appropriate and crucial in a neuromorphic context. Using such reasoning, we offer some initial ideas for making neuromorphic AGI safer. In particular, we explore how basic drives that promote social interaction may be essential to the development of cognitive capabilities, as well as serve as a focal point for human-friendly outcomes.
Acknowledgement
The authors are indebted to three anonymous referees, whose comments prompted considerable improvements to the paper.
Notes
1. In an October 2016 interview with Business Insider, Nick Bostrom, a leading thinker in artificial intelligence safety, indicated that he considers Google DeepMind the current leader in the AGI race. DeepMind works primarily with neuromorphic deep learning methods.
2. To avoid confusion, note that terms such as ‘language,’ ‘concept,’ ‘proposition,’ and ‘sentence’ should be interpreted here primarily in the sense used in cognitive psychology, not logic.