Abstract
One of the central problems of artificial intelligence is capturing the breadth and flexibility of human common sense reasoning. One way to evaluate common sense is to use versions of human tests that rely on everyday reasoning. The Bennett Mechanical Comprehension Test consists of everyday reasoning problems posed via pictures and is used to evaluate technicians. This test is challenging because it requires conceptual knowledge spanning a broad range of domains, experience with a wide variety of everyday situations, and spatial reasoning. This article describes how we have extended our Companion Cognitive Architecture, which treats analogical processing as central, to perform well on a subset of the Bennett test. We introduce analogical model formulation as a robust method for reasoning about everyday scenarios by analogy with cases that represent prior experiences. This enables a companion to perform qualitative reasoning (QR) without the complete domain theory that QR typically requires. We introduce sketch annotations to communicate linkages between visual and conceptual properties in sketches. We introduce analogical reference frames to enable comparative analysis to operate over a broader range of problems than prior techniques. We show that these techniques enable a companion to score reasonably well on a difficult subset of the Bennett test.
Acknowledgements
This study was supported by DARPA IPTO. The authors would like to thank Kate Lockwood and Patricia Dyke for their sketches and Tom Hinrichs and Jeff Usher for their work on the companion architecture.
Notes
1. For the security of the test, we cannot provide a full list of the problems used in this evaluation.
2. The majority of the content in our KB is drawn from ResearchCyc (www.research.cyc.com), plus our own material on QR and analogy. The conventions for Cyc-style knowledge bases (Lenat and Guha 1989) are documented on that website.
3. The compound form shown in the figure is translated into a set of backward chaining rules for use with our reasoning engine.
4. Torque equilibrium also requires that the opposing torques be equal. Our system currently assumes this by default.
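As a concrete illustration of the equilibrium condition in note 4 (a minimal sketch of the physics, not the companion's actual representation; the function names are hypothetical): for a simple lever, two opposing torques balance when F1·d1 = F2·d2, i.e. when the net torque about the pivot is zero.

```python
def net_torque(f1, d1, f2, d2):
    """Net torque about the pivot for two opposing forces f1 and f2
    applied at lever-arm distances d1 and d2 (hypothetical helper)."""
    return f1 * d1 - f2 * d2

def in_equilibrium(f1, d1, f2, d2, tol=1e-9):
    """Torque equilibrium holds when the opposing torques are equal
    in magnitude, so the net torque vanishes (within tolerance)."""
    return abs(net_torque(f1, d1, f2, d2)) <= tol

# A 10 N force at 2 m balances a 20 N force at 1 m on the other side.
print(in_equilibrium(10, 2, 20, 1))   # True
print(in_equilibrium(10, 2, 10, 1))   # False
```

This is the quantitative special case that the system's default assumption stands in for when the actual force magnitudes are not given.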