Abstract
Standard feedforward and recurrent networks cannot support strong systematicity when constituents are presented as local input/output vectors. To explain systematicity, connectionists must either: (1) develop alternative models, or (2) justify the assumption of similar (non-local) constituent representations prior to the learning task. I show that the second, commonly presumed option cannot account for systematicity in general. This option, termed first-order connectionism, relies upon established spatial relationships between common-class constituents to account for systematic generalization: inferences (functions) learnt over, for example, cats extend systematically to dogs by virtue of both being nouns with similar internal representations, so that the function learnt to make inferences employing one simultaneously has the capacity to make inferences employing the other. But humans generalize beyond common-class constituents. Cross-category generalization (e.g. inferences that require treating mango as a colour, rather than a fruit) makes having had the necessary common context to learn similar constituent representations highly unlikely. At best, the constituent similarity proposal encodes one binary relationship between any two constituents at any one time. It cannot account for inferences, such as transverse patterning, that require identifying and applying one of many possible binary constituent relationships contingent on a third constituent (i.e. a ternary relationship). Connectionists are, therefore, left with the first option, which amounts to developing models with the symbol-like capacity to explicitly represent constituent relations independent of constituent contents, such as in tensor-related models. However, rather than simply implementing symbol systems, I suggest reconciling connectionist and classical frameworks to overcome their individual limitations.