Abstract
Performing an action and observing it being performed activate the same internal representations of action. These representations are therefore shared between self and other (shared representations of action, SRA). But what exactly is shared? At what level within the hierarchical structure of the motor system do SRA occur? Understanding the content of SRA is important in order to decide what theoretical work SRA can perform. In this paper, we provide some conceptual clarification by raising three main questions: (i) Are SRA semantic or pragmatic representations of action? (ii) Are SRA sensory or motor representations? (iii) Are SRA representations of the action as a global unit or as a set of elementary motor components? After outlining a model of the motor hierarchy, we conclude that the best candidate for SRA is intentions in action, defined as the motor plans of the dynamic sequence of movements. We shed new light on SRA by highlighting the causal efficacy of intentions in action. This in turn explains phenomena such as the inhibition of imitation.
Notes
1Gallese & Lakoff (2005) provide an embodied account of concepts, which would erase the contradiction. However, for various reasons, we prefer to maintain a distinction between the experiential nonconceptual level and the conceptual level. Nonetheless, we agree that the nonconceptual level can ground the conceptual level.
2However, as far as we know, there is no study that directly investigates the relationship between motor imagery and action observation.
3In order to untangle these two alternatives, one would need neuroimaging tools with higher temporal resolution (e.g. EEG and MEG) or an analysis of effective connectivity (e.g. by dynamic causal modelling). This would permit one, for example, to determine what comes first: the representation of the sensory consequences or the motor representation.
4Once again, this study cannot completely exclude a possible role of the anticipation of the sensory consequences of the movement. Although subjects never had any visual feedback on the movement they learned, they might have predicted the visual consequences of their movement. Alternatively, there may be some intermodal translation from the haptic to the visual modality (Meltzoff, 1995).
5For instance, Fogassi et al. (2005) assumed that the content of the intention detected by the monkey is something like "to eat", while it could equally well be described as "to place in the mouth". It would be interesting to know what would happen if the monkey had to place either something edible or something inedible in the mouth (assuming that not everything is edible for monkeys). Then there would really be a similarity of intention in action with a difference of prior intentions.