Abstract
For veridical detection of object orientation, any system that can itself vary in orientation must allocate orientation appropriately between itself and objects in space. A model for such allocation is presented; it is an extension of a similar model that has been proposed for the representation of motion in space (Swanston, Wade, & Day, 1987). The orientation of objects is registered and represented within frames of reference that are defined by the orientation of the receptors themselves, by the relative orientations within the visual field, by the integration of information from the two eyes, and by the orientation of the head with respect to gravity. These four levels are referred to as retinocentric, patterncentric, egocentric, and geocentric, respectively. Evidence for the operation of mechanisms at these four levels is assessed. While knowledge concerning retinocentric orientation is abundant at the neurophysiological level, assigning any aspects of visual orientation to these processes is more problematic. The opposite is the case for patterncentric interactions: a variety of visual tilt illusions can readily be measured, and some of them influence apparent body position. The major contribution to veridical orientation perception involves the otolith organs, which signal the orientation of the head with respect to gravity. Such signals are combined with those from earlier levels to provide a geocentric representation of orientation.