ABSTRACT
This paper reports a study that examined the interaction between action planning and the processing of perceptual information in two different sensory modalities. In line with the idea that action planning consists of representing the action’s sensory outcomes, it was assumed that different types of actions should be coupled with different modalities. A visual and auditory oddball paradigm was combined with two types of actions, pointing and knocking, which were unrelated to the perceptual task. Results showed an interactive effect between action type and the sensory modality of the oddballs, with impaired detection of auditory oddballs for the knocking (congruent) action as compared to the pointing (incongruent) action. These findings reveal that action planning can interact with modality-specific perceptual processing and that preparing an action presumably binds the respective perceptual features to the action plan, thereby making these features less available for other tasks.
Acknowledgments
I thank Johanna Bayer for help with data collection, as well as Julian Wykowski and Marek Wykowski for help with developing the stimuli. I also thank the two reviewers of this paper for insightful comments that greatly improved its quality.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes
1 The analyses focused on the action-related impact on a whole perceptual dimension (pitch vs. location). As such, feature-based effects that could potentially be due to executing an action on a small/medium/large cup were not examined. This is in line with previous research (e.g. Wykowska et al., Citation2011, Citation2012, Citation2009; Wykowska & Schubö, Citation2012) and is dictated by the following logic and experimental design: the aim of this study (as of the previous studies) was to examine the impact of action planning on perceptual processing. In the experimental design, participants first see a movement cue (a picture) informing them what type of movement they should prepare (knocking vs. pointing). At this point, they are not informed about which cup they will be pointing to or knocking in front of. While they are preparing a given action type, they are also engaged in the perceptual task. Only after they complete the perceptual task are the particular features (small/large, left/right, etc.) of the objects on which they execute the action revealed. Therefore, the feature values related to the action plans cannot influence the feature values of perceptual processing, and a feature-based analysis is impossible in the present design.