
The interplay of bottom-up and top-down mechanisms in visual guidance during object naming

Pages 1096-1120 | Received 18 Nov 2012, Accepted 02 Sep 2013, Published online: 14 Nov 2013

Abstract

An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent ones. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence require longer processing. Overall, this study suggests that different sources of information are used interactively to guide visual attention to the targets to be named, and it raises new questions for existing theories of visual attention.

We would like to thank Konstantinos Tsagkaridis for sound and image annotation, and Alasdair D. F. Clarke for data collection (Experiment 2) and for feedback on a previous draft of this paper. We are also grateful to two anonymous reviewers whose valuable and insightful comments improved the quality of the manuscript.

The support of the European Research Council under award [grant number 203427] “Synchronous Linguistic and Visual Processing” is gratefully acknowledged.

Notes

1 Only congruency was tested through a rating task, in which participants rated, on a scale from 1 to 9, how likely an object was to occur with the rest of the scene.

2 The authors used the term preattentively when referring to the early stage of gist processing. In this paper, we do not make a distinction between different attentional stages.

3 Some scenes came from Google Images.

4 Underwood and Foulsham (2006) call this measure time prior to fixation.

5 Area was also included in a previous analysis, but it was excluded during model selection.

6 Only one out-of-context object could be mentioned for every five naming instances; that is, there are more congruent objects that can be named than incongruent ones. Thus, the incongruent condition is unbalanced when considering the full dataset.

7 We also used model selection and did not find any significant interaction in any of the measures investigated.
