Regular articles

The interplay of bottom-up and top-down mechanisms in visual guidance during object naming

Pages 1096-1120 | Received 18 Nov 2012, Accepted 02 Sep 2013, Published online: 14 Nov 2013

