Review Articles

Linearisation during language production: evidence from scene meaning and saliency maps

Pages 1129–1139 | Received 29 Aug 2018, Accepted 19 Dec 2018, Published online: 11 Jan 2019

