References
- Altmann, G. T. M. (2004). Language-mediated eye movements in the absence of a visual world: The “blank screen paradigm”. Cognition, 93(2), B79–B87. https://doi.org/10.1016/j.cognition.2004.02.005
- Altmann, G. T. M. (2011). The mediation of eye movements by spoken language. In S. P. Liversedge, I. Gilchrist, & S. Everling (Eds.), The Oxford handbook of eye movements (pp. 979–1004). Oxford University Press.
- Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247–264. https://doi.org/10.1016/S0010-0277(99)00059-1
- Altmann, G. T. M., & Mirković, J. (2009). Incrementality and prediction in human sentence processing. Cognitive Science, 33(4), 583–609. https://doi.org/10.1111/j.1551-6709.2009.01022.x
- Alvarez, G., & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15(2), 106–111. https://doi.org/10.1111/j.0963-7214.2004.01502006.x
- Andersson, R., Ferreira, F., & Henderson, J. M. (2011). I see what you’re saying: The integration of complex speech and scenes during language comprehension. Acta Psychologica, 137(2), 208–216. https://doi.org/10.1016/j.actpsy.2011.01.007
- Baayen, R., van Rij, J., Cecile, D., & Wood, S. (2016). Autocorrelated errors in experimental data in the language sciences: Some solutions offered by generalized additive mixed models. In D. Speelman, K. Heylan, & D. Geeraerts (Eds.), Mixed effects regression models in Linguistics (pp. 49–69). Springer.
- Baddeley, A. (1998). Working memory. Comptes Rendus de l'Académie des Sciences - Series III - Sciences de la Vie, 321(2–3), 167–173. https://doi.org/10.1016/S0764-4469(97)89817-4
- Barr, D. J. (2008). Analyzing “visual world” eyetracking data using multilevel logistic regression. Journal of Memory and Language, 59(4), 457–474. https://doi.org/10.1016/j.jml.2007.09.002
- Boersma, P., & Weenink, D. (2009). Praat: Doing phonetics by computer (Version 5.1.05) [Computer program].
- Brothers, T., Swaab, T. Y., & Traxler, M. J. (2017). Goals and strategies influence lexical prediction during sentence comprehension. Journal of Memory and Language, 93, 203–216. https://doi.org/10.1016/j.jml.2016.10.002
- Brown-Schmidt, S., & Tanenhaus, M. K. (2008). Real-time investigation of referential domains in unscripted conversation: A targeted language game approach. Cognitive Science, 32(4), 643–684. https://doi.org/10.1080/03640210802066816
- Chang, F. (2002). Symbolically speaking: A connectionist model of sentence production. Cognitive Science, 26(5), 609–651. https://doi.org/10.1207/s15516709cog2605_3
- Chang, F., Dell, G. S., & Bock, J. K. (2006). Becoming syntactic. Psychological Review, 113(2), 234–272. https://doi.org/10.1037/0033-295X.113.2.234
- Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. https://doi.org/10.1017/S0140525X12000477
- Coco, M. I., & Keller, F. (2015). Integrating mechanisms of visual guidance in naturalistic language production. Cognitive Processing, 16(2), 131–150. https://doi.org/10.1007/s10339-014-0642-0
- Coco, M. I., Keller, F., & Malcolm, G. L. (2016). Anticipation in real-world scenes: The role of visual context and visual memory. Cognitive Science, 40(8), 1995–2024. https://doi.org/10.1111/cogs.12313
- Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language. Cognitive Psychology, 6(1), 84–107. https://doi.org/10.1016/0010-0285(74)90005-X
- Dell, G. S., & Chang, F. (2014). The P-chain: Relating sentence production and its disorders to comprehension and acquisition. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1634), 20120394. https://doi.org/10.1098/rstb.2012.0394
- Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102–1115. https://doi.org/10.3758/s13428-017-0929-z
- Fodor, J. A. (1983). The modularity of mind. MIT Press.
- Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787
- Hari, R., Henriksson, L., Malinen, S., & Parkkonen, L. (2015). Centrality of social interaction in human brain function. Neuron, 88(1), 181–193. https://doi.org/10.1016/j.neuron.2015.09.022
- Hastie, T. J., & Tibshirani, R. J. (1990). Generalized additive models. Chapman & Hall/CRC Press.
- Havron, N., de Carvalho, A., Fiévet, A. C., & Christophe, A. (2019). Three-to four-year-old children rapidly adapt their predictions and use them to learn novel word meanings. Child Development, 90(1), 82–90. https://doi.org/10.1111/cdev.13113
- Henderson, J. M., & Ferreira, F. (2004). Scene perception for psycholinguists. In J. M. Henderson, & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 1–58). Psychology Press.
- Heyselaar, E., Hagoort, P., & Segaert, K. (2015). In dialogue with an avatar, language production is identical compared to dialogue with a human partner. Behavior Research Methods, 49(1), 46–60. https://doi.org/10.3758/s13428-015-0688-7
- Heyselaar, E., Johnston, K., & Paré, M. (2011). A change detection approach to study visual working memory of the macaque monkey. Journal of Vision, 11(3), 1–10. https://doi.org/10.1167/11.3.11
- Hintz, F., Meyer, A. S., & Huettig, F. (2017). Predictors of verb-mediated anticipatory eye movements in the visual world. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(9), 1352–1374. https://doi.org/10.1037/xlm0000388
- Hockett, C. F. (1960). The origin of speech. Scientific American, 203(3), 88–96. https://doi.org/10.1038/scientificamerican0960-88
- Huettig, F., & Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Language, Cognition and Neuroscience, 31(1), 80–93. https://doi.org/10.1080/23273798.2015.1047459
- Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460–482. https://doi.org/10.1016/j.jml.2007.02.001
- Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011a). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137(2), 138–150. https://doi.org/10.1016/j.actpsy.2010.07.013
- Huettig, F., Rommers, J., & Meyer, A. S. (2011b). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137(2), 151–171. https://doi.org/10.1016/j.actpsy.2010.11.003
- Ito, A., Corley, M., & Pickering, M. J. (2018). A cognitive load delays predictive eye movements similarly during L1 and L2 comprehension. Bilingualism: Language and Cognition, 21(2), 251–264. https://doi.org/10.1017/S1366728917000050
- Kahneman, D. (2011). Thinking, fast and slow. Penguin.
- Kamide, Y., Altmann, G. T. M., & Haywood, S. L. (2003). The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements. Journal of Memory and Language, 49(1), 133–156. https://doi.org/10.1016/S0749-596X(03)00023-8
- Kendon, A. (1977). Some functions of gaze-direction in two-person conversation. Studies in the Behaviour of Social Interaction, 13–51.
- Kendon, A. (1990a). Conducting interaction. Patterns of behavior in focused encounters. Cambridge University Press.
- Kendon, A. (1990b). Spatial organization in social encounters. In A. Kendon (Ed.), Studies in the behaviour of social interaction (pp. 179–208). Peter de Ridder Press.
- Kendon, A. (1992). The negotiation of context in face-to-face interaction. In A. Duranti, & C. Goodwin (Eds.), Rethinking context: Language as an interactive phenomenon (pp. 323–334). Cambridge University Press.
- Knoeferle, P. (2015). Language comprehension in rich non-linguistic contexts: Combining eye tracking and event-related brain potentials. In R. M. Willems (Ed.), Cognitive neuroscience of natural language use (pp. 77–100). Cambridge University Press.
- Knoeferle, P., & Crocker, M. W. (2006). The coordinated interplay of scene, utterance, and world knowledge: Evidence from eye tracking. Cognitive Science, 30(3), 481–529. https://doi.org/10.1207/s15516709cog0000_65
- Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519–543. https://doi.org/10.1016/j.jml.2007.01.003
- Kochari, A. R., & Flecken, M. (2018). Lexical prediction in language comprehension: A replication study of grammatical gender effects in Dutch. PsyArXiv. https://doi.org/10.17605/OSF.IO/9NPUE
- Kuperberg, G. R. (2007). Neural mechanisms of language comprehension: Challenges to syntax. Brain Research, 1146(1), 23–49. https://doi.org/10.1016/j.brainres.2006.12.063
- Kuperberg, G. R., & Jaeger, T. F. (2017). What do we mean by prediction in language comprehension? Language, Cognition and Neuroscience, 31(1), 32–59. https://doi.org/10.1080/23273798.2015.1102299
- Kutas, M., DeLong, K. A., & Smith, N. J. (2011). A look around at what lies ahead: Prediction and predictability in language processing. In M. Bar (Ed.), Predictions in the brain: Using our past to generate a future (pp. 190–207). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195395518.003.0065
- Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390(6657), 279–281. https://doi.org/10.1038/36846
- Matin, E., Shao, K. C., & Boff, K. R. (1993). Saccadic overhead: Information-processing time with and without saccades. Perception & Psychophysics, 53(4), 372–380. https://doi.org/10.3758/BF03206780
- Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G. A., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., … Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7, e33468. https://doi.org/10.7554/eLife.33468
- Pan, X., & Hamilton, A. (2018). Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. British Journal of Psychology, 109(3), 395–417. https://doi.org/10.1111/bjop.12290
- Parsons, T. D. (2015). Virtual reality for enhanced ecological validity and experimental control in the clinical, affective and social neurosciences. Frontiers in Human Neuroscience, 9. https://doi.org/10.3389/fnhum.2015.00660
- Peeters, D. (2018). A standardized set of 3-D objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047–1054. https://doi.org/10.3758/s13428-017-0925-3
- Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26, 894–900. https://doi.org/10.3758/s13423-019-01571-3
- Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035–1061. https://doi.org/10.1017/S1366728917000396
- Peeters, D., Hagoort, P., & Özyürek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64–84. https://doi.org/10.1016/j.cognition.2014.10.010
- Pickering, M. J., & Gambi, C. (2018). Predicting while comprehending language: A theory and review. Psychological Bulletin, 144(10), 1002–1044. https://doi.org/10.1037/bul0000158
- Pickering, M. J., & Garrod, S. (2007). Do people use language production to make predictions during comprehension? Trends in Cognitive Sciences, 11(3), 105–110. https://doi.org/10.1016/j.tics.2006.12.002
- Pickering, M. J., & Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36(4), 329–392. https://doi.org/10.1017/S0140525X12001495
- Porretta, V., Kyröläinen, A., van Rij, J., & Järvikivi, J. (2017). Visual world paradigm data: From preprocessing to nonlinear time-course analysis. International Conference on Intelligent Decision Technologies, 268–277. https://doi.org/10.1007/978-3-319-92028-3
- Pylyshyn, Z. (1989). The role of location indexes in spatial perception: A sketch of the FINST spatial-index model. Cognition, 32(1), 65–97. https://doi.org/10.1016/0010-0277(89)90014-0
- R Development Core Team. (2011). R: A language and environment for statistical computing. R Foundation for Statistical Computing.
- Salthouse, T. A., McGuthry, K. E., & Hambrick, D. Z. (1999). A framework for analyzing and interpreting differential aging patterns: Application to three measures of implicit learning. Aging, Neuropsychology, and Cognition, 6(1), 1–18. https://doi.org/10.1076/anec.6.1.1.789
- Scheflen, A. E., & Ashcraft, N. (1976). Human territories: How we behave in space-time. Prentice-Hall.
- Sorensen, D. W., & Bailey, K. G. D. (2007). The world is too much: Effects of array size on the link between language comprehension and eye movements. Visual Cognition, 15(1), 112–115. https://doi.org/10.1080/13506280600975486
- Staub, A., Abbott, M., & Bogartz, R. S. (2012). Linguistically guided anticipatory eye movements in scene viewing. Visual Cognition, 20(8), 922–946. https://doi.org/10.1080/13506285.2012.715599
- Staudte, M., Crocker, M. W., Heloir, A., & Kipp, M. (2014). The influence of speaker gaze on listener comprehension: Contrasting visual versus intentional accounts. Cognition, 133(1), 317–328. https://doi.org/10.1016/j.cognition.2014.06.003
- Tomasello, M. (1995). Joint attention as social cognition. In D. Moore, & P. J. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 103–130). Lawrence Erlbaum Associates.
- Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of virtual reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862–869. https://doi.org/10.3758/s13428-017-0911-9
- van den Brink, D., Brown, C. M., & Hagoort, P. (2000). Electrophysiological evidence for early contextual influences during spoken-word recognition: N200 versus N400 effects. Journal of Cognitive Neuroscience, 13(7), 967–985.
- van Rij, J. (2015, March). Overview GAMM analysis of time series data. https://jacolienvanrij.com/Tutorials/GAMM.html
- van Rij, J., Wieling, M., Baayen, R., & van Rijn, H. (2017). itsadug: Interpreting Time Series and Autocorrelated Data Using GAMMs (R package version 2.3).
- Vogel, E. K., & Machizawa, M. G. (2004). Neural activity predicts individual differences in visual working memory capacity. Nature, 428(6984), 748–751. https://doi.org/10.1038/nature02447
- Vogel, E. K., Woodman, G. F., & Luck, S. J. (2006). The time course of consolidation in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 32(6), 1436–1451. https://doi.org/10.1037/0096-1523.32.6.1436
- Willems, R. M. (2015). Cognitive neuroscience of natural language use. Cambridge University Press.
- Wolpert, D. M., & Flanagan, J. R. (2001). Motor prediction. Current Biology, 11(18), R729–R732. https://doi.org/10.1016/S0960-9822(01)00432-8
- Wood, S. (2017). mgcv: Mixed GAM computation vehicle with automatic smoothness estimation (R package).
- Yurovsky, D., Case, S., & Frank, M. C. (2017). Preschoolers flexibly adapt to linguistic input in a noisy channel. Psychological Science, 28(1), 132–140. https://doi.org/10.1177/0956797616668557