
Size Does Matter: Implied Object Size is Mentally Simulated During Language Comprehension


Abstract

Embodied theories of language comprehension propose that readers construct a mental simulation of described objects that contains perceptual characteristics of their real-world referents. The present study is the first to investigate directly whether implied object size is mentally simulated during sentence comprehension and to study the potential influence of developmental factors on mental simulation by comparing adults' and children's mental simulation processing. Participants performed a sentence-picture verification task in which they read a sentence that implied a large or a small size for an object and then saw a picture of the object that matched or mismatched the implied size. Responses to pictures were faster when implied size and pictured size matched, suggesting that readers activated perceptual information on object size during sentence comprehension. The magnitude of the match effect was equal across age groups. The results contribute to refining and advancing knowledge with respect to the nature of mental simulations.

Introduction

Understanding written (and spoken) language not only involves comprehension of individual words and the propositional structure of text, it also requires the construction of a rich, coherent, visuospatial mental representation of the situation described in the text (e.g., de Koning & van der Schoot, 2013; Kintsch & van Dijk, 1978; Zwaan & Radvansky, 1998). According to embodied theories of language comprehension (Barsalou, 1999; Fischer & Zwaan, 2008; Glenberg, 1997), these mental representations, or situation models, are (partially) formed by perceptual symbols, which are directly derived from real-world perceptual and motor experiences (Barsalou, 1999; Zwaan, 1999). On this account, understanding sentences involves a mental simulation of described events by reactivating and integrating traces of earlier experiences from multiple perceptual and motor modalities in the brain that were recruited when the actual experience was acquired (Barsalou, 1999).

A growing body of research suggests that readers indeed activate sensorimotor information during language processing and that this information facilitates understanding (for overviews, see Barsalou, 2008; Fischer & Zwaan, 2008). More precisely, it has been shown that during sentence comprehension readers mentally simulate perceptual information of objects, like an object's shape, and of events, like the described direction of motion in a scene, even when these perceptual properties are not explicitly mentioned in the text but are only implied in a sentence (e.g., Kaschak et al., 2005; Zwaan, Stanfield, & Yaxley, 2002). For example, Zwaan et al. (2002), using a sentence-picture verification task, asked participants to read a sentence that implied a particular object shape (e.g., "The ranger saw the eagle in the nest") and subsequently presented a picture of the described object (e.g., eagle) that matched (e.g., perched eagle) or mismatched (e.g., flying eagle) the shape implied in the sentence. Readers were faster to verify that an eagle had been mentioned in the sentence when the picture depicted a perched rather than a flying eagle, suggesting that readers changed their mental representation of the eagle depending on the context in which it was described.

Similar findings have been obtained for sentences implying orientation (Stanfield & Zwaan, 2001), visibility (Yaxley & Zwaan, 2007), color (Zwaan & Pecher, 2012), number (Patson, George, & Warren, 2014), and distance (Vukovic & Williams, 2014) of described objects. Zwaan and Pecher (2012) showed that prior findings on the perceptual dimensions shape and orientation obtained with the sentence-picture verification task could be replicated in a more heterogeneous population (i.e., not only involving psychology undergraduates) and in a less controlled environment than the laboratory. Together, the aforementioned studies indicate that the match effect is robust and replicable. Investigating the breadth of the match effect found with the sentence-picture verification task is important to refine and advance the embodied cognition perspective and to create a stable body of findings with respect to mental simulation and the potential factors influencing these processes.

The present study adds to and extends previous research on the sentence-picture verification task in two ways. First, we focus on a visual object property that has, to our knowledge, not received systematic attention from researchers within this paradigm: object size. That is, the present study examined whether implied perceptual information on object size is mentally simulated during sentence comprehension. Second, in addition to examining mental simulation in adults, our study also addresses children's mental simulation processing. This aligns with the need to expand the research on mental simulation in children (Wellsby & Pexman, 2014) and enables us to make a direct comparison between mental simulation processes in children and adult readers within a single study to investigate potential developmental differences.

Simulating size information

To our knowledge, few studies have focused on the activation of conceptual representations of object size in relation to language, but research suggests that object size is central to various cognitive functions like object recognition, implicit memory, conceptual processing, and perception-action coordination (e.g., Barsalou, 2008; Biederman & Cooper, 1992). For example, in grasping tasks reach-to-grasp movements are influenced by the size of visually presented objects (Taylor & Zwaan, 2010), the size inferred from nouns participants read (i.e., apple versus grape; Glover, Rosenbaum, Graham, & Dixon, 2004), and adjectives (i.e., large, small) printed on to-be-grasped objects (e.g., Gentilucci & Gangitano, 1998). Also, in size judgment tasks requiring an explicit comparison of size, such as when having to choose the larger animal from two animal names printed in the same (e.g., lion ant) or different (e.g., LION ant) fonts (Rubinstein & Henik, 2002), nouns seem to automatically activate information on object size.

Similar findings have been observed in a same-different category judgment task in which the primary task (i.e., deciding whether two animals belong to the same category) was unrelated to object size (Setti, Caramelli, & Borghi, 2009). That is, a noun referring to a large or a small object was assigned to the appropriate category faster when it was preceded by a noun referring to a same-size object (like elephant-giraffe). Similarly, perceptual information on object size influences performance in property verification tasks: It takes longer to verify that a property (e.g., mane) belongs to a category (e.g., horse) when the property is larger (Solomon & Barsalou, 2004). These findings are supported by research showing that brain areas that represent the size of objects during perception and action also become active to represent size conceptually (Kan, Barsalou, Solomon, Minor, & Thompson-Schill, 2003). Moreover, they are consistent with neuroimaging findings suggesting that the brain is equipped with neurons sensitive to the processing of size information (Murata, Gallese, Luppino, Kaseda, & Sakata, 2000).

In accordance with these studies, research applying the sentence-picture verification paradigm to investigate the mental representation of object size in reading comprehension seems to suggest that object size plays a role in mental simulation (Vukovic & Williams, 2014; Winter & Bergen, 2012). For example, Winter and Bergen (2012) concluded that reading sentences describing objects at different distances from the protagonist ("You are looking at the milk bottle in the fridge/across the supermarket") modulated the size with which the object was visually simulated. After reading about the milk bottle in the fridge, participants were faster to verify that the largely depicted milk bottle (implying closeness) had been mentioned in the sentence than the smaller version of the milk bottle (implying distance). Vukovic and Williams (2014) confirmed these findings and even showed that distance was simulated automatically and routinely. It is, however, important to note that in these studies object size was not manipulated directly but rather indirectly through the manipulation of distance. Distance between the object and the observer was implied by changing the size of the object pictured on the computer screen. A smaller picture on a computer screen equals a smaller retinal size (i.e., the absolute size of the object on the retina). In reality, the retinal size of an object decreases as the distance between the object and the observer increases, meaning that a smaller retinal image implies a larger distance. Although these results seem to suggest that implied object size is mentally simulated during sentence processing, stronger evidence for this suggestion would be obtained by manipulating object size directly (Milliken & Jolicoeur, 1992).

The present experiment was specifically designed to investigate this. As previous studies have shown (Milliken & Jolicoeur, 1992; Winter & Bergen, 2012), however, it is difficult to manipulate object size directly in the same way as other visual object properties. Object size cannot be perceived objectively from a picture without context cues for distance or other referential cues. Therefore, in the present study the objects of interest were depicted together with a familiar object (i.e., a table) whose size is known and relatively constant. That is, all objects were presented on a table, which served as a stable referential cue, so that perceived size was manipulated independently of the distance between the observer and the object. If the suggestion from prior research (Vukovic & Williams, 2014; Winter & Bergen, 2012) that object size is mentally simulated during sentence comprehension is correct, we would expect a match effect on the sentence-picture verification task when implied object size is directly manipulated.

Adults versus children

To date, research on mental simulation during sentence processing has mainly focused on adult cognition (Wellsby & Pexman, 2014). To establish a link with this prior work, our study included adult participants. Additionally, we included a sample of children to expand the research on mental simulation in a relatively unexplored direction. That is, evidence on children's mental simulations is limited to a study by Engelen et al. (2011) showing that children from grades 2 through 4 are able to mentally simulate objects' implied shape and orientation while reading or listening to sentences in a sentence-picture verification task. So far, no studies have been conducted to corroborate or extend this finding, nor have developmental aspects of mental simulation been explored further into adulthood. A second aim of the present study was therefore to gain insight into both adults' and children's mental simulation processing and to directly compare their performance on the sentence-picture verification task. Treating participants' age (adults vs. children) as a between-subjects factor within the same study ensures that any obtained difference does not arise from differences in procedure, design, or materials but rather reflects developmental differences.

In principle, there are three possible outcomes. First, if adults outperform children on a mental simulation task, this would suggest that general knowledge and cognitive skills such as domain expertise and processing efficiency are fundamental to mental simulation. Research indicates that experience with relevant objects and events influences the perceptual-motor representations a person constructs (Holt & Beilock, 2006). It is therefore likely that adults possess a network of perceptual representations that is richer than that of children. Furthermore, language processing becomes more efficient with age (e.g., Chiappe, Hasher, & Siegel, 2000; Kail & Hall, 1994), which presumably leaves adults with more available cognitive resources for mental simulation processes during reading than children. Based on the assumption that children have relatively inefficient processing skills and lack relevant experiences, it would be expected that children construct mental simulations that are less rich than those constructed by adults. This would result in a more prominent match effect for adults than for children.

Alternatively, if children outperform adults on a mental simulation task, this would suggest that perceptual information is more fundamental to children's than to adults' mental simulations during reading. Empirical work on word learning and conceptual knowledge acquisition shows that children need and actively use visual and sensorimotor information to create optimal situations for learning words and understanding concepts (e.g., Yu & Smith, 2012). Their dependence on perceptual information might be further amplified by the reduced role of language in children relative to adults. If children depend more heavily on perceptual information to derive meaning from text, this would be reflected in a stronger match effect for children than for adults.

Finally, if children show a match effect that is comparable in magnitude with that of adults, this would indicate that mental simulation processes are relatively independent of developmental aspects such as those mentioned above. This outcome would be consistent with the previous study by Engelen et al. (2011), in which primary school children's performance on a sentence-picture verification task was compared across grades. In that study no effect of grade on the match effect was found, suggesting that children construct mental simulations in sentence context even when expertise and processing capacity are limited. Although Engelen et al. (2011) adopted a developmental perspective on mental simulation, their comparisons across different ages were restricted to a sample of primary school children from grades 2 through 4. Therefore, information on the role of development in mental simulation beyond this restricted age range is still lacking. Because a vast amount of knowledge has already been obtained with adult participants, the direct comparison between adults and children made in our study provides further insight into the extent to which developmental factors potentially influence mental simulation processes.

Present study

In the present study participants read a short sentence implying either a small or a large object. Unlike previous studies (except for Taylor & Zwaan, 2010) that studied object size by using different objects to represent small and large sizes (e.g., stapler to refer to a small object; saw to refer to a large object) and/or made large-small differences explicit (e.g., Gentilucci & Gangitano, 1998; Kan et al., 2003), our experimental sentences did not mention object size explicitly and described a single object in a small or a large context. A picture followed each sentence, and participants indicated whether the depicted object was mentioned in the sentence. The picture always showed a table of fixed size upon which a large or a small instance of an object was presented, so that the pictured object size either matched or mismatched the object's size implied in the sentence (Figure 1). Importantly, this reference point enabled participants to "read off" an object's size directly, allowing us to study object size independently of distance (Milliken & Jolicoeur, 1992).

Figure 1 Samples of experimental sentence-picture pairs used in Experiments 1 and 2. Note that sentences were presented to participants in Dutch (translation in parentheses). Pictures were presented full-screen, so that even small instances of an object could be accurately perceived.

Methods

Participants

The adult group consisted of 38 students (37 women) enrolled in educational sciences courses who participated for course credit. Their mean age was 20.68 years (SD = 2.74). The group of children consisted of 150 children (72 girls) from grade 4 (n = 72) and grade 6 (n = 78) at four primary schools in a large urban area in the Netherlands. Ages ranged from 8.44 to 11.26 years (M = 9.87, SD = .53) in grade 4 and from 10.98 to 13.03 years (M = 11.79, SD = .35) in grade 6. All children had grade-level reading skills.

Materials

Eighty-four Dutch sentences were constructed: 28 experimental sentence pairs (56 sentences) and 28 filler sentences (see Appendix). Each sentence of an experimental sentence pair implied a large or a small size of the same object; the two sentences differed only in the last or middle noun and, in a few cases, in the preposition (Figure 1). Any references to the size of the described object, such as the use of diminutives, were avoided. Filler sentences all mentioned at least one concrete noun that referred to a small (e.g., mug) or a large (e.g., chair) object. Each sentence of an experimental sentence pair was accompanied by a color picture, drawn by a professional artist for this experiment, depicting the critical object described in the sentence. Because, unlike visual object properties such as shape and orientation, an object's size can only be determined by relating the object to surrounding objects or reference criteria, the picture also showed a table on which the critical object was centrally presented (Figure 1). In all pictures the same table, with a fixed width and height, served as a reference point against which the size of the critical object could be determined. The picture depicted a small or a large version of the same object on the table, so that it matched the object's implied size in one of the sentences and mismatched it in the other. Except for the size of the critical object, the matching and mismatching pictures were identical. Filler sentences were accompanied by a picture depicting the same table on which an object was presented that was semantically unrelated to the words in the preceding sentence. To keep the filler items as comparable as possible with the experimental items, the pictured objects in filler items varied in size (e.g., tweezers, ball, vase, chair). Filler pictures served to balance the number of affirmative and negative responses. All pictures occupied an area of approximately 15 × 20 cm on the computer screen.

Design

Four lists of sentence-picture pairs were created. Each item appeared in each list in one of the four possible implied size-pictured size combinations: small-large, large-small, small-small, and large-large. Within the age groups (i.e., children vs. adults), participants were randomly assigned to one of these lists according to a Latin square that counterbalanced items, conditions, and lists, so that each participant was exposed to each condition, whereas each item appeared in only one condition per list. This produced a 2 (implied size: small vs. large) × 2 (pictured size: small vs. large) × 4 (list) × 2 (age group) design, with implied size and pictured size as within-participants variables and list and age group as between-participants variables.
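As an illustration, the Latin-square rotation described above can be sketched as follows. This is a minimal sketch, not the authors' actual implementation: the item count (28) comes from the Materials section, but the condition ordering and data structures are assumptions.

```python
# Hypothetical sketch of Latin-square counterbalancing: each item appears in
# exactly one (implied size, pictured size) condition per list, and rotates
# through all four conditions across the four lists.
CONDITIONS = [
    ("small", "small"),  # match
    ("small", "large"),  # mismatch
    ("large", "small"),  # mismatch
    ("large", "large"),  # match
]

def build_lists(n_items=28, n_lists=4):
    """Rotate the condition assigned to each item by one step per list."""
    lists = []
    for list_idx in range(n_lists):
        assignment = [
            (item, CONDITIONS[(item + list_idx) % len(CONDITIONS)])
            for item in range(n_items)
        ]
        lists.append(assignment)
    return lists

lists = build_lists()
# Across the four lists, every item cycles through all four conditions...
for item in range(28):
    assert len({lst[item][1] for lst in lists}) == 4
# ...and within each list, every condition occurs equally often (28 / 4 = 7).
for lst in lists:
    for cond in CONDITIONS:
        assert sum(1 for _, c in lst if c == cond) == 7
```

Rotating conditions this way is what lets the analysis treat list as a between-participants factor: every list contains all conditions, but no participant ever sees the same item twice.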

Procedure

For adults, testing took place in the lab, and to conform with previous studies with adults (e.g., Zwaan et al., 2002) sentences were read silently. Children were tested on a 15-inch laptop in a quiet room within their school. In line with Engelen et al. (2011), they were instructed to read each sentence aloud. All participants were instructed to decide, as quickly as possible, whether the subsequently pictured object had been mentioned in the sentence. Each trial started with a left-aligned, vertically centered sentence displayed in a black 24-point Courier New font against a white background. Participants pressed the space bar when they had understood the sentence, after which a centrally presented fixation cross appeared for 500 ms (cf. Zwaan & Pecher, 2012), followed by a picture. Participants pressed the "j" key for "yes" responses and the "f" key for "no" responses. Participants began with two practice items, consisting of one related and one unrelated picture. All trials were presented in random order. The experiment took about 15 minutes.

Results

Reaction times (RTs) on correct responses were trimmed by removing all RTs > 3,000 ms and < 300 ms, as well as RTs that were more than 2.5 SDs from the participant's mean in the relevant condition. This resulted in the removal of less than 6% of the data. A 2 (implied size: small vs. large) × 2 (pictured size: small vs. large) × 4 (list) × 2 (age group: adults vs. children) repeated-measures analysis of variance with list and age group as between-participants variables was conducted on the RTs and accuracy. Initially, grade (grade 4 vs. grade 6) was included in the analyses for the children's sample. Because this variable did not yield any significant interactions with the other variables (Fs < 1), it was not considered in further analyses.
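The two-step trimming rule described above can be sketched as follows. This is a hedged illustration with made-up data; in the actual analysis the 2.5-SD criterion was applied per participant within each condition cell.

```python
# Sketch of the reported RT-trimming rule: first drop RTs above 3,000 ms or
# below 300 ms, then drop RTs more than 2.5 SDs from the cell mean.
# The cutoffs come from the Results section; the data are illustrative only.
from statistics import mean, stdev

def trim_rts(rts, lower=300, upper=3000, criterion=2.5):
    # Step 1: absolute cutoffs.
    kept = [rt for rt in rts if lower < rt < upper]
    if len(kept) < 2:
        return kept
    # Step 2: remove outliers relative to the cell mean and SD.
    m, sd = mean(kept), stdev(kept)
    return [rt for rt in kept if abs(rt - m) <= criterion * sd]

# 4200 and 120 fail the absolute cutoffs; 2900 is then a 2.5-SD outlier.
cell = [850, 910, 880, 905, 870, 920, 895, 860, 2900, 4200, 120]
print(trim_rts(cell))  # -> [850, 910, 880, 905, 870, 920, 895, 860]
```

Note that the order of the two steps matters: applying the absolute cutoffs first keeps extreme values (here 4200 and 120) from inflating the mean and SD used in the second step.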

Because we had a counterbalanced design, only subject analyses were conducted (Raaijmakers, Schrijnemakers, & Gremmen, 1999). List was included as a between-participants factor to increase the power of the analysis by eliminating error due to random pairings of item to condition, but given its lack of theoretical relevance, effects for the list variable are not reported (Pollatsek & Well, 1995; Raaijmakers et al., 1999; Wassenburg & Zwaan, 2010).

In Figure 2 the results of both age groups (i.e., adults and children) are shown separately. Overall, there was a significant interaction between implied size and pictured size, F(1, 180) = 18.60, MSE = 514,161, p < .001, ηp² = .09, indicating that participants generally responded faster when the picture matched rather than mismatched the implied size in the sentence. The implied size × pictured size interaction was significant for the adult age group, F(1, 34) = 5.41, MSE = 32,685, p = .026, ηp² = .14, and for the children's age group, F(1, 146) = 21.97, MSE = 26,465, p < .001, ηp² = .13. The strikingly similar pattern of results for adults and children, as shown in Figure 2, was evidenced by a nonsignificant interaction of implied size, pictured size, and age group, F(1, 180) = .04, p = .835, indicating that the match effect did not differ between the two age groups. In fact, additional analyses showed approximately equal magnitudes of the match effect for adults (69 ms; Mmatch = 838, Mmismatch = 907), t(37) = 2.35, p = .021, d = .27, and children (61 ms; Mmatch = 1,251, Mmismatch = 1,312), t(149) = 4.30, p < .001, d = .35. No main effects of implied size, F(1, 180) = .20, p = .659, or pictured size, F(1, 180) = .66, p = .416, were found on the RTs. There was, however, a main effect of age group, F(1, 180) = 52.21, MSE = 19,888,377, p < .001, ηp² = .23, showing that adults responded significantly faster to the pictures than children did.
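The reported match effects follow directly from the condition means: the match effect is simply the mean RT on mismatching trials minus the mean RT on matching trials. Using the means reported above (in ms):

```python
# Match effect = mean mismatch RT - mean match RT.
# The condition means are the values reported in the Results (in ms).
adults = {"match": 838, "mismatch": 907}
children = {"match": 1251, "mismatch": 1312}

def match_effect(means):
    return means["mismatch"] - means["match"]

print(match_effect(adults))    # -> 69 ms, as reported for adults
print(match_effect(children))  # -> 61 ms, as reported for children
```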

Figure 2 Mean response times for each sentence-picture combination for children (top) and adults (bottom). Error bars depict standard errors of the mean.

Similarly, analyses of the participants' accuracy scores showed a significant overall interaction between implied size and pictured size, F(1, 180) = 8.24, MSE = .012, p = .005, ηp² = .04. Although this interaction did not reach significance for adults, F(1, 34) = 2.02, p = .164, it was significant for children, F(1, 146) = 12.41, MSE = .012, p = .001, ηp² = .08. As displayed in Table 1, children's responses were more accurate when sentence and picture matched (.94) than when they mismatched (.91) in object size, t(149) = 3.30, p = .001, d = .27. Finally, there were no significant overall main effects of implied size, pictured size, or age group on response accuracy (all Fs < 1.21, ns).

Table 1 Mean accuracy for each sentence-picture combination for adults and children separately.

In summary, adults and children showed strikingly similar response patterns. These results demonstrate that both adults and children mentally simulate the implied object size when comprehending the sentences as indicated by faster responses to matching pictures. Importantly, the accuracy scores and RTs for the match and mismatch trials together did not show evidence for a speed-accuracy trade-off. That is, faster response times for matching versus mismatching items did not coincide with reduced accuracy on those items.

Discussion

In the present study we obtained evidence that both adults and children construct a perceptual simulation of object size during language comprehension. Specifically, adults' and children's responses in a picture-verification task were faster when the size of a pictured object matched rather than mismatched the size implied in the previously read sentence. These findings are in accordance with earlier results (e.g., Vukovic & Williams, 2014; Winter & Bergen, 2012; Zwaan & Pecher, 2012) and constitute an advance over prior research in two ways. First, we demonstrate that, similar to other visual object properties like shape and orientation, perceptual information on object size is activated during language comprehension when investigated directly (rather than indirectly through variation of distance, e.g., Winter & Bergen, 2012). Second, in a direct comparison we showed that adults and children construct mental simulations in the same way, suggesting that this process does not depend on the reader's age.

The match effect observed in our study was comparable in magnitude with that obtained previously with adults for object shape and distance (Winter & Bergen, 2012; Zwaan et al., 2002; Zwaan & Pecher, 2012). The fact that we obtained a match effect as strong as that reported for the indirect manipulation of object size (i.e., via distance; Winter & Bergen, 2012) indicates that certain object properties are mentally simulated independently of whether or not they are crucial to object recognition and categorization (e.g., Milliken & Jolicoeur, 1992). Interestingly, however, the match effect we found for object size is larger than that reported for orientation and color (Connell, 2007; Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012), suggesting that differences in the magnitude of the match effect arise from the extent to which object properties afford and constrain actions. In other words, it seems that action-relevant object properties (i.e., size, shape) are most strongly activated during sentence comprehension. This corroborates Zwaan and Pecher's (2012) suggestion that the extent to which specific instances of an object (e.g., large book, raw egg) afford different actions (e.g., a large book can be put in the bookcase but not in your pocket; a raw egg can be broken but not eaten) influences the likelihood that object properties will be simulated. That is, object shapes and sizes possibly constrain interactions more directly and have clearer goal relevance than other visual object properties like orientation, and thus are more strongly activated and more likely to be represented in the mental simulation of a sentence (for an extended discussion, see Zwaan & Pecher, 2012). Whether an explanation in terms of action relevance adequately accounts for differences in the (strength of) activation of visual object properties during language comprehension remains to be examined in future research. Our results provide a starting point from which such endeavors can be further explored.

In the present study the response times were directly compared across age groups. Not surprisingly, children showed slower overall responses than adults. Importantly, the magnitude of the match effect observed in our study appeared equal for adults and children, suggesting that the "object size effect" is a robust phenomenon. Apparently, perceptual information is equally important to adults and children for deriving meaning from text, at least where information about object size is concerned. This finding supports the emerging research showing that children are capable of mental simulation when reading sentences (Engelen et al., 2011). By focusing on implied object size, the current study extends prior work by showing that this capability is not restricted to commonly investigated perceptual dimensions. Moreover, the strikingly similar pattern of findings for adults and children broadens our insight into the developmental aspects of mental simulation. More specifically, our findings suggest that the match effect is similar not only for readers who are relatively close in age, such as when making comparisons across primary school children (Engelen et al., 2011), but also for readers who are relatively far apart in age, as when comparing adults with children. This supports the idea that constructing mental simulations during sentence processing is presumably independent of developmental factors. Future research is needed to further substantiate this claim.

Overall, our study confirms and extends a growing body of evidence suggesting that sentence comprehension involves the construction of a mental simulation consisting of activated perceptual properties of described objects (e.g., Zwaan et al., 2002). Such an interpretation is consistent with views assigning nonlinguistic (multisensory) representations, in addition to linguistic representations, an important role in the language comprehension process (e.g., Zwaan, 2003). This is assumed, for example, by theories of embodied language comprehension, which state that readers derive meaning from text through (partial) reactivation and integration of neural traces of previous perceptual experiences (e.g., Barsalou, 1999).

An important concern with this type of research is the external validity of the results. Some might argue that readers are encouraged to mentally simulate visual object properties because pictures are involved. It is, however, not likely that the reported results are due to task demands, for several reasons (for a comparable discussion, see Winter & Bergen, 2012). First, several studies have reported comparable match effects in tasks that do not involve a comparison between pictures and language (e.g., Pecher, van Dantzig, Zwaan, & Zeelenberg, 2009; Wassenburg & Zwaan, 2010). Second, actively generating images as a response strategy would not improve overall task performance, because in most sentences more than one object was mentioned and half of the time pictures mismatched the preceding sentence (for a more extensive discussion of a similar argument, see Stanfield & Zwaan, 2001). Finally, such a response strategy cannot account for the reported results, which indicate that readers construct integrated images of the sentences (see also Stanfield & Zwaan, 2001). Constructing an integrated image of the sentence as a response strategy would not make much sense, because the size of the depicted object mismatches that of the integrated image in half of the experimental trials. For these reasons the reported results are most likely not due to task demands or response strategies. It seems more likely that readers routinely simulate perceptual object properties during sentence processing.

In conclusion, we provide the first direct evidence that implied perceptual information on object size is mentally simulated during language comprehension in a sentence-picture verification task. We thereby extend research on the mental simulation of perceptual properties during language comprehension (e.g., Zwaan et al., 2002) as well as studies investigating the representation of size information in other domains, such as motor action (e.g., Glover et al., 2004). Studying adults and children simultaneously with the same task offers an additional advantage over prior research: it allowed us to assess more directly the robustness of the observed effect and the potential influence of developmental factors. Importantly, our study also carries research on the mental simulation of visual object properties a step further and opens up new directions for future studies. Specifically, it makes clear that it is important to further elucidate the (shared) factors that cause some visual object properties, like size and shape, to be more strongly activated during language comprehension than others, like orientation. Furthermore, the present study refines and advances the embodied cognition perspective and contributes to a stable body of findings with respect to mental simulation and the factors influencing it. We hope that our study serves as a useful impetus for future research on the mental simulation of described objects and situations during language comprehension.

References

  • Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
  • Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
  • Biederman, I., & Cooper, E. E. (1992). Size invariance in visual object priming. Journal of Experimental Psychology: Human Perception and Performance, 18, 121–133.
  • Chiappe, P., Siegel, L. S., & Hasher, L. (2000). Working memory, inhibitory control, and reading disability. Memory & Cognition, 28, 8–17.
  • Connell, L. (2007). Representing object colour in language comprehension. Cognition, 102, 476–485.
  • de Koning, B. B., & van der Schoot, M. (2013). Becoming part of the story! Refueling the interest in visualization strategies for reading comprehension. Educational Psychology Review, 25, 261–287.
  • Engelen, J. A. A., Bouwmeester, S., de Bruin, A. B. H., & Zwaan, R. A. (2011). Perceptual simulation in developing language comprehension. Journal of Experimental Child Psychology, 110, 659–675.
  • Fischer, M. H., & Zwaan, R. A. (2008). The role of the motor system in language comprehension. Quarterly Journal of Experimental Psychology, 61, 825–850.
  • Gentilucci, M., & Gangitano, M. (1998). Influence of automatic word reading on motor control. European Journal of Neuroscience, 10, 752–756.
  • Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1–55.
  • Glover, S., Rosenbaum, D. A., Graham, J., & Dixon, P. (2004). Grasping the meaning of words. Experimental Brain Research, 154, 103–108.
  • Holt, L. E., & Beilock, S. L. (2006). Expertise and its embodiment: Examining the impact of sensorimotor skill expertise on the representation of action-related text. Psychonomic Bulletin & Review, 13, 694–701.
  • Kail, R., & Hall, L. K. (1994). Processing speed, naming speed, and reading. Developmental Psychology, 30, 949.
  • Kan, I. P., Barsalou, L. W., Solomon, K. O., Minor, J. K., & Thompson-Schill, S. L. (2003). Role of mental imagery in a property verification task: fMRI evidence for perceptual representations of conceptual knowledge. Cognitive Neuropsychology, 20, 525–540.
  • Kaschak, M. P., Madden, C. J., Therriault, D. J., Yaxley, R. J., Aveyard, M., Blanchard, A. A., & Zwaan, R. A. (2005). Perception of motion affects language processing. Cognition, 94, 79–89.
  • Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363–394.
  • Milliken, B., & Jolicoeur, P. (1992). Size effects in visual recognition memory are determined by perceived size. Memory & Cognition, 20, 83–95.
  • Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. Journal of Neurophysiology, 83, 2580–2601.
  • Patson, N. D., George, G., & Warren, T. (2014). The conceptual representation of number. Quarterly Journal of Experimental Psychology, 67, 1349–1365.
  • Pecher, D., van Dantzig, S., Zwaan, R. A., & Zeelenberg, R. (2009). Language comprehenders retain implied shape and orientation of objects. Quarterly Journal of Experimental Psychology, 62, 1108–1114.
  • Pollatsek, A., & Well, A. D. (1995). On the use of counterbalanced designs in cognitive research: A suggestion for a better and more powerful analysis. Journal of Experimental Psychology: Learning, Memory, & Cognition, 21, 785–794.
  • Raaijmakers, J. G. W., Schrijnemakers, J. M. C., & Gremmen, F. (1999). How to deal with “The language-as-fixed-effect fallacy”: Common misconceptions and alternative solutions. Journal of Memory and Language, 41, 416–426.
  • Rubinstein, O., & Henik, A. (2002). Is an ant larger than a lion? Acta Psychologica, 111, 141–154.
  • Setti, A., Caramelli, N., & Borghi, A. M. (2009). Conceptual information about size of objects in nouns. European Journal of Cognitive Psychology, 21, 1022–1044.
  • Solomon, K. O., & Barsalou, L. W. (2004). Perceptual simulation in property verification. Memory & Cognition, 32, 244–259.
  • Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153–156.
  • Taylor, L. J., & Zwaan, R. A. (2010). Grasping spheres, not planets. Cognition, 115, 39–45.
  • Vukovic, N., & Williams, J. N. (2014). Automatic perceptual simulation of first language meanings during second language sentence processing in bilinguals. Acta Psychologica, 145, 98–103.
  • Wassenburg, S. I., & Zwaan, R. A. (2010). Readers routinely represent implied object rotation: The role of visual experience. Quarterly Journal of Experimental Psychology, 63, 1665–1670.
  • Wellsby, M., & Pexman, P. M. (2014). Developing embodied cognition: insights from children's concepts and language processing. Frontiers in Psychology, 5, 1–10.
  • Winter, B., & Bergen, B. (2012). Language comprehenders represent object distance both visually and auditorily. Language and Cognition, 4, 1–16.
  • Yaxley, R. H., & Zwaan, R. A. (2007). Simulating visibility during language comprehension. Cognition, 105, 229–236.
  • Yu, C., & Smith, L. B. (2012). Embodied attention and word learning in toddlers. Cognition, 125, 244–262.
  • Zwaan, R. A. (1999). Embodied cognition, perceptual symbols, and situation models. Discourse Processes, 28, 81–88.
  • Zwaan, R. A. (2003). The immersed experiencer: Toward an embodied theory of language comprehension. Psychology of Learning and Motivation, 44, 35–62.
  • Zwaan, R. A., & Pecher, D. (2012). Revisiting mental simulation in language comprehension: Six replication attempts. PLoS One, 7, e51382.
  • Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in language comprehension and memory. Psychological Bulletin, 123, 162–185.
  • Zwaan, R. A., Stanfield, R. A., & Yaxley, R. H. (2002). Language comprehenders mentally represent the shape of objects. Psychological Science, 13, 168–171.

Appendix

List of experimental sentences in Dutch (left) and their English translations (right)