Research Article

Made you look! Temporal and emotional characteristics of attentional shift towards gazed locations

Seema Prasad, Fernando Marmolejo-Ramos & Ramesh Kumar Mishra
Article: 1115614 | Received 23 Jun 2015, Accepted 29 Oct 2015, Published online: 27 Nov 2015

Abstract

Studies using the gaze-cueing paradigm show that attention is reflexively shifted to the gazed-at location. However, there is disagreement as to the factors modulating attention orienting due to gaze cueing. In a series of three experiments, we investigated the role of the emotional expression of the cue (Experiments 1, 2 and 3), cue-target stimulus onset asynchrony (SOA) (Experiments 2 and 3) and the emotional valence of the target (Experiment 3) in the participants’ ability to attend to the target. Experiments 1 and 3 employed discrimination tasks: participants had to differentiate between two neutral targets in Experiment 1 and between two emotionally laden targets (a “square” and a “circle” associated with positive or negative emotions) in Experiment 3. In Experiment 2, participants had to detect a single target presented at different time intervals. The results suggest that attention is oriented towards gazed-at locations regardless of the accompanying emotional expression, the SOA and the emotion-target association. Thus, eye gaze-mediated attention shifts in normal healthy adults seem to be unaffected by the experimental manipulations studied herein.

Public Interest Statement

Can a person use the emotion on their face to influence another person’s focus of attention? Some researchers say “yes”. Others find that “gaze cueing” is possible only when someone is susceptible to a specific emotional trigger (e.g. an anxious person is more likely to follow the gaze of a “fearful” face). In three experiments, we manipulated: (1) the emotion expressed on the gazing face; (2) the time lag between the gazing face attracting and then shifting another’s focus of attention; (3) the real or imagined emotional power of the item to be gazed at. Comprehensive data analysis indicates that our results replicate previous findings regarding the null effect of emotional faces. That is, people follow gaze cueing regardless of: (1) the emotion accompanying the gaze; (2) the time lag; (3) the target item’s emotional value.

Competing interests

The authors declare no competing interest.

1. Introduction

Gaze following is a fundamental cognitive skill. We follow the gaze of other agents naturally and also expect to find objects compatible with the emotions expressed by the gazing face. Following fearful faces may lead to early discovery of potentially dangerous stimuli, e.g. a snake. Similarly, following a happy face may lead to a pleasant object and enhance social cohesion. Faces have evolutionary significance for visual search (Purcell & Stewart, 1986, 1988) and emotional appraisal (Eastwood, Smilek, & Merikle, 2001; Fox et al., 2000), and their emotional content influences how they engage attention. While gaze cues may shift attention automatically, emotions play a critical role in this process: we follow gaze cues because we expect to find objects in the environment whose valence is congruent with the emotion of the gazing face. The emotional content of gazing faces influences processing, since the valence of a face is extracted within a brief glance (Calvo & Esteves, 2005; Streit et al., 2003). Identification of facial emotions seems to be an automatic and unconscious process (see Mumenthaler & Sander, 2015; Palermo & Rhodes, 2007 for a review). However, faces displaying distinct emotions are detected at different latencies (Batty & Taylor, 2003; Liu, Ioannides, & Streit, 1999). While much research has shown how one detects the emotion of a face, there is no agreement as to how the emotional content of faces influences attention orienting.

In this study, we examined how the eye gaze of schematic emotional faces influenced attention orienting during visual target detection and discrimination tasks. We also explored whether targets presented as “good” or “bad” receive preferential attentional allocation when cued by a matching emotional face. We presented faces centrally to measure the influence of gaze on target identification and discrimination in a spatial cueing task (Posner, 1978). Observers automatically shift their gaze when the face they are looking at shifts its eyes in a given direction (Driver et al., 1999; Friesen & Kingstone, 1998; Friesen, Ristic, & Kingstone, 2004). Friesen et al. (2004) suggest that this form of attention orienting may reflect an attentional mechanism distinct from the traditional endogenous and exogenous forms. This attention shift in the gazed-at direction is reflexive and automatic (Tipples, 2002). Gaze following thus has both a social and a cognitive basis. Even infants and children follow gazes during early stages of development to look at and approach objects in the environment (Klinnert, Campos, Sorce, Emde, & Svejda, 1983). Centrally presented gaze cues have been shown to shift attention automatically (Friesen & Kingstone, 1998): as soon as one looks at a pair of gazing eyes, one starts to look in the direction of the gaze. This attention orienting behaviour is presumably linked to our intense familiarity with faces and gazes. However, whether the emotional expression of gazing faces influences attention orienting has been much debated, with contradictory results.

Several researchers have used the spatial cueing paradigm to investigate whether the distinct emotional content of faces leads to different patterns of attention orienting (Bayliss, Frischen, Fenske, & Tipper, 2007; Hietanen & Leppänen, 2003; Mathews, Fox, Yiend, & Calder, 2003; Tipples, 2006). This paradigm is an offshoot of the Posner spatial cueing paradigm: a face is presented centrally and then shifts its gaze in some direction. In some trials, targets appear at the gazed-at location; in others, they do not. Just like centrally presented arrows, centrally presented gaze cues lead to greater facilitation of target processing when targets appear at the gazed-at location. Stimulus onset asynchrony (SOA) appears to be an important variable in this paradigm, as studies have shown differing results in gaze cueing tasks as a function of SOA (Tipples, 2006). Further, Tipples (2002) found that eye gaze and other symbolic cues such as arrows behave similarly in triggering automatic orienting to targets. Friesen et al. (2004) showed that directional arrows may lead to reflexive shifts of attention when they are non-predictive, but not when they are counterpredictive, whereas counterpredictive gaze cues can still orient attention automatically. The authors therefore suggest that gaze cues may be more strongly reflexive than arrow cues; in other words, participants immediately orient attention towards the location indicated by the gaze shift. Most researchers have obtained a cue validity effect with centrally presented gaze cues in spatial-orienting tasks. Yet it is less clear whether the emotional content of gazing faces directly influences the focus of attention.

An important question in this regard is whether the emotional content of the gazing face influences the gaze-cued shift of attention. Considerable evidence shows that the human attentional system is sensitive to different facial emotions (Fox et al., 2000; Hansen & Hansen, 1988). However, few studies have examined how the eye gaze of emotional faces affects attentional mechanisms. Facial emotion and gaze direction appear to be processed together (Ganel, Valyear, Goshen-Gottstein, & Goodale, 2005). Furthermore, it has been shown that participants may be faster in orienting to “fearful” faces compared to “happy” faces (Tipples, 2006). This suggests a threat superiority effect, manifested as an urge to look around for “bad” things when cued by a “fearful” face. While this argument predicts that observers should show selective attention orienting to different emotional faces (i.e. faster for fearful faces), the results are not conclusive across studies. It is quite possible that such discrepancies result from differences in stimuli and other aspects of the experimental designs. For instance, the type of task (i.e. detection vs. discrimination) and the SOA can both play a role in eliciting the effect of the emotional content of faces on orienting (see Frischen, Bayliss, & Tipper, 2007 for a review).

In detection tasks, there is a single target to which participants respond. In discrimination tasks, participants are asked to distinguish between two possible targets and make separate responses. Discrimination tasks are more attention demanding, as detecting a signal is easier for the visual system than discriminating between signals that share similar features (Sagi & Julesz, 1985). In a recent series of experiments on the effect of masked and unmasked neutral faces on the gaze cueing effect, the effects for unmasked cues were found to hold across discrimination, detection and localization tasks (Al-Janabi & Finkbeiner, 2014). However, for emotional gazing faces in particular, it is not known whether the type of task (discrimination vs. detection) determines whether their expression influences attention.

What types of emotions influence gaze-induced attention shifts? This is an important question, since faces convey both emotion and gaze. When we look at a gazing face, we also automatically process its emotion(s); it is therefore likely that facial emotion and gaze have a combined influence on attention orienting. Tipples (2006) presented happy, fearful and neutral gazing faces followed by targets after either a short (300 ms) or a longer (700 ms) SOA. The results showed a cue validity effect in target detection after both short and longer SOAs. However, participants were faster in orienting to fearful faces than to happy faces. It was also observed that individuals high in fearfulness showed a greater orienting effect to fearful faces at the short SOA. This pattern of results suggests that gaze-related orienting is influenced by emotions, by cue-target intervals, and by the interactions between them. Further, these effects are modulated by factors related to the psychological profiles of participants.

Happy faces have also been shown to influence attention orienting (Hori et al., 2005; Putman, Hermans, & van Honk, 2006). Why would happy faces modulate attention if they do not activate any sense of alertness? It has been suggested that happy faces have a processing advantage compared to other emotions: they are recognized faster and earlier (Kirita & Endo, 1995). Adams and Kleck (2003) observed that happy faces, like other emotional expressions, influence attention orienting. Just as one expects disturbing stimuli following a fearful gaze, one expects pleasant stimuli following a happy face. It is, of course, debatable and difficult to establish which type of expression has been evolutionarily more useful as far as attention orienting in the environment is concerned. Smiling and happy faces of mothers play an important role in the emotional development of an infant (e.g. Tronick, 1989). Therefore, the appreciation of happy faces could trigger automatic attention orienting much as sad or fearful faces do.

Studies on this issue have not consistently observed an influence of the emotional expression of a gazing face on attention orienting. Hietanen and Leppänen (2003) did not find any evidence of a distinct influence of facial expression on attention orienting beyond a validity effect, in that targets at cued locations were responded to faster than targets at uncued locations. In this study, the authors presented fearful, angry, happy and neutral faces, both as schematic drawings and as real pictures, in a spatial cueing task. There was an influence of the gaze cue at a very short SOA (i.e. 14 ms) but no influence of the corresponding emotion on the cueing effect. Additionally, the authors observed that the gaze cueing effect was stronger for schematic faces than for real faces. Even though the authors used SOAs ranging from 14 to 600 ms across several experiments, there was no measurable influence of emotion on the cueing effect; it is therefore probably not the case that longer SOAs simply allow the emotional effect to develop. (In contrast, Tipples (2006) observed an effect of emotion at both 300 and 700 ms.) Hietanen and Leppänen (2003) thus suggest that orienting attention to facial gaze cues could be independent of the emotional analysis of such faces. Further, the authors speculate that the emotional content of faces may influence the “shift” and “disengage” components of attention differently, and that those influences may not be readily visible in reaction times (RT). It is also possible that the influence of emotional faces is more prominent not in attention orienting but in the social appraisal of stimuli. These proposals suggest that the traditional spatial cueing paradigm, with traditional variables such as speed and accuracy to targets, may not be ideal for investigating emotional effects on attention orienting. Nevertheless, many other studies have used the spatial cueing paradigm and have obtained effects that do suggest an influence of emotional faces on attention orienting. However, methodological differences between studies cannot be ruled out as major factors influencing the results and their interpretations.

For instance, enhanced orienting to fearful faces has only been observed in highly anxious participants (Mathews et al., 2003). Others have proposed that looking at an emotional face induces different types of expectancy related to particular objects. Therefore, one may see an influence of emotional faces on gaze shift if emotion-congruent objects are to be found in the gazed-at direction. It is one thing to expect faster or slower RT as a result of attending to emotional faces and another to expect a “good” or “bad” object as a result of such emotional appraisal: for instance, expecting a “snake” following a fearful face as opposed to a “baby” following a happy face (Bayliss, Schuch, & Tipper, 2010). Some evidence suggests that the affective context of the target plays a role in gaze cueing by emotional faces (Bayliss et al., 2010; Pecchinenda, Pes, Ferlazzo, & Zoccolotti, 2008). Participants, therefore, may not just orient differently to emotional gaze cues; they may also “expect” to find targets matching the valence of such cues. For example, Bayliss et al. (2007) observed that even if emotional content does not directly influence attention orienting as measured by RTs, participants view objects differently (as “good” or “bad”) depending on the type of emotion. In this study, the authors induced gaze cueing using emotional faces and asked participants to evaluate the attractiveness of the objects gazed at. People liked objects more, and rated them more highly, when the gaze shift was made by a happy face than by a sad face. The authors therefore proposed that regular RT measures may not reveal the combined effect of gaze cueing and emotion in a spatial attention task; rather, the effect of emotion on target evaluation is more subtle. Gaze cues accompanied by facial emotion enhance the subjective appraisal of the objects looked at. It is thus possible that faces with different emotions induce unique expectations in participants.

Pecchinenda et al. (2008) argued that gaze cueing is modulated by facial expression only when participants have an evaluative goal towards the targets. These researchers presented happy, disgusted, fearful and neutral faces as cues, with positive and negative words as targets. The authors observed that fearful or disgusted faces elicited stronger cueing effects, but only when the task was to evaluate whether the target words denoted something positive or negative. When the task was to respond based on the case of the target words (UPPERCASE/lowercase), gaze cueing effects were comparable across all the emotional faces. Emotion effects contingent on an explicit evaluation of the target have also been reported in tasks requiring the mapping of emotions onto space (Marmolejo-Ramos, Montoro, Elosúa, Contreras, & Jiménez-Jiménez, 2014). This type of evidence suggests that emotional modulation of the gaze cueing effect may not occur unless an explicit emotional assessment is required.

1.1. Current study

We examined these issues further in three studies using schematic faces with different facial emotions (i.e. happy, sad and neutral) in target detection and discrimination tasks. (Note that most studies have used “angry” faces, and few have examined the effect of “sad” expressions.) We also examined whether emotional gaze cues selectively affect the processing of targets that themselves have an “affective” status (Bayliss et al., 2007). While it has been shown that happy and sad gaze cues influence observers’ affective ratings of targets, it is not clear whether a “sad” gaze helps find a “sad” object faster. In Experiment 1, participants were presented with emotional faces that shifted gaze in four directions and were asked to differentiate between two objects. This experiment was designed to examine the claims of earlier work in which evidence of emotional gaze cueing was found in a discrimination task but not in a detection task. In Experiment 2, we introduced a detection task along with two SOAs. Previous research has shown that the emotional content of faces takes at least 100–300 ms after the onset of the gaze cue to show its full effect (Friesen & Kingstone, 1998, 2003; Ristic, Friesen, & Kingstone, 2002). Based on this reasoning, the effect of emotional gaze cues should be stronger at a longer SOA than at a shorter one. It has also been shown that “fear” and “happy” emotions influence attention orienting differently at shorter and longer SOAs (Tipples, 2006). In Experiment 3, we examined whether facial emotion influenced attention orienting when observers looked for “good” and “bad” targets; in other words, we tested whether emotional eye gaze facilitates the detection of objects with congruent affective profiles.

Hence, if the facial expression of the cueing face influences gaze-cued attention orienting, the gaze cueing effects obtained with happy and sad cueing faces should differ from those obtained with neutral faces. Such a gaze cueing effect should also be larger at the long SOA. Additionally, we expect the effect of emotional gaze cueing to be modulated by the relationship between the emotion of the central face and the valence of the target object: responses should be faster towards “happy”/“sad” objects when they are gazed at by centrally presented “happy”/“sad” faces.

2. Experiment 1

2.1. Methods

2.1.1. Participants

Forty-two students from the University of Hyderabad participated in the experiment. All participants reported normal or corrected-to-normal vision and gave written consent for their participation (see Table 1). For this and the subsequent experiments, the ethics committee of the University of Hyderabad approved the protocol.

Table 1. Demographics of the participants in the three experiments

2.1.2. Stimuli and procedure

Schematic diagrams of a face expressing happy, sad and neutral emotions were used in the experiments. The faces measured 6° × 6°. For each emotion type, the gaze direction of the face varied between frontal, up, down, left and right. The stimuli were presented on a 19″ LCD monitor with a screen resolution of 1,024 × 768 pixels and a refresh rate of 60 Hz. Participants were seated at a distance of 60 cm from the monitor. Stimuli were designed and presented using DMDX software (Forster & Forster, 2003).

The trials in each experiment started with a fixation cross at the centre subtending a visual angle of 2° × 2° which stayed for 1,000 ms. Participants were asked to fixate their gaze at this cross. The fixation cross was followed by a frontal-looking face displaying happy/neutral/sad emotion at the centre surrounded by four visible placeholders subtending 2° × 2°. The placeholders were located at an eccentricity of 9° horizontally (left, right) and 7° vertically (up, down). This event remained on the screen for 1,000 ms.

Immediately after this, the face (displaying a happy/sad/neutral emotion) shifted its gaze (looking up/down/left/right) for 100 ms. The face then returned to its original position with the eyes looking forward and stayed for 1,000 ms, acting as a cue-back. The facial expression of the central face did not change during a trial. There were two kinds of targets to be differentiated: a square and a circle, which appeared in one of the four placeholders. The target (1° × 1°) appeared for 2,000 ms or until a response was made. Using a keyboard, participants pressed “s” if they saw a square and “c” if they saw a circle. If the target appeared at the location cued by the schematic face, the trial was considered “valid”; if the cued location differed from the target location, the trial was considered “invalid”. There were equal numbers of “valid” and “invalid” trials in the experiment. On invalid trials, the target appeared only at the location exactly opposite to the cue; for example, if the gaze cue was towards the left, the target appeared in the right box. There were 240 trials in total, 80 for each emotion type; for each emotion type, there were 40 valid and 40 invalid trials. The location of the target was equally divided between the four placeholders. Participants were given a two-minute break halfway through the experiment (see Figure 1(A)).
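For concreteness, the counts above can be expressed as a trial list. The following is a minimal sketch in Python (not the authors' DMDX script) of how such a balanced list could be generated; the field names and the square/circle split per cell are illustrative assumptions.

import random

# A sketch of the Experiment 1 trial list: 240 trials = 3 emotions x (40 valid +
# 40 invalid), target locations split evenly over the four placeholders, and
# invalid targets appearing opposite the gazed-at location. The shape assignment
# (square/circle) is random here; its exact split is not specified in the text.

EMOTIONS = ["happy", "neutral", "sad"]
LOCATIONS = ["up", "down", "left", "right"]
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def build_trials(seed=0):
    rng = random.Random(seed)
    trials = []
    for emotion in EMOTIONS:
        for location in LOCATIONS:
            for _ in range(10):            # 10 x 4 locations = 40 valid per emotion
                trials.append({"emotion": emotion, "cue": location,
                               "target_loc": location, "validity": "valid",
                               "target": rng.choice(["square", "circle"])})
            for _ in range(10):            # 10 x 4 cue directions = 40 invalid per emotion
                trials.append({"emotion": emotion, "cue": location,
                               "target_loc": OPPOSITE[location], "validity": "invalid",
                               "target": rng.choice(["square", "circle"])})
    rng.shuffle(trials)
    return trials

trials = build_trials()
assert len(trials) == 240                  # 80 trials per emotion, half valid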

Figure 1. Illustration of the sequence of events in Experiments 1 (A), 2 (B) and 3 (C), and the schematic faces used in the experiments (D)


2.1.3. Design and statistical analyses

The design consisted of the variables “emotion cue” (happy, neutral, sad) and “validity” (valid, invalid). As both variables were within-participants, repeated measures ANOVAs were used to analyse the data, with RT and accuracy rates (AR) as dependent variables. Various statistical techniques were applied to the RT data before analysing them via parametric ANOVAs. This comprehensive analytical approach is a variation of the “side-by-side” data analysis approach (see the discussion section in Marmolejo-Ramos, Cousineau, Benites, & Maehara, 2015): it requires applying various statistical methods to the same data-set in order to find patterns in the data (see the Notes in Table 2 for details). To assist in finding patterns in the data, the expected and observed sizes of main effects and interactions were also estimated (see Table 3).
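As an illustration of the core analysis, the sketch below runs the 2 (validity) × 3 (emotion cue) repeated measures ANOVA on mean RT in Python with statsmodels. The file name and column names are assumptions made for the example; they are not the authors' data files or scripts.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format data: one row per trial with columns
# subject, validity ("valid"/"invalid"), emotion ("happy"/"neutral"/"sad"), rt (ms).
rt = pd.read_csv("exp1_rt_long.csv")               # hypothetical file name

# Collapse trials to one mean RT per participant and design cell.
cell_means = (rt.groupby(["subject", "validity", "emotion"], as_index=False)["rt"]
                .mean())

# 2 x 3 repeated measures ANOVA (both factors within-participants).
anova = AnovaRM(cell_means, depvar="rt", subject="subject",
                within=["validity", "emotion"]).fit()
print(anova.anova_table)                            # F, dfs and p for each effect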

Table 2. Results of “side-by-side” parametric-only analyses performed on the reaction time and accuracy rate data obtained in the three experiments (significant p-values are shaded)

Table 3. Estimation of observed and expected effect sizes (partial eta squares) of the main effects and interactions in the three experiments (the results are based on the RT data only)

2.2. Results

Table 2 summarizes the results of the parametric-only, “side-by-side” analyses performed on the data obtained in the experiment. Effect sizes for main effects and interactions (partial eta squared, ηp²) and for pairwise comparisons (Cohen’s d) are also provided along with their observed power (op) (see Lakens, 2013). See the Notes in Table 2 for the interpretation benchmarks of the effect sizes. Table 3 focuses on the observed and expected sizes of main effects and interactions.

2.2.1. RT data

In Experiment 1, three of the analyses showed a medium-sized Validity × Emotion-Cue interaction (Mηp² = .08, SD = 0, Mop = .61, SD = .02), in that for the neutral emotion cue participants responded faster to valid than to invalid trials (MCohen’s d = .60, SD = .69, Mop = .97, SD = .02). However, across all analyses there was a large main effect of validity (Mηp² = .26, SD = .05, Mop = .92, SD = .08), in that participants responded faster to targets on valid compared to invalid trials (MCohen’s d = .56, SD = .31, Mop = .92, SD = .08) (see Figure 2).
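For reference, the effect sizes reported here can be reproduced with two simple helpers: partial eta squared recovered from an ANOVA F ratio and its degrees of freedom, and a paired Cohen's d (the dz variant, computed from difference scores; see Lakens, 2013). This is an illustrative sketch, not the authors' analysis code.

import numpy as np

def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F ratio: (F*df1) / (F*df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

def cohens_dz(valid_rt, invalid_rt):
    """Paired Cohen's d (dz): mean difference divided by the SD of the differences."""
    diff = np.asarray(invalid_rt) - np.asarray(valid_rt)   # positive = validity advantage
    return diff.mean() / diff.std(ddof=1)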

Figure 2. Mean RT in Experiments 1–3.

Notes: The average RTs correspond to those obtained in analysis AM (see the Notes in Table 2). The within-subjects variables are “validity” (valid, invalid), “emotion cue” (happy, neutral, sad), SOA (500 ms, 1,000 ms), and “target association” (P = positive and N = negative). Although the Y axis was adapted to the range of RTs found in each experiment, the major unit (10 ms) was kept constant. Error bars represent ±1 SE.

2.2.2. AR data

No significant main effects or interactions were evident in Experiment 1 (all p > .05; see Table 2).

2.3. Discussion

In Experiment 1, we used gaze cues with three different emotion types: happy, sad and neutral. We observed a main effect of validity: responses were faster to targets at cued locations than at uncued locations. We also observed a significant interaction between the emotion of the cueing face and validity. However, pairwise comparisons showed that response speeds on valid and invalid trials differed significantly only for the neutral face; “happy” or “sad” faces did not significantly modulate the cueing effect. We did not include a cue-target delay in Experiment 1, and this might have contributed to the null effect of the emotional expression of the face. Several studies have shown that the facial expression of gaze cues affects attention orienting at long SOAs (Driver et al., 1999; Friesen & Kingstone, 1998; Ristic et al., 2002). To investigate the role of SOA in attention orienting due to emotion cues, we ran another experiment with two different SOAs: 500 ms and 1,000 ms. Further, most cueing studies using emotional faces have not employed a cue-back procedure: the cueing face typically gazes at a location/target and stays there until a response. Hence, to be comparable with existing studies, Experiment 2 used a procedure in which the gaze remained directed at the cued location.

3. Experiment 2

3.1. Methods

3.1.1. Participants

Nineteen students from the University of Hyderabad participated in the experiment. All participants reported normal or corrected-to-normal vision and gave written consent for their participation (see Table 1).

3.1.2. Stimuli and procedure

The stimuli used in Experiment 2 were identical to those used in Experiment 1. After the fixation cross, a frontal-looking face was presented for 1,000 ms. Immediately after this, the central face, with its gaze directed towards one of the boxes (left/right/up/down), stayed on the screen for 500 or 1,000 ms (the SOA), after which the target appeared. The target remained on the screen for a maximum of 2,000 ms. The trials were divided into two blocks, one for each SOA, each consisting of 104 experimental trials. In this experiment, instead of a discrimination task, we administered a detection task: participants were instructed to press the “SPACE” bar as soon as they detected a circle in one of the placeholders. Thus, there were catch trials, although no AR were collected. There was a total of 240 trials, of which 32 were catch trials (see Figure 1(B)).
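The sequence of events on an Experiment 2 trial can be summarized as a simple timeline. The sketch below (Python, illustrative only; it does not present stimuli and is not the DMDX script) lists each event with the duration reported above; event labels are assumptions.

# A schematic timeline of one Experiment 2 trial, built from the durations in the text.
# The target waits for a SPACE press for at most 2,000 ms.

def trial_timeline(emotion, gaze_direction, soa_ms):
    assert soa_ms in (500, 1000), "Experiment 2 used 500 ms and 1,000 ms SOAs"
    return [
        ("fixation_cross", 1000),
        (f"{emotion}_face_frontal_gaze", 1000),
        (f"{emotion}_face_gaze_{gaze_direction}", soa_ms),   # gaze held until target onset
        ("target_circle_until_space_or_timeout", 2000),
    ]

print(trial_timeline("happy", "left", 500))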

3.1.3. Design and statistical analyses

Experiment 2 consisted of the variables “emotion cue” (happy, neutral, sad), “validity” (valid, invalid) and SOA (500 ms, 1,000 ms). As the variables were within-participants, repeated measures ANOVAs were used to analyse the data. RT was the dependent variable, and the statistical analyses were the same as those applied to the data from Experiment 1.

3.2. Results

Table 2 summarizes the results of the analyses performed on the data obtained in the experiment.

3.2.1. RT data

In Experiment 2, three of the analyses showed a large main effect of validity (Mηp² = .40, SD = .06, Mop = .85, SD = .08), such that participants responded faster to valid than to invalid trials (MCohen’s d = .71, SD = .45, Mop = .85, SD = .08). One of the analyses (AMdn) showed a significant three-way SOA × Validity × Emotion-Cue interaction, driven by valid trials being responded to faster than invalid trials when a happy face appeared for 500 ms, when a neutral face appeared for 500 ms and when a neutral face appeared for 1,000 ms (see Figure 2).

3.2.2. AR data

Given the design of Experiment 2, no accuracy rate analysis was performed. The percentage of errors on catch trials was very low (3.53%), so no further analyses were performed on the errors.

3.3. Discussion

In Experiment 2, we found that a delay between the gaze cue and the appearance of a target does not always affect attention orienting. Although we found some evidence indicating that SOA, emotion cue and validity can interact, the analyses once again showed that cue validity was the driver of the effect (see also Table 3). It is possible that the emotion of the cueing face modulates shifts in attention only when gaze cueing is directed towards an affective target. Friesen, Halvorson, and Graham (2011) showed that gaze cueing was stronger when people had to respond to emotionally valenced targets, and several studies have shown that the affective context of the target plays a role in attention orienting towards targets (Bayliss et al., 2010; Pecchinenda et al., 2008). Since the emotional valence of an object is subjective and temporally dynamic, we decided to assign symbolic emotional values to arbitrary objects such as a square or a circle.

4. Experiment 3

4.1. Methods

4.1.1. Participants

Twenty-four students from the University of Hyderabad participated in the experiment. All participants reported normal or corrected-to-normal vision and gave written consent for their participation (see Table 1).

4.1.2. Stimuli and procedure

In Experiment 3, the stimuli and basic design of Experiment 2 were used again, but the task was different. Participants had to differentiate between a square and a circle and were instructed to press “A” or “L” on seeing the target; this ensured that participants used two different hands while responding. The correspondence between the response key (“A”/“L”) and the target type (circle/square) was counterbalanced across participants. Additionally, before the start of the experiment, participants were asked to form a positive association with target “X” and a negative association with target “Y” (the assignment of “X” and “Y” to circle/square was counterbalanced across participants). For example, in the case of a positive association with the target, they were told: “In this experiment, the circle represents positive emotions. Please think of events, experiences and words that evoke positive emotions within you. In other words, associate ‘X’ with something that makes you happy”. Similar instructions were given with respect to target “Y” but with reference to a negative association. These instructions appeared together on the same screen after every 48 trials and stayed on for 30 s. The experiment had a total of 288 trials divided into six blocks. There were 96 trials for each emotion type, divided into 48 valid and 48 invalid trials; these were further divided equally between the SOAs and the type of emotion associated with the target. The trials within a block and the blocks themselves were presented in a randomized order (see Figure 1(C)).
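As an illustration of this double counterbalancing (response-key mapping and valence assignment), the following sketch cycles participants through the four possible combinations. It is a hypothetical scheme consistent with the description above, not the authors' assignment procedure.

import itertools

# Cross the two key mappings with the two valence assignments -> 4 combinations,
# cycled over participants so both factors are counterbalanced.
KEY_MAPPINGS = ({"A": "circle", "L": "square"},
                {"A": "square", "L": "circle"})
VALENCE_ASSIGNMENTS = ({"circle": "positive", "square": "negative"},
                       {"circle": "negative", "square": "positive"})
COMBINATIONS = list(itertools.product(KEY_MAPPINGS, VALENCE_ASSIGNMENTS))

def counterbalance(participant_id):
    keys, valence = COMBINATIONS[participant_id % len(COMBINATIONS)]
    return {"response_keys": keys, "target_valence": valence}

print(counterbalance(0))   # first participant's key mapping and valence assignment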

4.1.3. Design and statistical analyses

Experiment 3 consisted of the variables “emotion cue” (happy, neutral, sad), “validity” (valid, invalid), SOA (500 ms, 1,000 ms) and “target association” (positive, negative). As the variables were within-participants, repeated measures ANOVAs were used to analyse the data. RT and AR were the dependent variables, and the statistical analyses were the same as those applied to the data from Experiments 1 and 2.

4.2. Results

Tables 2 and 3 summarize the results of the analyses performed on the data obtained in the experiment.

4.2.1. RT data

In Experiment 3, all four analyses showed a large main effect of validity (Mηp² = .48, SD = .09, Mop = .96, SD = .05), in that valid trials were responded to faster than invalid trials (MCohen’s d = .94, SD = .49, Mop = .96, SD = .05). Three of the analyses also showed a main effect of target association (Mηp² = .29, SD = .03, Mop = .80, SD = .06), in that negative target associations were responded to faster than positive target associations (MCohen’s d = .77, SD = .59, Mop = .80, SD = .06). In one of the analyses, there was an interaction between target association and SOA, such that at the 500 ms SOA negative target associations were responded to faster than positive ones. Finally, another of the analyses showed a three-way interaction between SOA, target association and emotion cue, driven by negative target associations being responded to faster than positive ones at the 500 ms SOA with the neutral emotion cue and at the 1,000 ms SOA with the happy emotion cue. Figure 2 summarizes the results of the experiment.

4.2.2. AR data

No significant main effects or interactions were evident in Experiment 3 (all p > .05).

4.3. Discussion

Experiment 3 once again showed that participants responded faster to targets at cued locations than to targets at uncued locations. However, we again found no evidence that either the emotion of the face or the SOA influenced the gaze cueing effect, replicating our findings from Experiments 1 and 2 (see also Table 3). In addition, in this experiment participants attached positive and negative values to the targets to be differentiated. We found that the affective context of the target object modulated the speed at which participants responded: people responded faster to objects associated with negative emotion than to objects associated with positive emotion.

5. General discussion

A series of experiments was devised to investigate the factors affecting attention orienting by emotional gaze cues. The main message across the studies, the “side-by-side” analyses and the observed-expected effect size estimations is that neither the expression of the cueing face nor the cue-target delay affects participants’ responses to gaze cues. Instead, the cueing itself resulted in faster orienting to targets at the cued location, as supported by a large effect of validity. (A trial was considered “valid” when the target location and the cued location were the same and “invalid” when they were not.) We also found that the association of the target object with positive or negative valence modulated the speed at which participants responded to the targets: objects associated with “negative” emotion elicited faster responses than objects associated with “positive” emotion. In addition, in Experiment 1 we observed that cue validity interacted significantly with the emotional content of the cueing face, but this effect was seen only for neutral faces, not for “happy” or “sad” faces, and it was not replicated in the next two experiments. It is possible that the use of a cue-back procedure in Experiment 1 produced this effect. However, few past gaze cueing studies with emotional faces have employed a cue-back. Future studies should examine the influence of emotional faces when the eye gaze does not stay at the cued location.

The results of Experiments 1 and 2 are in line with research showing that photographs displaying facial affect do not influence spatial attention unless an explicit emotion evaluation task is required (Pecchinenda et al., 2008). They also support the idea that facial expression (be it real or schematic) and SOA have little effect on gaze-cued attention (Hietanen & Leppänen, 2003). The fact that the main effect of cue validity (i.e. faster responses after valid than invalid gaze cues) held across the discrimination, detection and symbolic emotion discrimination tasks lends support to a recent study demonstrating that gaze cueing effects for informative, unmasked cues are not task dependent (Al-Janabi & Finkbeiner, 2014). Further, Yamada and Kawabe (2013) found that gaze influences the mislocalization of a target object (i.e. an object not gazed at) at an SOA of two seconds, but not at an SOA of zero seconds. Our results complement those findings by showing that for SOAs as short as 500 ms, gaze shifts attention towards the gazed-at location regardless of the accompanying emotional expression. Not only does this indicate that attention to external objects is not affected by the location looked at (Yamada & Kawabe, 2013), but also that emotions have no effect on attended objects or gazed-at locations. It could be argued that the pictorial faces used in our study did not portray realistic emotional faces and thus did not produce an emotion effect. However, as shown in previous studies (Pecchinenda et al., 2008), using real or pictorial faces does not seem to modulate the effect.

Our Experiment 3 extended previous studies that used targets with affective contexts to examine the influence of emotional faces on attention orienting (Bayliss et al., 2010; Pecchinenda et al., 2008). Our results show that the emotional valence of the stimuli modulated the participants’ response speed; however, it did not modulate gaze cueing by the emotional face. Bayliss et al. (2010) observed larger cueing effects when the emotion of the cueing face matched the affective context of the target object. Similarly, Pecchinenda et al. (2008) found that the emotional content of the cueing face exerted an influence only when the targets had to be affectively evaluated. However, there are important differences between these studies and the present one. We did not use stimuli inherently associated with a particular affect (such as the picture of a snake or specific emotional words); instead, we used arbitrary objects that were temporarily associated with an emotional valence. It is possible that the association formed was not strong enough, and that mapping emotions onto arbitrary symbols may not be sufficient to bias spatial attention.

This last result could be investigated further by examining the relationship between emotional gaze and attention to spatial locations from an embodied cognition viewpoint. Studies on the metaphorical mapping of emotions onto space indicate that positive concepts are mapped onto upper spatial locations, whereas negative concepts are mapped onto lower spatial locations (Ansorge & Bohner, 2013; Marmolejo-Ramos, Elosúa, Yamada, Hamm, & Noguchi, 2013; Marmolejo-Ramos et al., 2014). One prediction would be that targets in upper locations are detected faster when happy faces gaze at those locations than targets in lower locations gazed at by happy faces; by the same token, targets in upper locations should be detected more slowly than targets in lower locations when these are gazed at by sad faces. If the gaze cueing effect is resistant to factors such as emotional faces, SOA and emotion–target association, it might be the case that the metaphorical mapping of emotions onto space does not interact with gaze cues. To our knowledge, these conjectures have not yet been tested.

Acknowledgements

The authors thank Natalie Butcher, Yuki Yamada, Rosie Gronthos and Susan Brunner for proofreading this manuscript and suggesting ideas for its improvement. We also thank Divya Bhatia and Kesaban Roy Choudhary for help in data collection.

Additional information

Funding

This work was supported by a DST Cognitive Science Initiative grant to R.K.M. from the Indian Government.

Notes on contributors

Seema Prasad

Seema Prasad has completed her MSc in Physics and is currently a PhD student at the Center for Neural and Cognitive Sciences, University of Hyderabad. Her main research interests include attention and unconscious processing and language–vision interactions.

Fernando Marmolejo-Ramos

Fernando Marmolejo-Ramos is a postdoctoral fellow at the Department of Psychology at Stockholm University. He has an MAppSc and a PhD from Ballarat University and the University of Adelaide, respectively. His main research areas are the embodiment of language and emotions, cross-modal processing and applied statistics.

Ramesh Kumar Mishra

Ramesh Kumar Mishra is currently chair of the Center for Neural and Cognitive Sciences at the University of Hyderabad. His research interests include attention, language and visual processing.

References

  • Adams, R. B., & Kleck, R. E. (2003). Perceived gaze direction and the processing of facial displays of emotion. Psychological Science, 14, 644–647. doi:10.1046/j.0956-7976.2003.psci_1479.x
  • Al-Janabi, S., & Finkbeiner, M. (2014). Responding to the direction of the eyes: In search of the masked gaze-cueing effect. Attention, Perception, & Psychophysics, 76, 148–161.
  • Ansorge, U., & Bohner, G. (2013). Investigating the association between valence and elevation with an implicit association task that requires upward and downward responses. Universitas Psychologica, 12, 1453–1471.
  • Batty, M., & Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17, 613–620. doi:10.1016/S0926-6410(03)00174-5
  • Bayliss, A. P., Frischen, A., Fenske, M. J., & Tipper, S. P. (2007). Affective evaluations of objects are influenced by observed gaze direction and emotional expression. Cognition, 104, 644–653. doi:10.1016/j.cognition.2006.07.012
  • Bayliss, A. P., Schuch, S., & Tipper, S. P. (2010). Gaze cueing elicited by emotional faces is influenced by affective context. Visual Cognition, 18, 1214–1232. doi:10.1080/13506285.2010.484657
  • Calvo, M., & Esteves, F. (2005). Detection of emotional faces: Low perceptual threshold and wide attentional span. Visual Cognition, 12, 13–27. doi:10.1080/13506280444000094
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. doi:10.1037/0033-2909.112.1.155
  • Driver, J., Davis, G., Ricciardelli, P., Kidd, P., Maxwell, E., & Baron-Cohen, S. (1999). Gaze perception triggers reflexive visuospatial orienting. Visual Cognition, 6, 509–540. doi:10.1080/135062899394920
  • Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63, 1004–1013.
  • Forster, K. I., & Forster, J. C. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35, 116–124.
  • Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14, 61–92.
  • Friesen, C. K., Halvorson, K. M., & Graham, R. (2011). Emotionally meaningful targets enhance orienting triggered by a fearful gazing face. Cognition and Emotion, 25, 73–88. doi:10.1080/02699931003672381
  • Friesen, C. K., & Kingstone, A. (1998). The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5, 490–495.
  • Friesen, C. K., & Kingstone, A. (2003). Abrupt onsets and gaze direction cues trigger independent reflexive attentional effects. Cognition, 87, B1–B10. doi:10.1016/S0010-0277(02)00181-6
  • Friesen, C. K., Ristic, J., & Kingstone, A. (2004). Attentional effects of counterpredictive gaze and arrow cues. Journal of Experimental Psychology: Human Perception and Performance, 30, 319–329.
  • Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007). Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychological Bulletin, 133, 694. doi:10.1037/0033-2909.133.4.694
  • Ganel, T., Valyear, K. F., Goshen-Gottstein, Y., & Goodale, M. A. (2005). The involvement of the “fusiform face area” in processing facial expression. Neuropsychologia, 43, 1645–1654. doi:10.1016/j.neuropsychologia.2005.01.012
  • Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924. doi:10.1037/0022-3514.54.6.917
  • Hietanen, J. K., & Leppänen, J. M. (2003). Does facial expression affect attention orienting by gaze direction cues? Journal of Experimental Psychology: Human Perception and Performance, 29, 1228–1243.
  • Hori, E., Tazumi, T., Umeno, K., Kamachi, M., Kobayashi, T., Ono, T., & Nishijo, H. (2005). Effects of facial expression on shared attention mechanisms. Physiology & Behavior, 84, 397–405.
  • Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing facial expressions. Acta Psychologica, 89, 149–163. doi:10.1016/0001-6918(94)00021-8
  • Kittler, J. E., Menard, W., & Phillips, K. A. (2007). Weight concerns in individuals with body dysmorphic disorder. Eating Behaviors, 8, 115–120. doi:10.1016/j.eatbeh.2006.02.006
  • Klinnert, M. D., Campos, J., Sorce, J. F., Emde, R. N., & Svejda, M. J. (1983). Social referencing: Emotional expressions as behavior regulators. Emotion: Theory, Research and Experience, 2, 57–86.
  • Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4(863). doi:10.3389/fpsyg.2013.00863
  • Liu, L., Ioannides, A. A., & Streit, M. (1999). Single trial analysis of neurophysiological correlates of the recognition of complex objects and facial expressions of emotion. Brain Topography, 11, 291–303. doi:10.1023/A:1022258620435
  • Marmolejo-Ramos, F., Cousineau, D., Benites, L., & Maehara, R. (2015). On the efficacy of procedures to normalise ex-Gaussian distributions. Frontiers in Psychology, 5(1548). doi:10.3389/fpsyg.2014.01548
  • Marmolejo-Ramos, F., Elosúa, M. R., Yamada, Y., Hamm, N., & Noguchi, K. (2013). Appraisal of space words and allocation of emotion words in bodily space. PLoS ONE, 8(12), e81688. doi:10.1371/journal.pone.0081688
  • Marmolejo-Ramos, F., Montoro, P. R., Elosúa, M. R., Contreras, M. J., & Jiménez-Jiménez, W. A. (2014). The activation of representative emotional verbal contexts interacts with vertical spatial axis. Cognitive Processing, 15, 253–267. doi:10.1007/s10339-014-0620-6
  • Mathews, A., Fox, E., Yiend, J., & Calder, A. (2003). The face of fear: Effects of eye gaze and emotion on visual attention. Visual Cognition, 10, 823–835. doi:10.1080/13506280344000095
  • Mumenthaler, C., & Sander, D. (2015). Automatic integration of social information in emotion recognition. Journal of Experimental Psychology: General. doi:10.1037/xge0000059
  • Palermo, R., & Rhodes, G. (2007). Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia, 45, 75–92. doi:10.1016/j.neuropsychologia.2006.04.025
  • Pecchinenda, A., Pes, M., Ferlazzo, F., & Zoccolotti, P. (2008). The combined effect of gaze direction and facial expression on cueing spatial attention. Emotion, 8, 628–634. doi:10.1037/a0013437
  • Posner, M. I. (1978). Chronometric explorations of mind. Hillsdale, NJ: Lawrence Erlbaum.
  • Purcell, D. G., & Stewart, A. L. (1986). The face-detection effect. Bulletin of the Psychonomic Society, 24, 118–120. doi:10.3758/BF03330521
  • Purcell, D. G., & Stewart, A. L. (1988). The face-detection effect: Configuration enhances detection. Perception & Psychophysics, 43, 355–366.
  • Putman, P., Hermans, E., & van Honk, J. (2006). Anxiety meets fear in perception of dynamic expressive gaze. Emotion, 6, 94–102. doi:10.1037/1528-3542.6.1.94
  • Ristic, J., Friesen, C. K., & Kingstone, A. (2002). Are eyes special? It depends on how you look at it. Psychonomic Bulletin & Review, 9, 507–513.
  • Sagi, D., & Julesz, B. (1985). Detection versus discrimination of visual orientation. Perception, 14, 619–628.
  • Streit, M., Dammers, J., Simsek-Kraues, S., Brinkmeyer, J., Wölwer, W., & Ioannides, A. (2003). Time course of regional brain activations during facial emotion recognition in humans. Neuroscience Letters, 342, 101–104. doi:10.1016/S0304-3940(03)00274-X
  • Tipples, J. (2002). Eye gaze is not unique: Automatic orienting in response to uninformative arrows. Psychonomic Bulletin & Review, 9, 314–318.
  • Tipples, J. (2006). Fear and fearfulness potentiate automatic orienting to eye gaze. Cognition & Emotion, 20, 309–320.
  • Tronick, E. Z. (1989). Emotions and emotional communication in infants. American Psychologist, 44, 112–119. doi:10.1037/0003-066X.44.2.112
  • Yamada, Y., & Kawabe, T. (2013). Gaze-cueing of attention distorts visual space. Universitas Psychologica, 12, 1501–1510.