BRIEF ARTICLE

Differential impact of emotional task relevance on three indices of prioritised processing for fearful and angry facial expressions

Pages 175-184 | Received 24 Feb 2014, Accepted 07 Aug 2015, Published online: 15 Sep 2015

ABSTRACT

It is commonly assumed that threatening expressions are perceptually prioritised, possessing the ability to automatically capture and hold attention. Recent evidence suggests that this prioritisation depends on the task relevance of emotion, at least in the case of attention holding and for fearful expressions. Using a hybrid attentional blink (AB) and repetition blindness (RB) paradigm, we investigated whether task relevance also impacts on prioritisation through attention capture and perceptual salience, and whether these effects generalise to angry expressions. Participants judged either the emotion (relevant condition) or gender (irrelevant condition) of two target facial stimuli (fearful, angry or neutral) embedded in a stream of distractors. Attention holding and capture were operationalised as modulation of AB deficits by first target (T1) and second target (T2) expression. Perceptual salience was operationalised as RB modulation. When emotion was task-relevant (Experiment 1; N = 29), fearful expressions captured and held attention, and were more perceptually salient than neutral expressions. Angry expressions captured attention, but were less perceptually salient and less capable of holding attention than fearful and neutral expressions. When emotion was task-irrelevant (Experiment 2; N = 30), only fearful attention capture and perceptual salience effects remained significant. Our findings highlight the importance for threat-prioritisation research to heed both the type of threat and the type of prioritisation investigated.

Extensive evidence suggests that facial expressions constitute a class of highly salient stimuli, especially when they are affectively charged (Carretié, Citation2014). In particular, facial expressions signalling threat, such as expressions of fear and anger, have been found to enjoy preferential processing and to influence behaviour even in the absence of conscious awareness (Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg, & van IJzendoorn, Citation2007; Whalen et al., Citation2013; Yiend, Citation2010). These findings have been taken as evidence for the existence of a threat-prioritisation mechanism that automatically and pre-attentively detects and orchestrates reactions to threatening expressions by allocating and reorienting attentional resources and enhancing their perceptual processing (Ohman, Lundqvist, & Esteves, Citation2001). In this model, attention and processing resources are automatically allocated to threatening expressions because of the survival value that attending to threats signalled by our conspecifics has imparted through evolution (Ohman, Soares, Juth, Lindström, & Esteves, Citation2012). Supported by an extensive neuroimaging literature (Vuilleumier, Citation2005) and an evolutionarily plausible neurobiological model detailing how such automaticity can occur (LeDoux, Citation1998; Ohman, Carlsson, Lundqvist, & Ingvar, Citation2007), this automatic threat-prioritisation account has been highly influential.

However, evidence also exists that the processing benefits enjoyed by threat-related expressions depend on the context the stimuli are presented in (Hassin, Aviezer, & Bentin, Citation2013), and task demands, such as cognitive load (Pessoa, Citation2005; Stein, Peelen, Funk, & Seidl, Citation2010). These findings point to the importance of top-down factors in determining whether or not attention is allocated to threatening facial expressions and question the validity of the automatic threat-prioritisation account (Pessoa, Citation2008). One potent challenge to this account comes from research showing that relatively simple manipulations of attentional focus can reduce, or even eliminate, processing advantages seen for threatening facial expressions. For instance, Stein, Zwickel, Ritter, Kitzmantel, and Schneider (Citation2009) showed that whether a processing advantage for fearful facial expressions was observed depended on emotion being relevant for the task being performed (Huang, Baddeley, & Young, Citation2008; Most, Chun, Widders, & Zald, Citation2005). This finding suggests that the explicit relevance of emotion to the task at hand is necessary for prioritised processing of threatening facial expressions to occur. This potentially constitutes a significant problem for the automatic threat-prioritisation account, since it appears to contradict the assumption that prioritisation should occur in an automatic, pre-attentive and context-insensitive fashion (Ohman, Citation2002).

The Stein study investigated affective prioritisation as measured by the ability of emotional stimuli to enhance the attentional blink (AB; Raymond et al., Citation1992). This is a phenomenon in which the identification of an initial target (T1) in a rapid serial visual presentation (RSVP) stream impairs the detection of a subsequent target (T2) if T2 is presented within 200–500 ms of T1. The size of this effect has been shown to be affected by the salience of the T1 stimulus, such that emotional stimuli, including threat-related facial expressions, reliably elicit greater deficits than neutral stimuli (Maratos, Citation2011; Maratos, Mogg, & Bradley, Citation2008; McHugo, Olatunji, & Zald, Citation2013; Most et al., Citation2005; Stein et al., Citation2009, Citation2010). Theoretical accounts suggest that the AB stems from top-down inhibition of perceptual processing in order to shield processing of the T1 target from perceptual interference, resulting in T2 being missed if it occurs before processing of T1 is finished (e.g., Olivers & Meeter, Citation2008). As such, prioritisation measured by the effect of the emotional qualities of T1 on AB deficits appears to primarily reflect an enhanced ability of emotional stimuli to hold attention, i.e., to ensure that attentional resources remain occupied with the processing of the stimulus itself (Mathewson, Arnell, & Mansfield, Citation2008; Schwabe et al., Citation2011). This type of prioritisation differs conceptually from the paradigm cases of automatic threat prioritisation that rather emphasise the capacity of threatening stimuli to capture attention in the face of ongoing processing (Ohman, Citation2005). Moreover, extant evidence suggests that this deficit enhancement depends specifically on the semantic emotional salience of stimuli (Huang et al., Citation2008). In contrast, automatic threat prioritisation is hypothesised to be triggered by perceptual features of threatening stimuli (Ohman et al., Citation2012). 
On a similar note, AB deficit enhancement occurs for emotion-related T1s in general, including words (Schwabe et al., Citation2011), emotional scenes (Most et al., Citation2005) and positive stimuli (Most, Smith, Cooter, Levy, & Zald, Citation2007). The non-specific nature of this effect suggests that the mechanism involved reflects the engagement of a general salience-based prioritisation mechanism. This contention is supported by recent neuroimaging work showing that such attentional holding effects are associated with activation of the salience network (Schwabe et al., Citation2011). This network has been shown to respond to relevant stimuli across a wide range of tasks, suggesting that it supports a general mechanism for flexible attention allocation (Vossel, Geng, & Fink, Citation2014). Possibly, relevance manipulations are particularly effective at modulating forms of “top-down” prioritisation effects depending on this network. Conversely, relevance manipulations may not be effective at modulating “bottom-up” forms of prioritisation such as involuntary attention capture in the face of ongoing processing, or prioritisation based on the perceptual salience of threat-related expressions.

The current study

The current study investigated this possibility by directly comparing the effect of task relevance on affective prioritisation effects attributable to (i) attention holding, (ii) attention capture and (iii) perceptual salience. We did this using an RSVP paradigm based on the Stein et al. (Citation2009) study described above. In two experiments, participants were asked to detect and identify either the emotion (Experiment 1; relevant condition) or gender (Experiment 2; irrelevant condition) of neutral and threat-related expressions (anger and fear) embedded in a stream of distractors (scrambled faces; see Figure 1(a)). T2 was presented either two serial positions (Lag 2 condition) or six serial positions (Lag 6 condition) after T1, allowing us to distinguish specific AB modulation by threat from general detectability effects.

Figure 1. (a) Schematic representation of a single trial. Participants were presented with an RSVP stream consisting of 15 stimuli, in which either 1 or 2 intact target faces were embedded in a stream of scrambled faces (distractors). After each RSVP, participants reported the number of non-scrambled faces they perceived, and then either the emotional expression (Experiment 1) or the gender (Experiment 2) of the perceived faces. The critical within-experiment manipulations consisted of varying the expressed emotion (anger, fear and neutral) of T1 and T2 targets and the number of distractors separating the targets. On 50% of trials only T1 was presented. In the remainder, T2 could appear at either Lag 2 (1 intervening item) or Lag 6 (5 intervening items). (b) Overall T2 detection accuracies in Experiment 1 (emotion decision) for trials in which T1 was correctly identified. (c) Overall T2 detection accuracies in Experiment 2 (gender decision) for trials in which T1 was correctly identified. Error bars represent standard errors of marginal mean estimates.


We made two modifications to the procedure described in Stein et al. (Citation2009). First, we included both angry and fearful expressions in our design, allowing us to investigate whether observed effects of relevance generalised across threatening expressions. Second, both T1 and T2 targets were faces, and the emotional expressions of both targets were varied in a fully crossed manner. This design allowed us to investigate attention holding effects, operationalised as the effect of T1 targets on subsequent target detection, as in Stein et al. (Citation2009). Additionally, it allowed us to investigate the effect of emotional relevance on the ability of emotional T2 targets to modulate AB size, which served as our measure of attention capture. Modulations of this sort have been reported for a range of salient stimuli, including threatening expressions (e.g., Maratos, Citation2011; Maratos et al., Citation2008), and are thought to reflect the ability of stimuli to capture attention and awareness by breaking through top-down inhibition caused by processing of T1 (Bocanegra & Zeelenberg, Citation2009; Olivers & Meeter, Citation2008; Schwabe et al., Citation2011).

Finally, we investigated the modulation of repetition blindness (RB) deficits by threatening expressions. RB occurs on the same time scale as the AB, and is a T2 detection deficit in the RSVP that occurs when T1 and T2 share perceptual features. As with the AB, RB deficits have been shown to be reduced for salient stimuli, such as personal names (Arnell, Shapiro, & Sorensen, Citation2010), emotional words (Knickerbocker & Altarriba, Citation2013), as well as threatening expressions (Mowszowski, McDonald, Wang, & Bornhofen, Citation2012). However, unlike the AB, the RB does not stem from a limitation in attentional processing, but rather from capacity limitations of perceptual processing pertaining to the individuation of repeated items (Chun, Citation1997; Hochhaus & Marohn, Citation1991; Kanwisher, Citation1987; Koivisto & Revonsuo, Citation2008). Thus, we could investigate the degree to which relevance impacts on prioritisation brought about by the perceptual salience of threatening stimuli.

The objective of Experiment 1 (N = 29) was to replicate previous findings and establish a baseline measure of the effect of threatening expressions on our measures of attention holding, attention capture and perceptual salience when emotion was task-relevant. Overall, we expected to find significantly lower T2 detection accuracy at Lag 2 relative to Lag 6, indicating the occurrence of an AB. Following previous findings, we expected the magnitude of this deficit to be modulated by threatening expressions, such that threatening T1 targets would result in larger AB deficits and threatening T2 targets in smaller AB deficits, both relative to the neutral baseline. Furthermore, we expected to find smaller RB effects for threatening stimuli, such that repetitions of threatening faces should show better T2 detection rates than repetitions of neutral faces.

In Experiment 2 (N = 30), we investigated the effect of making emotion irrelevant to the task at hand by having participants identify the gender, rather than emotion, of T1 and T2 targets. Following Stein et al. (Citation2009) we expected to find significantly decreased AB attention holding effects following threatening T1 stimuli compared to Experiment 1. Following our hypothesis that relevance should play less of a role in prioritisation effects based on attention capture (T2 AB) and perceptual salience (RB), we expected these effects to be substantially unchanged relative to Experiment 1.

Methods

Participants

For each experiment, 30 participants (Experiment 1: 17 females, mean age 26 years ± 3.3 SD; Experiment 2: 21 females, mean age 25 years ± 2.6 SD) were recruited from the local population. All participants reported normal or corrected-to-normal vision. Participants gave informed consent, and were monetarily compensated for their participation. One participant in Experiment 1 was excluded due to performing at chance on overall target detection.

Apparatus and stimuli

Experiments were implemented using E-Prime 2.0 Professional software (Psychology Software Tools, Pittsburgh, PA) running on a custom built desktop experiment computer, with a 22" LCD colour monitor running at 60 Hz (verified with photodiode and oscilloscope). The participants viewed the monitor at a free viewing distance of approximately 50 cm. Stimuli were greyscale photographs subtending 3.5° × 5.5° of visual angle presented on a black background. Angry, fearful and neutral facial stimuli (39 actors per emotion; 19 females) were taken from the Radboud Faces Database (Langner et al., Citation2010), converted to greyscale and intensity normalised. The images were rescaled to 240 × 420 pixels, and an oval face-fitting mask was applied to reduce variance attributable to incidental cosmetic features and to ensure that facial features carrying information about emotional expressions were in approximately the same location for all stimuli. Distractors were generated by dividing the inner elements of neutral stimuli into 108 squares and randomly recomposing them.

Design and procedure

Figure 1(a) schematically depicts the experimental trial structure. Each trial started with a 1000 ms fixation cross that disappeared 250 ms before RSVP onset, indicating the beginning of a trial. Each RSVP stream consisted of 15 items presented for 83 ms each. Each stream contained either one or two target stimuli (intact faces), while the remainder consisted of distractors (randomly recomposed faces). Participants performed 720 trials in total, 50% of which were dual-target test trials and 50% single-target catch trials. Single- and dual-target trials were randomly intermixed, and were identical except that single-target trials replaced T2 with a distractor stimulus. T1 occurred randomly in serial positions 4–8. In dual-target trials, T2 occurred either at Lag 2 (one intervening distractor, stimulus onset asynchrony (SOA) 166 ms) or at Lag 6 (five intervening distractors, SOA 498 ms). T1 and T2 targets varied randomly in expressed emotion (anger, fear and neutral), as well as in actor identity and gender. At the end of each RSVP stream, participants were prompted to indicate whether they had seen one or two intact faces, and then to sequentially report the emotion (Experiment 1) or gender (Experiment 2) of the faces they had seen. Responses were given using the numerical keypad: participants first reported whether they saw one or two faces ("1" or "2"), whereupon they reported the emotion (Experiment 1: "1" for anger, "2" for fear and "3" for neutral) or gender (Experiment 2: "1" for female and "2" for male) of the detected faces in sequence. The only difference between experiments was the discrimination performed.
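The timing arithmetic above can be sketched as follows (a minimal illustration with our own variable names, not the authors' experiment code):

```python
# Each RSVP item lasted 83 ms (approximately 5 frames on a 60 Hz monitor).
ITEM_DURATION_MS = 83

def soa_ms(lag):
    """Target-to-target stimulus onset asynchrony for a given lag,
    where lag is the number of serial positions from T1 to T2."""
    return lag * ITEM_DURATION_MS

print(soa_ms(2))  # Lag 2: one intervening distractor -> 166 ms
print(soa_ms(6))  # Lag 6: five intervening distractors -> 498 ms
```

Lag 2 thus falls squarely inside the classic 200–500 ms AB window once T1 processing has begun, while Lag 6 sits at its outer edge.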

Participants underwent a training block of 21 trials immediately prior to testing. To alleviate fatigue, participants were allowed to take short breaks between trials. Trials preceding self-initiated breaks were excluded from analysis (∼6% in both experiments). Participants were instructed to emphasise accuracy when responding.

Results

Analysis approach

Mixed effects logistic regression analyses of accuracy data were performed using generalised linear mixed modelling (GLMM) with a binomial error distribution and a logit link function, as implemented in Revolution R (version 7) and the lme4 software package (http://cran.r-project.org/web/packages/lme4/index.html). Unlike analysis of variance, GLMM allows the inclusion of every data point in the analysis instead of aggregated averages for each participant, and so takes into account individual differences in each participant's behaviour over the course of many trials. This improves the accuracy of the fixed effect estimates and allows trial-wise control over potential confounds such as fatigue or reaction times (Baayen, Davidson, & Bates, Citation2008). GLMMs also allow the incorporation of independent, crossed subject and stimulus random effects in the analysis, accounting for any stimulus-specific confounds and improving the generalisability of the estimated effects (Jaeger, Citation2008; Judd, Westfall, & Kenny, Citation2012).

Factor-wise significance testing was performed using χ2 tests of the −2 restricted log-likelihood of nested models (Jaeger, Citation2008), which provide information equivalent to F-tests by testing whether adding factors to the model explains sufficient variance to justify the added model complexity. Condition-wise significance testing was done using two-tailed Z-tests. AB and RB results are reported as log-odds estimates of fixed effects on T2 detection accuracy in trials where T1 was correctly identified. All reported p-values were corrected for multiple comparisons by controlling the false discovery rate (FDR) using the procedure described in Benjamini and Yekutieli (Citation2001).
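These two steps can be sketched as follows: a likelihood-ratio χ2 test of nested models, and Benjamini–Yekutieli FDR adjustment of the resulting p-values. This is a minimal illustration, not the authors' analysis code; the function names and the example log-likelihoods are our own.

```python
import numpy as np
from scipy.stats import chi2

def lr_test(ll_reduced, ll_full, df_diff):
    """Likelihood-ratio chi-square test of nested models: does the fuller
    model explain enough extra variance to justify its added complexity?"""
    stat = 2.0 * (ll_full - ll_reduced)       # difference in -2 log-likelihood
    return stat, chi2.sf(stat, df_diff)

def by_fdr(pvals):
    """Benjamini-Yekutieli adjusted p-values (FDR control that remains
    valid under arbitrary dependence between tests)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    c_m = np.sum(1.0 / np.arange(1, m + 1))   # harmonic-series penalty
    order = np.argsort(p)
    adj = p[order] * m * c_m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

# Hypothetical example: adding a 2-df factor improves the log-likelihood
stat, pval = lr_test(ll_reduced=-520.0, ll_full=-503.0, df_diff=2)
# The factor is retained only if pval survives the FDR correction.
```

Note that the BY correction is more conservative than the plain Benjamini–Hochberg procedure by the harmonic factor, which matters here given the many non-independent condition-wise contrasts.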

T1 identification accuracy

To ensure that T2 detection effects were not confounded with difficulties in identifying emotional expressions, T1 identification accuracy for single-target trials was analysed for each experiment separately. The models contained T1 expression as a fixed effect and crossed random effects for subject and stimulus exemplar.

In Experiment 1, 86% of angry, 95% of fearful and 92% of neutral T1 expressions were correctly classified according to expression. Chi-square tests of factor-wise significance revealed a main effect of T1 expression (χ2(2) = 34.03, p < .0001). Follow-up Z-tests revealed that this effect was attributable to lower classification accuracy for angry expressions relative to both fearful (Z = −4.6, p < .0001) and neutral (Z = −2.6, p < .001) expressions.

In Experiment 2, 87% of angry, 86% of fearful and 91% of neutral T1 faces were correctly classified according to gender. Chi-square tests of factor-wise significance revealed no effect of T1 expression (χ2(2) = 4.9, p > .1).

T2 detection accuracy

Model specification

Identical models predicting trial-wise T2 detection accuracy were fitted for both experiments. The models included fixed factors for T1 and T2 expressions, and Lag. Crossed random intercepts were specified for subject and stimulus exemplar. To avoid confounding between-experiment differences in T2 detection with the differences in T1 accuracy reported above, all trials were included in the analysis and T1 accuracy was added as a control variable. Additionally, we included variables for trial number and reaction times to account for fatigue and learning effects.

Assessment of AB and RB effects

For each condition, AB amplitude was established by subtracting Lag 2 from Lag 6 accuracy (i.e., the T1*T2*Lag interaction term), ensuring that effects reflect a modulation of the AB proper and not a general accuracy effect. As we had no specific differential hypotheses about the size of modulation effects, all tests of AB modulation were performed relative to the T1 neutral–T2 neutral condition.
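This amplitude computation can be sketched on the log-odds scale as follows (the detection rates are hypothetical, and in the actual analysis these terms were estimated jointly within the fitted GLMM rather than computed from raw proportions):

```python
import math

def logit(p):
    """Log-odds of a detection probability."""
    return math.log(p / (1.0 - p))

def ab_amplitude(acc_lag6, acc_lag2):
    """AB amplitude: Lag 6 minus Lag 2 T2 detection accuracy in log-odds.
    Larger positive values indicate a deeper blink."""
    return logit(acc_lag6) - logit(acc_lag2)

# Hypothetical detection rates for two conditions
ab_baseline = ab_amplitude(0.90, 0.60)   # T1 neutral-T2 neutral
ab_fear_t1 = ab_amplitude(0.90, 0.45)    # fearful T1, neutral T2
modulation = ab_fear_t1 - ab_baseline    # > 0: fearful T1 enhances the AB
```

Working on the log-odds scale keeps the Lag 6 minus Lag 2 difference comparable across conditions that differ in overall accuracy, which is why results are reported on the scale of inference rather than as raw percentage differences.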

RB effects were estimated by contrasting repeat trials (T1 fear–T2 fear; T1 anger–T2 anger) with neutral non-repeat trials (T1 fear–T2 neutral; T1 anger–T2 neutral) at Lag 2 only. To ensure that results were not confounded with T1 AB effects of emotion, supplementary analyses were done using the Lag 2, T1 neutral–T2 neutral condition as baseline, yielding the same pattern of results. Note that all AB effects containing repetitions (T1 anger–T2 anger, T1 fear–T2 fear) are contaminated by RB effects. We therefore report them for completeness, but do not interpret them.

Analysis strategy

We adopted a sequential analysis approach, first testing our hypotheses separately for Experiments 1 and 2. To ensure the between-experiment comparability of results, we preliminarily tested for task difficulty effects by investigating the main effect of decision-type on T2 detection accuracy. Accuracy was on average significantly higher when making the gender decision (χ2(1) = 5.07, p < .05; cf. Figure 1(b) and 1(c)), suggesting a task difficulty difference between experiments leading to smaller overall AB magnitude in Experiment 1. We accounted for this in the direct comparisons presented in Figure 2(e) by baseline-correcting the AB effects: the estimate of the T1 neutral–T2 neutral AB was subtracted from the estimates of all other AB effects for each experiment separately, in effect controlling for average AB size within each experiment. These corrected estimates were then used for between-experiment comparison of AB modulations.
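The baseline correction can be sketched as follows (the AB estimates are hypothetical log-odds values chosen for illustration, not the reported results):

```python
# Hypothetical AB amplitude estimates (log-odds) per experiment
ab_emotion = {"neutral-neutral": 1.8, "fear-neutral": 2.4, "anger-neutral": 1.2}
ab_gender  = {"neutral-neutral": 2.2, "fear-neutral": 2.3, "anger-neutral": 2.1}

def baseline_correct(ab):
    """Subtract the experiment-specific T1 neutral-T2 neutral AB from each
    condition, controlling for overall AB size within that experiment."""
    base = ab["neutral-neutral"]
    return {cond: est - base for cond, est in ab.items()}

# Between-experiment comparison on the corrected estimates
delta = {cond: baseline_correct(ab_emotion)[cond] - baseline_correct(ab_gender)[cond]
         for cond in ab_emotion}
# Negative values: smaller corrected deficit when emotion was task-relevant
```

Because the same neutral–neutral baseline is removed within each experiment before the subtraction across experiments, any overall difficulty difference between the emotion and gender tasks cancels out of the comparison.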

Figure 2. (a) AB T2|T1 deficits (T2 detection performance at Lag 6 minus Lag 2, on trials where T1 was correctly identified) for the emotion decision condition (Experiment 1). Significance assessed relative to the T1 neutral–T2 neutral baseline condition. Higher values indicate worse T2 detectability. (b) Differences in RB deficits as measured by the difference in detection accuracy between repeated (T1 fear–T2 fear, T1 anger–T2 anger) and non-repeated (T1 fear–T2 neutral, T1 anger–T2 neutral) conditions. (c) AB T2|T1 detection deficits for the gender decision condition (Experiment 2). Significance assessed relative to the T1 neutral–T2 neutral baseline condition. (d) Differences in RB deficits as measured by the difference in detection accuracy between repeated (T1 fear–T2 fear, T1 anger–T2 anger) and non-repeated (T1 fear–T2 neutral, T1 anger–T2 neutral) conditions. (e) Differences in baseline-corrected AB T2|T1 detection deficits as a function of decision type (emotion minus gender decision). Baseline correction was performed to account for differences in overall accuracy between experiments by subtracting the experiment-specific baseline (T1 neutral–T2 neutral) condition from effects before comparison. Negative values indicate smaller deficits when making an emotion decision. (f) Differences in RB deficits as a function of making an emotion or gender decision. Positive values indicate larger deficits when making an emotion decision. All results reported on the scale of inference (i.e., log-odds). Error bars represent standard errors of differences. All FDR-corrected significant effects are marked: * = p < .05, ** = p < .01, *** = p < .001.


Experiment 1: emotional decision

Figure 1(b) shows T2 detection rates in Experiment 1, split by T1 and T2 expression and Lag. Chi-square tests of factor-wise significance revealed a main effect of Lag (χ2(1) = 2757.4, p < .0001), such that accuracy was worse at Lag 2 than at Lag 6 (Z = −13.6, p < .0001), indicating the occurrence of an AB. Additionally, significant main effects of T1 (χ2(2) = 32.28, p < .001) and T2 (χ2(3) = 166.22, p < .001), as well as significant T1*Lag (χ2(8) = 17.91, p < .05) and T1*T2 (χ2(16) = 26.18, p < .05) interactions, were observed. Crucially, a T1*T2*Lag interaction (χ2(16) = 47.92, p < .001) was observed, indicating condition-specific modulation of RB and/or AB amplitudes.

AB effects for Experiment 1 are illustrated in Figure 2(a). When investigating attention holding effects, we replicated previous work showing that fearful T1s enhanced AB deficits for neutral T2s (Z = 2.93, p < .01). Conversely, and unexpectedly, angry T1s decreased AB deficits for both neutral (Z = −4.38, p < .001) and fearful (Z = −2.48, p < .05) T2 targets relative to the neutral baseline. When investigating attention capture effects, we replicated previous work finding smaller AB amplitudes for both angry (Z = −4.09, p < .001) and fearful (Z = −2.35, p < .05) T2 expressions than for neutral T2s, when these were preceded by a neutral T1. Finally, when investigating perceptual salience, we replicated previous work finding that repetitions of fearful expressions were associated with improved T2 detectability (Z = −3.61, p < .001; Figure 2(b)). However, we unexpectedly found evidence for an enhanced RB effect for angry expressions (Z = −2.48, p < .05).

Experiment 2: gender decision

Figure 1(c) shows T2 detection rates in Experiment 2, split by T1 and T2 expression and Lag. Chi-square tests revealed a significant main effect of Lag (χ2(1) = 3169.2, p < .0001), such that accuracy was worse at Lag 2 than at Lag 6 (Z = −8.8, p < .0001), indicating the occurrence of an AB. Further, a main effect of T2 (χ2(3) = 120.42, p < .0001) was observed, in addition to a T1*T2*Lag interaction (χ2(16) = 41.40, p < .0001), indicating condition-specific modulation of RB and/or AB amplitudes.

AB effects for Experiment 2 are illustrated in Figure 2(c). As expected, we found no effect of T1 expression on T2 detection. When investigating attention capture and perceptual salience, we found smaller AB magnitudes for fearful T2s preceded by neutral T1s (Z = −3.06, p < .05), as well as smaller RB deficits for repeated fearful expressions (Z = −2.4, p < .01; Figure 2(d)). No significant differences were observed for angry expressions. We also observed increased AB deficits for the T1 fear–T2 fear condition (Z = 3.8, p < .0001). Unexpectedly, we also observed larger deficits for the T1 neutral–T2 anger condition (Z = 2.4, p < .05) relative to the T1 neutral–T2 neutral baseline.

Effect of task relevance: between-experiment comparison

In order to directly test the effect of decision-type on AB and RB effects, both experiments were analysed in a single model including a decision-type factor coding whether the task was emotion or gender discrimination. Given the existence of T1*T2*Lag interactions in both experiments considered alone, the condition-specific effects of relevance were tested using the T1*T2*Lag*decision-type interaction term. This was found to be significant (χ2(16) = 49.55, p < .001), indicating condition-specific modulation of the previously observed T1*T2*Lag interactions by decision-type.

Figure 2(e) shows the difference in baseline-corrected AB amplitudes between experiments (see above). This comparison revealed that angry expressions showed significantly greater perceptual salience deficits (Δ anger RB deficit: Z = 2.48, p < .05) and attention capture enhancements (Δ T1 neutral–T2 anger: Z = −4.71, p < .001) when emotion was relevant. Attention holding effects were also affected by relevance, with angry expressions being significantly more effective at decreasing AB deficits when emotion was relevant (Δ T1 anger–T2 fear: Z = −3.01, p < .01; Δ T1 anger–T2 neutral: Z = −2.47, p < .01). Note that while no significant difference was observed for T1 fear–T2 neutral in this analysis, a significant effect of this condition was only observed when emotion was relevant (i.e., Experiment 1).

Unexpectedly, relevance also modulated T1 AB effects for fearful stimuli, such that AB deficits were larger for both angry and fearful T2 targets (T1 fear–T2 anger: Z = −2.95, p < .01; T1 fear–T2 fear: Z = −3.28, p < .001).

Discussion

The objective of this study was to investigate the effect of task relevance of emotion on indices of processing prioritisation for threat-related facial expressions. Using a hybrid AB/RB RSVP design we investigated this for three modes of prioritisation: attention holding (T1 AB), attention capture (T2 AB) and perceptual salience (RB). Following previous research (Maratos, Citation2011; Maratos et al., Citation2008; Stein et al., Citation2009, Citation2010), we hypothesised that threatening expressions should show evidence of prioritised processing relative to neutral stimuli on all three of these indices when participants were asked to judge the emotion expressed by target stimuli, i.e., when emotion was task-relevant (Experiment 1). When emotion was made irrelevant to the task by having participants judge the gender of the facial expressions (Experiment 2), we expected attention holding effects to be diminished, reflecting its reliance on top-down tuning of general salience detection mechanisms (Schwabe et al., Citation2011; Stein et al., Citation2009). Attention capture and perceptual salience effects were expected to be unchanged, reflecting their hypothesised reliance on automatic bottom-up threat detection mechanisms (Ohman, Citation2002; Ohman et al., Citation2012; Ohman, Flykt, & Esteves, Citation2001).

Overall, our findings for fearful expressions were consistent with our hypotheses, showing that prioritisation of fearful expressions measured by attention capture and perceptual salience is robust to relevance manipulations, while prioritisation measured by attention holding is not. Unexpectedly, expressions of anger did not conform to this pattern: While showing evidence of enhanced attention capture under conditions of relevance, we observed enhanced T2 detectability following angry T1s for both neutral and fearful T2s. This suggests that angry expressions are less capable than neutral stimuli of engaging attention holding prioritisation mechanisms. While this contradicts previous studies showing that schematic angry T1s enhance the AB deficit (Maratos, Citation2011; Maratos et al., Citation2008), it should be noted that this effect has previously been shown not to replicate when using naturalistic stimuli (Taylor & Whalen, Citation2014), suggesting that these two types of stimuli differ in how they are processed. While this could explain the lack of an enhanced AB following angry T1s, it does not explain the observation of decreased AB for neutral and fearful T2s, nor the increased RB observed for angry expressions. Furthermore, the fact that these effects were only observable when emotion was task-relevant suggests that they reflect an actual difference in how fearful and angry expressions are able to engage prioritisation mechanisms. This is inconsistent with automatic threat detection theory, which predicts a similar pattern for fearful and angry expressions and, indeed, that angry expressions should elicit stronger prioritisation effects on account of their signalling a direct threat to the perceiver (Ohman et al., Citation2012).
However, these results are consistent with recent findings showing that angry and fearful expressions differ in how they guide attention, with fearful expressions diffusing and angry expressions focusing attention (Davis et al., Citation2011; Taylor & Whalen, Citation2014). This account accommodates angry T1s enhancing T2 detection, as T2s were presented in the same location as T1s. Furthermore, our observation of enhanced RB for angry expressions suggests that this focusing specifically enhances change detection, though more research is needed to establish this.

While the relevance of emotion appears to offer a clear explanation for the observed modulations of T1 AB effects, neither the relevance account nor the threat-value account explains the differential effects observed for angry and fearful expressions in the T2 AB and RB effects. Interestingly, these forms of “bottom-up” prioritisation are thought to be associated with the amygdala, a brain region commonly assumed to function as a “threat detection” module (Ohman et al., Citation2007). However, recent meta-analyses of neuroimaging studies show that fearful, but not angry, expressions reliably elicit amygdala activation (Costafreda, Brammer, David, & Fu, Citation2008). Whalen (Citation1998, Citation2007) proposed that this pattern of results can be explained by considering the quality of information about the threat signalled by each expression: Whereas angry faces signal a localised and direct threat (i.e., the angry person), fearful faces provide an ambiguous indication of potential environmental threats. Thus, amygdala activation and, by extension, the bottom-up prioritisation explored in this study might be particularly strongly elicited by ambiguous stimuli. While consistent with our observations, future research specifically varying the ambiguity and relevance of threatening stimuli is needed to determine the validity of this account.

Limitations

The major limitation of the current study is that we cannot rule out the possibility that the between-group design may have influenced the comparison of the task-dependent boundary conditions. Future research should control for this by utilising more sensitive within-subject designs, or by investigating individual differences related to affective and attentional functioning. The latter is also an interesting avenue for future research in its own right, as it could provide a platform for investigating the relationship between the mode and modality of specific prioritisation effects and affective styles.

Conclusion

It is commonly assumed that all threatening facial expressions are automatically prioritised for processing. The current study found support for this hypothesis for fearful, but not angry, expressions, and only for prioritisation through attention capture and perceptual salience. Our findings further show that processing fearful and angry expressions has different consequences for subsequent processing, suggesting that other aspects of these stimuli, such as their informational content, are used to guide attention. Thus, our findings highlight the importance for future research into emotion–attention interactions of specifying both the type of threat and the type of prioritisation in question when investigating prioritisation effects, and the dangers of treating them as undifferentiated constructs.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1Stein et al. (Citation2009) used scenes as T2 stimuli, and subjects decided whether the picture depicted an indoor or an outdoor scene.

References

  • Arnell, K. M., Shapiro, K. L., & Sorensen, R. E. (1999). Reduced repetition blindness for one's own name. Visual Cognition, 6(6), 609–635. doi:10.1080/135062899394876
  • Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390–412. doi:10.1016/j.jml.2007.12.005
  • Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M. J., & van IJzendoorn, M. H. (2007). Threat-related attentional bias in anxious and nonanxious individuals: A meta-analytic study. Psychological Bulletin, 133(1), 1–24. doi:10.1037/0033-2909.133.1.1
  • Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics. doi:10.2307/2674075
  • Bocanegra, B. R., & Zeelenberg, R. (2009). Dissociating emotion-induced blindness and hypervision. Emotion, 9(6), 865–873. doi:10.1037/a0017749
  • Carretié, L. (2014). Exogenous (automatic) attention to emotional stimuli: A review. Cognitive, Affective & Behavioral Neuroscience. doi:10.3758/s13415-014-0270-2
  • Chun, M. M. (1997). Types and tokens in visual processing: A double dissociation between the attentional blink and repetition blindness. Journal of Experimental Psychology: Human Perception and Performance, 23(3), 738–755. doi:10.1037/0096-1523.23.3.738
  • Costafreda, S. G., Brammer, M. J., David, A. S., & Fu, C. H. Y. (2008). Predictors of amygdala activation during the processing of emotional stimuli: A meta-analysis of 385 PET and fMRI studies. Brain Research Reviews, 58(1), 57–70. doi:10.1016/j.brainresrev.2007.10.012
  • Davis, F. C., Somerville, L. H., Ruberry, E. J., Berry, A. B. L., Shin, L. M., & Whalen, P. J. (2011). A tale of two negatives: Differential memory modulation by threat-related facial expressions. Emotion, 11(3), 647–655. doi:10.1037/a0021625
  • Hassin, R. R., Aviezer, H., & Bentin, S. (2013). Inherently ambiguous: Facial expressions of emotions, in context. Emotion Review, 5(1), 60–65. doi:10.1177/1754073912451331
  • Hochhaus, L., & Marohn, K. M. (1991). Repetition blindness depends on perceptual capture and token individuation failure. Journal of Experimental Psychology: Human Perception and Performance, 17(2), 422–432. doi:10.1037/0096-1523.17.2.422
  • Huang, Y.-M., Baddeley, A., & Young, A. W. (2008). Attentional capture by emotional stimuli is modulated by semantic processing. Journal of Experimental Psychology: Human Perception and Performance, 34(2), 328–339. doi:10.1037/0096-1523.34.2.328
  • Jaeger, T. F. (2008). Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59(4), 434–446. doi:10.1016/j.jml.2007.11.007
  • Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. doi:10.1037/a0028347
  • Kanwisher, N. (1987). Repetition blindness: Type recognition without token individuation. Cognition, 27(2), 117–143. doi:10.1016/0010-0277(87)90016-3
  • Knickerbocker, H., & Altarriba, J. (2013). Differential repetition blindness with emotion and emotion-laden word types. Visual Cognition, 21(5), 599–627. doi:10.1080/13506285.2013.815297
  • Koivisto, M., & Revonsuo, A. (2008). Comparison of event-related potentials in attentional blink and repetition blindness. Brain Research, 1189, 115–126. doi:10.1016/j.brainres.2007.10.082
  • Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H. J., Hawk, S. T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud faces database. Cognition & Emotion, 24(8), 1377–1388. doi:10.1080/02699930903485076
  • LeDoux, J. (1998). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon and Schuster.
  • Maratos, F. A. (2011). Temporal processing of emotional stimuli: The capture and release of attention by angry faces. Emotion, 11(5), 1242–1247. doi:10.1037/a0024279
  • Maratos, F. A., Mogg, K., & Bradley, B. P. (2008). Identification of angry faces in the attentional blink. Cognition & Emotion, 22(7), 1340–1352. doi:10.1080/02699930701774218
  • Mathewson, K. J., Arnell, K. M., & Mansfield, C. A. (2008). Capturing and holding attention: The impact of emotional words in rapid serial visual presentation. Memory & Cognition, 36(1), 182–200. doi:10.3758/MC.36.1.182
  • McHugo, M., Olatunji, B. O., & Zald, D. H. (2013). The emotional attentional blink: What we know so far. Frontiers in Human Neuroscience, 7, 151. doi:10.3389/fnhum.2013.00151
  • Most, S. B., Chun, M. M., Widders, D. M., & Zald, D. H. (2005). Attentional rubbernecking: Cognitive control and personality in emotion-induced blindness. Psychonomic Bulletin & Review, 12(4), 654–661. doi:10.3758/BF03196754
  • Most, S. B., Smith, S. D., Cooter, A. B., Levy, B. N., & Zald, D. H. (2007). The naked truth: Positive, arousing distractors impair rapid target perception. Cognition & Emotion, 21(5), 964–981. doi:10.1080/02699930600959340
  • Mowszowski, L., McDonald, S., Wang, D., & Bornhofen, C. (2012). Preferential processing of threatening facial expressions using the repetition blindness paradigm. Cognition & Emotion, 26(7), 1238–1255. doi:10.1080/02699931.2011.648173
  • Ohman, A. (2002). Automaticity and the Amygdala: Nonconscious responses to emotional faces. Current Directions in Psychological Science, 11(2), 62–66. doi:10.1111/1467-8721.00169
  • Ohman, A. (2005). The role of the amygdala in human fear: Automatic detection of threat. Psychoneuroendocrinology, 30(10), 953–958. doi:10.1016/j.psyneuen.2005.03.019
  • Ohman, A., Carlsson, K., Lundqvist, D., & Ingvar, M. (2007). On the unconscious subcortical origin of human fear. Physiology & Behavior, 92(1–2), 180–185. doi:10.1016/j.physbeh.2007.05.057
  • Ohman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130(3), 466–478. doi:10.1037/0096-3445.130.3.466
  • Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80(3), 381–396. doi:10.1037/0022-3514.80.3.381
  • Ohman, A., Soares, S. C., Juth, P., Lindström, B., & Esteves, F. (2012). Evolutionary derived modulations of attention to two common fear stimuli: Serpents and hostile humans. Journal of Cognitive Psychology, 24(1), 17–32. doi:10.1080/20445911.2011.629603
  • Olivers, C. N. L., & Meeter, M. (2008). A boost and bounce theory of temporal attention. Psychological Review, 115(4), 836–863. doi:10.1037/a0013395
  • Pessoa, L. (2005). To what extent are emotional visual stimuli processed without attention and awareness? Current Opinion in Neurobiology, 15(2), 188–196. doi:10.1016/j.conb.2005.03.002
  • Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9(2), 148–158. doi:10.1038/nrn2317
  • Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18(3), 849–860. doi:10.1037/0096-1523.18.3.849
  • Schwabe, L., Merz, C. J., Walter, B., Vaitl, D., Wolf, O. T., & Stark, R. (2011). Emotional modulation of the attentional blink: The neural structures involved in capturing and holding attention. Neuropsychologia, 49(3), 416–425. doi:10.1016/j.neuropsychologia.2010.12.037
  • Stein, T., Peelen, M. V., Funk, J., & Seidl, K. N. (2010). The fearful-face advantage is modulated by task demands: Evidence from the attentional blink. Emotion, 10(1), 136–140. doi:10.1037/a0017814
  • Stein, T., Zwickel, J., Ritter, J., Kitzmantel, M., & Schneider, W. X. (2009). The effect of fearful faces on the attentional blink is task dependent. Psychonomic Bulletin & Review, 16(1), 104–109. doi:10.3758/PBR.16.1.104
  • Taylor, J. M., & Whalen, P. J. (2014). Fearful, but not angry, expressions diffuse attention to peripheral targets in an attentional blink paradigm. Emotion, 14(3), 462–468. doi:10.1037/a0036034
  • Vossel, S., Geng, J. J., & Fink, G. R. (2014). Dorsal and ventral attention systems: Distinct neural circuits but collaborative roles. The Neuroscientist, 20(2), 150–159. doi:10.1177/1073858413494269
  • Vuilleumier, P. (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9, 585–594. doi:10.1016/j.tics.2005.10.011
  • Whalen, P. J. (1998). Fear, vigilance, and ambiguity: Initial neuroimaging studies of the human amygdala. Current Directions in Psychological Science, 7(6), 177–188. http://doi.org/10.1111/1467-8721.ep10836912
  • Whalen, P. J. (2007). The uncertainty of it all. Trends in Cognitive Sciences, 11(12), 499–500. doi:10.1016/j.tics.2007.08.016
  • Whalen, P. J., Raila, H., Bennett, R., Mattek, A., Brown, A., Taylor, J., … Palmer A. (2013). Neuroscience and facial expressions of emotion: The role of Amygdala-prefrontal interactions. Emotion Review, 5(1), 78–83. doi:10.1177/1754073912457231
  • Yiend, J. (2010). The effects of emotion on attention: A review of attentional processing of emotional information. Cognition & Emotion, 24(1), 3–47. doi:10.1080/02699930903205698