ABSTRACT
Although expressions of facial emotion hold a special status in attention relative to other complex objects, whether they summon our attention automatically and against our intentions remains a debated issue. Studies supporting the strong view that attentional capture by facial expressions of emotion is entirely automatic reported that a unique (singleton) emotional face distractor interfered with search for a target that was also unique, on a different dimension. Participants could therefore search for the odd-one-out face to locate the target, and attentional capture by irrelevant emotional faces might thus have been contingent on the adoption of an implicit set for singletons. Here, confirming this hypothesis, an irrelevant emotional face captured attention when the target was the unique face with a discrepant orientation, both when this orientation was unpredictable and when it remained constant. By contrast, no such capture was observed when the target could not be found by monitoring displays for a discrepant face and participants had to search for a face with a specific orientation. Our findings show that attentional capture by emotional faces is not purely stimulus driven, and thereby resolve the apparent inconsistency that prevails in the literature on the automaticity of attentional capture by emotional faces.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. Similar observations apply to Huang et al.’s (2011) study. Participants were instructed to search an array of faces for a target face indicated by a dot and to respond to the dot’s position. One face displayed an emotion while the other faces were neutral. The authors reported that performance was improved when the angry face was the target and impaired when it was a distractor. Thus, as in Hodsoll et al.’s (2011) study, participants could use singleton-detection mode to locate the target. In addition, the target sometimes coincided with the emotional face.
2. As there are only two possible face genders (male and female), it was not possible to assign a different gender to each face in a four-item display in order to induce observers to search for a specific known gender and to prevent them from searching for the face with the unique gender. We therefore used orientation as the target-defining feature, unlike Hodsoll et al. (2011), who used gender.
3. The finding that unknown-orientation singleton search was considerably slower than known-orientation search is inconsistent with Bacon and Egeth’s (1994) suggestion that searching for a singleton may be less cognitively demanding than searching for a specific feature (see Bravo & Nakayama, 1992; Lamy, Carmel, Egeth, & Leber, 2006; Lamy, Bar-Anan, Egeth, & Carmel, 2006, for similar findings and for a discussion of the notion of “default singleton detection mode”).