Abstract
According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity, such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that the target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on “match” trials and led participants to adopt a more liberal response bias than 2ID instructions. However, the manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images when making identity judgements in sequential face matching.
Notes
1. We carried out an additional experiment using a sequential matching task that replicated this same pattern of results in match trials, mismatch trials, and response bias. The purpose of this study was to investigate whether the effect of identity attribution varied as a function of delay. This interaction was nonsignificant, and the pattern of results was consistent across delay conditions. Thus, for the sake of brevity, we did not include this study in the paper; for the interested reader, details are provided in the Supplemental Material.