Abstract
The current study uses a reaction time (RT) distribution analysis to examine the self-bias in face categorization. In two experiments we systematically manipulated the decision boundaries between the self, familiar others, and unfamiliar others in face categorization tasks. For the first time in studies of self-categorization, we estimated parameters from ex-Gaussian fits of RT distributions in order to capture the influence of the different boundaries on the latency distribution of categorization responses. The results showed that the distribution of responses to self faces was stable regardless of the face context and the task demands; changes in context only shifted the response distribution in time. In contrast, varying the decision boundaries for familiar and unfamiliar faces changed the shape of the RT distributions in addition to shifting RTs in time. The results indicate that, in contrast to our perception of the familiar and unfamiliar faces of other people, self face perception forms a unique perceptual category unaffected by shifts in context and task demands.
Acknowledgements
This work was supported by grants from the ESRC (ES/J001597/1) and an ERC Advanced Grant (323883) to GWH, and by a Royal Society Newton International Fellowship (UK) and a grant from the National Natural Science Foundation of China (31170973) to JS.
Notes
1. Accuracy was not analysed in the present study because error rates were very low, as in previous studies of self-biases in face categorization.
2. We examined the variance of the individual estimates in two ways for each subject in Experiment 1A. First, we compared the observed and expected quantile estimates for each participant using Q-Q plots (see Figure A1 in the Appendix; Cleveland, 1985), which provide a graphical assessment of the fit between observed and fitted quantiles for individual participants (Heathcote et al., 2002). If the observed and expected values are similar, the quantile estimates should lie approximately on the diagonal. Figure A1 shows that, although the sample sizes per condition were relatively small (<60), the fits were good for each subject. Second, to confirm the results, we used a bootstrapping approach in which each subject's raw scores were resampled to create data sets of different sizes (50, 100, and 200 trials per condition) in order to examine the variance of the estimated ex-Gaussian parameters across the different data sets (see Sui, He, & Humphreys, 2012, for a prior example). Larger data sets typically yield more stable estimates, so if QMPE is robust with smaller samples, the parameter estimates from the raw and simulated data sets should be comparable. Figure A2 in the Appendix (self faces) and Figure A3 (faces of familiar others) show the results for all subjects. The parameter estimates (µ, σ, and τ) were generally similar across the different data sets (Heathcote, Brown, & Cousineau, 2004). In cases where the parameters from the raw data differed from those derived from the large samples (e.g., Participants 8, 10, and 12; Figure A1), we reran the main ANOVAs omitting these participants; this made no substantive difference to the main results.
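The bootstrap robustness check described in Note 2 can be sketched in a few lines. The study itself used QMPE; as an illustrative stand-in, the sketch below fits the ex-Gaussian by maximum likelihood via SciPy's `exponnorm` distribution (parameterized as K, loc, scale, with µ = loc, σ = scale, τ = K·σ), resamples the raw RTs to data sets of 50, 100, and 200 trials per condition, and averages the refitted parameters per size. The simulated RTs and all function names here are hypothetical, not the authors' data or code.

```python
import numpy as np
from scipy.stats import exponnorm

def fit_ex_gaussian(rts):
    """Fit an ex-Gaussian by maximum likelihood.
    SciPy parameterizes it as exponnorm(K, loc, scale),
    where mu = loc, sigma = scale, tau = K * scale."""
    K, loc, scale = exponnorm.fit(rts)
    return {"mu": loc, "sigma": scale, "tau": K * scale}

def bootstrap_fits(rts, sizes=(50, 100, 200), n_boot=100, seed=0):
    """Resample the raw RTs with replacement to data sets of each
    size, refit each, and return mean parameter estimates per size."""
    rng = np.random.default_rng(seed)
    out = {}
    for n in sizes:
        fits = [fit_ex_gaussian(rng.choice(rts, size=n, replace=True))
                for _ in range(n_boot)]
        out[n] = {p: float(np.mean([f[p] for f in fits]))
                  for p in ("mu", "sigma", "tau")}
    return out

# Simulated RTs (ms): Gaussian(mu=500, sigma=50) + Exponential(tau=150),
# with < 60 trials per condition, as in Note 2.
rng = np.random.default_rng(1)
rts = rng.normal(500, 50, 60) + rng.exponential(150, 60)
raw_fit = fit_ex_gaussian(rts)
boot = bootstrap_fits(rts)
print(raw_fit)
print(boot)
```

If QMPE (or here, MLE) is robust at these sample sizes, the parameter estimates from the raw data and the resampled data sets should be comparable, which is the comparison Figures A2 and A3 make for each subject.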