ABSTRACT
Religious people tend to believe that atheists are immoral. Although some work suggests that atheists themselves agree, such findings could instead reflect symmetric ingroup bias in the moral domain, whereby atheists likewise view religious targets as untrustworthy and immoral. We examined how American religious and atheist participants rated the morality of atheist and religious targets, and we assessed a potential intervention: learning that a target adheres to a moral code. Across three studies, both religious and nonreligious participants demonstrated clear ingroup favoritism, rating ingroup targets as more moral than outgroup targets. However, this ingroup bias was reduced when participants learned that the target adhered to a warm and coherent moral system rooted in philosophy and concern for others. These effects extended beyond evaluations to downstream social consequences such as distancing. These findings challenge the claim that atheists view themselves as immoral and point toward ways of reducing religious ingroup bias.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data described in this article are openly available in the MBLWHOI Library at https://doi.org/10.17605/OSF.IO/847YP and https://doi.org/10.17605/OSF.IO/JXYBA.
Open Scholarship
This article has earned the Center for Open Science badges for Open Data and Open Materials through Open Practices Disclosure. The data and materials are openly accessible at https://doi.org/10.17605/OSF.IO/847YP and https://doi.org/10.17605/OSF.IO/JXYBA.
Supplementary material
Supplemental data for this article can be accessed on the publisher’s website.
Notes
1 Although the degree to which this is true may depend in part on the target, and partially reflect reporting bias (Shariff, Citation2015; Shariff et al., Citation2014).
2 For sufficient power, we combined participants identifying as atheist and agnostic into a single group.
3 We collected two additional studies to test hypotheses auxiliary to the main work. These studies are presented in supplemental materials.
4 Data were collected using Amazon.com’s Mechanical Turk. The data were largely collected before the spike in data quality issues in 2018 (the first two studies were run in September and October 2017, and the final study in May 2019). However, we took several steps to ensure data quality. In addition to the attention checks described in Study 1, we required that all participants had completed a minimum of 50 previous HITs with an approval rating of 95% or higher before viewing the study and had their server location set to the United States.
5 We coded religious participants viewing a religion-unspecified target as “ingroup” because religion remains the predominant belief system in North America (Pew Research Center, Citation2014). Although not ideal, this coding provides a conservative test of our hypothesis. In subsequent studies we identified targets as clearly atheist or religious.
6 We also conducted an exploratory principal components analysis with oblimin rotation, extracting all components with eigenvalues greater than one. This analysis suggested two factors, with all three sociability items loading on one factor and all other items loading on the other. However, these factors were themselves correlated (r = .45), the alpha for the overall scale was high, and analyses on each individual subscale showed a similar pattern (see supplement), so we present the combined analyses.
7 We re-ran all analyses in every study including religious worldview as a covariate, to test whether effects held above and beyond individual differences in religious worldview. Findings remained very similar, so we did not include it in the main analyses. However, we retained it as a manipulation check on the religious vs. atheist categorization variable.
8 Levene’s test was significant, so we adjusted the degrees of freedom.
9 We re-ran all analyses in every study including political orientation as a covariate, to test whether effects held above and beyond individual differences in political orientation. Findings remained very similar, so we did not include it in the main analyses.
10 We also considered an alternative classification of participants who selected “other”: Those who specified a particular religious group in an open-ended response (e.g., Episcopalian) remained classified as religious, whereas we removed all participants who failed to specify a modern organized religion (n = 21), such as those who wrote “spiritual but not religious,” “pagan,” etc. Results using this classification were similar, so in the reported analyses we retained these participants in the “religious” category to improve power. Of note, we use the term “religious” to refer to people who identify with a religion, rather than to people who are highly invested in their religion.
11 After analyzing the data from this sample, we discovered that 23 observations came from the same IP address. We reanalyzed the data with these observations removed, and all effects remained the same. As such, we retained the observations in the reported analyses to improve power.
12 Note that this phrasing does not require that participants recognize or understand these arguments or have knowledge of these philosophers, but instead suggests that Tom himself understands these arguments.
13 Levene’s test was significant, so we adjusted the degrees of freedom.
14 We conducted analyses on helping likelihood controlling for overall trait evaluations as well. The three-way interaction remained significant and the patterns were similar. See supplemental materials for full results.
15 We conducted analyses on helping deservingness controlling for overall trait evaluations as well. The three-way interaction remained significant and patterns were similar. See supplemental materials for full results.