
Moral conformity in online interactions: rational justifications increase influence of peer opinions on moral judgments

Pages 57-68 | Received 26 Aug 2016, Accepted 18 Apr 2017, Published online: 28 Apr 2017

Abstract

Over the last decade, social media has increasingly been used as a platform for political and moral discourse. We investigate whether conformity, specifically concerning moral attitudes, occurs in these virtual environments apart from face-to-face interactions. Participants took an online survey and saw either statistical information about the frequency of certain responses, as one might see on social media (Study 1), or arguments that defend the responses in either a rational or emotional way (Study 2). Our results show that social information shaped moral judgments, even in an impersonal digital setting. Furthermore, rational arguments were more effective at eliciting conformity than emotional arguments. We discuss the implications of these results for theories of moral judgment that prioritize emotional responses.

Introduction

People conform to a blatantly erroneous majority opinion, even on a simple perceptual task (Asch, Citation1956). Although a large body of research in social psychology has elucidated some of the varying conditions under which conforming behavior occurs – such as social setting, type of judgment, number and group membership of the confederates – contention remains about exactly what the conditions are (Bond & Smith, Citation1996).

Changes in how people interact socially – from synchronous, in-person conversations to asynchronous, abstract digital communication – present new environments for conformity. Research predating anonymous online settings suggests that, without direct, face-to-face contact, the pressure to conform will not be as strong (e.g., Allen, Citation1966; Deutsch & Gerard, Citation1955; Levy, Citation1960). Furthermore, early research conducted during the development of online spaces suggests that, without nonverbal cues such as body language or prosody, digital communication will alter the ways in which we exchange information, communicate norms, and exert persuasive influence (Bargh & McKenna, Citation2004). Nonetheless, in certain online contexts, other studies have shown that laws of social influence, such as the foot-in-the-door technique, still hold in purely virtual settings (Eastwick & Gardner, Citation2009), and that merely providing participants with numerical consensus information can change prejudicial beliefs about various racial groups (Stangor, Sechrist, & Jost, Citation2001) and the obese (Puhl, Schwartz, & Brownell, Citation2005). This suggests that, despite initial doubts about the extent of conformity in anonymous online contexts, these new virtual spaces remain susceptible to social influence.

Prior research has also raised questions about whether conformity operates differently within certain domains, such as moral or evaluative judgments. Traditional philosophical views (e.g., Aristotle, Citation1941; Kant, Citation1996) emphasize that moral judgments should ideally be free from social influences, depending only on one’s own judgment. In line with this ideal, more recent psychological experimentation suggests that people are, at least sometimes, less likely to conform when they have a strong moral basis for an attitude (Hornsey, Majkut, Terry, & McKimmie, Citation2003). In contrast, however, other studies have shown that at least some moral opinions can be influenced by social pressure in small group discussions (Aramovich, Lytle, & Skitka, Citation2012; Kundu & Cummins, Citation2012; Lisciandra, Postma-Nilsenová, & Colombo, Citation2013), and that information about the distribution of responses elicits conformity in deontological, but not consequentialist, responses to the trolley problem (Bostyn & Roets, Citation2016). Taken together, these findings led us to ask whether mere knowledge of others’ opinions would produce conformity on moral issues, particularly in online contexts.

Study 1: impersonal statistics influence moral judgments

In Study 1, we examined participants’ sensitivity to anonymous moral judgments regarding ethical dilemmas. We presented participants with two stories, along with statistical information about how other participants had responded. Unlike other research providing distributions of responses (e.g., Bostyn & Roets, Citation2016), this information is similar to what users might see on a social media website such as Twitter, Facebook, or Reddit, where numerical information shows how other users reacted to an opinion (e.g., ‘15 users liked this post’ or ‘35 users favorited this tweet’). While we provided no information about what proportion of participants responded this way to each scenario, this mirrors the experience of an online context, where we are unaware of how many users have seen a post without reacting.

Method

Participants

Participants were recruited through the online labor market Amazon Mechanical Turk (MTurk) and redirected to Qualtrics to complete an online survey. All participants provided written informed consent as part of an exemption approved by the Institutional Review Board of Duke University. Each participant rated one of two scenarios; 302 participants rated Scenario A, while 290 participants rated Scenario B. Participants were restricted to those located in the US with a task approval rating of at least 80%. Although no demographic information was collected on our participants specifically, a typical sample of MTurk users is considerably more demographically diverse than an average American college sample (36% non-White, 55% female; mean age = 32.8 years, SD = 11.5; Buhrmester, Kwang, & Gosling, Citation2011). Numerous replication studies have also demonstrated that data collected on MTurk is reliable and consistent with other methods (Rand, Citation2012). Participants were compensated $.10 for their involvement.

Materials

Participants were randomly assigned to one of two scenarios. Scenario A, one of Haidt’s classic moral scenarios, describes a family that eats their dead pet dog (Haidt, Koller, & Dias, Citation1993). Scenario B involves the passengers of a sinking lifeboat who sacrifice an overweight, injured passenger. (See Table 1 for the full text of the scenarios.) These scenarios were chosen partly because they fall under different moral foundations (Haidt & Graham, Citation2007). Because the foundations have been shown to exhibit dissimilar properties in other studies (e.g., Young & Saxe, Citation2011), we were interested in how the degree of conformity might vary between a scenario involving harm violations and one involving purity violations.

Table 1. Scenarios detailing moral violations in the purity (Scenario A) and harm (Scenario B) domains.

Procedure

Participants read an ethical dilemma and were asked how morally condemnable the agent’s actions were. Ratings were made on an 11-point Likert scale from 0 (completely morally acceptable) to 10 (completely morally condemnable). Participants were randomly assigned to one of three conditions. Two of the conditions contained a prime intended to induce conformity by providing an established opinion about the scenario. The form of that prime mirrored what is seen on many social media websites (e.g., Facebook): it described the number of people who gave a particular rating when viewing a similar scenario. For Scenario A, participants read the following: ‘58 people who previously took this survey rated it as morally condemnable [acceptable]’. Participants read an identical statement for Scenario B, except that they were told 65 people previously took the survey. No deception was used: these numbers of participants had indeed rated the scenarios that way in a previous experiment.

The final condition served as a baseline and contained no prime; participants merely read and rated the moral dilemma. This design was repeated in separate samples for scenarios A and B. While the core of the paradigm remained constant throughout our experiments, the survey from Study 1 Scenario B also contained a follow-up question measuring level of confidence and a catch question about details from the scenario.

Results

We performed a one-way ANOVA on moral ratings by condition for each scenario. In Scenario A, moral ratings differed significantly across the three conditions [F(2, 299) = 3.78, p = .024, ηp² = .025]. Post-hoc Tukey tests indicated that the condemnable group (M = 7.09, SD = 2.98) gave significantly higher (more condemnable) ratings than the acceptable group (M = 5.80, SD = 3.67), p = .019, d = .39 (Figure 1). Comparisons between the baseline group (M = 6.26, SD = 3.47) and the other two groups were not significant. The same results were obtained for Scenario B: moral ratings differed significantly across the three conditions [F(2, 287) = 4.28, p = .015, ηp² = .029]. Post-hoc Tukey tests indicated that the condemnable group (M = 6.08, SD = 2.90) gave significantly higher (more condemnable) ratings than the acceptable group (M = 4.82, SD = 3.08), p = .010, d = .42 (Figure 1). Comparisons between the baseline group (M = 5.43, SD = 2.97) and the other two groups were not significant. For illustrative purposes, all figures show the average difference from baseline for each condition.
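An analysis of this kind can be sketched in a few lines. The snippet below is purely illustrative: the group means and SDs loosely echo those reported for Scenario A, but the sample sizes and random draws are invented, so the resulting statistics will not reproduce the paper's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated moral ratings on the 0-10 scale; means/SDs loosely echo
# Scenario A. Group sizes are illustrative, not the study's cell counts.
acceptable  = rng.normal(5.80, 3.67, 100).clip(0, 10)
baseline    = rng.normal(6.26, 3.47, 100).clip(0, 10)
condemnable = rng.normal(7.09, 2.98, 100).clip(0, 10)

# One-way ANOVA across the three conditions
f, p = stats.f_oneway(acceptable, baseline, condemnable)

# Post-hoc Tukey HSD for the pairwise comparisons (recent SciPy)
tukey = stats.tukey_hsd(acceptable, baseline, condemnable)
print(f"F = {f:.2f}, p = {p:.3f}")
print(tukey.pvalue)  # 3x3 matrix of pairwise p-values
```

With real data, the three arrays would simply hold the observed ratings for the acceptable, baseline, and condemnable conditions.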

Figure 1. Statistical information about other participants’ moral judgments significantly influences individual responses.

Note: Error bars represent standard errors. *p < .05.

Discussion

We found that manipulations containing sparse statistical data about other participants’ attitudes were effective in inducing conformity in moral judgments. Though early research on conformity suggested that face-to-face interaction was critical, and both philosophical and psychological writing suggests that moral judgment should be free from social influence, these results show that providing statistical information about how others responded is sufficient to induce conformity in moral judgments. Even subtle social information in anonymous contexts seems to affect moral judgments.

Having observed conformity to manipulations containing only statistical information, we were next interested in how different kinds of arguments, specifically emotional and rational arguments, might be more or less effective at influencing moral judgments.

Study 2: rational arguments elicit more conformity than emotional arguments

Having observed conformity to primes using mere statistical information, we were interested in whether the effect could be strengthened by the addition of different types of arguments: arguments containing emotionally charged language that appeals to participants’ feelings, or arguments that reason from consequences or moral principles. The distinction between emotional and rational arguments reflects some of the core predictions put forth by prominent psychological models of moral judgment. In the Social Intuitionist Model (SIM), for example, ‘moral intuitions (including moral emotions) come first and directly cause moral judgments’ (Haidt, Citation2001, p. 814), while reasoning is purely a post hoc defense of those emotional intuitions. The SIM predicts that moral conformity would manifest only through altering others’ emotional intuitions; thus, to change what people think about a moral issue, one must first change how they feel.

This prediction is supported by a host of studies that measure changes in moral opinions after manipulating emotions and reasoning (for a review, see Avramova & Inbar, Citation2013). For example, inducing positive emotions through funny videos (Valdesolo & DeSteno, Citation2006), encouraging emotion regulation (Feinberg, Willer, Antonenko, & John, Citation2012), and prompting longer reflection (Paxton & Greene, Citation2010) all generated less harsh moral judgments. Furthermore, moral outrage from one scenario may spill over into harsher judgments of subsequent scenarios (Goldberg, Lerner, & Tetlock, Citation1999), and emotion drives higher ascription of intentionality in cases involving negative consequences (Ngo et al., Citation2015). Recent work utilizing virtual reality also demonstrates a discrepancy between hypothetical moral judgments and moral decisions taken in virtual environments, and this discrepancy seems modulated by emotional responses (Francis et al., Citation2016; Patil, Cogoni, Zangrando, Chittaro, & Silani, Citation2014). Related work further suggests that emotions are instrumental in driving moral behavior (for a review, see Teper, Zhong, & Inzlicht, Citation2015). This literature therefore suggests that emotional manipulations would be particularly effective in swaying moral attitudes.

In accordance with these findings, we hypothesized that arguments appealing to participants’ emotions would affect their judgments more than arguments citing abstract principles, rights, or reasons. To test this hypothesis, we gave participants emotional or rational justifications for why the dilemma was either morally acceptable or morally condemnable according to previous participants.

Method

Participants

Again, participants were recruited online from Amazon Mechanical Turk and redirected to a survey on Qualtrics. Scenario A was rated by 506 participants, and 496 participants rated Scenario B. All participant restrictions and compensation rates were identical to Study 1. To ensure that participants interpreted the stimuli as intended, we recruited 160 additional subjects via Amazon Mechanical Turk, two of whom were dropped for failing an attention check.

Procedure

Once more, participants were presented with a vignette describing a moral violation and asked how morally wrong they believed the agent’s actions were on a scale from 0 (completely morally acceptable) to 10 (completely morally condemnable). However, in this experiment, participants were randomly assigned either to a baseline or one of four experimental conditions. The four experimental conditions arose from a 2 × 2 between-subjects factorial design with statistical norm (condemnable vs. acceptable) as one IV, similar to Study 1, and argument type (emotional vs. rational) as the other. The condemnable emotional argument in Scenario B, for instance, stated: ‘75 people who previously took this survey rated it as morally condemnable and said something similar to “Those barbaric passengers committed a horrible murder!”’ Analogously, the condemnable rational argument in Scenario B was:

75 people who previously took this survey rated it as morally condemnable and said something similar to ‘The passengers do not have the right to judge who gets thrown off. Whether someone is large or small, injured or uninjured, it is never okay to take a life.’ (See Table 2 for the full text of the Study 2 manipulations.)

The baseline condition contained no manipulations. Again, this paradigm was repeated for scenarios A and B. The content used for the arguments represents a combination of individual replies to a previous survey’s free response question prompting participants to either explain the rationale behind their rating or describe their emotional response to the scenario. To ensure that these naturalistic responses were interpreted as either rational or emotional by our subjects, we presented participants in our post hoc test with one random argument from Scenario A and another from Scenario B in a within-subjects design. Participants rated these arguments on a scale from 1 (‘Not at all rational [emotional]’) to 7 (‘Extremely rational [emotional]’).

Table 2. Study 2 manipulations representing actual participant responses from a prior study.

Because the acceptable and condemnable conditions moved participants’ responses in opposite directions, we converted the raw moral ratings into a conformity index. This index allows us to compare the magnitude of conformity regardless of whether participants were conforming to condemnable or to acceptable information.

To construct the conformity index, we calculated the difference in moral ratings from the baseline and sign-normalized it by condition. Positive scores thus represented agreement with the provided statistical norm (conformity), while negative scores represented disagreement with it (non-conformity or anti-conformity). First, we subtracted the average of the baseline condition from each moral rating and took the absolute value of that number (see Figure 2 for the raw differences from the baseline). Next, based on condition, we assessed whether the difference from the baseline represented conformity or non-conformity. On the moral rating scale, higher numbers corresponded to more condemnable ratings. Therefore, if a rating in the condemnable condition was greater than the baseline, it remained positive to represent conformity; if it was less than the baseline, it was made negative to represent non-conformity. (No rating in either scenario, in any condition, fell exactly at the baseline.) The opposite was done for the acceptable condition, where ratings below the baseline represented conformity (and thus stayed positive), while ratings above the baseline represented non-conformity (and thus were made negative).
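The index construction described above amounts to baseline-mean subtraction followed by sign normalization. A minimal sketch follows; the ratings are invented for illustration, and only the subtraction and sign rule come from the text.

```python
import numpy as np

# Hypothetical moral ratings on the 0-10 scale; values are illustrative,
# not the study's actual data.
baseline = np.array([5, 6, 7, 5, 6], dtype=float)
condemnable = np.array([8, 7, 6, 5, 9], dtype=float)
acceptable = np.array([4, 5, 3, 7, 6], dtype=float)

baseline_mean = baseline.mean()  # 5.8 for these invented values

def conformity_index(ratings, norm):
    """Absolute deviation from the baseline mean, signed so that movement
    toward the provided norm is positive (conformity) and movement away
    from it is negative (non-conformity). Ties with the baseline mean do
    not occur in this illustration (nor, per the text, in the data)."""
    deviation = np.abs(ratings - baseline_mean)
    if norm == "condemnable":  # higher ratings agree with the norm
        sign = np.where(ratings > baseline_mean, 1.0, -1.0)
    else:                      # "acceptable": lower ratings agree
        sign = np.where(ratings < baseline_mean, 1.0, -1.0)
    return deviation * sign

ci_condemn = conformity_index(condemnable, "condemnable")
ci_accept = conformity_index(acceptable, "acceptable")
```

For the invented condemnable ratings above, the rating of 5 falls below the baseline mean and so receives a negative index, marking non-conformity; the others stay positive.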

Figure 2. Rational arguments have a stronger effect on participants’ moral judgments than emotional arguments.

Note: Error bars represent standard errors.

Results

Our post hoc test of argument type revealed that, on the whole, participants rated the rational arguments as more rational (M = 4.71, SD = 1.86) than the emotional arguments (M = 4.20, SD = 2.07), t(314) = 2.28, p = .02, d = .26, on a 7-point scale. Similarly, participants rated the emotional arguments as more emotional (M = 5.67, SD = 1.32) than the rational arguments (M = 4.13, SD = 1.88), t(314) = 8.37, p < .0001, d = .94. This suggests that the participants in our main experiment interpreted our stimuli as intended.

To test the roles of argument type and statistical norm, we conducted a 2 (argument type: emotional vs. rational) × 2 (statistical norm: condemnable vs. acceptable) between-subjects ANOVA. Starting with the raw scores of Scenario A (see Figure 2), we found a main effect of statistical norm [F(1, 401) = 15.89, p < .001, ηp² = .038], replicating the results of Study 1. There was no main effect, however, of argument type [F(1, 401) = 1.18, p = .28, ηp² = .003], though the interaction between argument type and norm was significant [F(1, 401) = 5.94, p = .02, ηp² = .015].

To explore directly the extent to which each condition elicited conformity, we conducted a 2 × 2 ANOVA using the conformity index. In Scenario A, there was a main effect of argument type [F(1, 401) = 5.94, p = .015, ηp² = .015], such that the conformity index was significantly greater for rational arguments (M = 1.09, SD = 3.42) than for emotional arguments (M = .27, SD = 3.49). There was also a significant main effect of statistical norm [F(1, 401) = 5.48, p = .02, ηp² = .013], such that acceptable judgments elicited more conformity (M = 1.07, SD = 3.46) than condemnable judgments (M = .28, SD = 3.45). There was no significant interaction, however, between statistical norm and argument type [F(1, 401) = 1.18, p = .28, ηp² = .003] for the conformity index.

A similar pattern of results was obtained for Scenario B. Starting with the raw scores, we found a main effect of statistical norm [F(1, 394) = 10.53, p = .001, ηp² = .026], again replicating the results of Study 1. There was no main effect, however, of argument type [F(1, 394) = .92, p = .337, ηp² = .002], though the interaction between argument type and norm was significant [F(1, 394) = 7.18, p = .008, ηp² = .018].

To explore directly the extent to which each condition elicited conformity, we again conducted a 2 × 2 ANOVA using the conformity index. There was a main effect of argument type [F(1, 394) = 7.18, p = .008, ηp² = .018], such that the conformity index was significantly greater for rational arguments (M = .86, SD = 2.97) than for emotional arguments (M = .08, SD = 2.81). Here, acceptable judgments (M = .65, SD = 2.92) were no more prone to conformity than condemnable ones (M = .29, SD = 2.91) [F(1, 394) = 1.60, p = .21, ηp² = .004]. Again, there was no significant interaction between statistical norm and argument type [F(1, 394) = .92, p = .34, ηp² = .002].
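For readers who want the mechanics, a balanced 2 × 2 between-subjects ANOVA can be computed directly from sums of squares. The sketch below uses simulated conformity-index scores whose cell means loosely echo the rational-versus-emotional pattern; the per-cell n, SDs, and random draws are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100  # per-cell sample size; illustrative, not the study's counts

# Simulated conformity-index scores for the four cells
# (argument type x statistical norm).
cells = {
    ("rational", "condemnable"):  rng.normal(0.7, 3.4, n),
    ("rational", "acceptable"):   rng.normal(1.5, 3.4, n),
    ("emotional", "condemnable"): rng.normal(-0.1, 3.5, n),
    ("emotional", "acceptable"):  rng.normal(0.7, 3.5, n),
}
data = np.stack([cells[a, b]
                 for a in ("rational", "emotional")
                 for b in ("condemnable", "acceptable")]).reshape(2, 2, n)

grand = data.mean()
a_means = data.mean(axis=(1, 2))   # argument-type marginal means
b_means = data.mean(axis=(0, 2))   # statistical-norm marginal means
cell_means = data.mean(axis=2)

# Balanced-design sums of squares
ss_a = 2 * n * ((a_means - grand) ** 2).sum()    # argument type
ss_b = 2 * n * ((b_means - grand) ** 2).sum()    # statistical norm
ss_cells = n * ((cell_means - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b                   # interaction
ss_error = ((data - cell_means[..., None]) ** 2).sum()

df_error = 4 * n - 4
f_a = ss_a / (ss_error / df_error)
p_a = stats.f.sf(f_a, 1, df_error)   # main effect of argument type
eta_p_a = ss_a / (ss_a + ss_error)   # partial eta squared
```

Each main effect and the interaction has 1 degree of freedom here, so every F is the effect's sum of squares divided by the mean squared error, as in the F(1, 394) values reported above.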

Discussion

When presented with either rational or emotional justifications for moral judgments, participants conformed more to the rational justifications. These results are inconsistent with our second hypothesis and with predictions made more broadly by the SIM (Haidt, Citation2001), because our participants responded more to appeals citing reasons than to appeals citing emotions. This is unexpected given the body of literature demonstrating that manipulations of emotion are powerful tools in shaping judgment (Valdesolo & DeSteno, Citation2006; Feinberg et al., Citation2012; Paxton & Greene, Citation2010). The SIM may nonetheless be consistent with these findings: although it holds that moral judgments can only be affected by changing moral intuitions, one person’s post hoc reasoning may still impact the judgments of others via the model’s ‘reasoned persuasion’ link. That link, however, remains largely unspecified; the model makes no claims about how such persuasion works or what kinds of persuasion should be most effective. We discuss potential explanations for our findings in the following section.

General discussion

In this paper we have shown that participants readily conformed to subtle statistical manipulations of their moral judgments. Furthermore, we have provided some evidence that arguments appealing directly to participants’ emotions did not induce conformity as strongly as rational appeals.

In the literature on conformity, some studies have drawn a distinction between normative social motivations to conform, which are characterized by a desire to avoid social isolation, and informational motivations, which are based on a need to be correct (Deutsch & Gerard, Citation1955). Several features of our experiments suggest that the conformity observed in this context may be due to informational rather than normative factors. First, the context of our experiments is much less personal than in other studies, which involve face-to-face social interaction. Given the absence of social interaction and of any possibility of social feedback, the likelihood that participants were responding to direct social pressure seems low. Further, a previous study has shown that participants lean more heavily on their peers as a source of information when the answer to a question is ambiguous and open to interpretation (Stangor et al., Citation2001). The nature of moral judgment can be quite ambiguous, and the stimuli in this experiment were designed to evoke competing intuitions. Our participants therefore seem to have interpreted the number of supporters as evidence for the correct judgment about a very difficult moral question.

Additionally, contrary to the SIM and other literature on emotional manipulation, our emotional primes were not as successful in inducing conformity as their rational counterparts. These results accord well, however, with recent critiques of the SIM, such as those questioning the link between disgust and moral judgment (e.g., Landy & Goodwin, Citation2015; Johnson et al., Citation2016). Our results also fit into a burgeoning literature exploring the role of reasoning in moral judgment. Moral reasoning, this research suggests, can set the boundaries of what we consider moral (Royzman, Landy, & Goodwin, Citation2014), aid in discounting intuitions that lack justification, and correct for bias (see Paxton & Greene, Citation2010, for a review). Furthermore, controlling for demographic factors, the willingness to engage in rational thinking predicts wrongness judgments of purity violations like Scenario A of our study (Pennycook, Cheyne, Barr, Koehler, & Fugelsang, Citation2014).

Supporters of the SIM may argue that these primes failed to make participants feel any emotion, or that participants reacted against what they saw as excessive expressions of emotion. Even if that were the case, the arguments used were real responses given by participants and represent ecologically valid instances of emotional persuasion in many online settings, where emotion is expressed through written words rather than the ‘emotional’ stimuli explored in other studies (e.g., Valdesolo & DeSteno, Citation2006). Given the limitations of expressing emotion through online media, our data suggest that the more effective tactic for moral persuasion, whether on the smaller scale between individuals or the larger scale of public opinion, may be rational appeals to abstract principles rather than expressions of emotion. It is worth noting, however, that our stimuli hardly capture the full breadth of emotional and rational arguments available; future work might explore whether this pattern holds more broadly or only for the present stimuli.

Today, in contrast with Asch’s time, more of our social interactions and, consequently, discussions on matters of morality and politics are conducted across digital screens rather than face-to-face. Though it is reasonable to predict that the influence we have on each other’s opinions would be greatly diminished in this detached world, it appears that the power of social influence is retained. The exact consequences of an increasingly interconnected virtual web of people, ideas, and opinions remain to be seen. Future research may elucidate whether the robustness of conformity online will lead to good or bad consequences, whether it be through the facilitation of advances in knowledge as with ‘The Wisdom of Crowds’ effect (Golub & Jackson, Citation2010) or an amplification of erroneous noise through a ‘Groupthink’ phenomenon (Esser, Citation1998).

Disclosure statement

No potential conflict of interest was reported by the authors.

Acknowledgments

We thank Phil Costanzo for his helpful feedback.

References

  • Allen, V. L. (1966). Situational factors in conformity. Advances in Experimental Social Psychology, 2, 133–175.
  • Aramovich, N. P., Lytle, B. L., & Skitka, L. J. (2012). Opposing torture: Moral conviction and resistance to majority influence. Social Influence, 7, 21–34. doi:10.1080/15534510.2011.640199
  • Aristotle. (1941). Ethica Nicomachea. In R. McKeon (Ed.), The basic works of Aristotle (pp. 935–1126). New York: Random House.
  • Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1–70. doi:10.1037/h0093718
  • Avramova, Y. R., & Inbar, Y. (2013). Emotion and moral judgment. Wiley Interdisciplinary Reviews: Cognitive Science, 4, 169–178. doi:10.1002/wcs.1216
  • Bargh, J. A., & McKenna, K. Y. (2004). The internet and social life. Annual Review of Psychology, 55, 573–590. doi:10.1146/annurev.psych.55.090902.141922
  • Bond, R., & Smith, P. B. (1996). Culture and conformity: A meta-analysis of studies using Asch’s (1952b, 1956) line judgment task. Psychological Bulletin, 119, 111. doi:10.1037/0033-2909.119.1.111
  • Bostyn, D. H., & Roets, A. (2016). An asymmetric moral conformity effect: Subjects conform to deontological but not consequentialist majorities. Social Psychological and Personality Science. doi:10.1177/1948550616671999
  • Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5. doi:10.1177/1745691610393980
  • Deutsch, M., & Gerard, H. (1955). A study of normative and informational influences upon individual judgment. The Journal of Abnormal and Social Psychology, 51, 629–636. doi:10.1037/h0046408
  • Eastwick, P. W., & Gardner, W. L. (2009). Is it a game? Evidence for social influence in the virtual world. Social Influence, 4, 18–32. doi:10.1080/15534510802254087
  • Esser, J. K. (1998). Alive and well after 25 years: A review of groupthink research. Organizational Behavior and Human Decision Processes, 73, 116–141. doi:10.1006/obhd.1998.2758
  • Feinberg, M., Willer, R., Antonenko, O., & John, O. P. (2012). Liberating reason from the passions: Overriding intuitionist moral judgments through emotion reappraisal. Psychological Science, 23, 788–795. doi:10.1177/0956797611434747
  • Francis, K. B., Howard, C., Howard, I. S., Gummerum, M., Ganis, G., Anderson, G., & Terbeck, S. (2016). Virtual morality: Transitioning from moral judgment to moral action? PLoS One, 11, e0164374. doi:10.1371/journal.pone.0164374
  • Goldberg, J. H., Lerner, J. S., & Tetlock, P. E. (1999). Rage and reason: The psychology of the intuitive prosecutor. European Journal of Social Psychology, 29, 781–795.
  • Golub, B., & Jackson, M. O. (2010). Naive learning in social networks and the wisdom of crowds. American Economic Journal: Microeconomics, 2, 112–149.
  • Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814. doi:10.1037/0033-295X.108.4.814
  • Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20, 98–116. doi:10.1007/s11211-007-0034-z
  • Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613. doi:10.1037/0022-3514.65.4.613
  • Hornsey, M. K., Majkut, L., Terry, D. J., & McKimmie, B. M. (2003). On being loud and proud: Non-conformity and counter-conformity to group norms. British Journal of Social Psychology, 42, 319–335. doi:10.1348/014466603322438189
  • Johnson, D. J., Wortman, J., Cheung, F., Hein, M., Lucas, R. E., Donnellan, M. B., … Narr, R. K. (2016). The effects of disgust on moral judgments: Testing moderators. Social Psychological and Personality Science, 7, 640–647. doi:10.1177/1948550616654211
  • Kant, I. (1996). Kant: The metaphysics of morals (M. J. Gregor, Ed.). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511809644
  • Kundu, P., & Cummins, D. D. (2012). Morality and conformity: The Asch paradigm applied to moral decisions. Social Influence, 8, 268–279. doi:10.1080/15534510.2012.727767
  • Landy, J. F., & Goodwin, G. P. (2015). Does incidental disgust amplify moral judgment? A meta-analytic review of experimental evidence. Perspectives on Psychological Science, 10, 518–536. doi:10.1177/1745691615583128
  • Levy, L. (1960). Studies in conformity. The Journal of Psychology, 50, 39–41. doi:10.1080/00223980.1960.9916420
  • Lisciandra, C., Postma-Nilsenová, M., & Colombo, M. (2013). Conformorality. A study on group conditioning of normative judgment. Review of Philosophy and Psychology, 4, 751–764.
  • Ngo, L., Kelly, M., Coutlee, C. G., Carter, R. M., Sinnott-Armstrong, W., & Huettel, S. A. (2015). Two distinct moral mechanisms for ascribing and denying intentionality. Scientific Reports, 5.
  • Patil, I., Cogoni, C., Zangrando, N., Chittaro, L., & Silani, G. (2014). Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas. Social Neuroscience, 9, 94–107. doi:10.1080/17470919.2013.870091
  • Paxton, J. M., & Greene, J. D. (2010). Moral reasoning: Hints and allegations. Topics in Cognitive Science, 2, 511–527.
  • Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). The role of analytic thinking in moral judgements and values. Thinking & Reasoning, 20, 188–214.
  • Puhl, R. M., Schwartz, M. B., & Brownell, K. D. (2005). Impact of perceived consensus on stereotypes about obese people: A new approach for reducing bias. Health Psychology, 24, 517. doi:10.1037/0278-6133.24.5.517
  • Rand, D. G. (2012). The promise of Mechanical Turk: How online labor markets can help theorists run behavioral experiments. Journal of Theoretical Biology, 299, 172–179. doi:10.1016/j.jtbi.2011.03.004
  • Royzman, E. B., Landy, J. F., & Goodwin, G. P. (2014). Are good reasoners more incest-friendly? Trait cognitive reflection predicts selective moralization in a sample of American adults. Judgment and Decision Making, 9, 175.
  • Stangor, C., Sechrist, G. B., & Jost, J. T. (2001). Changing racial beliefs by providing consensus information. Personality and Social Psychology Bulletin, 27, 486–496. doi:10.1177/0146167201274009
  • Teper, R., Zhong, C., & Inzlicht, M. (2015). How emotions shape moral behavior: Some answers (and questions) for the field of moral psychology. Social and Personality Psychology Compass, 9(1), 1–14. doi:10.1111/spc3.12154
  • Valdesolo, P., & DeSteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17, 476–477. doi:10.1111/j.1467-9280.2006.01731.x
  • Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across moral domains. Cognition, 120, 202–214. doi:10.1016/j.cognition.2011.04.005
