
Downstream consequences of disclosing defaults: influences on perceptions of choice architects and subsequent behavior

Pages 25-48 | Received 05 Jun 2019, Accepted 17 Aug 2021, Published online: 05 Dec 2021

ABSTRACT

Transparency is a key factor in determining the permissibility of behavior change interventions. Nudges are at times considered manipulative for failing to meet this condition. Ethicists suggest that making nudges transparent by disclosing them to decision makers is a way to mitigate the manipulation objection, but questions remain as to what downstream consequences disclosing a nudge to decision makers may have. In this registered report, we investigated two such consequences: (1) whether disclosure affects perceptions of the choice architect and (2) whether disclosure influences subsequent behavior. To these ends, we present data from three pilot studies and two main experiments (total N = 2177). In both experiments, we used defaults to nudge participants towards prosocial behaviors with real consequences. Experiment 1 employed a mixed design examining changes in perceptions of the choice architect for participants presented with a nudge disclosure before or after choosing. Experiment 2 extended this design by investigating the effects of disclosure on the default effect, on perceptions of the choice architect, and on a subsequent prosocial choice task. Results showed that (1) when presented before choosing, the nudge disclosure did not influence perceptions of the choice architect; when presented after choosing, however, perceptions deteriorated. (2) The disclosure, regardless of when it was presented, had no effect on participants’ behavior in a subsequent non-nudged choice. Additionally, the disclosure did not affect the nudge’s influence on the initial choice. We conclude that a lack of transparency can hurt choice architects’ reputations and discuss under what circumstances this may materialize behaviorally. Materials, data, and code are available at osf.io/463af/.

A nudge is, as defined by Thaler and Sunstein (2008, p. 6), “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.” Interventions of this sort have been used in a wide array of applications, such as for promoting green electricity use (Pichert & Katsikopoulos, 2008), increasing influenza vaccination (Chapman et al., 2010) and increasing choices of climate-friendlier food (Gravert & Kurz, 2019). However, nudges have simultaneously been criticized for being covert and manipulative (Bovens, 2009; Burgess, 2012). This criticism has led to calls for increased transparency in nudge interventions (e.g., Hansen & Jespersen, 2013; House of Lords, 2011; OECD, 2019; Sunstein, 2014). Although prior studies have shown that disclosing a nudge does not decrease its effect on the targeted choice (e.g., Loewenstein et al., 2015), little is known about other possible effects of disclosure. We suggest that providing a nudge disclosure may reflect positively on the choice architect and that this may have positive effects on decision makers’ subsequent behavior.

The present paper investigates the consequences of disclosing a nudge in two pre-registered and pre-reviewed experiments. First, we examine how increased transparency affects perceptions of the choice architect. Second, we investigate whether a nudge disclosure can have implications for subsequent interactions between the nudgee (the decision maker) and the nudger (the choice architect).

Nudging, manipulation, and transparency

While some nudges are readily visible and easy enough to understand (e.g., a sign warning of a broken bridge; Sunstein, 2018), others will commonly be hard to spot. This has brought on one of the most prominent criticisms of nudging: that nudging is manipulative. More specifically, nudges may be considered manipulative because they interfere with a decision maker’s choice process without the decision maker being able to recognize or counteract the intervention (Wilkinson, 2013). The default nudge can be taken as an example. A default nudge utilizes the fact that when something is pre-selected and made the default course of action (unless actively changed), this often influences choices in line with the default. Defaults can be employed in ways that are hard even for the attentive person to notice, for instance, when a government makes enrollment in a cadaveric organ donation program the default for its citizens from birth (Johnson & Goldstein, 2003). A recent meta-analysis reports the average effect size of default effects to be d = .68 (Jachimowicz et al., 2019). Despite this potent effect, many people, even when aware that a default can be set, display “default neglect” in that they fail to understand how defaults can be used as a means of influence (Zlatev et al., 2017; see also Jung et al., 2018). Fewer still seem to believe defaults affect their own choices (Dhingra et al., 2012). Compared to more traditional policy instruments – such as incentives, taxes, or bans – the workings of a nudge will often be less transparent to the people it affects (Sunstein, 2014).

The most commonly suggested countermeasure to the manipulation objection is that choice architects should increase the transparency surrounding the nudge (e.g., OECD, 2019), thereby giving people a better chance of becoming aware of it. The strongest and most direct way to offer transparency is to explicitly disclose to the decision maker the presence and expected effect of the intervention concurrently with the choice task being presented. This can be done, for example, by accompanying the nudge with text conveying this information.

Concerns have been raised that there is a tension between a high level of transparency and effectiveness in nudge interventions (Bovens, 2009; Burgess, 2012). That is, the nudge effect may diminish considerably when people are informed of it. Empirical research indicates that such concerns may be exaggerated, however (Bruns et al., 2018; Loewenstein et al., 2015; Michaelsen et al., 2020, 2021; Paunov et al., 2019a; Steffel et al., 2016). For instance, Loewenstein et al. (2015) conducted an experiment in which they used a default to nudge participants in a hypothetical choice concerning end-of-life care decisions. The default exerted influence on people’s choices even when the intervention was disclosed to them before choosing, and default adherence was on par with that of an undisclosed default. In a recent study by Bruns et al. (2018), the default effect proved robust across a range of disclosures containing increasingly detailed information about the intervention.

Effects from disclosing nudges on perceptions of choice architects

Questions remain, however, as to what secondary consequences nudge disclosures may have. For instance, disclosing a nudge could potentially influence the nudgee’s perception of the person administering the nudge, i.e., the choice architect (perceptions of the choice architect; henceforth PoCA). Such perceptions may affect future interactions between the nudgee and the nudger (Krijnen et al., 2017).

Some prior experimental research suggests that such effects on PoCA are plausible. For instance, Paunov et al. (2019a) investigated the effects of disclosing a default nudge on the perceived trustworthiness of a choice architect. They presented participants with a hypothetical choice among elective university courses, where the university administration presented some of the courses as the default selection. Results indicated no differences in trustworthiness ratings among participants to whom the default was disclosed, undisclosed participants, and undisclosed participants who imagined being aware of how default settings may affect choice. A second experiment, using the same choice context and experimental conditions, investigated feelings of being deceived by the choice architect. Here, results showed feelings of being deceived to be significantly lower among disclosed participants. Interestingly, both variables were found to be significantly related to the influence of the default nudge on participants’ choices.

Further, two experiments by Steffel et al. (2016) investigated disclosure effects on willingness to work with a choice architect again, as well as on the perceived fairness and ethicality of the choice architect’s behavior. In one experiment (2b), involving choices of amenities in an apartment acquisition scenario, the disclosure of the default nudge was presented either before or after the choice. Participants receiving the disclosure before choosing were more willing to work with the landlord (i.e., the choice architect) again in the future and perceived this person as more ethical. In a second experiment (2a), where only half of the participants were disclosed, disclosed participants rated the choice architect’s behavior as fairer than did undisclosed participants. As with the experiments by Paunov et al. (2019a), though, choices were hypothetical and without consequences for the participants. In an experiment involving monetary payoffs, Michaelsen et al. (2020) found that a disclosure of a default nudge led to significantly lower fairness ratings.

While not strictly a perception of the choice architect, though closely related, some studies have investigated perceived threat to freedom of choice in nudge interventions (Bruns et al., 2018; Michaelsen et al., 2021). Both of these studies found low ratings of perceived threat to freedom of choice, but the latter found that disclosure resulted in significantly higher ratings among nudged participants when the choice involved higher stakes (a monetary payoff).

Overall, several findings point to the act of disclosing a nudge intervention leading to more favorable judgments of the choice architect. However, many of these studies have limitations that may affect the generality of their conclusions. Specifically, previous experiments have used low-stakes hypothetical choices that were not consequential for the participants (e.g., asking participants to imagine making an important decision rather than actually making one). As there are indications that higher stakes can lead to different, more negative, perceptions, we believe more research is warranted. With a better understanding of PoCA, future nudge interventions can be designed not only to be effective and transparent but also to maintain a positive perception of the choice architect. Moreover, no previous study has examined the relationship between PoCA and later decision-making. It is plausible that more positive perceptions of a choice architect make people more willing to engage in subsequent prosocial behavior related to the nudge.

Downstream consequences on subsequent behavior

To properly assess the outcome of a nudge intervention, or any intervention for that matter, one should account for the possibility of downstream behavioral consequences. If nudges show prosocial effects not only on the targeted behavior but also on later related behaviors, the societal value of nudging may in fact be larger than previously thought. On the other hand, there is also the risk that people nudged in one direction at a first stage react by subsequently displaying the opposite behavior. This could then cancel out the positive effect of the nudge or, even worse, render it negative.

Arguments supporting positive spillover – that is, subsequent behavior consistent with the previously nudged behavior – can be found in cognitive consistency theories (e.g., Festinger, 1957), according to which people, in order to avoid cognitive dissonance, strive to act in a way that is consistent with their prior behavior. The same prediction can be made on the basis of self-perception theory (Bem, 1972), which postulates that people ascertain their self-identity, and infer their attitudes, by observing their own previous behavior. More recent studies on moral licensing, however, show that people can also use initial prosocial behavior as an excuse to later behave with more moral latitude (Monin & Miller, 2001; Mullen & Monin, 2016). Consequently, studies of behavioral spillover from prosocial choice have produced mixed results (for reviews, see, e.g., Dolan & Galizzi, 2015; Nash et al., 2017; Truelove et al., 2014). Specifically for nudge interventions, there are, to our knowledge, only a handful of studies exploring spillover to subsequent choices (Capraro et al., 2019; d’Adda et al., 2017; Ghesla et al., 2019; Hedesström et al., 2019). Apart from the study by Capraro et al., these studies fail to find strong evidence for either negative or positive nudge-induced spillover. A meta-analysis in Hedesström et al. (2019) of the seven default-nudge experiments in that paper, combined with the previous literature on spillover from defaults, finds an aggregate effect of Hedges’ g = .10 in favor of positive spillover from default nudges. Capraro et al., however, used a morality salience prompt to nudge prosocial behavior and documented a stronger positive effect on behavioral spillover from this nudge.

Some empirical findings suggest that disclosures may have implications for subsequent behavior. Specifically, if the choice architect’s effort to be transparent is seen as a favor by the person receiving it (thus inducing positive PoCA), he or she may wish to benevolently reciprocate toward the choice architect in future interactions. For instance, two studies by Paese and colleagues have shown that offering a disclosure in a negotiation setting led participants to trust their opponent more and to later reciprocate by acting more truthfully and making lower demands (Paese & Gilin, 2000; Paese et al., 2003). Furthermore, feelings of sympathy or benevolence may not even be a prerequisite for reciprocity to occur; people may feel obligated to return a favor simply because something was done for them (Cialdini, 2009; Regan, 1971).

On the other hand, if the disclosure merely serves to convey to the decision maker that he or she is subjected to an influence attempt, this attempt may be perceived negatively and lead to subsequent negative reciprocity, i.e., punishing behavior. Evidence of punishing behavior is, for instance, found in research on ultimatum games (Fehr & Fischbacher, 2003; Henrich et al., 2006) and public goods games (Fehr & Gächter, 2000; Herrmann et al., 2008). In the context of nudging, we suggest that negative reciprocity may occur if a government or other choice architect fails to disclose their use of nudges but people later find out through other channels, possibly after having already been nudged.

Thus, disclosing nudges may be motivated not only by ethical concerns. Increased transparency may also lead to more positive perceptions of the choice architect, as some of the literature reviewed here suggests. However, this needs to be substantiated by research on more applications, and crucially, in choices that have real (not only hypothetical) consequences for the persons being affected. Related research suggests the stakes of the choice have consequences for how the nudge is perceived. It is also plausible that disclosure of nudges may have desirable effects on subsequent behavior. However, at the same time, arguments can be mounted that disclosures could negatively impact both perceptions of, and future interactions with, the choice architect. These issues motivated the present studies.

The present studies

The two research aims of the present paper are to investigate the influence of disclosing a nudge intervention on (1) perceptions of the choice architect (PoCA) and (2) subsequent behavior. Additionally, we contribute to the literature on how disclosure affects nudge adherence in the principally targeted behavior by testing this in a novel form of consequential choice.

To these ends, we conducted two studies. The first study used a mixed experimental design to investigate how a nudge disclosure affects participants’ PoCA and their choice, and whether PoCA changed when the nudge was instead disclosed after the initial ratings and choice had been made. The second study used a between-groups experimental design and expanded on the first study by also examining the consequences of a nudge disclosure for subsequent behavior.

Our specific research questions and hypotheses are provided separately for each study below.

Preliminary studies

We conducted three preliminary studies prior to the present studies. The overarching goals of the preliminary studies were to inform the design and procedure of the upcoming studies and to identify viable contexts for studying perceptions of the choice architect (PoCA). As such, the preliminary studies were exploratory. In all three preliminary studies, participants were recruited through the online survey panel Amazon Mechanical Turk (MTurk; www.mturk.com) and paid 0.30–0.45 USD for their time. Although Preliminary Study 3 is the most directly relevant to the present studies (see below), we report all studies we have conducted as part of this project in the interest of transparency and because their results might bear on the boundary conditions of the inferences we draw from the present studies (see, e.g., McGuire, 2013; Vosgerau et al., 2019). Here, we present brief summaries of each preliminary study. More thorough reports of the methods and results are available in the online supplementary materials on OSF (https://osf.io/463af/).

Preliminary Study 1

The purpose of the first preliminary study (N = 413) was to provide exploratory data to assess (1) the viability of the general hypothesis that disclosures could influence PoCA and subsequent behavior and (2) the extent to which disclosures with different contents might yield different effects. We tested how different disclosure statements might affect PoCA and influence the likelihood of later performing a favor for the choice architect (our spillover measure) in a hypothetical setting. Here, we found that the disclosure that simply indicated the presence and purpose of the default resulted in significantly higher PoCA ratings than no disclosure. More elaborate disclosures (which contained statements regarding the choice architect’s desire not to be manipulative, a suggestion that the preselection would benefit the participant, or both) produced only marginally higher PoCA ratings than no disclosure. Disclosure condition did not significantly predict participants’ reported likelihood of agreeing to perform the favor. However, participants’ PoCA ratings significantly predicted their rated likelihood of agreeing to the request. This finding provides at least some tentative evidence that disclosed defaults might result in more favorable PoCA as well as spillover to subsequent behavior. In light of these results, we decided to use the disclosure indicating the presence and purpose of the default in the subsequent studies, as we did not have compelling evidence favoring a more elaborate disclosure statement.

Preliminary Study 2

In the second preliminary study (N = 301), we explored another factor that might influence the extent to which disclosures affect PoCA. Specifically, we tested how disclosing a personalized choice architecture – that is, a nudge based on the specific preferences of the participant – would affect people’s choices and PoCA. We reasoned that personalized choice architecture could be understood as manipulative but could also be perceived as a signal that the choice architect was trying to act in the chooser’s interests. This experiment used a consequential donation choice wherein participants had the option to either keep or donate half of a $0.20 bonus to a charity. The charity was selected based on the participant’s self-reported values provided in an earlier part of the survey. Participants were randomly assigned to receive a disclosure about the default and the personalization or to receive no disclosure. This study provided no evidence that disclosures about defaults influenced PoCA either positively or negatively. However, a number of methodological issues with this study might explain the null effects. We elaborate on these issues in the supplementary reports (https://osf.io/463af/).

Because we did not find evidence that disclosures about personalized defaults influenced PoCA, we decided not to further pursue the role of personalization here.

Preliminary Study 3

In the third preliminary study (N = 290), we investigated whether the timing of disclosure matters for PoCA. In particular, we explored whether disclosures provided after the participant has made a choice could lead to less favorable PoCA ratings. This experiment used a consequential prosocial choice in which participants decided whether they wanted to take an additional, unpaid survey. Our reasoning for investigating the timing of disclosure stems from the idea that individuals informed that they have been exposed to a manipulation attempt would take particular issue with it if informed only after they no longer have the opportunity to counteract the attempt. The experiment used two measures of PoCA, enabling us to obtain PoCA measures for undisclosed, pre-choice disclosed, and post-choice disclosed participants. The results showed that post-choice disclosed participants gave significantly poorer PoCA ratings in the second measurement, after being told about the default, than when they were undisclosed. This suggests that receiving the disclosure only after having made the choice caused people to view the choice architect as less transparent and ethical.

Study 1: perceptions of the choice architect and timing of nudge disclosure

In this experiment, we investigated two research questions. First, what effects will disclosing a default nudge, in a consequential choice situation, have on decision makers’ perceptions of the choice architect (PoCA)? Second, what effects will disclosing a default intervention, in a consequential choice, have on the choice targeted by the default and on a reassessment of this choice after post-choice disclosure (or a reminder of the disclosure)? Stimulus material used for this study, along with commented code for all analyses and data cleaning, is provided at osf.io/463af/.

Method

Recruitment procedure and inclusion criteria

Participants were recruited through the online survey panel Amazon Mechanical Turk (MTurk; www.mturk.com). Each was paid USD 0.40 for participation, based on a payment of USD 0.10 per minute and an estimated median completion time of 4 minutes. In order to be eligible for the present study, participants had to be above 18 years of age, and currently reside in the U.S. Further, in order to enter the survey, participants had to previously have completed at least 100 work tasks (“HITs”) on MTurk, have an acceptance rate of at least 95% for this previous work and not have participated in similar studies by our lab in the preceding six months.

Participants and sample size considerations

We aimed to recruit N = 660 participants, resulting in n = 330 per experimental condition before an expected 25% exclusion rate from failures in an attention check. After these exclusions, we expected approximately n = 247 per condition, 494 in total, giving us 80% power to detect group differences of d = .25 in an independent samples t-test, using the conventional alpha level of .05. This would be an increase in power as compared to our previous pilot, sufficient to detect even what should be considered fairly small differences in PoCA ratings for our main research question. Effects smaller than .25 are likely to have little or no theoretical or practical value. Based on the data from Preliminary Study 3 (which closely resembled the present design), an effect of .25 corresponds to approximately one-fifth of a scale point on a 7-point scale.
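As a point of reference, the stated sensitivity can be reproduced with a standard power calculation in R. The sketch below uses the pwr package and is illustrative only; it is not the authors’ registered code (which is at osf.io/463af/).

```r
# Reproducing the stated sensitivity analysis with the pwr package
library(pwr)

# Per-group n needed for 80% power to detect d = .25 at alpha = .05 (two-tailed)
pwr.t.test(d = 0.25, power = 0.80, sig.level = 0.05, type = "two.sample")
# Yields roughly n = 252 per group, close to the ~247 per condition expected here
```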

We preregistered to request our target of 660 participants and to keep any additional responses from participants who had started the questionnaire before it was automatically taken offline upon reaching this number. If we failed to reach our target after 1 week of advertising the study on MTurk, we preregistered to take the survey offline and, depending on the number of entries collected, decide between analyzing the data as they were or relaunching the survey at a later date. This decision would have been made without looking at the data collected thus far.

We received 663 completed responses. Of these, 248 failed the attention check (described below), leaving us with a final sample of 415 participants (age: M = 38.3, SD = 12.9; gender: 53.7% women). Thus, the final sample was somewhat smaller than the 494 expected beforehand. In terms of statistical power, this equates to being able to detect differences of d = .28, rather than the planned d = .25, in an independent samples t-test with 80% power and an alpha level of .05.

Design

The experiment employed a mixed design with one between-group manipulation (timing of nudge disclosure: pre- vs. post-choice) and two repeated measures (Choice, and the PoCA scale, see below).

In the experimental task, participants were asked to choose between agreeing or refusing to complete a short additional questionnaire in order to provide data for student research projects. For all participants, agreeing to complete the additional questionnaire was presented as the default (pre-selected) alternative. Thus, if the participant made no change on this page, agreement to complete the additional questionnaire was registered as the participant moved forward in the questionnaire. For both conditions, the choice alternatives were presented in a drop-down menu with only the default alternative immediately visible, and a change could be made with just two clicks.

Participants were randomly assigned to either receive or not receive, at the time of making their choice (time 1), a disclosure informing them that the default nudge was in place and that it might affect their choice. After having made their choice and having answered the PoCA scale (PoCA, time 1), participants who had received the disclosure (pre-disclosed condition) were reminded of its content. Participants who had not received the disclosure at the time of their choice now received it for the first time (post-disclosed condition). After receiving the post-choice disclosure (or reminder), participants had the opportunity to change their minds regarding the choice (Choice, time 2) and regarding their answers on the PoCA scale (PoCA, time 2). Retention of participants’ previous responses was provided as the default for both the choice and the answers to the PoCA scale. The experimental design is displayed in Figure 1. The study was set up in the survey creation platform Qualtrics (https://www.qualtrics.com), and participants were randomized into one of the two experimental conditions through the Qualtrics software.

Figure 1. Experimental design, Study 1.

Procedure

The different parts of the experiment were presented in the following order: (1) a geometrical shape comparison filler task (the ostensible purpose of the survey; estimated completion time of 2 min); (2) an attention check in the form of an “instructional manipulation check” (IMC; Oppenheimer et al., 2009) (see Exclusions, below); (3) a choice task in which participants could agree or decline to respond to a short additional questionnaire, without payment, in order to provide data for student research projects (Choice, time 1); (4) follow-up questions on how participants perceived the choice architect (PoCA, time 1); (5) a post-choice disclosure, or a reminder of the pre-choice disclosure (depending on condition); (6) an opportunity to change the choice and the answers to the PoCA questions (Choice, time 2; PoCA, time 2); (7) demographics; (8) the additional questionnaire, for those participants who agreed to participate (at Choice, time 2).

Outcome measures

Additional questionnaire request (Choice, time 1)

We recorded participants’ choice between agreeing and refusing to complete a short additional questionnaire, without additional payment, in order to provide data for student research projects. The choice was dichotomous and required participants to respond by either agreeing or refusing to participate. The additional questionnaire itself was a short, 22-item version of the Moral Foundations Questionnaire (MFQ; Graham et al., 2011), plus one additional item concerning political orientation.

Perceptions of choice architect (PoCA, time 1)

The scale consisted of seven adjectives (four positive and three reverse-coded negative words) describing the choice architect, and participants provided responses on a 7-point Likert scale (1 = Strongly disagree, 7 = Strongly agree) regarding to what extent they agreed that the adjectives described the choice architect. The adjectives used in this scale were being fair, honest, trustworthy, open, manipulative, deceptive and controlling.
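As a minimal sketch of how such a scale can be scored (assuming a data frame `dat` with one numeric column per adjective; the column names are illustrative, not the authors’ registered code), the three negative items are reverse-coded before averaging:

```r
poca_items <- c("fair", "honest", "trustworthy", "open",
                "manipulative", "deceptive", "controlling")

# Reverse-code the negatively keyed items on the 1-7 scale (1 <-> 7, 2 <-> 6, ...)
for (item in c("manipulative", "deceptive", "controlling")) {
  dat[[item]] <- 8 - dat[[item]]
}

# Scale score = mean of the seven (re-coded) items
dat$poca_t1 <- rowMeans(dat[poca_items])

# Internal consistency can then be checked with, e.g., psych::alpha(dat[poca_items])
```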

Choice, time 2, and PoCA, time 2

Directly after completing the first PoCA measure, participants were informed (or reminded, depending on condition) about the presence and purpose of the default nudge and then given the opportunity to change their responses on the items of the PoCA scale, as well as their answer regarding participation in the short additional survey. Retention of the participant’s previous responses was provided as a default for the questionnaire choice (i.e., if the participant declined at time 1, to decline was pre-selected at time 2) and as a numeric reminder for the PoCA scale items. This was done to remind participants of their previous answers and to serve as a reference point when they decided whether to change any answers.

Exclusions

In order to ensure that participants paid sufficient attention to instructions, we used an IMC (Oppenheimer et al., 2009), designed as an extension of the filler task used for the first part of the survey. Instead of continuing to rate the similarity of geometrical shapes on a numerical scale, participants paying attention to the new set of instructions learned that for the next trial they were expected to respond by selecting a new answer option labeled “Other” and typing a plus sign (+) in a text box adjacent to it. Participants who failed to answer this attention check correctly, or who failed to complete the full survey, were excluded from confirmatory analyses.

Preregistered hypotheses and exploratory tests

We have structured our hypotheses below by grouping them thematically. Please note that the categories are not mutually exclusive. The R code (R Core Team, 2020) for the analyses was registered prior to data collection and can be found at osf.io/463af/.

Perceptions of the choice architect

Hypothesis 1: We predicted that a disclosure of the default nudge would produce a decrease in PoCA ratings when presented after the nudged choice, but not when presented before it.

Hypothesis 2: We predicted that PoCA (at both time 1 and time 2) would be associated with agreement to complete the additional questionnaire (Choice, time 1), such that those who agreed to the request would provide higher PoCA-ratings.

Hypothesis 3: We predicted PoCA ratings (time 2) would predict the choice (time 2) of whether or not to complete the additional questionnaire, such that those who provided higher PoCA ratings would be more likely to agree with the request at time 2.

Disclosure and choice

Exploratory Test: Would disclosing the presence and expected influence of an opt-out default nudge influence the extent to which people acted in line with the nudge? Due to a lack of effect in our Preliminary Study 3, we took an exploratory approach to this question.

Hypothesis 4: We predicted an interaction between disclosure condition and choice, such that participants in the post-disclosed condition who agreed to the request initially would be more likely to change their choice, compared to those who declined the initial request and compared to those in the pre-disclosed condition.

Results

Preregistered analyses

Perceptions of the choice architect

The PoCA scale was found to be adequately reliable at time 1, α = .88, 95% CI [.87, .90], and at time 2, α = .90, 95% CI [.88, .91].

For Hypothesis 1, we tested whether PoCA changed from time 1 to time 2 and whether this change varied as a function of receiving the disclosure at the time of the choice or afterward. This was assessed using a linear mixed effects model fit with the lme4 package (Bates et al., 2015) for R, regressing PoCA on disclosure condition (disclosure prior to choice vs. disclosure after choice) and the timing of the measure (time 1 vs. time 2), with a random intercept for each participant. The fixed factors used treatment contrasts with the reference group set as the first listed condition. Degrees of freedom for the t-statistics of the coefficients were approximated with the Satterthwaite technique using the lmerTest package (Kuznetsova et al., 2017).
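A minimal sketch of this model, assuming a long-format data frame with illustrative column names and including the condition-by-time interaction implied by the reported contrasts (the registered analysis code is at osf.io/463af/):

```r
library(lme4)
library(lmerTest)  # masks lme4::lmer() so summary() reports Satterthwaite df

# poca_long: one row per participant per measurement occasion
#   condition:   factor, pre-disclosed vs. post-disclosed (treatment contrasts,
#                R's default for unordered factors)
#   time:        factor, time 1 vs. time 2 (repeated measure)
#   participant: identifier for the random intercept
fit <- lmer(poca ~ condition * time + (1 | participant), data = poca_long)
summary(fit)  # fixed-effect t-tests with Satterthwaite-approximated df
```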

An overview of the descriptive results can be found in Table 1. At time 1, the disclosure conditions did not significantly differ in their PoCA ratings (pre-disclosed M = 5.46, SD = 1.08; post-disclosed M = 5.53, SD = 1.10), b = 0.07, 95% CI [−0.15, 0.30], t(483.21) = 0.63, p = .53. Unexpectedly, pre-disclosed participants gave significantly lower PoCA ratings at time 2 (M = 5.36, SD = 1.18) compared to time 1, b = −0.10, 95% CI [−0.19, −0.02], t(413.00) = 2.32, p = .02. However, consistent with our prediction, this tendency to give lower PoCA ratings at time 2 was amplified among post-disclosed participants (M = 5.25, SD = 1.28), b = −0.18, 95% CI [−0.30, −0.05], t(413.00) = 2.75, p = .006.

Table 1. Descriptive statistics for outcome variables in Study 1.

For Hypothesis 2, we tested whether agreement to complete the additional questionnaire (time 1) was associated with higher PoCA ratings at time 1 and time 2, respectively. This was assessed using two independent samples t-tests. Both predictions were supported. For the measurement of PoCA at time 1, participants who had agreed to complete the additional questionnaire provided higher ratings (M = 5.66, SD = .94) than participants who had declined the request (M = 5.11, SD = 1.29), t(185.23) = 4.37, p < .001, Cohen’s d = 0.53, 95% CI [0.31, 0.74]. For PoCA at time 2, participants who had agreed to the request similarly gave higher PoCA-ratings (M = 5.48, SD = 1.09), than participants who had declined (M = 4.90, SD = 1.42), t(192.49) = 4.06, p < .001, Cohen’s d = 0.48, 95% CI [0.27, 0.69].

In Hypothesis 3, we had predicted that PoCA ratings (time 2) would predict choice (time 2). In support of this prediction, an independent samples t-test showed that PoCA ratings were significantly higher among participants agreeing to complete the additional questionnaire (M = 5.52, SD = 1.05) than among participants refusing to (M = 4.82, SD = 1.45), t(184.26) = 4.88, p < .001, Cohen’s d = 0.59, 95% CI [0.38, 0.80]. All but one of the participants who agreed to complete the additional questionnaire (288/289) went on to do so.

Disclosure and choice

In an exploratory analysis, we tested whether the disclosure influenced agreement to complete the additional questionnaire (Choice, time 1) using a logistic regression model. We found that being subjected to the disclosure prior to choosing (66% agreed) was not associated with a significant difference in agreement, as compared to not being subjected to the disclosure prior to choosing (73% agreed), b = 0.33, OR = 1.40, 95% CI [0.92, 2.14], p = .118.

Finally, for Hypothesis 4, we did not find support for an interaction effect whereby participants in the post-disclosed condition who had agreed to complete the questionnaire at time 1 were more likely to change their choice at time 2. We tested this prediction in a two-stage logistic regression. In the first stage, we assessed whether the initial choice predicted a change in choice, regressing change in choice on the initial choice. Agreeing to the additional request (Choice, time 1) did not significantly predict changing the choice at time 2, b = −0.85, OR = 0.43, 95% CI [0.12, 1.56], p = .184. In the second stage, we assessed whether the initial choice interacted with disclosure condition to predict change in choice by adding disclosure condition and the interaction term to the model. Neither disclosure condition nor the interaction term was significant (condition: b = −1.14, OR = 0.32, 95% CI [0.02, 2.25], p = .316; interaction: b = 0.68, OR = 1.96, 95% CI [0.12, 57.14], p = .644), nor, again, was choice at time 1 (b = −1.00, OR = 0.37, 95% CI [0.07, 1.72], p = .201). Overall, only 10 out of 415 participants changed their initial choice when prompted the second time.
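For reference, the exploratory model and the two-stage procedure can be sketched as follows (illustrative variable names; not the registered code):

```r
# Exploratory test: did pre-choice disclosure predict agreement (Choice, time 1)?
m_choice <- glm(agree_t1 ~ condition, family = binomial, data = dat)

# Hypothesis 4, stage 1: does the initial choice predict changing it at time 2?
m_stage1 <- glm(changed ~ agree_t1, family = binomial, data = dat)

# Stage 2: add disclosure condition and its interaction with the initial choice
m_stage2 <- glm(changed ~ agree_t1 * condition, family = binomial, data = dat)

exp(cbind(OR = coef(m_stage2), confint(m_stage2)))  # odds ratios with 95% CIs
```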

Exploratory analyses

As a robustness check, we calculated robust standard errors for the logistic regression models. In all cases, the robust standard errors were consistent with the conventional calculations. A comparison of the conventional and robust standard errors is presented in the Supplementary Materials.
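A minimal sketch of one common implementation of this check, assuming heteroskedasticity-consistent (“sandwich”) standard errors were used (the exact estimator is documented in the Supplementary Materials):

```r
library(sandwich)
library(lmtest)

# Re-test the coefficients of a fitted model (e.g., m_stage2 above)
# using a sandwich estimate of the covariance matrix
coeftest(m_stage2, vcov = vcovHC(m_stage2, type = "HC0"))
```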

Discussion

Study 1 investigated how a nudge disclosure affected perceptions of the choice architect, and whether the perceptions of the choice architect and the nudge disclosure were associated with how people acted in the nudged choice. The results were similar to those of Preliminary Study 3. Study 1 found no evidence that providing a nudge disclosure prior to choosing influenced perceptions of the choice architect or participants’ agreement to a prosocial request. However, when the nudge’s presence and expected influence were disclosed to participants after choosing, they perceived the choice architect less favorably. While participants’ choices were positively associated with their attitude towards the choice architect, there was no indication that post-disclosed participants became frustrated enough to want to change their initial choice when given the chance.

The simplest explanation for why participants did not change their choice when asked a second time, despite the negative perceptions induced by the post-choice disclosure, may be that their perceptions of the choice architect were not lowered enough for negative behavioral reciprocity to ensue. Alternatively, this finding could be interpreted in light of social psychology theories postulating that people strive to act consistently (e.g., Festinger, 1957). If participants perceived changing their initial choice as an act of inconsistency, they may have preferred sticking to it rather than changing their mind.

Under the latter interpretation, the nudge may be considered ethically problematic, since the disclosure failed to weaken its influence on choice despite participants’ negative reaction (cf. the discussion in Steffel et al., 2016). Another matter in need of elucidation is whether the opt-out default nudge in Study 1 was strong enough to influence choice to begin with – that is, strong enough to produce a default effect. Study 2 sheds light on these issues by adding a non-nudged baseline condition and by substituting the reassessment of Choice 1 with a new, separate, second choice that avoids direct carryover from participants’ previous behavior.

Study 2: perceptions of the choice architect and subsequent behavior

In this experiment, we built upon Study 1 and used similar materials, but with some additions and modifications. We used the same experimentally manipulated choice task as in Study 1 (“Choice, time 1”) and again measured perceptions of the choice architect (PoCA). However, Study 2 also measured behavior in a second prosocial choice task (Choice 2). In contrast to Study 1, Choice 1 and PoCA were each measured only once. Full stimulus materials and code for data cleaning and analyses are provided at osf.io/463af/.

Method

Participants and sample size considerations

Participants were recruited using the same procedure, and with the same inclusion criteria, as stated for Study 1. Participants were paid USD 0.40 for participation, based on a payment of USD 0.10 per minute of participation and an estimated median completion time of 4 minutes.

We aimed to recruit N = 1,000 individual participants, resulting in n = 250 per experimental condition before an anticipated 25% exclusion rate from failures in an attention check. With this expected exclusion rate, we expected approximately n = 187 remaining per condition, giving us 80% power to detect group differences of d = .29, in an independent samples t-test, using the conventional alpha level of .05. Based on the data from Preliminary Study 3, an effect of d = .30 is equivalent to one-quarter of a point on a 7-point scale. If nudge disclosures have a theoretically or practically meaningful effect on PoCA, we expect them to be larger than d = .30. Thus, we reasoned that this sample size should provide adequate power for detecting hypothesized effects, should they exist.

More participants than expected failed the attention check, which led to us collecting more total responses than expected (see Deviations from Preregistration, below). In total, we received 1637 completed responses. Of these, 53.7% failed the attention check, leaving us with a final sample of 758 participants (age: M = 37.9, SD = 11.5; gender: 46.3% women).

Design

The present study was a between-groups experiment, with four conditions. The survey was set up in Qualtrics, and randomization to one of the four experimental conditions was done through the Qualtrics software. The experimental conditions differed on two dimensions, both regarding the presentation of the choice task (Choice 1). This choice task was the same as in Study 1. That is, participants chose whether or not to agree to complete an additional unpaid questionnaire at the end of the main survey.

First, conditions differed in which choice option was preselected in Choice 1. In three conditions, the default was to agree to the request to complete the additional questionnaire. In the fourth condition, the default was to decline the additional questionnaire.

Second, conditions differed regarding when, or whether, a nudge disclosure was presented in relation to Choice 1. As in Study 1, the disclosure informed participants that we had pre-selected the “I Agree” option to encourage people to agree with the request. Specifically, in the pre-disclosed condition, participants received the nudge disclosure concurrently with making Choice 1 (and were reminded of the disclosure before answering the PoCA measure). In the post-disclosed condition, participants received the nudge disclosure after having made Choice 1. In the never-disclosed condition, participants never received a nudge disclosure. Lastly, in the decline-default condition (where the default in Choice 1 was to decline the request), participants likewise never received a disclosure of the choice format. In the conditions where participants received a disclosure, the same disclosure text was used as in Study 1. The experimental design is displayed in Figure 2.

Figure 2. Experimental design, Study 2.

Procedure

The experiment consisted of the following parts, presented to participants in this order: (1) a geometrical shape comparison filler task; (2) an instructional manipulation check (IMC); (3) a first choice task concerning whether participants were willing to complete a short additional questionnaire without payment (Choice 1) (in one condition, a disclosure was presented before choosing); (4) depending on condition: a post-choice disclosure, a reminder of the disclosure, or no further information; (5) follow-up questions on how participants perceived the choice architect (PoCA); (6) demographics; (7) a second choice task concerning whether participants were willing to sign up for additional unpaid studies (Choice 2); (8) the additional questionnaire referenced in Choice 1, presented only to participants who had agreed to participate.

Outcome measures

Additional questionnaire request (Choice 1)

A binary choice between agreeing or refusing to complete an additional questionnaire. This was the same outcome measure described as “Choice, Time 1” in Study 1.

Perceptions of choice architect (PoCA)

Similarly, the PoCA measure was the same as described in Study 1.

Sign-up for invitation to future study (Choice 2)

We recorded participants’ choice whether or not to sign up for receiving an invitation to participate in an upcoming, unpaid study. The study was described as being part of a student research project. Participants signed up by entering their email address in a textbox. The choice consisted of either entering an email address or not, making this outcome a dichotomous variable. Note that we recorded sign-ups and not actual participation in the additional study. However, by entering their email address, participants indicated their willingness to act against their self-interest to benefit the choice architect (by volunteering to take on an unpaid, time-consuming task). No actual survey invitations were sent out.

Exclusions

We excluded participants on the same basis as in Study 1, that is, failure to respond correctly to an IMC in the initial shape comparison filler task.

Preregistered hypotheses and exploratory tests

The first request (Choice 1)

Hypothesis 1. We predicted a default effect for Choice 1, such that when the request to complete an additional questionnaire was presented in an opt-out format (“agree”-default), participants would be more likely to agree than when the request was presented in an opt-in format (“decline”-default).

Exploratory Test 1. We exploratorily tested whether being subjected to the disclosure prior to choosing had any effect on Choice 1.

Perceptions of the choice architect

Hypothesis 2. We predicted that participants would provide lower PoCA ratings when they received the disclosure after they had made their choice, compared to the condition in which they received the disclosure at the time of the choice.

Hypothesis 3. We predicted that PoCA ratings would predict agreement with both Choice 1 and Choice 2.

The second request (Choice 2)

Hypothesis 4. We predicted that participants in the post-disclosed condition would provide their email address less frequently compared to participants in the pre- disclosed condition.

Exploratory Test 2. In an exploratory test, we examined whether participants’ choice on the first request predicted their choice on the second request.

Exploratory Test 3. We tested a mediation model to assess the consistency of the data with the hypothesis that disclosure influenced PoCA, which in turn influenced the second choice.

Deviations from preregistration

Study 2 deviated in two ways from the plan granted In-Principle Acceptance by the journal.

First, before data collection had begun, we noted misspecifications in the preregistered study materials and analysis code. Specifically, one part of the choice task instructions (Choice 1) for the decline-default condition was stated erroneously (it was stated as applicable to the agree-default conditions). In the code, we found a typo and discovered that two registered analyses (Exploratory Tests 1 and 2) were specified to run on less data (data from fewer experimental conditions) than they could have used. With the approval of the editors, we corrected the mistakes and re-specified the analyses before data collection began. Both the initial and corrected materials and code are available at osf.io/463af/.

Second, after data had been collected (“batch 1,” collected in late August 2020), we found that more participants than expected failed the study’s attention check. Specifically, we had expected a loss of 25% of the sample but ended up with a loss of 66%. For comparison, Study 1 (collected in late April 2020) saw a loss of 37% for the same attention check. The high failure rate left us underpowered for testing the study’s hypotheses. We therefore refrained from further analyses at the time and asked the editorial team for permission to continue collecting data up to the point specified in the preregistration (i.e., 750 participants). After external review, the request to continue collecting data was granted, and we collected a second batch of data in mid-February 2021 (“batch 2”). In batch 2, the attention check failure rate was close to what was originally expected (and what had been found in Study 1), with 32% of the sample failing the check. Batch 2 was collected in two steps. We first requested an additional 500 responses on the recruitment platform. The day after, based on the pass rate for the 500, we requested 120 more, and in total (including batch 1) ended up with a sample of 758 participants who passed the attention check. As preregistered, we retained the surplus of eight responses for the analyses.

While we have no way of knowing for sure, we speculate that the unexpectedly high loss of respondents in batch 1 may have been due to an influx of new, more naive participants to the MTurk panel during the Coronavirus pandemic (Arechar & Rand, 2021). Comparing the samples from Study 1 and batches 1 and 2 from Study 2, we find no notable differences in demographic information (see Supplementary Material). With one exception, no differences were found between batches 1 and 2 for the preregistered analyses in Study 2 (detailed below).

Results

Preregistered analyses

The first request (Choice 1)

In Hypothesis 1, we predicted a default effect, such that when agreement to the first request was preselected, participants would agree to the request at a higher rate than in the condition where refusal of the request was preselected. To test this hypothesis, we fitted a logistic model regressing agreement with the first request on experimental condition. For this analysis, we collapsed the never-disclosed and post-disclosed conditions, as they were procedurally identical up to this point in the experiment. As can be seen in Table 3, Hypothesis 1 was supported: participants more frequently agreed with the first request when agreement was preselected.

Table 2. Descriptive statistics for outcome variables in Study 2.

Table 3. Regression analyses for Study 2.

We were also interested in whether the disclosure of the default nudge influenced agreement with the first request (Exploratory Test 1). There was no significant difference in agreement to the first request between those who received the disclosure before making the choice and those who did not, b = −0.04, 95% CI [−0.44, 0.36], OR = 0.98, z = 0.21, p = .83.

Perceptions of the choice architect

The PoCA scale again demonstrated high internal reliability, Cronbach’s α = .88, 95% CI [.86, .89]. In Hypothesis 2, we predicted that participants would provide lower PoCA ratings if they received the disclosure after they had made their choice, compared to if they received it at the time of the choice. We tested this hypothesis using a linear model regressing PoCA ratings on disclosure condition. The analysis for this research question excluded data from the decline-default condition. The disclosure condition factor used treatment contrasts, with the pre-disclosed condition as the reference group. As can be seen in Table 3, Hypothesis 2 was supported: participants who received the disclosure after making their choice gave significantly lower PoCA ratings than those who received the disclosure before making their choice.

As predicted in Hypothesis 3, PoCA ratings significantly predicted agreement with the first choice, t(380.67) = −3.64, p < .001, d = 0.29, 95% CI [0.13, 0.44], and with the second choice, t(476.82) = −2.13, p = .03, d = 0.17, 95% CI [0.01, 0.32]. Put differently, participants agreeing to the request in the first choice provided higher PoCA ratings (M = 5.38, SD = 1.09) than participants refusing the request (M = 5.02, SD = 1.33). The same pattern was found for the second choice: participants agreeing to the request gave somewhat higher PoCA ratings (M = 5.40, SD = 1.16) than participants refusing the request (M = 5.20, SD = 1.18). However, in a logistic regression model predicting Choice 2 from PoCA, data collection timing, and their interaction, we found a significant interaction indicating that the relationship between PoCA and Choice 2 was driven by participants recruited in the second data collection (February 2021), b = 0.34, 95% CI [0.06, 0.63], z = 2.35, p = .019 (see Supplementary Materials for further detail). This finding decreases our confidence that PoCA reliably predicted Choice 2.

Second request

In Hypothesis 4, we predicted that participants in the post-disclosed condition would provide their email addresses less frequently than participants in the pre-disclosed condition. We tested this hypothesis using a logistic model regressing agreement with the second request on disclosure condition. The disclosure condition factor used treatment contrasts, with the pre-disclosed condition as the reference group. As can be seen in Table 3, there was no significant difference in agreement with the second request between the conditions.

Additionally, agreement to the first choice significantly predicted agreement on the second choice (Exploratory Test 2), b = 1.72, 95% CI [1.29, 2.18], OR = 5.61, z = 7.63, p < .001.

Finally, we tested a mediation model to assess the consistency of the data with the hypothesis that disclosure influences PoCA, which in turn influences the second choice (Exploratory Test 3). Because we did not find support for the hypothesis that disclosure influenced agreement with the second choice, the plausibility of such a model was substantially reduced. However, because we had preregistered the analysis, we examined the model anyway. In this model, we compared only the pre-disclosed and post-disclosed conditions, treatment contrast coded with pre-disclosed as the reference group. Consistent with the results reported above, disclosure condition significantly predicted PoCA ratings, b = −0.43, 95% CI [−0.68, −0.18], z = 3.36, p = .001. However, PoCA ratings did not significantly predict agreement with the second request, b = 0.03, 95% CI [−0.01, 0.07], z = 1.55, p = .12. Again, disclosure condition did not significantly predict agreement with the second request, b = 0.02, 95% CI [−0.08, 0.12], z = 0.40, p = .69. The indirect effect of disclosure condition through PoCA was not significant, b = −0.01, 95% CI [−0.03, 0.01], z = 0.89, p = .88. In short, there was no support for the proposed mediating relationship.
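One way to implement such a mediation model in R is with the mediation package; the sketch below is illustrative (assumed variable names and estimator, not necessarily those of the registered code at osf.io/463af/):

```r
library(mediation)

# dd: pre- and post-disclosed participants only, with the condition factor
# treatment-coded (pre-disclosed as the reference group)
med_fit <- lm(poca ~ condition, data = dd)        # a path: disclosure -> PoCA
out_fit <- glm(choice2 ~ condition + poca,        # b and c' paths
               family = binomial, data = dd)

med <- mediate(med_fit, out_fit, treat = "condition", mediator = "poca",
               boot = TRUE, sims = 5000)
summary(med)  # ACME = indirect effect of disclosure through PoCA
```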

Exploratory analyses

In addition to examining whether the post-disclosed condition provided lower PoCA ratings than the other conditions, we examined whether any of the conditions differed from each other using a series of Tukey’s HSD comparisons. Consistent with the results above, participants in the post-disclosed condition provided significantly lower PoCA ratings compared to the three other conditions: pre-disclosed, p = .002; never-disclosed, p = .001; and decline-default, p < .001. The pre-disclosed and never-disclosed conditions did not significantly differ, p = .99; nor did the pre-disclosed and decline-default conditions, p = .82, or the never-disclosed and decline-default conditions, p = .89.
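These pairwise comparisons can be reproduced in base R (illustrative names, not the registered code):

```r
# Pairwise comparisons of PoCA ratings across the four conditions,
# with Tukey's correction for the family of comparisons
TukeyHSD(aov(poca ~ condition, data = dat2))
```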

As can be seen in Table 2, there was furthermore no indication that the “standard” opt-out default nudge (i.e., the never-disclosed condition) led to either positive or negative spillover in the second choice, as compared to the no-nudge opt-in format (i.e., the decline-default condition).

Robustness checks

Because we deviated from our initial data collection plan (see Deviations from Preregistration), we investigated whether sampling time (batch 1 vs. batch 2) moderated any of the significant effects we observed. With the exception noted above (Hypothesis 3), sampling time did not significantly interact with the predictors in any of the models. The details of each of these analyses are presented in full in the Supplementary Materials.

Additionally, we calculated robust standard errors for the logistic regression models. In all cases, the robust standard errors were consistent with the conventional calculations. A comparison of the conventional and robust standard errors is presented in the Supplementary Materials.
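As an illustration of this check, heteroskedasticity-consistent standard errors can be obtained with the sandwich and lmtest packages; the sketch below applies them to the hypothetical Hypothesis 4 model m_h4 from the earlier sketch:

    # Robust (sandwich) standard errors as a check on the model-based ones
    library(sandwich)
    library(lmtest)
    coeftest(m_h4, vcov = vcovHC(m_h4, type = "HC0"))  # robust SEs
    summary(m_h4)$coefficients                         # conventional SEs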

Discussion

As predicted, we found that pre-selecting the agree-option led to higher agreement to complete the additional questionnaire in Choice 1. Disclosing the default nudge to participants prior to choosing did not cancel out the effect of the default. However, whether the nudge was disclosed before or after choosing affected how participants perceived the choice architect: learning about the nudge after having made one's choice produced lower PoCA ratings than all other conditions, regardless of whether the nudge was otherwise disclosed and regardless of which option was set as the default. Perceptions of the choice architect were also related to participants' choices, in that agreeing with the requests in Choice 1 and Choice 2, respectively, was linked to perceiving the choice architect more favorably (although for Choice 2, it should be remembered that the effect was driven by participants in the second data collection batch).

The findings seemingly suggest a causal chain in which a disclosure can affect perceptions of the choice architect, which may in turn affect a person's behavior in downstream interactions with the same choice architect. However, this chain was not supported in a mediation analysis. Notably, the timing of the disclosure (before or after choosing) did not affect participants' likelihood of agreeing with the request in the second choice task.

The results are consistent with, and in part directly replicate, the findings from Study 1. As in the first study, participants post-disclosed of the nudge did not display adversarial behavior when given the chance in a second choice, despite lowered perceptions of the choice architect. In relation to the discussion in Study 1, this lends some credence to the idea that it was not an inability to counteract the influence of the default that led participants to stick with their initial choice when given the opportunity to reassess. Rather, it seems that in both studies, the way Choice 1 was designed with regard to choice format (opt-out or opt-in) and disclosure format (absence, presence, and timing) did not exert meaningful influence on behavior beyond the first choice.

General discussion

In two experiments and three pilot studies, we investigated downstream consequences of disclosing a default nudge to participants. There were three sets of findings. (1) Disclosing the default nudge to participants at the time of choosing did not significantly alter their choices. (2) Perceptions of the choice architect were unaffected, or affected only to a small extent, by whether participants were disclosed of the default nudge at the time of choosing and by whether the default was set to nudge participants toward agreeing with the prosocial request. However, when participants were disclosed of the nudge after they had made their choice, they perceived the choice architect less favorably. (3) Whether or not the nudge was disclosed to participants, even after the choice, did not affect subsequent decision-making: neither when participants were given the chance to reassess the initial choice, nor in a new, separate prosocial choice task presented by the same choice architect.

These results are largely consistent with previous findings. Regarding effects on choice, previous research has almost unanimously found that disclosing a default nudge has no or negligible effects (Bruns et al., Citation2018; Loewenstein et al., Citation2015; Michaelsen et al., Citation2020, Citation2021; Steffel et al., Citation2016; Wachner, Adriaanse, & De Ridder, Citation2020, this issue; but see also Paunov et al., Citation2019a, Citation2019b). In particular, our studies corroborate Loewenstein et al. (Citation2015), who found that participants post-disclosed of a default nudge stood by their decision when given the chance to change it. Study 1 extends this work by replicating the result in a prosocial and consequential choice task (Loewenstein et al. used a hypothetical and self-concerned one). The prosocial extension is informative to policymakers looking to apply transparent default nudges in a wider array of contexts. Study 2 further suggests that the extent to which a default nudge, or a nudge disclosure, creates spillover to a subsequent, separate choice may be limited (but see the meta-analysis in Hedesström et al., Citation2019, which found a small positive effect for default nudges). Corroborating this result, Wachner et al. (Citation2020, this issue) found no differences in a subsequent choice between participants disclosed and not disclosed of a default nudge in a first choice. Whether spillover to subsequent choices occurs may be moderated by several contextual factors (see, e.g., Mullen & Monin, Citation2016). Because of the potentially high impact, positive or negative, on the net effect of the intervention, we encourage future studies to continue investigating how nudges and disclosures of nudges may influence downstream choices.

Regarding the effects on perceptions of the choice architect, previous studies have shown mixed results. As reviewed in the introduction, Paunov et al. (Citation2019a) found that a nudge disclosure did not influence the perceived trustworthiness of the choice architect, but that it did increase perceptions of being deceived. However, a subsequent study (which became known to us after this paper received in-principle acceptance, and is therefore not reviewed in the introduction) failed to replicate the latter result (Paunov et al., Citation2019b). The present studies corroborate the view that a nudge disclosure presented at the time of choosing may have only a modest effect on how the choice architect is perceived. Our findings are also consistent with results from Steffel et al. (Citation2016; Experiment 2b) in that participants seem to think less of the choice architect when the nudge disclosure is presented after the choice. Because Study 2 included a neutral reference point, the never-disclosed condition, we can conclude that it is seemingly post-choice disclosure that has a negative effect on perceptions of the choice architect, rather than pre-choice disclosure having a positive one.

The effect sizes for how the choice architect was perceived were relatively small in all studies. However, they may well underestimate the effects that would arise in many out-of-the-lab contexts. People likely hold stronger “hands-off!” attitudes (i.e., prefer not to be nudged) toward influence from governments and commercial enterprises than toward the researchers behind a short online study. In addition, many applications of nudges will target behaviors that are more costly in terms of money or effort than the present case, presumably increasing any frustration experienced by individuals subjected to the intervention.

That we found no evidence of adversarial behavior in subsequent choices from post-disclosed participants is not to say that negative reciprocity, that is, punishing behavior, cannot occur in other instances (cf. Krijnen et al., Citation2017). Again, out-of-the-lab applications of nudges may often spark stronger emotions than the present choice contexts, which in turn may be more likely to affect behavior in subsequent interactions with the same choice architect. Furthermore, whether behavioral spillover occurs is likely moderated by what the subsequent choice concerns and how it is framed to the decision maker. Some choice contexts may be “strong” enough to override most influence from lowered perceptions of the choice architect, for instance, when the decision maker has a strong a priori preference. In other situations, the relation may be more malleable, such as when preferences are less firm and one's liking of the choice architect can serve as a choice heuristic (cf. the “partisan bias” in judging the acceptability of a nudge; Tannenbaum et al., Citation2017). How tightly the contents of the choice are coupled to the choice architect is another plausible moderator: people's reciprocity considerations, be they positive or negative, are more likely to be triggered when the choice connects closely to the perceived wishes and desires of the choice architect.

In conclusion, the present studies found little evidence that disclosing a nudge to the decision maker led to additional positive effects in the form of more favorable perceptions of the choice architect or positive reciprocal behavior. Still, while preemptively disclosing the nudge produced no downstream benefits, the results suggest a precautionary benefit: future negative perceptions may be mitigated in cases where the decision maker later finds out about the nudge. Choice architects may not normally intend to disclose a nudge after the fact, but a person subjected to a nudge may well learn about the intervention later from other sources, such as peers or the media. Consequently, the present findings suggest that choice architects risk harming their reputation in the long run if they do not adopt a transparent approach when subjecting people to nudges.

Disclosure statement

The authors declare no competing interests.

Data availability statement

All data, materials, and analysis codes are available at https://osf.io/463af/.

Supplementary material

Supplemental data for this article can be accessed online.

Additional information

Funding

This work was supported by the Marcus and Amalia Wallenberg Foundation, under grant MAW2015.0106, awarded to the fourth author.

References

  • Arechar, A. A., & Rand, D. G. (2021). Turking in the time of COVID. Behavior Research Methods, 1–5. https://doi.org/10.3758/s13428-021-01588-4
  • Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1. https://doi.org/10.18637/jss.v067.i01
  • Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in experimental social psychology (pp. 1–62). New York: Academic Press.
  • Bovens, L. (2009). The ethics of nudge. In T. Grüne-Yanoff & S. O. Hansson (Eds.), Preference change (pp. 207–220). Springer.
  • Bruns, H., Kantorowicz-Reznichenko, E., Klement, K., Jonsson, M. L., & Rahali, B. (2018). Can nudges be transparent and yet effective? Journal of Economic Psychology, 65, 41–59. https://doi.org/10.1016/j.joep.2018.02.002
  • Burgess, A. (2012). ‘Nudging’ healthy lifestyles: The UK experiments with the behavioural alternative to regulation and the market. European Journal of Risk Regulation, 3(1), 3–16. https://doi.org/10.1017/S1867299X00001756
  • Capraro, V., Jagfeld, G., Klein, R., Mul, M., & De Pol, I. (2019). Increasing altruistic and cooperative behaviour with simple moral nudges. Scientific Reports, 9(1), 1–11. https://doi.org/10.1038/s41598-019-48094-4
  • Chapman, G., Li, M., Colby, H., & Yoon, H. (2010). Opting in vs opting out of influenza vaccination. Journal of the American Medical Association, 304(1), 43–44. https://doi.org/10.1001/jama.2010.892
  • Cialdini, R. B. (2009). Influence: The psychology of persuasion (5th ed.). Collins.
  • d’Adda, G., Capraro, V., & Tavoni, M. (2017). Push, don’t nudge: Behavioral spillovers and policy instruments. Economics Letters, 154, 92–95. https://doi.org/10.1016/j.econlet.2017.02.029
  • Dhingra, N., Gorn, Z., Kener, A., & Dana, J. (2012). The default pull: An experimental demonstration of subtle default effects on preferences. Judgment and Decision Making, 7(1), 69.
  • Dolan, P. M., & Galizzi, M. (2015). Like ripples on a pond: Behavioral spillovers and their implications for research and policy. Journal of Economic Psychology, 47, 1–16. https://doi.org/10.1016/j.joep.2014.12.003
  • Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature, 425(6960), 785–791. https://doi.org/10.1038/nature02043
  • Fehr, E., & Gächter, S. (2000). Cooperation and punishment in public goods experiments. American Economic Review, 90(4), 980–994. https://doi.org/10.1257/aer.90.4.980
  • Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
  • Ghesla, C., Grieder, M., & Schmitz, J. (2019). Nudge for good? Choice defaults and spillover effects. Frontiers in Psychology, 10, 1–14. https://doi.org/10.3389/fpsyg.2019.00178
  • Graham, J., Nosek, B., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385. https://doi.org/10.1037/a0021847
  • Gravert, C., & Kurz, V. (2019). Nudging à la carte: A field experiment on climate-friendly food choice. Behavioural Public Policy, 1–18. https://doi.org/10.1017/bpp.2019.11
  • Hansen, P., & Jespersen, A. (2013). Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. European Journal of Risk Regulation, 4(1), 3–28. https://doi.org/10.1017/S1867299X00002762
  • Hedesström, M., Michaelsen, P., Nyström, L., Luke, T. J., & Johansson, L.-O. (2019). What’s the net benefit of a nudge? Exploring behavioral spillover from choosing a default. Manuscript in preparation.
  • Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Bolyanatz, A., Cardenas, J. C., Gurven, M., Gwako, E., Henrich, N., & Lesorogol, C. (2006). Costly punishment across human societies. Science, 312(5781), 1767–1770. https://doi.org/10.1126/science.1127333
  • Herrmann, B., Thöni, C., & Gächter, S. (2008). Antisocial punishment across societies. Science, 319(5868), 1362–1367. https://doi.org/10.1126/science.1153808
  • House of Lords, Science and Technology Select Committee. (2011). Behaviour Change: Second report of Session 2010-12.
  • Jachimowicz, J., Duncan, S., Weber, E., & Johnson, E. (2019). When and why defaults influence decisions: A meta-analysis of default effects. Behavioural Public Policy, 3(2), 159–186. https://doi.org/10.1017/bpp.2018.43
  • Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339.
  • Jung, M. H., Sun, C., & Nelson, L. D. (2018). People can recognize, learn, and apply default effects in social influence. Proceedings of the National Academy of Sciences, 115(35), E8105–E8106. https://doi.org/10.1073/pnas.1810986115
  • Krijnen, J., Tannenbaum, D., & Fox, C. R. (2017). Choice architecture 2.0: Behavioral policy as an implicit social interaction. Behavioral Science & Policy, 3(2), 1–18. https://doi.org/10.1353/bsp.2017.0010
  • Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26. https://doi.org/10.18637/jss.v082.i13
  • Loewenstein, G., Bryce, C., Hagmann, D., & Rajpal, S. (2015). Warning: You are about to be nudged. Behavioral Science & Policy, 1(1), 35–42. https://doi.org/10.1353/bsp.2015.0000
  • McGuire, W. J. (2013). An additional future for psychological science. Perspectives on Psychological Science, 8(4), 414–423. https://doi.org/10.1177/1745691613491270
  • Michaelsen, P., Johansson, L.-O., & Hedesström, M. (2021). Experiencing default nudges: Autonomy, manipulation, and choice-satisfaction as judged by people themselves. Behavioural Public Policy, 1–22. https://doi.org/10.1017/bpp.2021.5
  • Michaelsen, P., Nyström, L., Luke, T. J., Johansson, L.-O., & Hedesström, M. (2020). Are default nudges deemed fairer when they are more transparent? People’s judgments depend on the circumstances of the evaluation. Preprint. psyarxiv.com/5knx4/
  • Monin, B., & Miller, D. (2001). Moral credentials and the expression of prejudice. Journal of Personality and Social Psychology, 81(1), 33–43. https://doi.org/10.1037/0022-3514.81.1.33
  • Mullen, E., & Monin, B. (2016). Consistency versus licensing effects of past moral behavior. Annual Review of Psychology, 67(1), 363–385. https://doi.org/10.1146/annurev-psych-010213-115120
  • Nash, N., Whitmarsh, L., Capstick, S., Hargreaves, T., Poortinga, W., Thomas, G., Sautkina, E., & Xenias, D. (2017). Climate‐relevant behavioral spillover and the potential contribution of social practice theory. Wiley Interdisciplinary Reviews. Climate Change, 8(6), e481. https://doi.org/10.1002/wcc.481
  • OECD. (2019). Tools and ethics for applied behavioural insights: The BASIC toolkit.
  • Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4), 867–872. https://doi.org/10.1016/j.jesp.2009.03.009
  • Paese, P., & Gilin, D. (2000). When an adversary is caught telling the truth: Reciprocal cooperation versus self-interest in distributive bargaining. Personality & Social Psychology Bulletin, 26(1), 79–90. https://doi.org/10.1177/0146167200261008
  • Paese, P., Schreiber, W., & Taylor, A. (2003). Caught telling the truth: Effects of honesty and communication media in distributive negotiations. Group Decision and Negotiation, 12(6), 537–566. https://doi.org/10.1023/B:GRUP.0000004334.14310.90
  • Paunov, Y., Wänke, M., & Vogel, T. (2019a). Transparency effects on policy compliance: Disclosing how defaults work can enhance their effectiveness. Behavioural Public Policy, 3(2), 187–208. https://doi.org/10.1017/bpp.2018.40
  • Paunov, Y., Wänke, M., & Vogel, T. (2019b). Ethical defaults: Which transparency components can increase the effectiveness of default nudges? Social Influence, 14(3–4), 104–116. https://doi.org/10.1080/15534510.2019.1675755
  • Pichert, D., & Katsikopoulos, K. V. (2008). Green defaults: Information presentation and pro-environmental behaviour. Journal of Environmental Psychology, 28(1), 63–73. https://doi.org/10.1016/j.jenvp.2007.09.004
  • R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
  • Regan, D. (1971). Effects of a favor and liking on compliance. Journal of Experimental Social Psychology, 7(6), 627–639. https://doi.org/10.1016/0022-1031(71)90025-4
  • Steffel, M., Williams, E., & Pogacar, R. (2016). Ethically deployed defaults: Transparency and consumer protection through disclosure and preference articulation. Journal of Marketing Research, 53(5), 865–880. https://doi.org/10.1509/jmr.14.0421
  • Sunstein, C. R. (2014). Why nudge? The politics of libertarian paternalism. Yale University Press.
  • Sunstein, C. R. (2018). Misconceptions about nudges. Journal of Behavioral Economics for Policy, 2(1), 61–67.
  • Tannenbaum, D., Fox, C. R., & Rogers, T. (2017). On the misplaced politics of behavioural policy interventions. Nature Human Behaviour, 1(7), 1–7. https://doi.org/10.1038/s41562-017-0130
  • Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Penguin Books.
  • Truelove, H. B., Carrico, A. R., Weber, E. U., Raimi, K. T., & Vandenbergh, M. P. (2014). Positive and negative spillover of pro-environmental behavior: An integrative review and theoretical framework. Global Environmental Change, 29(C), 127–138. https://doi.org/10.1016/j.gloenvcha.2014.09.004
  • Vosgerau, J., Simonsohn, U., Nelson, L., & Simmons, J. (2019). 99% impossible: A valid, or falsifiable, internal meta-analysis. Journal of Experimental Psychology. General, 148(9), 1628–1639. https://doi.org/10.1037/xge0000663
  • Wachner, J., Adriaanse, M., & De Ridder, D. (2020). The influence of nudge transparency on the experience of autonomy. Comprehensive Results in Social Psychology. https://doi.org/10.1080/23743603.2020.1808782
  • Wilkinson, T. (2013). Nudging and manipulation. Political Studies, 61(2), 341–355. https://doi.org/10.1111/j.1467-9248.2012.00974.x
  • Zlatev, J. J., Daniels, D. P., Kim, H., & Neale, M. A. (2017). Default neglect in attempts at social influence. Proceedings of the National Academy of Sciences, 114(52), 13643–13648. https://doi.org/10.1073/pnas.1712757114