Original Articles

Dissonance-induced false memories: Evidence from a free-choice paradigm

Pages 571-579 | Received 23 Jan 2014, Accepted 13 May 2014, Published online: 09 Jun 2014
Abstract

People often misremember the past as consistent with the present. Recent research using an induced-compliance paradigm has revealed that cognitive dissonance is one mechanism that can underlie this memory distortion. We sought to replicate and extend this finding using a free-choice paradigm: Participants made either an easy or a difficult choice between two smartphones and, either immediately or two days later, reported their memories for their decision experience. Participants who made a difficult decision produced the spread-of-alternatives effect expected by dissonance theory, and they were also more likely than those in the easy conditions to misremember their initial decision more favourably than they had initially rated it. Overall, our findings replicate the effect of dissonance on memory distortion and, further, show that the effect generalises to other dissonance-inducing situations.

We would like to thank Nicholas Bonomo and Marita Salwierz for their help in data collection.

Notes

1 A few participants' ratings did not include the exact values prescribed a priori for the construction of the manipulation. In these cases, experimenters chose phone options whose ratings were comparably favourable in difficult decision conditions (e.g. both rated as 8s), or whose ratings differed by at least two points in easy decision conditions. Excluding these participants did not change the pattern of results, so they are included in our analyses. The use of two similarly undesirable phones would also theoretically produce dissonance. However, we decided against this form of dissonance induction to avoid the risk of participants disengaging from the task due to a perceived lack of sufficient reward. Finally, although an alternative method for distinguishing easy and difficult decision groups is to use participants' own ratings of the decision, we opted for the present method to remain consistent with methods employed in previous dissonance research (e.g. Harmon-Jones & Harmon-Jones, 2002).

2 Eight participants (6.5%) across the four conditions could not remember which phone they had selected. These participants were reminded of their actual selection before proceeding with the memory items. Excluding these participants did not affect the pattern of results, so they are included in the reported analyses.

3 Participants also completed open-ended memory tests for their decision options' technical specifications. The vast majority of participants indicated that they could not recall any of the specifications, preventing meaningful statistical analysis of these data. We do not discuss these measures further.

4 Delay was not included in this analysis because it was manipulated after participants had completed all the relevant spread-of-alternatives items.

5 The lack of an effect of decision difficulty on participants' initial decision experience ratings, however, is somewhat surprising, as the other results indicate the manipulation was indeed successful. Perhaps the parity of initial decision ratings for the two groups is itself the product of dissonance reduction among those in the difficult decision condition (i.e. elevated ratings resulting from rationalisation).

6 Izuma and Murayama's (2013) meta-analysis of free-choice studies (k = 4) that included methodological features controlling for this potential statistical artefact revealed a mean effect size (d = .26 [.10, .42]) that was smaller than the one reported in another recent meta-analysis (Kenworthy, Miller, Collins, Read, & Earleywine, 2011); they argued, then, that past studies "substantially overestimated the effect due to the methodological artifact" (p. 7). However, the Kenworthy et al. effect size they used for comparison (d = .61 [.56, .66]) actually represents the average mean effect size of dissonance effects across several paradigms, not just the free-choice paradigm. The mean effect size for the free-choice paradigm alone was slightly smaller, with a wider confidence interval (d = .59 [.46, .73]). In addition, it is noteworthy that Kenworthy et al. conducted their meta-analysis not only to estimate the size of the spread-of-alternatives effect, but also to examine the mediating mechanisms underlying it. They confined their literature search to studies published in a small number of journals, selected only studies that reported significant results and based their mean effect size estimate on only 21 effect sizes from 18 articles. Consequently, their mean effect size is likely an overestimate of the true population effect size.
