Psychological discrepancy in message-induced belief change: Empirical evidence regarding four competing models

Pages 235-259 | Received 07 Dec 2020, Accepted 17 Aug 2021, Published online: 29 Sep 2021
 

ABSTRACT

Message discrepancy is the difference between the position of an advocated belief in a message and the position of a message receiver’s initial belief, and psychological discrepancy is how the message’s discrepancy is perceived by the receiver. The present study tested Fink et al.’s [(1983). Positional discrepancy, psychological discrepancy, and attitude change: Experimental tests of some mathematical models. Communication Monographs, 50(4), 413–430] psychological discrepancy model plus three other models to determine whether psychological discrepancy affects the weight of a message, the scale value of the message, neither, or both. These models were tested in an experiment that manipulated psychological discrepancy with a 3 (high vs. moderate vs. low message scale value) × 3 (wide vs. moderate vs. narrow perspective) between-subjects design (N = 448). The original Fink et al. model was the most supported. The results help explain how psychological processes bring about belief change.

Acknowledgements

We thank Stan A. Kaplowitz (Michigan State University) and Bruce W. Hardy (Temple University) for their critical reading and feedback while serving on Huang's dissertation committee. We also thank Sungeun Chung (Sungkyunkwan University) and Kun Xu (University of Florida) for their valuable discussions on mathematical modeling and the interpretation of results.

Disclosure statement

This article is based on the dissertation completed by Huang (2020); the co-advisors were E. L. Fink and D. A. Cai. The dissertation was supported by a doctoral dissertation completion grant from the Graduate Board Fellowship Committee of Temple University. We have no known conflict of interest to disclose.

Notes

1 Although we use importance to define weight here, the psychological meanings of weight have been related to various concepts in different studies. For an incoming message, those concepts include a message’s salience, relevance (Anderson, Citation1981), “amount of information” (Anderson, Citation2008, p. 43; Saltiel & Woelfel, Citation1975), and informativeness (Fiske, Citation1980). The message weight can also be operationalized as message evaluation (Cacioppo et al., Citation1983; Eagly & Telaak, Citation1972), attention (Fiske, Citation1980; Meffert et al., Citation2006), and perceived importance (Anderson & Alexander, Citation1971; Zalinski & Anderson, Citation1989). Because these concepts are all expected to facilitate belief change, we use them to operationalize the weight construct in our experiment (see the Method section later in this manuscript for detail). For a person’s prior beliefs before message receipt, weight represents the strength (Anderson, Citation2008; Chung et al., Citation2012) or the certainty (Petty et al., Citation2007) of a person’s pre-existing beliefs.

2 Earlier theoretical accounts also included the cognitive response approach (Brock, Citation1967), which posited that a message receiver should generate more counterarguments to a highly discrepant message than to a moderately discrepant one, thereby reducing the change induced by an extremely discrepant message. This theoretical proposition was inconsistent with later evidence showing that discrepancy did not correlate with the number of generated counterarguments (Kaplowitz & Fink, Citation1991). This evidence suggested that cognitive elaboration was not involved in the effect of discrepancy on final position. However, when participants were given a longer time to think about their final positions, a positive and significant correlation between discrepancy and number of counterarguments was found (Kaplowitz & Fink, Citation1997, p. 92). This finding suggested that the effect of discrepancy on final position could be switched from peripheral processing to central processing so that cognitive responses could be activated. For our purposes, because our study design aligned more with the design by Kaplowitz and Fink (Citation1991), where participants were not given extra time to think about their final positions, we assumed that cognitive elaboration was not involved in the effect of discrepancy on final position in our study.

3 This manuscript focuses on static belief change models rather than on a dynamic model (Chung et al., Citation2008). In a dynamic model, which examines belief change over time, after message receipt, a person’s belief position can move toward an equilibrium position, with possible oscillation (i.e., oscillating with overdamped, critically damped, or underdamped motion; Kaplowitz et al., Citation1983). Here we assume that when a person’s belief position after message receipt is measured, the person’s belief position has reached a new equilibrium position. Time and oscillation are beyond the scope of this manuscript. The symbol e represents the transcendental number that may be defined as the limit of (1 + 1/x)^x as x approaches infinity; e = 2.71828 …, and it is the base of natural logarithms.
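
As a quick numerical illustration of this definition (ours, not part of the original analysis), the expression (1 + 1/x)^x approaches e as x grows:

```python
# Numerical illustration: (1 + 1/x)^x approaches e = 2.71828... as x grows.
import math

for x in (10, 1_000, 100_000, 10_000_000):
    print(f"x = {x:>10,}: (1 + 1/x)^x = {(1 + 1 / x) ** x:.6f}")
print(f"math.e          = {math.e:.6f}")
```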

4 γ can be estimated in a nonlinear regression analysis based on data. In Fink et al. (Citation1983, pp. 426–428), γ was estimated to be 0.004 (SE = 0.003) in their single-message condition, and 0.015 (SE = 0.004) in their double-message condition.
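
As an illustration of this kind of estimation, the sketch below fits the exponential discounting form w_p = w·e^(-γψ) to simulated data with scipy.optimize.curve_fit; the variable names and values are ours, not Fink et al.'s (Citation1983) data or code.

```python
# Sketch: estimating gamma by nonlinear least squares on simulated data.
import numpy as np
from scipy.optimize import curve_fit

def weight_discounting(psi, w, gamma):
    """Perceived weight as a function of psychological discrepancy: w * e^(-gamma * psi)."""
    return w * np.exp(-gamma * psi)

rng = np.random.default_rng(1)
psi = rng.uniform(0, 100, 200)                                       # simulated psychological discrepancy
w_p = weight_discounting(psi, 5.0, 0.015) + rng.normal(0, 0.2, 200)  # simulated perceived weight

(w_hat, gamma_hat), cov = curve_fit(weight_discounting, psi, w_p, p0=[1.0, 0.01])
gamma_se = np.sqrt(np.diag(cov))[1]
print(f"gamma = {gamma_hat:.4f} (SE = {gamma_se:.4f})")              # should recover roughly 0.015
```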

5 The process of information integration has three components: valuation, integration, and response (Anderson, Citation1981). Valuation turns a stimulus’s scale value into a subjective value. Integration is the process of combing different pieces of information via cognitive algebra (see Online Supplemental Material 1). Response turns the implicit result from integration into explicit ratings on some scale.

6 We summarize the theoretical basis of the four assumptions as follows: Information integration theory is the basis of all the models in our study. The cognitive dissonance approach explains the weight discounting model as well as the weight discounting component in the complex model. The social judgment approach explains the scale-value model, the independent model, as well as the scale value varying component in the complex model. Perspective theory is the framework that leads to Equation 4, which determines how we can manipulate psychological discrepancy empirically.

7 In a computational approximation, we first specified a distribution for each parameter. Then we randomly drew a certain number of values from the specified distribution. The next step was to create a list of all combinations of parameter values. For example, if there were three parameters and we drew five values for each parameter, there would be 5^3 = 125 combinations of parameter values. If the derivative was with respect to P, we specified a wide range of P values and let the computer calculate a series of derivatives for each of the 125 combinations. The sign of each calculated derivative was evaluated to discern a pattern.
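
A minimal version of this procedure is sketched below with three hypothetical parameters (a, b, c) and a stand-in function f(P; a, b, c); the distributions, the function, and the range of P are illustrative assumptions, not the models tested in the article.

```python
# Sketch: draw parameter values, form all combinations, numerically
# differentiate with respect to P, and inspect the sign pattern.
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Five random draws per hypothetical parameter from a specified distribution.
draws = {
    "a": rng.uniform(0.5, 2.0, 5),
    "b": rng.uniform(0.0, 1.0, 5),
    "c": rng.uniform(1.0, 5.0, 5),
}
combos = list(itertools.product(*draws.values()))   # 5**3 = 125 combinations

def f(P, a, b, c):
    """Stand-in for a model expression whose derivative in P is of interest."""
    return a * (1 - np.exp(-b * P)) / (c + P)

P = np.linspace(1, 100, 500)                        # a wide range of P values
patterns = set()
for a, b, c in combos:
    dfdP = np.gradient(f(P, a, b, c), P)            # numerical derivative with respect to P
    patterns.add(tuple(sorted(set(np.sign(dfdP).astype(int)))))

print(f"{len(combos)} combinations; observed sign patterns of df/dP: {patterns}")
```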

8 To be eligible to participate in the study, an MTurk worker must have had 5,000 or more approved MTurk tasks and a 98% or above approval rating for completed tasks. These criteria were set to maintain a desirable level of data quality. Each participant was paid $2.20. This payment was based on the U.S. federal minimum wage, $7.25 an hour (the rate in effect in November 2019; U.S. Department of Labor, Citation2009). Given that the estimated time to complete the survey for the main study was 18 min (based on pilot studies), $7.25 / 60 × 18 ≈ $2.20. The actual completion time approximately met our expectation, M = 19.96 min, Mdn = 15.84 min, N = 448.

9 With an alpha level of .05 and a 5% chance of falsely retaining a false H0 (i.e., power = .95), a total sample size of 450 (i.e., 50 participants in each condition) would be needed to detect a significant interaction effect with a Cohen’s (Citation1988) f between 0.20 and 0.25.
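
For readers who want to reproduce this kind of calculation, the sketch below computes power for the 3 × 3 interaction from the noncentral F distribution; it assumes the standard fixed-effects ANOVA setup (numerator df = 4, denominator df = N − 9) and uses f = 0.225, the midpoint of the range reported above.

```python
# Sketch: power of the 3 x 3 interaction test from the noncentral F distribution.
# Assumes numerator df = (3 - 1)(3 - 1) = 4 and denominator df = N - 9.
import scipy.stats as st

N, cells, df_num, alpha, f = 450, 9, 4, 0.05, 0.225
df_den = N - cells
ncp = (f ** 2) * N                                  # noncentrality parameter, lambda = f^2 * N
f_crit = st.f.ppf(1 - alpha, df_num, df_den)        # critical F at alpha = .05
power = 1 - st.ncf.cdf(f_crit, df_num, df_den, ncp)
print(f"Power ≈ {power:.3f} for f = {f} and N = {N}")
```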

10 A congeneric relationship means that the relationships between item true scores are linear. “The congeneric model assumes that each individual item measures the same latent variable, with possibly different scales, with possibly different degrees of precision, and with possibly different amounts of error” (Graham, Citation2006, p. 935).
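
To make the congeneric idea concrete, here is a small simulation (ours, with arbitrary numbers) in which three items measure the same latent true score with different loadings and different error variances:

```python
# Sketch: three congeneric items -- same latent true score, different scales
# (loadings) and different amounts of error (error variances).
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
true_score = rng.normal(0, 1, n)                    # the shared latent variable

loadings = np.array([1.0, 0.6, 1.4])                # possibly different scales
error_sd = np.array([0.5, 0.9, 0.3])                # possibly different amounts of error
items = true_score[:, None] * loadings + rng.normal(0, error_sd, (n, 3))

# Item true scores are linear functions of one another, so the items correlate
# positively but not equally (unlike tau-equivalent or parallel items).
print(np.corrcoef(items, rowvar=False).round(2))
```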

11 An exploratory factor analysis (EFA) using maximum likelihood extraction and the direct-oblimin rotation with four factors revealed that the attention item loaded extremely poorly on all factors. Therefore, the attention item was excluded from the subsequent analyses. An EFA based on the remaining 10 items using the same extraction and rotation method with four factors indicated a good fit, χ²(11, N = 443) = 18.45, p = .07. After reviewing the pattern of loadings and the intercorrelations between the factors (see Online Supplemental Tables 7 to 9), we averaged the 10 items to form a single w_p score for the following reasons: (1) the subscales constructed based on the EFA results were all significantly and positively correlated with each other (see Online Supplemental Table 9); (2) the determinant of the correlation matrix of the 10 items was .00045, indicating strong linear dependence among these items; (3) there was only one factor that had an eigenvalue greater than one (see Online Supplemental Table 7), indicating strong evidence of unidimensionality; (4) the four items that loaded highly on the first factor had a sufficient level of reliability, ω = .91, 95% CI [.90, .93]; (5) the Pearson correlation coefficient between the first factor’s factor score and the 10-item average was .90, p < .01, indicating that the first factor was well represented by the 10-item average. Although we believe that we have presented strong evidence showing that our measurement of perceived weight and our use of the 10-item average were reasonable, there may be other variables that can be used to further improve the measurement of perceived weight.
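
The determinant and eigenvalue checks described above are easy to reproduce from an item correlation matrix; the sketch below applies them to simulated 10-item data standing in for the w_p items (the maximum likelihood extraction and direct-oblimin rotation themselves are available in packages such as R's psych or Python's factor_analyzer and are not reproduced here).

```python
# Sketch: unidimensionality checks on an item correlation matrix,
# using simulated data in place of the 10 w_p items.
import numpy as np

rng = np.random.default_rng(4)
n, k = 450, 10
common = rng.normal(0, 1, n)                                   # a single common factor
items = common[:, None] * rng.uniform(0.6, 0.9, k) + rng.normal(0, 0.5, (n, k))

R = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]

print(f"Determinant of R: {np.linalg.det(R):.6f}")             # near zero => strong linear dependence
print(f"Eigenvalues > 1: {(eigenvalues > 1).sum()}")           # one dominant factor => unidimensionality
print(f"First eigenvalue: {eigenvalues[0]:.2f}")
```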

12 We also used the perceived message scale value as the dependent variable to corroborate the successful manipulation of s. A two-way ANCOVA was conducted with U and s as independent variables, and s_p as the dependent variable. Results revealed a significant main effect of s, F(2, 427) = 44.00, p < .001, η² = .17. Neither the main effect of U nor the interaction effect was significant. The linear contrast, in which s_p is an increasing function of s, was significant, F(1, 427) = 79.24, p < .001, η² = .16, and the quadratic contrast of s was also significant, F(1, 427) = 8.84, p = .003, η² = .02. The significant quadratic effect of s was not due to a downturn in s_p.
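
A two-way ANCOVA with polynomial (linear and quadratic) contrasts on the three-level factor s can be specified as below; the data frame, the column names, and the covariate are placeholders rather than the study's actual variables.

```python
# Sketch: two-way ANCOVA on perceived scale value (s_p) with polynomial
# contrasts for the ordered three-level factor s. All names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("manipulation_check.csv")          # hypothetical columns: s_p, s, U, covariate

# C(s, Poly) produces orthogonal linear and quadratic contrast codes for s.
model = smf.ols("s_p ~ C(U) * C(s, Poly) + covariate", data=df).fit()

print(anova_lm(model, typ=2))                       # omnibus tests for U, s, U x s, and the covariate
print(model.summary())                              # t tests for the individual linear/quadratic contrasts
```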

13 We included 45 cases who did not have a missing value in one of the w_p items to conduct the same linear regression analysis for testing w_p = w_Δ(ψ) = w·e^(-γψ) in H4. All estimated coefficients had the same sign and the same significance level as those reported in (Panel a), which did not change the interpretation of the results.

14 We included all cases to conduct the same linear regression analysis for testing s_p = s_Δ(ψ) = s·e^(γψ) in H5. The estimated coefficients for the intercept and D had the same sign and the same significance level as those reported in (Panel b). The estimated coefficients for 1/P and D/P were negative but not significant, which did not change the interpretation of the results.
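
Both this note and the preceding one refer to a linear regression with the intercept, D, 1/P, and D/P as terms; a generic version of that analysis is sketched below, with the data frame, the column names, and the log transformation of the outcome treated as our assumptions rather than the paper's exact specification.

```python
# Sketch: linear regression used to test the exponential models, assuming a
# log-transformed outcome and the predictors D, 1/P, and D/P. Placeholders only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("belief_change.csv")               # hypothetical columns: s_p, D, P
df["log_s_p"] = np.log(df["s_p"])
df["inv_P"] = 1 / df["P"]
df["D_over_P"] = df["D"] / df["P"]

fit = smf.ols("log_s_p ~ D + inv_P + D_over_P", data=df).fit()
print(fit.summary())                                # sign and significance of each coefficient
```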

15 We included all cases to conduct the same nonlinear regression analysis for testing s_p = s_Δ(ψ) + s_0 = s_k(1 - e^(-γψ)) + s_0. The result showed that the estimated b_1, b_2, and b_3 were nonsignificant. Despite the discrepancy in the result, we believe that our decision to exclude some of the participants is justified because those participants were considered out of the scope of the four models in various ways, as explained earlier in the “Hypothesis Testing” section.
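
The nonlinear regression named here can be fit directly with scipy.optimize.curve_fit; in the sketch below, b_1, b_2, and b_3 stand in for s_k, γ, and s_0, and the data frame and the ψ column are placeholders.

```python
# Sketch: nonlinear regression s_p = b1 * (1 - e^(-b2 * psi)) + b3, where
# b1, b2, and b3 play the roles of s_k, gamma, and s_0. Placeholders only.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def scale_value_model(psi, b1, b2, b3):
    return b1 * (1 - np.exp(-b2 * psi)) + b3

df = pd.read_csv("belief_change.csv")               # hypothetical columns: s_p, psi
params, cov = curve_fit(scale_value_model, df["psi"], df["s_p"], p0=[1.0, 0.01, 0.0])
ses = np.sqrt(np.diag(cov))

for name, est, se in zip(("b1", "b2", "b3"), params, ses):
    print(f"{name} = {est:.4f} (SE = {se:.4f})")
```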

16 In regression analysis, two models are nested when one of the models contains all the terms in the other model and at least one additional term. The tested models here were not nested (see Online Supplemental Table 10).

17 We conducted a sensitivity test due to the nonnormal distributions of the regression residuals. This sensitivity test specified alternative models to the ones in Supplemental Table 10; these alternative models included religiosity (a measured variable) as a predictor (see Huang, Citation2020, pp. 212–213). The alternative weight discounting model and the alternative scale-value model had a milder violation of the normality assumption for the regression residuals than the models in Supplemental Table 10. Based on AIC, AICc, and BIC, the weight discounting model was still the most plausible model, with the scale-value model as a close second, although the two models differed only slightly.
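
For non-nested comparisons of this kind, AIC and BIC are reported directly by statsmodels, and AICc can be computed from AIC with the small-sample correction; the sketch below applies all three criteria to two OLS fits on placeholder data and placeholder formulas.

```python
# Sketch: comparing non-nested regression models with AIC, AICc, and BIC.
# The data file and the model formulas are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def aicc(fit):
    """Small-sample corrected AIC: AIC + 2k(k + 1) / (n - k - 1)."""
    n, k = fit.nobs, fit.df_model + 1          # parameters counted as in statsmodels' AIC
    return fit.aic + (2 * k * (k + 1)) / (n - k - 1)

df = pd.read_csv("belief_change.csv")          # hypothetical data
models = {
    "weight discounting": smf.ols("final_position ~ weight_terms", data=df).fit(),  # placeholder formula
    "scale value": smf.ols("final_position ~ scale_terms", data=df).fit(),          # placeholder formula
}

for name, fit in models.items():
    print(f"{name}: AIC = {fit.aic:.2f}, AICc = {aicc(fit):.2f}, BIC = {fit.bic:.2f}")
```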
