Original Articles

Combining Anecdotal and Statistical Evidence in Real-Life Discourse: Comprehension and Persuasiveness

ABSTRACT

The persuasiveness of anecdotal evidence and statistical evidence has been investigated in a large number of studies, but the combination of anecdotal and statistical evidence has hardly received research attention. The present experimental study therefore investigated the persuasiveness of this combination. It also examined whether the quality of anecdotal evidence affects persuasiveness and to what extent people comprehend the combination of anecdotal and statistical evidence. In an experiment, people read a realistic persuasive message that was relevant to them. Results showed that anecdotal evidence does not benefit from the inclusion of statistical evidence or from its intrinsic quality. The analysis of readers’ cognitive thoughts showed that only some participants comprehended the relationship between anecdotal and statistical evidence.

Introduction

When describing recent phenomena in the world or highlighting urgent problems, newsmakers and influencers often use base-rate statistics to demonstrate the large-scale impact of a phenomenon and use narratives to exemplify it (Gibson, Callison, & Zillmann, Citation2011; Zillmann & Brosius, Citation2000). For instance, news about the impact of the financial crisis on households may include both the percentage of households with financial difficulties and a cover story about one family facing that crisis. A brochure urging readers to support children in Africa may underline the scope of the problem with statistics and directly speak to the reader’s heart with a story about a child’s difficult situation.

The two strategies seem to have a different appeal. The strength of base rates (statistics, or statistical evidence) lies in showing how widespread a certain phenomenon is. The appeal of exemplars (case histories, narrative evidence, story evidence, or anecdotal evidence) lies in describing one concrete event. In journalism, a large number of studies have investigated the combined use of base rates and exemplars (for a review, see Zillmann & Brosius, Citation2000). These studies have aimed to examine how readers or listeners perceive incidence rates of events in the news. Base rates and exemplars are not only used in relatively objective news reporting; they are also used in a persuasive setting, aiming to convince readers or listeners of a certain standpoint. However, research on the combination of base rates and exemplars in this setting is scarce (see Allen et al., Citation2000). Therefore, the present article reports on a study examining the persuasive impact of the combination of base rates (statistical evidence) and exemplars (anecdotal evidence). It investigates three important questions. First, does an anecdote have more impact if the sample from which it is taken is also presented (i.e., statistical evidence)? Second, does an anecdote have more impact if it is similar to the advocated case in the message? Studies have demonstrated such an impact (e.g., Hoeken & Hustinx, Citation2009) but have not examined the issue in a realistic setting with a message relevant to readers. Third, the present study examines not only persuasion but also the comprehension of the combination of an anecdote and a corresponding sample in statistical evidence. This unique comprehension perspective is studied in a real-life setting of discourse about environmental issues.

The remainder of the article is structured as follows. First, the literature on the persuasiveness of anecdotal and statistical evidence, and of their combination, is reviewed. Second, it is explained how the similarity of anecdotal evidence with the case it supports may impact its persuasiveness. Finally, the importance of studying the comprehension of the combination of anecdotal and statistical evidence is underlined.

Anecdotal and statistical evidence

In diverse fields, from advertising and argumentation to cognitive psychology and mass communication, researchers have long been interested in the impact of the kind of data in messages on readers’ beliefs and attitudes. These data have often been labeled “evidence,” defined as “data (facts or opinions) presented as proof for an assertion” (Reynolds & Reynolds, Citation2002, p. 429). The persuasiveness of different types of evidence has been empirically investigated for more than 60 years, attracting considerable research attention, from what is probably the first empirical study (Cathcart, Citation1955) until more recent investigations (e.g., Han & Fink, Citation2012; Hornikx & Ter Haar, Citation2013; Kim et al., Citation2012). These empirical studies have inspired critical analyses (e.g., Hornikx, Citation2007; Kellermann, Citation1980) and reviews (e.g., Allen & Preiss, Citation1997; Baesler & Burgoon, Citation1994; Reinard, Citation1988).

Four types of evidence have commonly been distinguished: anecdotal, statistical, causal, and expert evidence. Anecdotal (or narrative) evidence consists of one case, whereas statistical evidence consists of numerical information about a large number of cases. Causal evidence consists of an explanation, and expert evidence consists of a confirmation by an expert. The distinction between different types of evidence originates from debate handbooks aiming at educating students for a legal career (e.g., Warnick & Inch, Citation1989). Inspired by these handbooks, experimental studies in the field of communication first compared the persuasiveness of anecdotal, statistical, and expert evidence (see Reinard, Citation1988). Later experiments added causal evidence to the comparison (Slusher & Anderson, Citation1996).

Most studies have compared the persuasiveness of anecdotal evidence versus statistical evidence. Their findings have been summarized in narrative reviews (Baesler & Burgoon, Citation1994; Hornikx, Citation2005) and meta-analyses (Allen & Preiss, Citation1997; Zebregs, Van den Putte, Neijens, & De Graaf, Citation2015). Allen and Preiss (Citation1997) included 16 comparisons in their analysis and concluded that statistical evidence is more persuasive than anecdotal evidence. More recently, Zebregs et al. (Citation2015) ran a similar analysis, based on 15 (partly different) comparisons. Their analysis was done for different persuasion measures separately, as anecdotal evidence has been said to be particularly powerful on behavioral intentions (see Kopfman, Smith, Yun, & Hodges, Citation1998). The meta-analysis of Zebregs et al. (Citation2015) showed that whereas statistical evidence had more impact on beliefs and attitudes than anecdotal evidence, anecdotal evidence was found to lead to higher levels of intention than statistical evidence.

These meta-analytic results underline that both types of evidence have their merits. Allen et al. (Citation2000, p. 332), however, observe that “current research treats the issue as though the use of evidence in a message requires a tradeoff, as if the use of one form precludes the use of another form of evidence.” It makes sense to investigate the impact of using both statistical and anecdotal evidence. In their additive model for the effectiveness of evidence, Kim et al. (Citation2012) argue that a larger number of pieces of evidence is expected to generate higher persuasiveness than a single piece of evidence. Combining anecdotal or statistical evidence with expert evidence, they found some support for their additive model. The combination of statistical and anecdotal evidence, however, is much more natural: Anecdotal evidence is often an example taken from the sample of cases presented as statistical evidence. Although the importance of examining the impact of the combination of anecdotal and statistical evidence has been regularly underlined (Allen & Preiss, Citation1997; Kopfman et al., Citation1998; Lindsey & Yun, Citation2003), it has received very limited attention. Allen et al. (Citation2000) is probably the only examination. They compared anecdotal evidence, statistical evidence, and the combination of anecdotal with statistical evidence and reported that the combination was more persuasive than either of the two types of evidence in isolation.

In journalism studies, the combination of statistical evidence and anecdotal evidence has been widely examined under the heading of base-rate information and exemplars. News reports generally use both types of evidence to demonstrate certain phenomena (Zillmann & Brosius, Citation2000). The relevant question was how readers assess incidence rates depending on a given base rate and a handful of exemplars. Typically, the exemplars were or were not consistent with the base rate (e.g., Brosius & Bathelt, Citation1994; Gibson et al., Citation2011). For instance, in Gibson et al. (Citation2011) the base rate stated that 67% (or 33%) of people who traveled to a specific African country contracted traveler’s diarrhea. There were nine exemplars of persons, six of whom reported suffering from traveler’s diarrhea. This distribution was consistent with the base rate in one condition (6 of 9 = 67%) but not in the other (i.e., 33%). In these kinds of studies, participants have been found to generally follow the distribution in the exemplars and not the base-rate information.

From a theoretical perspective, exemplification has a strong impact on the way people assess incidence rates (see Zillmann & Brosius, Citation2000), and experiments in journalism have corroborated this. Such journalism studies, however, have not addressed the question as to how persuasive combining the two kinds of information is. With only Allen et al. (Citation2000) as a first step in this direction, the question has not yet been resolved. Given that statistical evidence has been found to be a strong type of evidence (Allen & Preiss, Citation1997) and that multiple pieces of evidence are more persuasive than a single piece (additive model of Kim et al., Citation2012), it is expected that including statistical evidence in a message with anecdotal evidence has a more positive effect on persuasiveness than anecdotal evidence alone:

H1:

Anecdotal evidence taken from a sample presented as statistical evidence is more persuasive than anecdotal evidence alone.

Similar and dissimilar anecdotal evidence

In the context of news reports, readers expect journalists to objectively select exemplars from a large sample of cases (Zillmann & Brosius, Citation2000). In the context of a sender with a persuasive intent, however, readers expect senders to deliberately select the anecdote that best supports the claim they put forward (Zillmann & Brosius, Citation2000).

A relevant question is what makes for a good anecdote. Insights from argumentation theory help to address this question. When evidence is used to support a claim, a line of argumentation is built. Each line of argumentation can be characterized by the way in which the argument supports the claim: the argumentation scheme (e.g., Van Eemeren & Grootendorst, Citation1992). Argumentation theorists have developed normative criteria that can be used to identify and evaluate argumentation schemes (e.g., Van Eemeren & Grootendorst, Citation1992; Walton, Macagno, & Reed, Citation2008). An argument is stronger to the extent that it meets more normative criteria specified for a given argumentation scheme. Studies have empirically compared evidence that differs to the extent to which it meets such criteria. The results from these studies generally support the idea that complying with these criteria makes for more persuasive evidence (e.g., Hoeken, Šorm, & Schellens, Citation2014; Hoeken, Timmers, & Schellens, Citation2012; Hornikx & Hoeken, Citation2007).

What makes for the best anecdote depends on the type of claim. Anecdotal evidence can be connected to two different argumentation schemes, depending on the claim: the argument by generalization and the argument by analogy (Hoeken & Hustinx, Citation2009). In an argument by generalization, the generality of an effect in the claim (e.g., “Family businesses in Spain benefit from Chinese investors”) is inferred from the cases in the evidence. This means that for a strong argument by generalization, there should be a sufficient amount of data (i.e., a number of cases in which Spanish businesses have indeed benefited from Chinese investments). Whereas this criterion is typically met by statistical evidence, it is not met by anecdotal evidence. This is why empirical research comparing anecdotal with statistical evidence, which was mostly based on this argument by generalization, has found superior scores for statistical evidence on beliefs and attitudes (Allen & Preiss, Citation1997; Zebregs et al., Citation2015). Anecdotal evidence can also be part of an argument by analogy. In that case, what is claimed to be true or probable for a case in the claim (e.g., “The local food business Alvarez in Alicante will benefit from Chinese investors”) is supported by a case in the evidence (e.g., “The food business Herrera in Almería recently got financial aid from Chinese investors, and that help has been successful”). The quality of anecdotal evidence increases with its similarity to the case in the claim (e.g., Walton et al., Citation2008). For the Spanish case, what matters is that Alicante and Almería share similar characteristics. If the quality increases, does that affect the persuasiveness of the anecdotal evidence? A few studies have addressed this question.

Hoeken and Hustinx (Citation2009, Study 3) showed that the similarity between the case in anecdotal evidence and the case in the claim indeed affects the persuasiveness of anecdotal evidence: Similar anecdotal evidence was more persuasive than dissimilar anecdotal evidence. Hoeken et al. (Citation2012) report the same result in a comparable study. In these two studies, participants were confronted with short text fragments consisting only of claims with evidence. Hoeken and Hustinx (Citation2007) examined whether readers were also sensitive to quality manipulations when the claim and evidence were embedded in a longer text. In the short text fragments, similar anecdotal evidence was found to be more persuasive than dissimilar anecdotal evidence; in the longer text fragments with additional information irrelevant to the evidence, this effect of similarity was absent. The messages in Hoeken (Citation2001) were not just text fragments but realistic texts: a real-life letter that proposed a policy for a given city. The anecdotal evidence supporting this claim originated from a city that was similar or dissimilar to the given city. In this study the impact of similarity was not observed: Both types of anecdotal evidence were equally persuasive.

In summary, the quality of anecdotal evidence has been found to affect the persuasiveness of short text fragments but not of longer text fragments or realistic texts. In all cases, however, the texts were not relevant to the readers. Relevance of the topic, or issue involvement, is an important theoretical determinant of the way in which people process persuasive messages (Chaiken, Citation1987; Petty & Cacioppo, Citation1986). It has been demonstrated that people are more sensitive to argument quality when they are motivated to read a message that is relevant to them than when they are less motivated (Petty & Cacioppo, Citation1986; Petty, Rucker, Bizer, & Cacioppo, Citation2004; but see Park, Levine, Westerman, Orfgen, & Foregger, Citation2007). Theoretically, the quality of evidence should matter for persuasiveness, certainly in situations with high issue-involvement (cf. Walton et al., Citation2008; Chaiken, Citation1987), but the expected difference between similar and dissimilar anecdotal evidence has not yet been examined in a realistic text targeted at a relevant audience. The present study aims to fill this gap by investigating whether the similarity of anecdotal evidence matters for readers when the message is relevant to them:

RQ1:

Is similar anecdotal evidence more persuasive than dissimilar anecdotal evidence in a real-life message that is relevant to the readers?

Comprehension of the combination of anecdotal and statistical evidence

Whereas the persuasiveness of evidence types has been examined in a large number of experimental studies, how people process evidence types in support of claims has been largely neglected. In discourse studies on argumentation, attention has been devoted to the processing of arguments in discourse. Voss et al. (Citation1993), for instance, developed and tested a model of argument processing, according to which an important aspect is people evoking their own attitudes and beliefs when reading claims with arguments. Studies have examined various aspects of human processing of arguments, for instance, in terms of how quickly people respond to argumentative discourse (e.g., Wolfe, Tanner, & Taylor, Citation2013), to what extent they recognize the structure of such discourse (e.g., Chambliss & Murphy, Citation2002), or what exactly they recall after reading such discourse (e.g., Britt et al., Citation2007). This interest concerns argumentation independently of the types of arguments. In the area of evidence types, Allen et al. (Citation2000, p. 335) have underlined the need for measuring what people do when they encounter evidence types: “One issue still unresolved in the literature is the nature of cognitive processing.” Only two studies have examined this issue. Kopfman et al. (Citation1998) and Feeley, Marshall, and Reinhart (Citation2006) both examined the cognitive thoughts evoked after reading anecdotal or statistical evidence. Kopfman et al. (Citation1998) expected and found that a message with statistical evidence generated more cognitive thoughts than a message with anecdotal evidence. Feeley et al. (Citation2006) replicated this study with methodological improvements but were unable to find the same results. What is relevant for the current study, and what is unknown, is how people process the combination of anecdotal and statistical evidence. This study shares the interest of discourse studies on argument processing but focuses on the comprehension of the specific combination of two types of evidence: anecdotal and statistical evidence. Because anecdotal and statistical evidence are naturally connected (the anecdote is part of the sample in the statistical evidence), it is useful to gain insights into how people make sense of this combination: Do they comprehend how the two relate to each other? Insights into how readers comprehend this combination may improve our understanding of the persuasiveness of this combination of anecdotal and statistical evidence:

RQ2:

How do readers comprehend the combination of anecdotal and statistical evidence in a real-life message that is relevant to them?

The present study consists of an experiment in which participants were given a realistic letter about an environmental proposal of their own municipality. The letter contained similar or dissimilar anecdotal evidence and did or did not contain statistical evidence.

Methods

Materials

Participants from two different cities in the Netherlands were given a letter from their municipality concerning an environmental issue. Inhabitants of Nijmegen pay part of their environmental waste tax through a tax on the price of particular litter bags. Inhabitants of this city were given a letter from their municipality that announced its decision to increase the price of these litter bags by 1 euro. The other half of the participants, from Arnhem, received a different letter, telling them that their municipality was considering introducing a €0.15 deposit on 0.5-liter drink bottles to reduce waste on the streets. Four versions of each letter were constructed. The letters differed with respect to the evidence presented to support the claim that the price increase (of litter bags or of drink bottles) would keep the town cleaner: Anecdotal evidence had a high or a low quality, and statistical evidence was or was not presented. Appendix 1 provides the four versions of the text about litter bags.

All letters included anecdotal evidence. Anecdotal evidence consisted of another city where the increase in the price of litter bags or of drink bottles had indeed helped to keep that town cleaner. The quality of anecdotal evidence was manipulated through the degree of similarity between the anecdotal city and the inhabitants’ city (cf. Hoeken, Citation2001): Half of the letters mentioned a similar city and the other half a dissimilar city. In a previous study (Hornikx & Houët, 2009), the town of Tilburg (M = 4.14) had already been found to be more similar to Nijmegen than the town of Wassenaar (M = 2.08) (on a seven-point scale, where a higher score implies a higher similarity). These two towns were used in the anecdotal evidence in the litter bags letter: “A test in Tilburg/Wassenaar last year showed that an increase in the price of litter bags has led to an improvement in the quality of waste disposal.” To stress the similarity between Nijmegen and Tilburg, the letter mentioning Tilburg stated that both towns had an old center and a vibrant student community. For the water bottle letter (Arnhem), a pretest was conducted among 53 people (age: M = 31.40, SD = 12.21; range, 19–60; 54.7% were men; 45.3% indicated having obtained a bachelor’s degree). On a seven-point scale (where a higher score implies a higher similarity), the largest difference was found between the town of Zwolle (M = 4.57, SD = 1.18) and the town of Middelburg (M = 2.57, SD = 1.12; F(1, 52) = 62.80, p < .001, η2 = .55). The similarity between Arnhem and Zwolle was highlighted in the letter with Zwolle by indicating that both towns were medium-sized towns that were important logistical centers in the eastern part of the country.

In half of the letters the general success of the price increase was mentioned by including statistical evidence. For the litter bags letter, for instance, the text read, “The government has commissioned a test in fourteen towns in the Netherlands last year. The test has shown that an increase in the price of litter bags has led to an improvement of the quality of waste disposal. In the towns that participated a positive result was found within a year.” The other half of the letters did not contain this statistical evidence.

The four versions of each letter were similar in layout and fonts. Each letter contained the municipality’s address and logo and was signed by a fictitious municipal employee.

Participants

In total, 286 inhabitants took part in the study: 144 from Nijmegen, who evaluated the text concerning litter bags, and 142 from Arnhem, who evaluated the text concerning water bottles. On average, participants were 32.40 years old (SD = 12.83; range, 16–80); 51.4% of them were women. The education of participants ranged from elementary school to university (60.2% indicated having obtained a bachelor’s or a master’s degree). Aggregating over topic (litter bags or drink bottles), the four versions of the letter (high/low evidence quality × presence/absence of statistical evidence) did not differ with respect to the participants’ mean age (F (3, 282) < 1), their gender distribution (χ2 (3) = 4.37, p = .22), or their level of education (χ2 (12) = 13.74, p = .32).

Design

The experiment had a 2 (quality of anecdotal evidence: strong, weak) × 2 (statistical evidence: present, absent) × 2 (topic: litter bags or drink bottles) between-subjects design. Each version of the letter was read by nearly the same number of participants (i.e., 35 or 36).

Instrumentation

The persuasiveness of the letter was measured with beliefs, attitudes, and intention. Beliefs were assessed on the basis of two relevant belief statements, “Price increase [of litter bags/water bottles] leads to a cleaner city” and “Price increase [of litter bags/water bottles] leads to a decrease of the amount of waste in the city,” rated on three semantic differentials, including “realistic–not realistic” and (twice) “probable–improbable” (α = .88). Attitudes were measured with a statement (“The €1 litter bag price increase is” or “I believe deposits of €0.15 on half a liter bottles are”) followed by four 7-point semantic differentials: “good–bad,” “reasonable–not reasonable,” “not necessary–necessary,” and “negative–positive” (α = .92). Intention was measured by asking participants whether they would vote in favor of or against the proposal (or whether they would abstain from voting; these cases were excluded from the analyses).
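To make the reliability figures reported here concrete, the following is a minimal computational sketch of how a coefficient such as α = .92 for the four attitude items could be obtained from item-level responses. This is an illustration only, not the authors’ analysis script; the function name and the example rating matrix are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an item matrix (rows = respondents, columns = items)."""
    k = items.shape[1]                           # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point ratings of five respondents on the four attitude items
# ("good-bad", "reasonable-not reasonable", "not necessary-necessary",
# "negative-positive"), all coded so that a higher score is more positive.
ratings = np.array([
    [6, 5, 6, 6],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [7, 6, 6, 7],
    [3, 2, 3, 3],
])

print(round(cronbach_alpha(ratings), 2))  # high internal consistency for these made-up data
```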

The persuasion measures were followed by three manipulation checks. Perceived difficulty of the letter was measured with three 7-point semantic differentials (α = .80), such as “easy–difficult” and “complex–simple.” The perceived vividness of the letter was measured with three 7-point semantic differentials, but because of the low reliability of the construct (α = .37), the analysis was done with only one item: “concrete–abstract.” Finally, the perceived similarity between participants’ city and the city in the anecdotal evidence was checked with a seven-point Likert scale (where a higher score implies a higher similarity).

Only if statistical evidence was presented in the letter were participants’ cognitive responses about the combination of anecdotal and statistical evidence elicited on the basis of two open questions. The first question was “In the letter, a test in fourteen towns in the country was mentioned. What is your opinion about the writer’s choice to highlight [Similar_City]/[Dissimilar_City]?” Each response was coded by two independent coders in two ways: the valence of the response (positive, negative, neutral; κ = .94) and the content of the response (κ = .72). The content of the response was classified into four categories, of which only one indicated a sufficient level of comprehension: high/low comparability between the cities. The other three categories were not knowing the anecdotal city, stating that it is not important which city is highlighted, and stating that it depends on the other cities in the test. For example, the answer “Bad, Wassenaar does not resemble my own city” was coded as “negative” and as “low comparability between the cities.”

The second question was “One of the 14 towns was [Similar_City]/[Dissimilar_City]. What characteristics do you think the other 13 towns in the test have? (for example, location or inhabitants).” Two coders also analyzed responses to this question; five categories emerged (κ = .69). Four categories consisted of thoughtful answers indicative of a sufficient level of comprehension: one reflecting the notion of representativeness of the towns in the sample for the larger population and three reflecting similarity between the towns in the sample and the anecdote or the participants’ own town. An example of a representativeness answer was “I hope they are a representation of the different towns in the Netherlands.” Only one category was considered not indicative of sufficient comprehension; this was the case if participants simply repeated the examples given in the question: location and/or inhabitants. Table 1 provides examples for all categories of answers given in reaction to the two questions. For all three coding tasks, coders reached agreement on the cases for which they originally had divergent codings.

Table 1. Coding categories with examples for comprehension.
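The intercoder agreement coefficients reported above (e.g., κ = .94 for valence) can be illustrated with a short sketch of Cohen’s kappa for two coders who label the same set of responses. The codings below are invented for illustration and are not the study’s actual data.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal codings of the same responses."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # raw agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical valence codings (positive / negative / neutral) of ten open responses
coder_1 = ["pos", "neg", "neg", "neu", "pos", "pos", "neg", "neu", "pos", "neg"]
coder_2 = ["pos", "neg", "neg", "neu", "pos", "neg", "neg", "neu", "pos", "neg"]

print(round(cohen_kappa(coder_1, coder_2), 2))  # agreement corrected for chance
```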

Finally, participants’ relevant personal characteristics were assessed. When measuring the impact of argumentative discourse, it is important to note that readers tend to stick to the opinion they held before exposure to the discourse (i.e., myside bias; see Wolfe & Britt, Citation2008). Therefore, participants’ attitude toward the environment was assessed to control for possible differences between the participants in the different conditions. Involvement with the letter’s topic was measured with four statements (inspired by Cho & Boster, Citation2005) followed by seven-point Likert scales (α = .68). Two example statements were “I believe it is important that Nijmegen is a clean town” and “The amount of waste in Nijmegen is an important issue.” Participants’ environmental awareness was measured with a seven-point Likert scale: “I consider myself environmentally aware.” The questionnaire ended with questions about participants’ age, gender, and level of education.

Procedure and statistical tests

Participants were approached at different locations in the two cities: the railway station, the city center, and the university campus. Participation was voluntary, and there was no reward. After participants had filled in the paper-and-pencil questionnaire, there was a debriefing. They were told that the local university had created the letter and that their city was not considering increasing the prices of litter bags or of drinking bottles. They also received this information in a written note.

H1 and RQ1, concerning the impact of including statistical evidence and the quality of anecdotal evidence, respectively, were addressed with χ2 tests (for intention) and MANOVAs (for persuasion, consisting of beliefs and attitudes). RQ2, on the cognitive thoughts, was addressed with χ2 tests.
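As an illustration of this analytic approach, the sketch below runs a χ2 test on a 2 × 2 table of voting intentions and a two-way MANOVA on beliefs and attitudes. It assumes a participant-level data frame with columns named statistical, quality, beliefs, attitudes, and vote; these names and the simulated values are hypothetical stand-ins, not the study’s data or analysis code.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 280

# Simulated stand-in for the participant-level data (hypothetical values)
df = pd.DataFrame({
    "statistical": rng.choice(["present", "absent"], n),  # statistical evidence in the letter
    "quality": rng.choice(["similar", "dissimilar"], n),  # quality of the anecdotal evidence
    "beliefs": rng.normal(4.3, 1.0, n),                   # 7-point belief scale score
    "attitudes": rng.normal(4.4, 1.1, n),                 # 7-point attitude scale score
    "vote": rng.choice(["for", "against"], n),            # voting intention
})

# Chi-square test: voting intention by presence of statistical evidence (H1)
table = pd.crosstab(df["statistical"], df["vote"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")

# Two-way MANOVA on beliefs and attitudes together (H1 and RQ1)
manova = MANOVA.from_formula("beliefs + attitudes ~ statistical * quality", data=df)
print(manova.mv_test())
```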

Results

Aggregating across topic

The impact of the two independent variables of interest (anecdotal evidence quality, presence of statistical evidence) on persuasion was investigated in two different letters, one for litter bags and one for drinking bottles. There were no interaction effects between the two variables of interest and the topic of the letter (Topic × Quality: F (2, 277) = 2.19, p = .11; Topic × Statistical Evidence: F (2, 277) < 1; Topic × Statistical Evidence × Quality: F (2, 277) < 1). Therefore, the results below are presented across the two topics (increasing the power of the statistical tests).

Manipulation checks

The four letters did not differ with respect to their perceived difficulty (F (3, 281) = 1.46, p = .23; M = 5.22, SD = 1.13) or perceived vividness (F (3, 282) = 1.82, p = .14; M = 4.06, SD = 0.92). The manipulation of the quality of anecdotal evidence was successful: The similar city (M = 4.26, SD = 1.48) was perceived to be more similar to the inhabitants’ own city than the dissimilar city (M = 2.90, SD = 1.39; F (1, 182) = 63.55, p < .001, η2 = .18). Finally, participants in the four conditions considered themselves equally environmentally aware (F (3, 282) < 1; M = 4.77, SD = 1.39) and indicated they were equally involved in the topic of waste in their own city (F (3, 282) < 1; M = 4.56, SD = 1.10).

Persuasiveness

H1 expected that anecdotal evidence taken from a sample presented as statistical evidence would be more persuasive than anecdotal evidence alone. A MANOVA (including beliefs and attitudes together) showed no main effect of Statistical Evidence (F (2, 281) = 1.12, p = .33): Beliefs and attitudes were not more positive when the anecdotal evidence was sampled from statistical evidence than when it was presented in isolation (see Table 2). When it comes to voting intention, participants were not found to be more positive in the condition with statistical evidence (63.6% positive) than in the condition without statistical evidence (61.3% positive; χ2 (1) = 0.12, p = .73).

Table 2. Persuasiveness as a function of the presence of statistical evidence and the quality of anecdotal evidence.

RQ1 addressed the question whether the quality of the anecdotal evidence would affect persuasion. The MANOVA (including beliefs and attitudes together) showed no main effect of Quality (F (2, 281) = 1.75, p = .18): Beliefs and attitudes were not more positive when the anecdotal evidence was similar than when it was dissimilar to the city in the letter (see Table 2). When it comes to voting intention, participants were not more positive in the condition with high-quality anecdotal evidence (66.7% positive) than in the condition with low-quality anecdotal evidence (57.9% positive; χ2 (1) = 1.77, p = .18). There was no interaction effect between Quality and Statistical Evidence (F (2, 281) = 1.60, p = .20).

Comprehension of anecdotal and statistical evidence

After reading a letter that contained both anecdotal and statistical evidence, participants were presented with two questions about the anecdotal city mentioned in the letter, eliciting cognitive thoughts about the combination of anecdotal and statistical evidence (RQ2). Most participants (89%) wrote down an opinion about the selection of that city, which was coded in terms of valence. Their opinions differed significantly according to the manipulated similarity with their own city (χ2 (2) = 33.82, p < .001). That is, 73% of the responding participants were positive when they had read about the similar city, whereas only 28% were positive when they had read about the dissimilar city.

Sixty-five percent of the participants gave answers that could be coded into one of the four relevant content categories. It appeared that 72% of these responding participants referred to the (high or low) similarity between the two cities, which is indicative of a sufficient level of comprehension. The other three categories of responses, which were not considered to demonstrate comprehension, were relatively rare: 18% of the participants responded that they did not know the anecdotal city, 7% indicated that what mattered was the result and not the example city presented, and 3% responded that it depended on the other cities investigated. There was no effect of the quality of anecdotal evidence on the distribution of responses over the four categories (χ2 (3) = 2.92, p = .40).

The second question asked what characteristics participants assigned to the other 13 cities mentioned in the statistical evidence. Apparently, this question was hard for participants: Only 51% of them responded. Of the 73 responding participants, 9 gave an answer that merely repeated what was already given in the questionnaire (about the location of the cities and/or the inhabitants). As a result, 64 responding participants provided an answer that was indicative of comprehension, which constitutes 45% of the total number of participants. More than half of these responding participants referred to similarity in one of three different ways (see Table 3). The remaining participants indicated that the other 13 cities were representative of a larger population. The responses in the five categories did not significantly differ according to the manipulated quality of the city that was mentioned in the letter (χ2 (4) = 8.62, p = .07).

Table 3. Number of participants mentioning characteristics of the 13 other cities in the statistical evidence.

Discussion

Whereas the persuasiveness of anecdotal evidence and statistical evidence has been subject to numerous experimental investigations (see reviews by Allen & Preiss, Citation1997; Zebregs et al., Citation2015), the potential impact of the combination of both types has hardly been examined. The present study filled this gap by investigating the persuasiveness of this combination: Does the presence of statistical evidence increase the impact of anecdotal evidence? Does the quality of anecdotal evidence affect the persuasive outcome? And to what extent do readers comprehend the combination of anecdotal and statistical evidence? These questions were addressed in the context of a realistic persuasive message that was relevant to the readers.

The combination of anecdotal and statistical evidence was expected to be more persuasive than anecdotal evidence alone on the basis of the results of Allen et al. (Citation2000) and the additive model of Kim et al. (Citation2012), but H1 was not supported by the data. One potential explanation lies in a ceiling effect of the anecdotal evidence: If anecdotal evidence in itself was very persuasive, it makes sense that adding statistical evidence does not impact persuasion. However, this ceiling effect did not occur: For anecdotal evidence alone, scores on attitudes and beliefs did not exceed 4.50 on a seven-point scale, and the percentage of positive voters was 61.3%. A second potential explanation may lie in the difficulty people have in combining the information from the anecdotal evidence with that from the statistical evidence. This is elaborated on below, when the results on the comprehension of the combination of anecdotal and statistical evidence are evaluated.

Half of the letters contained similar anecdotal evidence (high quality) and half of the letters contained dissimilar anecdotal evidence (low quality). In this study, contrary to theoretical expectations, readers were not found to be sensitive to the quality of anecdotal evidence: The similar and dissimilar variants were equally persuasive. This finding cannot be attributed to a failed manipulation: The pretests were successful, and the question about the similarity between the city in the letter and their own city was answered in line with the manipulation. The absence of an effect of evidence quality may be explained by the manipulation being part of a longer text (cf. Hoeken & Hustinx, Citation2007). This is the first study in which anecdotal evidence quality was examined in a real-life setting, with a realistic letter that was relevant to the readers. Readers can be expected to be relatively highly motivated to scrutinize a message that is relevant to them (Chaiken, Citation1987; Petty & Cacioppo, Citation1986), but the data from this study show that this does not lead to a differential impact of the quality of anecdotal evidence. This result sheds light on previous research documenting an effect of evidence quality in laboratory settings in which participants only read a claim with evidence (e.g., Hoeken et al., Citation2012, Citation2014; Hornikx & Hoeken, Citation2007). In a setting that is close to real life, the quality of arguments may not matter much.

Finally, this study was the first to examine readers’ comprehension of the combination of anecdotal and statistical evidence. When it comes to their thoughts related to the choice of anecdotal evidence in this combination, 65% of all readers mentioned the (dis)similarity with their own city when they were asked to write down what they thought about the municipality’s choice of naming the city in the letter. In relation to the question as to what the cities in the sample of the statistical evidence would look like, the results showed that 45% of the participants were able to provide an answer that was indicative of comprehension of the combination of anecdotal and statistical evidence. Some readers indicated they believed that the sample was representative of all kinds of cities in the country. Other readers referred to the notion of similarity, in relation to the anecdotal city in the letter, to their own city, or to the other cities in the sample. In summary, some participants appeared to comprehend the relationship between the anecdotal evidence and the statistical evidence by referring to the distinct notions of representativeness and similarity.

More research is definitely needed to gain further insights into comprehension. The present study examined cognitive thoughts that can be considered the outcome of people’s processing of the information in the argumentative letter. For future research, it would be very useful to examine the online processing of this information, for instance, through sentence-by-sentence reaction times (e.g., Wolfe et al., Citation2013) or through think-aloud protocols (e.g., Whitney & Budd, Citation1996).

Another possible limitation of this study is related to the persuasion measures, which were self-reported. The problem of self-report measures mainly affects behavior (Rhodes & Ewoldsen, Citation2013). In the present study, participants’ voting intention was used as a proxy for voting behavior. Although the naturalistic setting is likely to decrease the tendency to report inaccurate intentions, the use of this self-report measure of behavioral intention is still a limitation.

In conclusion, in real-life discourse, anecdotal evidence in a persuasive message that is relevant to readers does not seem to benefit from the inclusion of statistical evidence or from its intrinsic quality. The responses related to comprehension indicate that only a minority of participants comprehended the relationship between anecdotal and statistical evidence. Future research is needed to examine whether and how comprehension plays a role in the effects of additional evidence and evidence quality.

Acknowledgments

The author wishes to thank Frank van Meurs and Hans Hoeken in particular for their suggestions on improving this manuscript.

References

  • Allen, M., Bruflat, R., Fucilla, R., Kramer, M., McKellips, S., Ryan, D. J., Spiegelhoff, M. (2000). Testing the persuasiveness of evidence: Combining narrative and statistical forms. Communication Research Reports, 17, 331–336.
  • Allen, M., & Preiss, R. W. (1997). Comparing the persuasiveness of narrative and statistical evidence using meta-analysis. Communication Research Reports, 14, 125–131.
  • Baesler, E. J., & Burgoon, J. K. (1994). The temporal effects of story and statistical evidence on belief change. Communication Research, 21, 582–602.
  • Britt, M. A., Kurby, C. A., Dandotkar, S., & Wolfe, C. R. (2007). I agreed with what? Memory for simple argument claims. Discourse Processes, 45, 52–84.
  • Brosius, H.-B., & Bathelt, A. (1994). The utility of exemplars in persuasive communications. Communication Research, 21, 48–78.
  • Cathcart, R. S. (1955). An experimental study of the relative effectiveness of four methods of presenting evidence. Speech Monographs, 22, 227–233.
  • Chaiken, S. (1987). The heuristic model of persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario symposium (Vol. 5, pp. 3–39). Hillsdale, NJ: Lawrence Erlbaum.
  • Chambliss, M. J. & Murphy, P. K. (2002). Fourth and fifth graders representing the argument structure in written texts. Discourse Processes, 34, 91–115.
  • Cho, H., & Boster, F. J. (2005). Development and validation of value-, outcome-, and impression-relevant involvement scales. Communication Research, 32, 235–264.
  • Eemeren, F. H. van, & Grootendorst, R. (1992). Argumentation, communication, and fallacies: A pragma-dialectical perspective. Hillsdale, NJ: Lawrence Erlbaum.
  • Feeley, T. H., Marshall, H. M., & Reinhart, A. M. (2006). Reactions to narrative and statistical written messages promoting organ donation. Communication Reports, 19, 89–100.
  • Gibson, R., Callison, C., & Zillmann, D. (2011). Quantitative literacy and affective reactivity in processing statistical information and case histories in the news. Media Psychology, 14, 96–120.
  • Han, B., & Fink, E. L. (2012). How do statistical and narrative evidence affect persuasion? The role of evidentiary features. Argumentation and Advocacy, 49, 39–58.
  • Hoeken, H. (2001). Convincing citizens: The role of argument quality. In D. Janssen & R. Neutelings (Eds.), Reading and writing public documents (pp. 147–169). Amsterdam/Philadelphia: Benjamins.
  • Hoeken, H. & Hustinx, L. (2007). The influence of additional information on the persuasiveness of flawed arguments by analogy. In F. H. Van Eemeren, J. A. Blair, C. A. Willard, & B. Garssen (Eds.), Proceedings of the sixth conference of the International Society for the Study of Argumentation (pp. 625–630). Amsterdam, Netherlands: Sic Sat.
  • Hoeken, H., & Hustinx, L. (2009). When is statistical evidence superior to anecdotal evidence in supporting probability claims? The role of argument type. Human Communication Research, 35, 491–510.
  • Hoeken, H., Šorm, E., & Schellens, P. J. (2014). Arguing about the likelihood of consequences: Laypeople’s criteria to distinguish strong arguments from weak ones. Thinking and Reasoning, 20, 77–98.
  • Hoeken, H., Timmers, R., & Schellens, P. J. (2012). Arguing about desirable consequences: What constitutes a convincing argument? Thinking and Reasoning, 18, 394–416.
  • Hornikx, J. (2005). A review of experimental research on the relative persuasiveness of anecdotal, statistical, causal, and expert evidence. Studies in Communication Sciences, 5, 205–216.
  • Hornikx, J. (2007). Is anecdotal evidence more persuasive than statistical evidence? A comment on classic cognitive psychological studies. Studies in Communication Sciences, 7, 151–164.
  • Hornikx, J., & Hoeken, H. (2007). Cultural differences in the persuasiveness of evidence types and evidence quality. Communication Monographs, 74, 443–463.
  • Hornikx, J., & Houët, T. (2009). De overtuigingskracht van normatief sterke en normatief zwakke anekdotische evidentie in het bijzijn van statistische evidentie. In W. Spooren, M. Onrust, & J. Sanders (Eds.), Studies in Taalbeheersing, volume 3 (pp. 125–133). Assen: Van Gorcum.
  • Hornikx, J., & ter Haar, M. (2013). Evidence quality and persuasiveness: Germans are not sensitive to the quality of statistical evidence. Journal of Cognition and Culture, 13, 483–501.
  • Kellermann, K. (1980). The concept of evidence: a critical review. Journal of the American Forensic Association, 16, 159–172.
  • Kim, S.-Y., Allen, M., Gattoni, A., Grimes, D., Herrman, A. M., Huang, H., … Zhang, Y. (2012). Testing an additive model for the effectiveness of evidence on the persuasiveness of a message. Social Influence, 7, 65–77.
  • Kopfman, J. E., Smith, S. W., Yun, J. K. A., & Hodges, A. (1998). Affective and cognitive reactions to narrative versus statistical evidence organ donation messages. Journal of Applied Communication Research, 26, 279–300.
  • Lindsey, L. L. M., & Yun, K. A. (2003). Examining the persuasive effect of statistical messages: A test of mediating relationships. Communication Studies, 54, 306–321.
  • Park, H. S., Levine, T. R., Westerman, C. Y. K., Orfgen, T., & Foregger, S. (2007). The effects of argument quality and involvement type on attitude formation and attitude change: A test of dual-process and social-judgment predictions. Human Communication Research, 33, 81–102.
  • Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York, NY: Springer.
  • Petty, R. E., Rucker, D. D., Bizer, G. Y., & Cacioppo, J. T. (2004). The Elaboration Likelihood Model of persuasion. In J. S. Seiter, & G. H. Gass (Eds.), Perspectives on persuasion, social influence, and compliance gaining (pp. 65–89). Boston, MA: Allyn & Bacon.
  • Reinard, J. C. (1988). The empirical study of the persuasive effects of evidence: The status after fifty years of research. Human Communication Research, 15, 3–59.
  • Reynolds, R. A., & Reynolds, J. L. (2002). Evidence. In J. P. Dillard, & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 427–444). Thousand Oaks, CA: Sage.
  • Rhodes, N., & Ewoldsen, D. R. (2013). Outcomes of persuasion: Behavioral, cognitive, and social. In J. P. Dillard, & L. Shen (Ed.), The Sage handbook of persuasion: Developments in theory and practice (2nd ed., pp. 53–69). Thousand Oaks, CA: Sage.
  • Slusher, M. P., & Anderson, C. A. (1996). Using causal persuasive arguments to change beliefs and teach new information: The mediating role of explanation availability and evaluation bias in the acceptance of knowledge. Journal of Educational Psychology, 88, 110–122.
  • Voss, J. F., Fincher-Kiefer, R., Wiley, J., & Silfies, L. N. (1993). On the processing of arguments. Argumentation, 7, 165–181.
  • Walton, D. N., Reed, C., & Macagno, F. (2008). Argumentation schemes. Cambridge, UK: Cambridge University Press.
  • Warnick, B., & Inch, E. S. (1989). Critical thinking and communication: The use of reason in argument. New York, NY: Macmillan.
  • Whitney, P., & Budd, D. (1996). Think-aloud protocols and the study of comprehension. Discourse Processes, 21, 341–351.
  • Wolfe, C. R. & Britt, M. A. (2008). The locus of the myside bias in written argumentation. Thinking and Reasoning, 14, 1–27.
  • Wolfe, M. B., Tanner, S. M., & Taylor, A. R. (2013). Processing and representation of arguments in one-sided texts about disputed topics. Discourse Processes, 50, 457–497.
  • Zebregs, S., Putte, B. van den, Neijens, P., & de Graaf, A. (2015). The differential impact of statistical and narrative evidence on beliefs, attitude, and intention: A meta-analysis. Health Communication, 30, 282–289.
  • Zillmann, D., & Brosius, H.-B. (2000). Exemplification in communication: The influence of case reports on the perception of issues. Mahwah, NJ: Erlbaum.

APPENDIX 1

Text of the material of the four conditions of City 1 (translation of the original Dutch text)

Dear citizen,

This letter informs you about a municipal decision with regard to the price of our municipal litter bags. The municipality is planning to increase the price of these litter bags by 1 euro by next January 1.

… indicates that the higher budget resulting from the increased litter bag price has provided more opportunities to keep the town visibly cleaner. Litter bags are collected on a more regular basis and waste treatment now functions more efficiently. … the municipality of City1 has decided to increase the litter bag price by 1 euro. In doing so, more can be done to make City1 an even cleaner place.

If you have any questions concerning this topic, please feel free to contact the municipality office.

Yours faithfully,

Frits Steghel

Department of Environmental Affairs