
Confirmation Bias in Analysts’ Response to Consensus Forecasts


Abstract

This paper provides evidence of confirmation bias in sell-side analysts’ earnings forecasts. We show that analysts tend to put more weight on public information when the current consensus forecast is more consistent with their previous forecasts. We further find that analysts with better past forecasting performance, longer firm-specific experience, or earlier forecast timing tend to be more subject to confirmation bias, consistent with several existing theories in cognitive and social psychology. The results remain significant after controlling for analyst incentives. Finally, we distinguish evidence of confirmation bias from that of other important behavioral biases, such as conservative bias.


Notes

2 Park et al. (Citation2013) also examine confirmation bias in financial markets. They use data on 502 investors from a message-board operator in South Korea as a field experiment.

3 Following the empirical framework of Chen and Jiang (Citation2006), we measure analysts’ overconfidence by how much they overweight private information after good past performance. This measure was originally inspired by Gervais and Odean (Citation2001), and a similar measure is used in Hilary and Menzly (Citation2006). Later in the literature, Moore and Healy (Citation2008) reconcile three distinct ways of defining overconfidence: overestimation of one’s actual performance, overplacement of one’s performance relative to others, and excessive precision in one’s beliefs. The measure we use in this study best matches the first type.

4 To give a more concrete example, suppose a company announces earnings of $1 per share for the first quarter of a fiscal year, with a positive SUE of 0.2, while an analyst’s forecast prior to the announcement is $1.5 per share for the quarter and $6 per share for the whole year. After the quarterly earnings announcement, the analyst realizes that her prior annual forecast was too optimistic and revises it downward. This induces a negative relation between SUE and the subsequent forecast revision that cannot be attributed to confirmation bias. In general, when analysts rationally incorporate information from announced earnings into their forecasts, the sign of the forecast revision need not match the sign of SUE.

5 Based on the sample used in this study during the period from 2000 to 2018, when firms report positive SUEs, 23% of individual analyst forecasts prior to quarterly earnings announcements are above the reported earnings (i.e., too optimistic despite a positive SUE). When firms report negative SUEs, 62% of individual analyst forecasts are below the reported earnings (i.e., too pessimistic despite a negative SUE). These are the cases in which analysts’ subsequent rational forecast revisions in the opposite direction of SUE may be misinterpreted as underweighting earnings-announcement information when using SAME-DIRECTION REV.

6 Section 4.2 provides a more detailed discussion of why SUE SIGN CHANGE, as well as an alternative measure used in Pouget, Sauvagnat, and Villeneuve (Citation2017), does not necessarily indicate inconsistency between an analyst’s prior belief and public information.

7 Morewedge and Kahneman (Citation2010) discuss three features of associative activation and their implications in intuitive judgments: associative coherence, attribute substitution, and processing fluency. They mainly attribute the cause of confirmation bias to the associative coherence feature. However, the attribute substitution feature may also play a role in confirmation bias.

8 Our measure of Consensus differs slightly from Chen and Jiang (Citation2006) in that we weight forecasts by inverse recency rank rather than inverse days, and include only each analyst’s most recent forecast, rather than all forecasts, issued within 90 days before the current forecast, to reflect more up-to-date information. In Section 6.2, we show that our main results are robust to several alternative measures of consensus.
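The rank inverse-weighted consensus described in footnote 8 can be sketched as follows. The exact weighting scheme (inverse of the recency rank, with the most recent forecast ranked first) is an assumption for illustration, and the function name and input format are hypothetical.

```python
# Hypothetical sketch: rank inverse-weighted consensus over the most recent
# forecast per analyst within the 90-day window. The 1/rank weighting is an
# assumed interpretation of "rank inverse-weighting"; the paper's exact
# construction may differ.

def rank_inverse_consensus(forecasts):
    """forecasts: list of (analyst_id, days_before_current_forecast, value),
    already restricted to the 90-day window before the current forecast."""
    # Keep only each analyst's most recent forecast (smallest days-before).
    latest = {}
    for analyst, days, value in forecasts:
        if analyst not in latest or days < latest[analyst][0]:
            latest[analyst] = (days, value)
    # Rank by recency: the most recent forecast gets rank 1, weight 1/1.
    ranked = sorted(latest.values(), key=lambda x: x[0])
    weights = [1.0 / (rank + 1) for rank in range(len(ranked))]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, ranked)) / total
```

With this scheme, staler forecasts still enter the consensus but at a discounted weight, in contrast to a days inverse-weighting in which the discount depends on the calendar gap itself.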

9 In Section 6.2, we further construct a more timely measure of Consensus based on forecasts within a 30-day window. We then construct a fourth inconsistency measure in the same way as Inconsist2, but using this more timely consensus.

10 The main results displayed in the tables report standard errors clustered at the analyst level, following prior studies such as Pouget, Sauvagnat, and Villeneuve (Citation2017) and Hirshleifer et al. (Citation2021). The results are robust to two-way clustering by analyst and by time.

11 We use Tr as a control variable in place of both Ability and Tr_orthn in Chen and Jiang (Citation2006). In unreported tables, we also conduct robustness tests using their setting instead and find similar results. They define Ability as the frequency with which an analyst’s forecasts move the new consensus in the right direction, toward the actual earnings, over the entire sample period. They then run a linear regression of Tr on Ability and define the residual as Tr_orthn, representing the component of an analyst’s prior performance that is not explained by her true ability.
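The Tr_orthn construction described in footnote 11 (regress Tr on Ability, keep the residual) can be sketched with a simple univariate OLS. The plain-Python implementation and variable names are illustrative, not the authors’ code.

```python
# Hypothetical sketch: orthogonalize Tr with respect to Ability by taking
# the residual from a univariate OLS regression (with intercept), as the
# Tr_orthn construction in Chen and Jiang (2006) is described above.

def orthogonalize(tr, ability):
    """Return the residuals of regressing tr on ability (both lists of floats)."""
    n = len(tr)
    mean_tr = sum(tr) / n
    mean_ab = sum(ability) / n
    cov = sum((a - mean_ab) * (t - mean_tr) for a, t in zip(ability, tr))
    var = sum((a - mean_ab) ** 2 for a in ability)
    beta = cov / var
    alpha = mean_tr - beta * mean_ab
    # Residual: the part of Tr not explained by Ability.
    return [t - (alpha + beta * a) for t, a in zip(tr, ability)]
```

By construction the residuals are uncorrelated with Ability, so Tr_orthn captures past performance beyond what true ability explains.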

12 The Wason four-card selection task is often considered the first demonstration of a form of confirmation bias. In fact, Wason and Johnson-Laird (Citation1972) also noted that “content is crucial” for understanding human logical reasoning.

13 Following Pouget, Sauvagnat, and Villeneuve (Citation2017), we use the methods in DellaVigna and Pollet (Citation2009) and Hirshleifer, Lim, and Teoh (Citation2009) to identify quarterly earnings announcement dates. In our final sample, about 98% of observations have reliable quarterly earnings announcement dates and are included in the regressions in Table 6. In the tested sample, about 47% of forecasts were issued within 30 days following a quarterly earnings announcement.

14 If we view earnings announcements as providing more thematic information and the peer consensus as more abstract information, these findings can potentially be explained by the “thematic-materials” effect (or “content effect”, or subsumed by the “memory cueing hypothesis” of Griggs and Cox Citation1982), as discussed in Gigerenzer and Hug (Citation1992).

15 Another related concern is the “walk-down” phenomenon documented by Richardson, Teoh, and Wysocki (Citation2004), in which analysts gradually revise optimistic forecasts down to beatable levels. Such gradual downward revisions may also appear as underreaction to public information at first, followed by overreaction as analysts turn pessimistic very close to earnings announcements.

16 There are three possible reasons for our findings differing from Chen and Jiang (Citation2006). First, the sample periods differ. Second, we use annual forecasts instead of quarterly forecasts. Third, the regression in Chen and Jiang (Citation2006) includes the interaction term between DEV and the incentive measure but not the incentive measure itself as an explanatory variable. We additionally include the incentive measure in the regression to ensure that the interpretation of the coefficient on the interaction term is not biased.

17 Andrews, Logan, and Sinkey (Citation2018) use field data from a college football poll of experts and distinguish confirmation bias from availability heuristics, primacy effects, Bayesian over- and underreaction, and herding behaviors using a regression-discontinuity approach.

18 Confirmation bias may also be related to the effect of attribute substitution (Morewedge and Kahneman Citation2010). Attribute substitution is the underweighting of new information when that information is ambiguous or uncertain; confirmation bias, by comparison, refers to the underweighting of new information when it is inconsistent with one’s prior. Ambiguity and inconsistency could certainly be related. However, inconsistency is measured relative to one’s prior, while ambiguity may or may not rely on a comparison with the prior belief.

19 Chen and Jiang (Citation2006) propose the Overconfidence Hypothesis, which “presumes that analysts are ignorant of their ability, and become overconfident (i.e., overestimate their own ability) after a run of good performance due to their attribution bias in learning” (Griffin and Tversky Citation1992; Gervais and Odean Citation2001), and “predicts a positive relation between overweighting (of private information) and analysts’ prior forecast accuracy”. As explained in footnote 11, our Table 2 differs slightly in that we use Tr in place of both Ability and Tr_orthn in Chen and Jiang (Citation2006), but the results are also robust when exactly replicating their setting. Hilary and Menzly (Citation2006) likewise cite the Daniel, Hirshleifer, and Subrahmanyam (Citation1998) model and the Gervais and Odean (Citation2001) analytical framework of self-attribution and overconfidence to motivate their empirical tests, but refer to it as “dynamic overconfidence”.

20 Bernhardt, Campello, and Kutsoati (Citation2006) use an S-statistic to measure herding, comparing a group of analysts’ forecasts to the actual earnings and the consensus forecast to determine whether the forecasts are unbiased.
