
The Information-Leveling Role of Management Forecast Consistency in Facilitating Investment Efficiency

Pages 519-543 | Received 06 Jan 2020, Accepted 06 Jul 2022, Published online: 04 Aug 2022
 

ABSTRACT

This study examines whether voluntary disclosure enhances investment efficiency through its information-leveling role in the capital markets. We argue that investment efficiency improves with the quality of managers’ past earnings forecasts because past forecast quality alleviates information frictions that inhibit firm access to investment capital. Our empirical results are consistent with this argument. To start, we document a baseline positive association between management forecast consistency and investment efficiency. Building on this result, we find that the consistency effect strengthens when cross-sectional attributes indicate higher information frictions and when industry-wide accounting quality is negatively shocked. In addition, we find that consistency effects are stronger for financially constrained firms and that consistency is associated with lower bid-ask spreads and positive changes in equity issuance. Overall, we show that building a reputation for high-quality management earnings forecasts can help firms overcome information frictions that contribute to suboptimal investment.

Acknowledgments

Accepted by Beatriz García Osma. We thank Jacky Chau, Novia Chen, Qin Li, Beatriz García Osma, Feng Tian, Guojun Wang, three anonymous reviewers, and participants at the 2017 American Accounting Association Mid-Atlantic Region Meeting, the 2017 Taiwanese Accounting Association Annual Meeting, the 2018 Haskell & White Corporate Reporting & Governance Academic Conference, the 2019 American Accounting Association Western Region Meeting, and the 2020 Hawaii Accounting Research Conference for helpful comments and suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplemental data

Supplemental data for this article can be accessed on the Taylor & Francis website at doi:10.1080/09638180.2022.2105368.

Notes

1 In statistical terms, MFC captures the precision of management forecast signals. Precision is a defining attribute of informativeness in disclosure models (Verrecchia, 2001), and it is routinely used in empirical studies to conceptualize and operationalize information quality (Biddle et al., 2009; Ng, 2011; Bhattacharya et al., 2012; Roychowdhury et al., 2019). Forecast accuracy is distinct from forecast precision because forecast accuracy measures the magnitude of forecast error, whereas forecast precision measures the variability of forecast error.
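To illustrate the distinction, the following sketch (our own illustration with hypothetical variable names, not the paper’s variable construction) computes an accuracy measure and a precision-based consistency measure from a firm’s past forecast errors:

```python
import numpy as np

def forecast_quality(forecasts, realized):
    """Illustrative accuracy vs. precision (consistency) measures
    computed from a firm's past management forecast errors."""
    errors = np.asarray(forecasts) - np.asarray(realized)
    accuracy = -np.mean(np.abs(errors))    # magnitude of forecast error (higher = more accurate)
    consistency = -np.std(errors, ddof=1)  # variability of forecast error (higher = more consistent)
    return accuracy, consistency

# A persistently biased but stable forecaster scores low on accuracy but high
# on consistency; an unbiased but noisy forecaster shows the reverse pattern.
print(forecast_quality([1.05, 1.05, 1.05, 1.05], [1.00, 1.00, 1.00, 1.00]))  # accuracy -0.05, consistency ~0
print(forecast_quality([1.03, 0.97, 1.03, 0.97], [1.00, 1.00, 1.00, 1.00]))  # accuracy -0.03, consistency ~-0.035
```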

2 We do not test whether the information-leveling effect dominates the forecasting ability effect in explaining the link between voluntary disclosure quality and investment efficiency. Prior literature suggests that both information leveling and managerial abilities can individually and incrementally affect investment efficiency (Roychowdhury et al., 2019).

3 Our choice to focus on consistency, rather than accuracy, as a forecast reputation attribute is based primarily on empirical considerations (e.g., maximizing test power), as Hilary and Hsu (2013) and Hilary et al. (2014) show that consistency’s effect on forecast informativeness is stronger, both economically and statistically, than the corresponding effect of accuracy. Readers should not interpret our focus on consistency as suggesting that accuracy is incapable of playing an information-leveling role in facilitating investment efficiency; indeed, Hilary et al. (2014) suggest that accuracy may be more influential in certain settings (e.g., settings with low investor sophistication). We acknowledge that forecast consistency may entail large forecast errors that can be perceived negatively by investors (e.g., firms may consistently ‘lowball’ earnings forecasts to maximize meet-or-beat probabilities). To the extent that investors view consistent forecasts negatively, we should observe weaker (not stronger) consistency effects in our tests.

4 See, for example, Ajinkya and Gift (1984), Verrecchia (1990), Coller and Yohn (1997), Tasker (1998), Verrecchia (2001), Balakrishnan et al. (2014), Guay et al. (2016), and Balakrishnan et al. (2019).

5 Of course, investors can verify managers’ earnings forecasts against subsequently realized earnings, suggesting that verification challenges associated with managers’ forecasts are short-lived. However, waiting to verify managers’ forecasts against earnings reports can entail high opportunity costs for investors if value-relevant information from the forecasts is impounded into stock prices ahead of the earnings report’s release. Indeed, prior research suggests that investors place high importance on forecast timeliness, even as timeliness generally sacrifices forecast accuracy (Schipper, 1991; Clement & Tse, 2003). Thus, managers’ forecasting reputations are likely to be important to investors, as a good reputation adds confidence that a timely response to a given forecast will increase investing profits.

6 While forecasts that are always $0.05 below realized earnings are less accurate than forecasts that are equally likely to be $0.03 above or below realized earnings, a Bayesian should prefer the former set of forecasts because they predict realized earnings with 100% certainty after adjusting for the predictable forecast bias. By contrast, the latter set of forecasts cannot predict earnings with 100% certainty because they are inaccurate and lack a predictable bias.
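A short sketch of this numerical example (our illustration, not the paper’s code): subtracting the predictable bias from the consistently low forecasts recovers realized earnings exactly, whereas de-biasing cannot remove the noise in the accurate-but-inconsistent forecasts.

```python
import numpy as np

realized = np.array([2.00, 2.10, 2.20, 2.30])

consistent = realized - 0.05                              # always $0.05 below realized earnings
noisy = realized + np.array([0.03, -0.03, 0.03, -0.03])   # $0.03 above or below, no predictable bias

for name, f in [("consistent", consistent), ("noisy", noisy)]:
    bias = np.mean(f - realized)      # predictable component of the forecast error
    debiased = f - bias               # adjust the forecast for the known bias
    max_error = np.max(np.abs(debiased - realized))
    print(name, round(max_error, 4))  # consistent -> 0.0; noisy -> 0.03
```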

7 Prior studies find that investors are more responsive to managers’ disclosures when there is lower uncertainty about managers’ disclosure objectives (Fischer & Verrecchia, 2000; Ferri et al., 2018). Thus, to the extent that investors detect and understand managers’ objectives for biasing their forecasts (e.g., avoiding negative earnings surprises), they should be less likely to discount the information content of forecasts with strategic biases.

8 Goodman et al. (2014) document a positive association between MFA (employed as a proxy for managers’ forecasting abilities) and investment efficiency. Thus, evidence consistent with H1 might also be consistent with MFC linking to investment efficiency through the managerial ability channel. This possibility is explored in Section 5.2.4, and it motivates the formation of H2 to help us better identify the information-leveling channel.

9 Because we focus on firms with a history of regular earnings guidance, we assume that there is a minimum level of earnings quality in our sample such that forecasts of earnings have the potential to be informative to investors (in other words, if earnings quality is so poor as to be considered value-irrelevant, then it is unlikely that MFC would facilitate investment efficiency through information leveling). This assumption is consistent with Lennox and Park (2006), who find that past earnings informativeness is positively associated with the issuance of earnings guidance.

10 Throughout the paper, we define industries following Fama and French (1997). For each industry-year regression, we require a minimum of 30 observations.

11 We rank consistency by industry-year because macroeconomic and industry-level conditions are likely to be strong determinants of a firm’s information environment quality. Conditioning ranks on information environment quality is important because the information-leveling ability of management forecasts can be high even when the full sample rank of forecast consistency is low (e.g., forecasts issued in highly volatile environments).
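A minimal pandas sketch of the within-industry-year ranking described above (column names and the percentile-rank scaling are ours, purely illustrative):

```python
import pandas as pd

# Toy firm-year panel with a raw (unranked) consistency measure; column names are ours.
df = pd.DataFrame({
    "firm": ["A", "B", "C", "D", "E", "F"],
    "industry": [10, 10, 10, 20, 20, 20],
    "year": [2015] * 6,
    "mfc_raw": [-0.02, -0.08, -0.05, -0.01, -0.09, -0.04],
})

# Percentile-rank consistency within each industry-year, so each firm is compared
# only with peers facing similar macroeconomic and industry conditions.
df["mfc_rank"] = df.groupby(["industry", "year"])["mfc_raw"].rank(pct=True)
print(df)
```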

12 Throughout the paper, continuous variables are winsorized at the 1% and 99% levels. A correlation matrix is available in the online Supplement.
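For completeness, a minimal sketch of 1%/99% winsorization in pandas (our illustration, not the authors’ code):

```python
import pandas as pd

def winsorize_1_99(s: pd.Series) -> pd.Series:
    """Clip a continuous variable at its 1st and 99th percentiles."""
    lower, upper = s.quantile(0.01), s.quantile(0.99)
    return s.clip(lower=lower, upper=upper)

# Applied column by column to the continuous regression variables, e.g.:
# df[continuous_cols] = df[continuous_cols].apply(winsorize_1_99)
```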

13 In all regressions, we cluster standard errors by firm. We also run our tests with standard errors clustered by firm and by year, following the procedure in Petersen (2009). All results (untabulated) are robust to two-way clustering.
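A sketch of firm-clustered and firm-and-year two-way clustered standard errors using statsmodels; the simulated panel and variable names are placeholders, not the paper’s specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated firm-year panel purely for illustration; names do not match the paper.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "firm": rng.integers(0, 50, n),
    "year": rng.integers(2005, 2015, n),
    "mfc": rng.normal(size=n),
    "size": rng.normal(size=n),
})
df["inv_eff"] = 0.3 * df["mfc"] + rng.normal(size=n)

model = smf.ols("inv_eff ~ mfc + size", data=df)

# Standard errors clustered by firm.
res_firm = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})

# Two-way clustering by firm and year (Petersen, 2009): pass both group columns.
res_two_way = model.fit(cov_type="cluster",
                        cov_kwds={"groups": np.asarray(df[["firm", "year"]])})

print(res_firm.bse)
print(res_two_way.bse)
```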

14 If we remove MFC from the regression, the coefficient on MFA becomes significant, consistent with prior studies.

15 De Franco et al. (2011) measure the comparability of firm i’s earnings with firm j’s earnings as (-1) * the average absolute difference between the predicted earnings obtained from each firm’s earnings-returns function, where each function is estimated from regressions of quarterly earnings on quarterly stock returns over the past sixteen quarters and firm i’s returns are used as the input to both functions. The industry median comparability value for firm i is the median i-j comparability value among all i-j pairings from firm i’s industry.
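The following sketch reflects our reading of the De Franco et al. (2011) construction, with hypothetical inputs; it is not the authors’ code.

```python
import numpy as np

def pairwise_comparability(earn_i, ret_i, earn_j, ret_j):
    """Pairwise earnings comparability in the spirit of De Franco et al. (2011).

    Inputs are arrays of sixteen quarterly earnings and stock returns. Each
    firm's earnings-returns function is estimated separately; both functions
    are then evaluated with firm i's returns, and comparability is (-1) times
    the average absolute difference between the two predicted earnings series."""
    coefs_i = np.polyfit(ret_i, earn_i, deg=1)   # firm i's earnings-returns function
    coefs_j = np.polyfit(ret_j, earn_j, deg=1)   # firm j's earnings-returns function
    pred_i = np.polyval(coefs_i, ret_i)          # firm i's function, firm i's returns
    pred_j = np.polyval(coefs_j, ret_i)          # firm j's function, firm i's returns
    return -np.mean(np.abs(pred_i - pred_j))

# Firm i's industry-level value is then the median of pairwise_comparability(i, j)
# over all peers j in firm i's industry.
```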

16 One might question why investors would value management forecast consistency after a peer restatement event, because the event calls into question the quality of the forecasting object (i.e., earnings). Gleason et al. (2008) report a relatively modest investor response to peer restatements (e.g., a mean announcement-window abnormal return of -0.5% for peer firms, reported on page 92, compared with a -19.8% mean return for restating firms), and responses vary with observable contagion susceptibility factors (e.g., accruals levels, common auditors). These findings suggest that peer restatements are unlikely to irreparably damage accounting quality at the industry level and that investors consider firm-level signals when assessing the implications of peer restatements for firms’ information risk.

17 We use restatements of annual reporting where the restatement period is at least one year long and where the restatement corrects ‘fraudulent’ reporting (RES_FRAUD = 1) or results from an SEC investigation (RES_SEC = 1) as indicated by the Audit Analytics’ Non-Reliance Restatements database.

18 Unfortunately, the peer firm restatement test does not fully eliminate the potential influence of confounding factors and we cannot be sure that all forecasting decisions affecting MFC strictly precede investing decisions affecting INV_EFF (because these decisions are unobservable). However, the test greatly challenges the plausibility of confounding effects because it requires confounds to exert a stronger effect on each firm’s MFC and INV_EFF after a peer restatement. Moreover, potential confounds are further challenged by the fact that MFC will be slow to change because it is measured with five years of past forecasts (years t-4 to t).

19 When we estimate the baseline model (i.e., Equation 1) with the peer firm restatement sample, we find that the coefficient on MFC is significantly positive (untabulated). Thus, we also find support for H1 using our peer firm restatement sample.

20 Another potential way to control for managers’ forecasting abilities is to estimate our models in first differences (i.e., change-based models). Unfortunately, because MFC is measured over a five-year horizon, there is limited temporal variation in year-over-year MFC changes in our sample, which greatly limits our ability to detect variation in investment efficiency related to variation in management forecast consistency.

21 These determinants include size, analyst following, the book-to-market ratio, leverage, earnings volatility, profitability, institutional ownership, R&D intensity, reporting a loss, reporting restructuring charges, the propensity to meet-or-beat the analyst consensus earnings forecast, and the percentage of industry peer firms issuing guidance.

22 Larcker and Rusticus (2010, page 199) discuss the difficulty of selecting appropriate instruments in disclosure research. Although one might argue that MBE and/or %PEERGUIDE could affect year t+1 investment decisions, we note that our instruments have measurement attributes that provide advantages over other potential choices (e.g., MBE measures meet-or-beat propensities starting from year t-4; %PEERGUIDE is based on broad industry definitions).

23 Specifically, we perform a chi-square test based on the number of observations (N) and the model R² obtained from a regression of the second-stage model residuals on the second-stage model regressors (including our instruments MBE and %PEERGUIDE). Assuming N·R² ∼ χ²(1), we calculate a test statistic of 1.774 with a p-value of 0.18. We also find that the coefficients on MBE and %PEERGUIDE are individually and jointly insignificant in explaining the second-stage residuals.
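A sketch of the N·R² overidentification test described above (our implementation under the stated assumptions; names are placeholders). The chi-square degrees of freedom equal the number of instruments minus the number of endogenous regressors, here 2 - 1 = 1.

```python
import statsmodels.api as sm
from scipy import stats

def nr2_overid_test(second_stage_resid, exog_with_instruments, df_chi2=1):
    """Regress the second-stage residuals on the second-stage regressors plus the
    instruments, then evaluate N * R^2 against a chi-square(df_chi2) distribution."""
    X = sm.add_constant(exog_with_instruments)
    aux = sm.OLS(second_stage_resid, X).fit()
    stat = aux.nobs * aux.rsquared
    return stat, stats.chi2.sf(stat, df_chi2)
```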

24 We also performed propensity score matching analysis to address a concern that our results could reflect inherent differences between ‘high’ and ‘low’ MFC firms that our baseline model is unable to control for. This analysis is summarized and reported in the online Supplement.
