Original Articles

Evaluating the effectiveness of model specifications and estimation approaches for empirical accounting-based valuation models

Pages 660-682 | Published online: 21 Oct 2013
Abstract

This study considers the effectiveness of different model specifications and estimation approaches for empirical accounting-based valuation models in the UK. Primarily, we are interested in the accounting determinants of market value and, in particular, in whether accounting-based valuation models can be estimated that not only have in-sample explanatory power but can also potentially be used as a tool of financial statement analysis for developing useful estimates of value out-of-sample. This requires models to be estimated on one sample and tested for effectiveness on a different sample. Issues of model specification then arise, together with the choice among methods of estimating the empirical models, in identifying the effectiveness of each combination. Using the criteria of bias and accuracy to capture effectiveness, we identify the estimation methods and model specifications that, overall, are most effective in this context.

Notes

1. See, for example, Barth et al. (Citation1992, Citation1998, Citation2001), Aboody et al. (Citation1999), Landsman (Citation1986), and Choi et al. (Citation1997).

2. The book value and earnings model is frequently attributed to Ohlson (Citation1995), even though Ohlson (Citation1995) derives a model involving either book value, excess earnings and ‘other information’, or book value, earnings, net shareholder cash flows and ‘other information’.

3. Our way of splitting the sample into high- and low-intangible groups, however, is different from that of Choi et al. (Citation2006). They use the classification of ‘high-technology’ and ‘low-technology’ groups from Francis and Schipper (Citation1999). As we will see below, our approach to splitting the sample generally results in superior performance on the criteria of this study.

4. Linear information dynamics (Ohlson Citation1995) suggests E(RI_{t+1}) = ωRI_t + γOI_t. Hence, to estimate ‘other information’, we need an estimate of next period's expected residual income, E(RI_{t+1}). Ohlson (Citation2001) assumes that E(E_{t+1}) can be treated as observable and equal to the consensus earnings forecast at time t with, therefore, E(RI_{t+1}) = E(E_{t+1}) − kBV_t. The latter term, kBV_t, is composed of k, the cost of capital at time t, which can be estimated, and BV_t, which is observable at time t.
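The back-out of ‘other information’ described in note 4 can be sketched as follows. The persistence parameters omega and gamma, and all input numbers, are illustrative assumptions, not estimates from the paper.

```python
# Sketch of the 'other information' estimate in note 4. The persistence
# parameters omega and gamma, and all input values, are illustrative
# assumptions; in practice they must be estimated from the data.

def residual_income(earnings, cost_of_capital, opening_book_value):
    """Residual income: earnings less a capital charge on opening book value."""
    return earnings - cost_of_capital * opening_book_value

def other_information(forecast_earnings, cost_of_capital, book_value,
                      residual_income_t, omega, gamma):
    """Back out OI_t from E(RI_{t+1}) = omega * RI_t + gamma * OI_t,
    using E(RI_{t+1}) = E(E_{t+1}) - k * BV_t (Ohlson 2001)."""
    expected_ri_next = forecast_earnings - cost_of_capital * book_value
    return (expected_ri_next - omega * residual_income_t) / gamma

# Illustrative numbers only
ri_t = residual_income(earnings=12.0, cost_of_capital=0.10,
                       opening_book_value=80.0)          # 4.0
oi_t = other_information(forecast_earnings=14.0, cost_of_capital=0.10,
                         book_value=100.0, residual_income_t=ri_t,
                         omega=0.6, gamma=1.0)           # 1.6
```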

5. Kungwal et al. (Citation2013) provide a detailed discussion of both measures. They argue that each measure suffers from distinct theoretical and empirical problems, and that both are likely to impose sample selection biases. In terms of observations lost and contribution to explanatory power, however, the alternative measure of Akbar and Stark (Citation2003a) is far superior to the proxy involving consensus analysts' forecasts in the UK context. Given that the measure adds to explanatory power, its incorporation into a valuation model might help reduce out-of-sample valuation errors.

6. This proxy for ‘other information’ will be a noisy estimate, because: (1) the error term in Equation (5) is suppressed, so the actual error term is incorporated into the measure of ‘other information’; and (2) last year's ‘other information’ does not predict this year's ‘other information’ deterministically.

7. Garrod and Valentincic (Citation2005) use both book value and sales deflation in estimating models of market value. When estimating their models using sales as the deflator, they find that the constant term in the regression is insignificantly different from zero, suggesting that their model does not suffer from sales-related omitted variables.
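The sales-deflation diagnostic in note 7 can be sketched as follows: deflate market value, book value and earnings by sales, estimate with a constant, and inspect the constant's t-statistic. All data below are simulated for illustration (with no sales-related scale effect built in) and do not reproduce the paper's models or variables.

```python
import numpy as np

# Sketch of a sales-deflated valuation regression (note 7). Simulated
# data: market value depends on book value and earnings only, so the
# deflated regression's constant should be insignificant.

rng = np.random.default_rng(0)
n = 200
sales = rng.uniform(50.0, 500.0, n)
bv = rng.uniform(20.0, 400.0, n)
earn = rng.uniform(1.0, 50.0, n)
mv = 1.2 * bv + 6.0 * earn + rng.normal(0.0, 20.0, n)

# Deflate every variable by SALES and estimate OLS with a constant
y = mv / sales
X = np.column_stack([np.ones(n), bv / sales, earn / sales])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Conventional OLS standard errors and the constant's t-statistic
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())
t_const = beta[0] / se[0]  # |t| well below 2 suggests no omitted scale effect
```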

8. Our application of the proportional valuation error metric, however, is different from that of Choi et al. (Citation2006). Whereas they compare two different underlying valuation models, we compare different approaches to specifying and estimating valuation models, including the performance of valuation models with and without the incorporation of ‘other information’, the effectiveness of deflating by various proxies for scale, estimating the deflated model with and without a constant term, and the influence of estimating models on high- and low-intangible firms separately.
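The bias and accuracy criteria based on proportional valuation errors can be sketched as below. The definition used, (estimate − market value) / market value, with the mean as the measure of central tendency, is a common convention assumed here for illustration; the paper's exact formulas may differ.

```python
# Sketch of bias and accuracy criteria from proportional valuation
# errors. The error definition and the use of means (rather than, say,
# medians) are assumptions for illustration.

def proportional_errors(predicted, actual):
    """Proportional valuation error for each firm: (V_hat - MV) / MV."""
    return [(p - a) / a for p, a in zip(predicted, actual)]

def bias(errors):
    """Signed central tendency of the errors (mean); closer to 0 is better."""
    return sum(errors) / len(errors)

def accuracy(errors):
    """Mean absolute proportional error; lower means more accurate."""
    return sum(abs(e) for e in errors) / len(errors)

# Illustrative numbers only
pred = [110.0, 90.0, 105.0]
act = [100.0, 100.0, 100.0]
errs = proportional_errors(pred, act)   # [0.10, -0.10, 0.05]
```

Note that a model can be nearly unbiased (signed errors cancel) while still being inaccurate (large absolute errors), which is why both criteria are reported.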

9. Implicit in the approach of Choi et al. (Citation2006) is that the coefficients of the linear information dynamics system they estimate are stable over time. As a consequence, and given a particular start date for the data, it makes sense to progressively pool more and more years of data to estimate coefficients. We adopt a similar underlying assumption – that the accounting-based valuation model is stable over time – and, hence, follow a similar approach of progressively pooling more and more years of data to estimate the coefficients of the model. See Section 6 below for a description of the results of some tests that relax this assumption.
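The progressive pooling described in note 9 amounts to an expanding estimation window: each year's coefficients are estimated on all observations from the start of the sample up to and including that year. A minimal sketch, using plain OLS and tiny simulated data rather than the paper's models:

```python
import numpy as np

# Sketch of progressive pooling (note 9): an expanding-window OLS in
# which each year's coefficients use all data up to that year.
# Data and estimator are illustrative only.

def expanding_window_fit(years, X, y, first_estimation_year):
    """Return {year: OLS coefficients}, each estimated on all
    observations with year <= that year."""
    fits = {}
    for yr in sorted(set(years.tolist())):
        if yr < first_estimation_year:
            continue
        mask = years <= yr
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        fits[yr] = beta
    return fits

# Tiny illustration: y = 2x exactly, three sample years
years = np.array([2000, 2000, 2001, 2001, 2002, 2002])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
X = np.column_stack([np.ones_like(x), x])
fits = expanding_window_fit(years, X, 2.0 * x, first_estimation_year=2001)
```

Under the stability assumption, later windows simply use more data and so should estimate the same coefficients more precisely; relaxing the assumption (as in Section 6) would instead motivate rolling or year-by-year windows.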

10. We could have examined differences in methods of treating extreme observations but, given the number of comparisons already involved, we decided to focus on one relatively standardised treatment.

11. In the dataset collected in this study for the relatively complete model, there are a great many zero observations for variables such as RD expenditures, dividends, capital contributions and capital expenditures. For these variables, only the top 1% of observations is deleted.
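The one-sided trimming in note 11 can be sketched as follows. The exact percentile convention (here: drop the largest n × pct/100 observations, keeping ties at the cutoff) is an assumption, as the paper does not specify one.

```python
# Sketch of the one-sided trim in note 11: for zero-heavy variables
# (RD, dividends, capital contributions, capital expenditures), only
# the top 1% is deleted. The cutoff convention is an assumption.

def trim_top_percent(values, pct=1.0):
    """Drop the largest pct% of observations; ties at the cutoff are kept."""
    n_drop = int(len(values) * pct / 100.0)
    if n_drop == 0:
        return list(values)
    cutoff = sorted(values)[-n_drop - 1]  # largest retained value
    return [v for v in values if v <= cutoff]

sample = list(range(100))           # 0, 1, ..., 99
trimmed = trim_top_percent(sample)  # drops the single largest value, 99
```

Trimming only the top tail avoids deleting the many legitimate zero observations that a symmetric two-sided trim would sweep up.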

12. Loss-making firms are thought to have different characteristics for valuation. We identify loss-making firms as a sub-sample, and further split the profit-making firms into high- and low-intangible-asset firms. However, splitting the full sample into these three groups for estimation neither lowers bias nor improves accuracy.

13. This finding still holds for the basic model and if we include OI in either the basic or extended model.

14. Untabulated results of the valuation errors from estimating the deflated equation without and with a constant term, when SALES, NoSHARES and OMV are used as deflators, are consistent with our previous findings. Hence, both sets of results are reported in and . Other untabulated results confirm that estimating valuation models on high- and low-intangible firms separately, instead of pooling the full sample for estimation, almost always provides better performance, either with the alternative OI measure or without any measure of OI in the estimated equation. We measure OIAF as the consensus (mean) analysts' forecast for firm i for financial year t+1. The forecast is collected from I/B/E/S, using forecasts made six months after the balance sheet date. The item used is F1MN, the mean of all FY1 (next fiscal year end to be reported) estimates for a firm. As F1MN is provided on a per-share basis, it is multiplied by the number of shares in issue to produce an earnings forecast.
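The per-share-to-firm-level conversion at the end of note 14 is a single multiplication; the figures below are illustrative only.

```python
# Note 14: I/B/E/S item F1MN is a per-share mean consensus forecast,
# so the firm-level earnings forecast is obtained by multiplying it by
# the number of shares in issue. Figures are illustrative only.

def firm_earnings_forecast(f1mn_per_share, shares_in_issue):
    """Convert a per-share consensus forecast to a firm-level forecast."""
    return f1mn_per_share * shares_in_issue

forecast = firm_earnings_forecast(0.25, 40_000_000)  # 10,000,000
```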

15. We remind the reader that the regression models cannot be estimated with a constant term included when BV or MV is the deflator.

16. It also has the advantage that the dependent variable becomes the market-to-book ratio, which can be of interest to financial analysts and to those interested in understanding the determinants of risk.
