ECONOMETRICS

Assessment of model risk due to the use of an inappropriate parameter estimator

Article: 1710970 | Received 30 Jul 2019, Accepted 22 Dec 2019, Published online: 09 Jan 2020
