References
- Nesterov Y, Polyak BT. Cubic regularization of Newton method and its global performance. Mathematical Programming 2006;108(1):177–205.
- Cartis C, Gould NIM, Toint PL. Complexity bounds for second-order optimality in unconstrained optimization. J Complex. 2012;28(1):93–108.
- Cartis C, Gould NIM, Toint PL. On the complexity of steepest descent, Newton's and regularized Newton's method for nonconvex unconstrained optimization. SIAM J Optim. 2010;20(6):2833–2852.
- Cartis C, Gould NIM, Toint PL. An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity. IMA Journal of Numerical Analysis. 2012;32(4):1662–1695.
- Cartis C, Gould NIM, Toint PL. Adaptive cubic overestimation methods for unconstrained optimization. Part II: worst-case function and derivative-evaluation complexity. Mathematical Programming, Ser. A. 2011;130(2):295–319.
- Carmon Y, Duchi JC, Hinder O, et al. Lower bounds for finding stationary points I. Math Program Ser A. 2020;184(1–2):71–120.
- Bottou L, Curtis FE, Nocedal J. Optimization methods for large-scale machine learning. SIAM Review. 2018;60(2):223–311.
- Xu P, Roosta-Khorasani F, Mahoney MW. Second-order optimization for non-convex machine learning: an empirical study. Proceedings of the 2020 SIAM International Conference on Data Mining; 2020;199–207.
- Chen X, Jiang B, Lin T, et al. On adaptive cubic regularized Newton's methods for convex optimization via random sampling; 2019. arXiv:1802.05426.
- Xu P, Roosta-Khorasani F, Mahoney MW. Newton-type methods for non-convex optimization under inexact Hessian information. Math Program. 2020;184(1):37–70.
- Bellavia S, Gurioli G, Morini B. Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization. IMA Journal of Numerical Analysis. 2021;41(1):764–799.
- Bellavia S, Gurioli G, Morini B, et al. Adaptive regularization algorithms with inexact evaluations for nonconvex optimization. SIAM J Optim. 2019;29(4):2881–2915.
- Cartis C, Scheinberg K. Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Mathematical Programming, Ser. A. 2018;169(2):337–375.
- Kohler JM, Lucchi A. Sub-sampled cubic regularization for non-convex optimization. Proceedings of the 34th International Conference on Machine Learning. Vol. 70; 2017. p. 1895–1904. International Convention Centre, Sydney (Australia).
- Yao Z, Xu P, Roosta-Khorasani F, et al. Inexact non-convex Newton-type methods; 2018. arXiv:1802.06925.
- Zhou D, Xu P, Gu Q. Stochastic variance-reduced cubic regularization methods. J Mach Learn Res. 2019;20:1–47.
- Tropp J. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning. 2015;8(1–2):1–230.
- Cartis C, Gould NIM, Toint PL. On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization. SIAM J Optim. 2012;22(1):66–86.
- Wang Z, Zhou Y, Liang Y, et al. A note on inexact gradient and Hessian conditions for cubic regularized Newton's method. Operations Research Letters. 2019;47:146–149.
- Birgin EG, Krejić N, Martínez JM. Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact. Math Comput. 2020;89(321):253–278.
- Cartis C, Gould NIM, Toint PL. Adaptive cubic overestimation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Mathematical Programming, Ser. A. 2011;127:245–295.
- Birgin EG, Gardenghi JL, Martínez JM, et al. Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Mathematical Programming, Ser. A. 2017;163(1–2):359–368.
- Blanchet J, Cartis C, Menickelly M, et al. Convergence rate analysis of a stochastic trust-region method via supermartingales. INFORMS Journal on Optimization. 2019;1(2):92–119.
- Barzilai J, Borwein JM. Two-point step size gradient methods. IMA Journal of Numerical Analysis. 1988;8(1):141–148.
- Bianconcini T, Liuzzi G, Morini B, et al. On the use of iterative methods in cubic regularization for unconstrained optimization. Comput Optim Appl. 2015;60:35–57.
- Berahas AS, Bollapragada R, Nocedal J. An investigation of Newton-sketch and subsampled Newton methods. Optimization Methods and Software. 2020;35:661–680.
- Chen R. Stochastic derivative-free optimization of noisy functions [PhD thesis]. Lehigh University; 2015. Theses and Dissertations 2548.
- Chen R, Menickelly M, Scheinberg K. Stochastic optimization using a trust-region method and random models. Math Program. 2018;169(2):447–487.
- Paquette C, Scheinberg K. A stochastic line search method with expected complexity analysis. SIAM J Optim. 2020;30(1):349–376.