Original Articles

Inexact proximal stochastic second-order methods for nonconvex composite optimization

Pages 808-835 | Received 10 Jun 2019, Accepted 04 Jan 2020, Published online: 15 Jan 2020

References

  • M. Ahookhosh, Accelerated first-order methods for large-scale convex optimization: Nearly optimal complexity under strong convexity, Math. Method Oper. Res. 89(3) (2019), pp. 1–35. doi: 10.1007/s00186-019-00674-w
  • S. Becker and J. Fadili, A quasi-Newton proximal splitting method, in Advances in Neural Information Processing Systems, Vol. 25, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, eds., Curran Associates, 2012, pp. 2618–2626.
  • D.P. Bertsekas, A. Nedić, and A.E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.
  • R.H. Byrd, G.M. Chin, J. Nocedal, and F. Oztoprak, A family of second-order methods for convex l1-regularized optimization, Math. Program. 159(1–2) (2016), pp. 435–467. doi: 10.1007/s10107-015-0965-3
  • R.H. Byrd, J. Nocedal, and F. Oztoprak, An inexact successive quadratic approximation method for convex L-1 regularized optimization, Math. Program. 157(2) (2016), pp. 375–396. doi: 10.1007/s10107-015-0941-y
  • O. Devolder, F. Glineur, and Y.E. Nesterov, First-order methods with inexact oracle: The strongly convex case, CORE Discussion Paper 2013/16, 2013.
  • J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani, Pathwise coordinate optimization, Ann. Appl. Stat. 1(2) (2007), pp. 302–332. doi: 10.1214/07-AOAS131
  • H. Ghanbari and K. Scheinberg, Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates, Comp. Opt. Appl., 2017. doi: 10.1007/s10589-017-9964-z
  • C.J. Hsieh, M.A. Sustik, I.S. Dhillon, and P. Ravikumar, Sparse inverse covariance matrix estimation using quadratic approximation, in: NIPS, 2011, pp. 2330–2338.
  • M. Ito, New results on subgradient methods for strongly convex optimization problems with a unified analysis, Comp. Opt. Appl. 65(1) (2016), pp. 127–172. doi: 10.1007/s10589-016-9841-1
  • R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, in: NIPS, 2013, pp. 315–323.
  • H. Karimi and M. Schmidt, Linear convergence of proximal-gradient methods under the Polyak–Łojasiewicz condition, in: NIPS Workshop, 2015.
  • H. Karimi, J. Nutini, and M. Schmidt, Linear convergence of gradient and proximal-gradient methods under the Polyak–Łojasiewicz condition, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Cham, 2016, pp. 795–811.
  • S. Karimi and S. Vavasis, IMRO: A proximal quasi-Newton method for solving l1-regularized least squares problems, SIAM J. Optim. 27(2) (2017), pp. 583–615. doi: 10.1137/140966587
  • S. Lacoste-Julien, M. Schmidt, and F. Bach, A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method, arXiv:1212.2002, 2012.
  • G. Lan, Bundle-level type methods uniformly optimal for smooth and non-smooth convex optimization, Math. Program. 149(1) (2015), pp. 1–45. doi: 10.1007/s10107-013-0737-x
  • J. Lee, Y. Sun, and M. Saunders, Proximal Newton-type methods for minimizing composite functions, SIAM. J. Optim. 24(3) (2014), pp. 1420–1443. doi: 10.1137/130921428
  • H. Lin, J. Mairal, and Z. Harchaoui, A generic quasi-Newton algorithm for faster gradient-based optimization, arXiv:1610.00960, 2016.
  • A.S. Nemirovskii and Y.E. Nesterov, Optimal methods of smooth convex minimization, Zh. Vichisl. Mat. Fiz. 25 (1985), pp. 356–369. (In Russian).
  • Y.E. Nesterov, Complexity bounds for primal-dual methods minimizing the model of objective function, Math. Program. 171 (2018), pp. 311–330. doi: 10.1007/s10107-017-1188-6
  • N.H. Pham, L.M. Nguyen, D.T. Phan, and Q. Tran-Dinh, ProxSARAH: An efficient algorithmic framework for stochastic composite nonconvex optimization, arXiv:1902.05679, 2019.
  • B.T. Polyak, Gradient methods for the minimization of functionals, USSR Comput. Math. Math. Phys. 3(4) (1963), pp. 864–878. doi: 10.1016/0041-5553(63)90382-3
  • S.J. Reddi, S. Sra, B. Póczos, and A. Smola, Proximal stochastic methods for nonsmooth nonconvex finite-sum optimization, in: NIPS, 2016.
  • S. Ghadimi, Conditional gradient type methods for composite nonlinear and stochastic optimization, Math. Program. 173(1–2) (2019), pp. 431–464. doi: 10.1007/s10107-017-1225-5
  • S. Ghadimi, G. Lan, and H. Zhang, Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization, Math. Program. 155(1–2) (2016), pp. 267–305. doi: 10.1007/s10107-014-0846-1
  • S. Ghadimi, G. Lan, and H. Zhang, Generalized uniformly optimal methods for nonlinear programming, J. Sci. Comput. 79(3) (2019), pp. 1854–1881. doi: 10.1007/s10915-019-00915-4
  • X. Wang, S.X. Wang, and H. Zhang, Inexact proximal stochastic gradient method for convex composite optimization, Comp. Opt. Appl. 68(3) (2017), pp. 579–618. doi: 10.1007/s10589-017-9932-7
  • X.Y. Wang, X. Wang, and Y. Yuan, Stochastic proximal quasi-Newton methods for non-convex composite optimization, Optim. Methods Softw. 34 (2019), pp. 922–948. doi: 10.1080/10556788.2018.1471141
  • L. Xiao and T. Zhang, A proximal stochastic gradient method with progressive variance reduction, SIAM J. Optim. 24 (2014), pp. 2057–2075. doi: 10.1137/140961791
  • C. Zălinescu, Convex Analysis in General Vector Spaces, World Scientific, Singapore, 2002.
  • Y. Zhang and X. Lin, Stochastic primal dual coordinate method for regularized empirical risk minimization, in: ICML, 2015.
