Optimization
A Journal of Mathematical Programming and Operations Research
Volume 72, 2023 - Issue 6

A stochastic variance reduction algorithm with Bregman distances for structured composite problems

Pages 1463–1484 | Received 02 Mar 2021, Accepted 15 Oct 2021, Published online: 31 Jan 2022

References

  • Gower RM, Schmidt M, Bach F, et al. Variance-reduced methods for machine learning. Proc IEEE. 2020;108:1968–1983.
  • Nitanda A. Stochastic proximal gradient descent with acceleration techniques. In: Advances in neural information processing systems; 2014 Dec 8–13; Montreal, Canada. p. 1574–1582. Montreal: MIT Press.
  • Xiao L, Zhang T. A proximal stochastic gradient method with progressive variance reduction. SIAM J Optim. 2014;24:2057–2075.
  • Allen-Zhu Z, Hazan E. Variance reduction for faster non-convex optimization. In: Proceedings of the 33rd International Conference on Machine Learning; 2016 Jun 20–22; New York, NY. Vol. 48. p. 699–707. JMLR.org.
  • Balamurugan P, Bach F. Stochastic variance reduction methods for saddle-point problems. In: Advances in neural information processing systems; 2016 Dec 5–10; Barcelona, Spain. p. 1416–1424. Barcelona: Curran Associates Inc.
  • Devraj AM, Chen J. Stochastic variance reduced primal dual algorithms for empirical composition optimization. In: Advances in neural information processing systems; 2019 Dec 8–14; Vancouver, Canada. p. 9882–9892. Vancouver: Curran Associates Inc.
  • Du SS, Hu W. Linear convergence of the primal–dual gradient method for convex–concave saddle point problems without strong convexity. In: Proceedings of the International Conference on Artificial Intelligence and Statistics; 2019 Apr 16–18; Naha, Okinawa, Japan. p. 196–205. PMLR.
  • Hamedani EY, Jalilzadeh A. A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems. https://arxiv.org/abs/2012.13456.
  • Shi Z, Zhang X, Yu Y. Bregman divergence for stochastic variance reduction: saddle-point and adversarial prediction. In: Advances in neural information processing systems; 2017 Dec 4–9; Long Beach, CA. p. 6033–6043. Curran Associates Inc.
  • Combettes PL, Pesquet J-C. Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued Var Anal. 2012;20:307–330.
  • Iyiola OS, Enyi CD, Shehu Y. Reflected three-operator splitting method for monotone inclusion problem. Optim Methods Softw. 2021. doi:10.1080/10556788.2021.1924715.
  • Boţ RI, Hendrich C. A Douglas–Rachford type primal–dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM J Optim. 2013;23:2541–2565.
  • Boţ RI, Hendrich C. Convergence analysis for a primal–dual monotone + skew splitting algorithm with applications to total variation minimization. J Math Imaging Vis. 2014;49:551–568.
  • Bùi MN, Combettes PL. Multivariate monotone inclusions in saddle form. Math Oper Res. 2021.
  • Ryu EK, Vũ BC. Finding the forward-Douglas–Rachford-forward method. J Optim Theory Appl. 2020;184:858–876.
  • Vũ BC. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Adv Comput Math. 2013;38:667–681.
  • Combettes PL, Pesquet J-C. Stochastic approximations and perturbations in forward–backward splitting for monotone operators. Pure Appl Funct Anal. 2016;1(1):13–37.
  • Nguyen VD, Vũ BC. Convergence analysis of the stochastic reflected forward–backward splitting algorithm. https://arxiv.org/abs/2102.08906.
  • Rosasco L, Villa S, Vũ BC. A stochastic inertial forward–backward splitting algorithm for multivariate monotone inclusions. Optimization. 2016;65:1293–1314.
  • Rosasco L, Villa S, Vũ BC. A first-order stochastic primal–dual algorithm with correction step. Numer Funct Anal Optim. 2017;38:602–626.
  • Bregman LM. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput Math Math Phys. 1967;7:200–217.
  • Bauschke HH, Borwein JM, Combettes PL. Bregman monotone optimization algorithms. SIAM J Control Optim. 2003;42:596–636.
  • Bùi MN, Combettes PL. Bregman forward–backward operator splitting. Set-Valued Var Anal. 2021;29:583–603.
  • Chen G, Teboulle M. Convergence analysis of a proximal-like minimization algorithm using Bregman functions. SIAM J Optim. 1993;3:538–543.
  • Dang CD, Lan G. Stochastic block mirror descent methods for nonsmooth and stochastic optimization. SIAM J Optim. 2015;25:856–881.
  • Duchi JC, Shalev-Shwartz S, Singer Y, et al. Composite objective mirror descent. In: Proceedings of the Conference on Learning Theory (COLT); 2010 Jun 27–29; Haifa, Israel. p. 14–26. Omnipress.
  • Lu H, Freund R, Nesterov Y. Relatively smooth convex optimization by first-order methods, and applications. SIAM J Optim. 2018;28:333–354.
  • Hien LTK, Lu C, Xu H, et al. Accelerated stochastic mirror descent algorithms for composite non-strongly convex optimization. J Optim Theory Appl. 2019;181:541–566.
  • Lei Y, Zhou DX. Analysis of online composite mirror descent algorithm. Neural Comput. 2017;29:825–860.
  • Nemirovski A, Juditsky A, Lan G, et al. Robust stochastic approximation approach to stochastic programming. SIAM J Optim. 2009;19(4):1574–1609.
  • Van Nguyen Q. Forward-backward splitting with Bregman distances. Vietnam J Math. 2017;45:519–539.
  • Tseng P. On accelerated proximal gradient methods for convex–concave optimization. Technical report, Department of Mathematics, University of Washington. 2008.
  • Chen Y, Lan G, Ouyang Y. Optimal primal–dual methods for a class of saddle point problems. SIAM J Optim. 2014;24:1779–1814.
  • Bauschke HH, Borwein JM, Combettes PL. Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun Contemp Math. 2001;3:615–647.
  • Moreau JJ. Fonctionnelles sous-différentiables. C R Acad Sci Paris Sér A Math. 1963;257:4117–4119.
  • Cevher V, Vũ BC. A reflected forward-backward splitting method for monotone inclusions involving Lipschitzian operators. Set-Valued Var Anal. 2021;29:163–174.
  • Boţ RI, Csetnek ER, Meier D. Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces. Optim Methods Softw. 2019;34:489–514.
  • Juditsky A, Nemirovski AS, Tauvel C. Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Syst. 2011;1:17–58.
  • Zhao R. Optimal stochastic algorithms for convex–concave saddle-point problems. Math Oper Res. 2021.
  • Wang J, Xiao L. Exploiting strong convexity from data with primal–dual first-order algorithms. In: Proceedings of the International Conference on Machine Learning; 2017 Aug 6–11; Sydney, Australia. Vol. 70. p. 3694–3702. JMLR.org.
  • Boţ RI, Csetnek ER, Hendrich C. On the convergence rate improvement of a primal–dual splitting algorithm for solving monotone inclusion problems. Math Program. 2015;150:251–279.
  • Chambolle A, Pock T. On the ergodic convergence rates of a first-order primal–dual algorithm. Math Program. 2016;159:253–287.
  • Drori Y, Sabach S, Teboulle M. A simple algorithm for a class of nonsmooth convex–concave saddle-point problems. Oper Res Lett. 2015;43:209–214.
  • Hamedani EY, Aybat NS. A primal–dual algorithm with line search for general convex–concave saddle point problems. SIAM J Optim. 2021;31:1299–1329.
