Research Article

Convergence analysis of stochastic higher-order majorization–minimization algorithms

Pages 384-413 | Received 30 May 2022, Accepted 04 Sep 2023, Published online: 25 Sep 2023
 

Abstract

Majorization–minimization schemes are a broad class of iterative methods targeting general optimization problems, including nonconvex, nonsmooth and stochastic ones. These algorithms successively minimize a sequence of upper bounds of the objective function, so that the objective value decreases along the iterations. We present a stochastic higher-order algorithmic framework for minimizing the average of a very large number of sufficiently smooth functions. Our stochastic framework is based on the notion of stochastic higher-order upper bound approximations of the finite-sum objective function and on minibatching. We derive convergence results for nonconvex and convex optimization problems when the higher-order approximation of the objective function yields an error that is p times differentiable and has a Lipschitz continuous pth derivative. More precisely, for general nonconvex problems we present asymptotic stationary point guarantees, and under the Kurdyka–Łojasiewicz property we derive local convergence rates ranging from sublinear to linear. For convex problems with a uniformly convex objective function, we derive local (super)linear convergence results for our algorithm. Compared to existing stochastic (first-order) methods, our algorithm adapts to the problem's curvature and allows any batch size. Preliminary numerical tests support the effectiveness of our algorithmic framework.
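
To make the majorization–minimization template concrete, the following minimal Python sketch illustrates one stochastic MM iteration in the simplest first-order case (p = 1), where the minibatch surrogate is a quadratic upper bound with a closed-form minimizer. All names and parameter values (M, batch_size, the least-squares test problem) are illustrative assumptions, not the paper's SHOM implementation.

import numpy as np

# Illustrative finite-sum problem: f(x) = (1/(2N)) * ||A @ x - b||^2.
rng = np.random.default_rng(0)
N, d = 1000, 20
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)

# Lipschitz constant of the full gradient; a strict per-batch majorizer
# would use each batch's own constant, but we use this for simplicity.
M = np.linalg.norm(A, 2) ** 2 / N
x = np.zeros(d)
batch_size = 64  # the scheme allows any batch size

for k in range(500):
    idx = rng.choice(N, size=batch_size, replace=False)
    # Minibatch gradient of f_B(x) = (1/(2|B|)) * ||A_B @ x - b_B||^2.
    grad_B = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size
    # Exact minimizer of the quadratic majorizer
    #   m_k(y) = f_B(x) + grad_B @ (y - x) + (M/2) * ||y - x||^2.
    x = x - grad_B / M

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))

For p >= 2 the surrogate involves higher-order derivative information and a regularizer of power p + 1, and minimizing it generally requires an inner solver rather than the one-line update above.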

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 By a Lyapunov function along the sequence $x^k$ generated by SHOM we mean that we evaluate the Lyapunov function $\Phi(\hat{x}) = \frac{1}{N}\sum_{j=1}^{N}\big(f(x_j) - f(x^*)\big)^{\frac{p+1}{q}}$ at $\hat{x} = x^k$.

Additional information

Funding

The research leading to these results has received funding from the NO Grants 2014–2021 RO-NO-2019-0184, under project ELO-Hyp, contract no. 24/2020; and from UEFISCDI PN-III-P4-PCE-2021-0720, under project L2O-MOC, no. 70/2022.

Notes on contributors

Daniela Lupu

Daniela Lupu is a Ph.D. student at University Politehnica Bucharest, Romania. Her research interests lie in the area of continuous optimization and machine learning, with applications in control. She holds B.Sc. (2017) and M.Sc. (2019) degrees from University Politehnica Bucharest.

Ion Necoara

Ion Necoara received an M.Sc. degree (2002) in Optimization and Statistics from the University of Bucharest, Romania, and a Ph.D. degree (2006) in Applied Sciences (cum laude) from Delft University of Technology, Netherlands. From 2007 to 2009 he was a research fellow at Katholieke Universiteit Leuven, Belgium. Since 2009 he has been with University Politehnica Bucharest, Romania, where he is now a full professor. Since 2021 he has also been a senior researcher at the Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy. His research interests cover various topics in the development of new optimization algorithms, with a focus on mathematical guarantees on their performance.
