Abstract
We propose generic acceleration schemes, based on relaxation and inertia, for a wide class of optimization algorithms and iterative methods. In particular, we introduce methods that automatically tune the acceleration coefficients online, and we establish their convergence. This is made possible by considering classes of fixed-point iterations over averaged operators, which encompass gradient methods, ADMM (the Alternating Direction Method of Multipliers), primal-dual algorithms, and so on.
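As a purely illustrative sketch of such an iteration (not the paper's online tuning scheme: the quadratic objective, the fixed coefficients eta and beta, and the operator-then-inertia ordering of Note 8 are all assumptions made here for concreteness), a relaxed and inertial fixed-point iteration over an averaged operator could look as follows:

    import numpy as np

    # Gradient-step operator of f(x) = 0.5*x'Ax - b'x; it is averaged for
    # stepsizes gamma in (0, 2/L), so its fixed points (the minimizers of f)
    # can be reached by relaxed/inertial fixed-point iterations.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite (example)
    b = np.array([1.0, -1.0])
    L = np.linalg.eigvalsh(A).max()         # Lipschitz constant of the gradient
    gamma = 1.0 / L                         # stepsize strictly below 2/L

    def T(x):
        return x - gamma * (A @ x - b)      # fixed points of T solve Ax = b

    eta, beta = 1.5, 0.3                    # relaxation / inertia coefficients,
                                            # fixed here but tuned online in the paper
    x = np.zeros(2)
    u_prev = x.copy()                       # no inertia on the first step
    for _ in range(200):
        u = (1.0 - eta) * x + eta * T(x)    # relaxed application of the operator
        x = u + beta * (u - u_prev)         # inertial step on the operator outputs
        u_prev = u
    print(x, np.linalg.solve(A, b))         # iterate vs. exact minimizer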
Acknowledgements
F.I. thanks Walid Hachem for his guidance in the analysis of affine averaged operators.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
F. Iutzeler http://orcid.org/0000-0003-2537-380X
J. M. Hendrickx http://orcid.org/0000-0001-8214-8292
Notes
1. For ADMM, we notably consider the Inertial ADMM built on the monotone operator formulation (see e.g. [Citation13]), as in [Citation8]; it differs from Fast ADMM [Citation17].
2. For the sake of clarity, we only discuss single-valued mappings in finite-dimensional spaces; further results on monotone operator theory can be found in [Citation5].
3. Note the extra factor compared to Section 2.2. In the specific case of relaxation, this modified definition makes it possible to estimate the convergence by applying the operator only once. Monotone operator theory ensures that this estimate is valid and enables the convergence proof.
4. The optimal stepsize when performing K iterations tends to 2/L, while remaining strictly below it, as K goes to infinity.
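This is consistent with a standard fact, recalled here for context rather than taken from the paper: for an L-smooth convex f, the gradient-step operator is averaged exactly when the stepsize stays strictly below 2/L,

    T_\gamma = \mathrm{Id} - \gamma \nabla f = (1-\alpha)\,\mathrm{Id} + \alpha N,
    \qquad \alpha = \frac{\gamma L}{2}, \quad \gamma \in (0, 2/L),

with N nonexpansive; at gamma = 2/L the operator is merely nonexpansive and the averaged fixed-point theory no longer applies.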
6. If either (i) g is the indicator function of a linear space, or (ii) g is quadratic, then the representation operation of Equation (6) becomes linear and relaxation can be performed as an outer modification.
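For instance, in case (ii), the proximal operator of a convex quadratic g(x) = (1/2) x^T Q x + q^T x is affine (a standard computation given here for illustration; gamma denotes the proximal stepsize):

    \mathrm{prox}_{\gamma g}(x)
    = \operatorname*{arg\,min}_u \Big\{ g(u) + \tfrac{1}{2\gamma}\|u - x\|^2 \Big\}
    = (I + \gamma Q)^{-1}(x - \gamma q),

and it is this affineness that allows a relaxation to be commuted through the operation and applied as an outer modification.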
7. For the other two problems, the algorithm boils down to the previously investigated ADMM.
8. We chose to perform the operator step before the inertial step for the sake of clarity and consistency in the derivations.
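In standard notation, one natural reading of the two orderings is (illustrative, with beta_k the inertial coefficient):

    \text{operator then inertia:}\quad u_k = T(x_k), \quad x_{k+1} = u_k + \beta_k (u_k - u_{k-1});
    \text{inertia then operator:}\quad y_k = x_k + \beta_k (x_k - x_{k-1}), \quad x_{k+1} = T(y_k).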