References
- Boyd S, Parikh N, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011;3:1–122.
- Chan TF, Shen J. Image processing and analysis. Philadelphia (PA): SIAM; 2005.
- Combettes PL, Wajs VR. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005;4:1168–1200.
- Ekeland I, Témam R. Convex analysis and variational problems. Philadelphia (PA): SIAM; 1999.
- Glowinski R. Numerical methods for nonlinear variational problems. Berlin: Springer-Verlag; 2008.
- Ito K, Jin B. Inverse problems: Tikhonov theory and algorithms. Singapore: World Scientific; 2014.
- Ito K, Kunisch K. Lagrange multiplier approach to variational problems and applications. Philadelphia (PA): SIAM; 2008.
- Hestenes MR. Multiplier and gradient methods. J. Optim. Theory Appl. 1969;4:303–320.
- Powell MJD. A method for nonlinear constraints in minimization problems. In: Optimization (Symposium, University of Keele, Keele, 1968). London: Academic Press; 1969. p. 283–298.
- Rockafellar R. A dual approach to solving nonlinear programming problems by unconstrained optimization. Math. Program. 1973;5:354–373.
- Rockafellar R. The multiplier method of Hestenes and Powell applied to convex programming. J. Optim. Theory Appl. 1973;12:555–562.
- Glowinski R, Marroco A. Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires [On the approximation, by first-order finite elements, and the solution, by penalty-duality, of a class of nonlinear Dirichlet problems]. ESAIM. Math. Model. Numer. Anal. 1975;9:41–76.
- Glowinski R, Le Tallec P. Augmented Lagrangian and operator-splitting methods in nonlinear mechanics. Philadelphia (PA): SIAM; 1989.
- Parikh N, Boyd S. Proximal algorithms. Found. Trends Optim. 2013;1:123–231.
- Wu C, Tai XC. Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 2010;3:300–339.
- Fortin M. Minimization of some non-differentiable functionals by the augmented Lagrangian method of Hestenes and Powell. Appl. Math. Optim. 1975;2:236–250.
- Ito K, Kunisch K. Augmented Lagrangian methods for nonsmooth, convex optimization in Hilbert spaces. Nonlinear Anal. Ser. A Theory Methods. 2000;41:591–616.
- Lemaréchal C, Sagastizábal C. Practical aspects of the Moreau-Yosida regularization: theoretical preliminaries. SIAM J. Optim. 1997;7:367–385.
- Ip CM, Kyparisis J. Local convergence of quasi-Newton methods for B-differentiable equations. Math. Program. 1992;56:71–89.
- Facchinei F, Pang JS. Finite-dimensional variational inequalities and complementarity problems. Vol. II. New York (NY): Springer-Verlag; 2003.
- Bauschke HH, Combettes PL. Convex analysis and monotone operator theory in Hilbert spaces. New York (NY): Springer; 2011.
- Ito K, Kunisch K. An active set strategy based on the augmented Lagrangian formulation for image restoration. ESAIM. Math. Model. Numer. Anal. 1999;33:1–21.
- Evans LC, Gariepy RF. Measure theory and fine properties of functions. Boca Raton (FL): CRC Press; 1992.
- Patrinos P, Stella L, Bemporad A. Forward-backward truncated Newton methods for convex composite optimization. arXiv preprint; 2014. arXiv:1402.6655v2.
- Benzi M, Golub GH, Liesen J. Numerical solution of saddle point problems. Acta Numer. 2005;14:1–137.
- Bergounioux M, Ito K, Kunisch K. Primal-dual strategy for constrained optimal control problems. SIAM J. Control Optim. 1999;37:1176–1194.