Abstract
In this paper we introduce an extension of the proximal point algorithm proposed by Güler for solving convex minimization problems. The extension is obtained by replacing the usual quadratic proximal term with a class of convex, nonquadratic, entropy-like distances called φ-divergences. We study the convergence rate of this new proximal point method under mild assumptions and show that the resulting rate estimate improves on those available for existing proximal-like methods. Applications are given to general convex minimization, linearly constrained convex programs, and variational inequalities.
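For orientation, the substitution described in the abstract can be sketched as follows (a minimal sketch in standard notation; the symbols $f$, $\lambda_k$, $\varphi$, and the particular form of $d_\varphi$ are illustrative assumptions, not taken from the paper itself). The classical proximal step underlying Güler's method uses a quadratic term, while the entropy-like variant replaces it with a φ-divergence:

```latex
% Classical proximal point step (quadratic proximal term):
x^{k+1} = \arg\min_{x} \left\{ f(x) + \frac{1}{2\lambda_k}\,\|x - x^k\|^2 \right\}

% Entropy-like variant with a \varphi-divergence proximal term
% (one common form; the exact class used in the paper may differ):
x^{k+1} = \arg\min_{x > 0} \left\{ f(x) + \frac{1}{\lambda_k}\, d_\varphi(x, x^k) \right\},
\qquad
d_\varphi(x, y) = \sum_{i=1}^{n} y_i\, \varphi\!\left(\frac{x_i}{y_i}\right)
```

Here $\varphi$ is a convex kernel (e.g., $\varphi(t) = t\log t - t + 1$ recovers a Kullback–Leibler-type distance), which makes the subproblems well suited to positivity-constrained problems without explicit projections.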
Notes
1 This approximation approach was originally introduced by Nesterov ([Citation17, Citation18]), who proposed techniques of this kind based on quadratic approximations.
2 For compatibility, we use the same notation as Auslender and Haddou [Citation1].