Abstract
We analyse the convergence to a stationary point of the gradient projection algorithm, finalized with the Newton method, for the problem of nonconvex constrained optimization with a proximally smooth feasible set and a smooth function f. We propose new error bound (EB) conditions for the gradient projection method which guarantee that the iterates reach the convergence domain of the Newton method. We prove that these EB conditions are typical for a wide class of optimization problems. A high convergence rate of the algorithm can be achieved by switching to the Newton method.
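As an illustration of this two-phase scheme (not the exact algorithm analysed in the paper), the following Python sketch runs gradient projection steps on a simple model instance, a nonconvex quadratic minimized over the unit sphere, which is a standard example of a proximally smooth set, and then switches to Newton's method applied to the first-order stationarity system. The matrix A, the step size alpha, the switching tolerance, and the sphere constraint are assumptions made only for this example.

import numpy as np

# Model problem (an assumption for this sketch): minimize the nonconvex quadratic
# f(x) = 0.5 * x^T A x over the unit sphere S = {x : ||x|| = 1}.
A = np.diag([3.0, 1.0, -2.0])

def grad_f(x):
    return A @ x

def hess_f(x):
    return A

def project_sphere(y):
    # Metric projection onto the unit sphere (unique for y != 0).
    return y / np.linalg.norm(y)

def gradient_projection(x, alpha=0.1, tol=1e-3, max_iter=500):
    # Phase 1: projected gradient steps x_{k+1} = P_S(x_k - alpha * grad f(x_k)).
    for _ in range(max_iter):
        x_new = project_sphere(x - alpha * grad_f(x))
        if np.linalg.norm(x_new - x) < tol:   # crude test for entering the Newton domain
            return x_new
        x = x_new
    return x

def newton_refinement(x, n_steps=10):
    # Phase 2: Newton's method on the stationarity system
    #   grad f(x) - lam * x = 0,   (||x||^2 - 1) / 2 = 0.
    n = x.size
    lam = x @ grad_f(x)                       # Rayleigh-quotient estimate of the multiplier
    for _ in range(n_steps):
        F = np.concatenate([grad_f(x) - lam * x, [(x @ x - 1.0) / 2.0]])
        J = np.block([[hess_f(x) - lam * np.eye(n), -x[:, None]],
                      [x[None, :],                   np.zeros((1, 1))]])
        step = np.linalg.solve(J, -F)
        x, lam = x + step[:n], lam + step[n]
    return x, lam

x0 = project_sphere(np.array([1.0, 1.0, 1.0]))
x_gp = gradient_projection(x0)                # phase 1: reach a neighbourhood of a stationary point
x_star, lam_star = newton_refinement(x_gp)    # phase 2: fast local convergence
print(x_star, lam_star)

In this model instance the gradient projection phase converges linearly to a neighbourhood of a stationary point, after which the Newton phase converges quadratically, mirroring the switching strategy described above.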
2010 Mathematics Subject Classifications:
Acknowledgements
The authors are grateful to B. T. Polyak for useful comments and suggestions, and to the anonymous referees for their numerous comments and suggestions.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 This condition can be weakened: it suffices to demand compactness of the intersection of some lower level set of f with S. In the context of numerical methods, let $x_0$ be the initial starting point. Then one can assume that the intersection of the boundary of the set S with the corresponding lower level set is empty and that the intersection of the set S with this lower level set is compact.
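In symbols (the notation $\mathcal{L}(x_0)$ for the lower level set is introduced only for this restatement), the weakened assumption reads
\[
  \mathcal{L}(x_0) = \{\, x : f(x) \le f(x_0) \,\}, \qquad
  \partial S \cap \mathcal{L}(x_0) = \emptyset, \qquad
  S \cap \mathcal{L}(x_0) \ \text{is compact.}
\]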
2 In a finite-dimensional space, continuity of the metric projection mapping can be omitted. This follows from the uniqueness and upper semicontinuity of the metric projection [28, Ch. 3, § 1, Proposition 23].
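For reference (the notation $P_S$ is used only for this remark), the metric projection onto $S$ is the generally set-valued mapping
\[
  P_S(x) \;=\; \bigl\{\, s \in S : \|x - s\| = \operatorname{dist}(x, S) \,\bigr\},
\]
and a single-valued mapping that is upper semicontinuous in the set-valued sense is continuous, which is the observation used here.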
3 We can treat this quantity as the minimal singular value of the matrices involved; it coincides with the eigenvalue of minimal absolute value over the spectrum of the matrix.
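For concreteness, assuming the matrices in question are the Hessians $f''(x)$ and writing $\sigma(\cdot)$ for the spectrum (both are assumptions made only for this illustration), such a quantity can be written as
\[
  \lambda_{\min}(x) \;=\; \min\bigl\{\, |\lambda| : \lambda \in \sigma\bigl(f''(x)\bigr) \,\bigr\};
\]
since $f''(x)$ is symmetric, its singular values are the absolute values of its eigenvalues, so this minimum is indeed the smallest singular value.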
4 Sometimes called the Polyak–Łojasiewicz condition or the Kurdyka–Łojasiewicz condition.
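In its unconstrained form (stated here only for reference; the EB conditions of the paper are analogues on the feasible set), the Polyak–Łojasiewicz condition requires that for some $\mu > 0$
\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr) \quad \text{for all } x,
\]
where $f^{*}$ is the minimal value of $f$; together with Lipschitz continuity of the gradient it yields linear convergence of gradient descent without any convexity assumption.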
5 It is sufficient to demand this Lipschitz continuity only in some neighbourhood of the stationary points. This leads to one more restriction on the value of β from above. Moreover, we can consider a weaker condition involving only the nearest stationary point to the given point.
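A natural reading of this weaker condition, assuming the Lipschitz requirement concerns the second derivative $f''$ (an assumption made only for this illustration), is the pointwise estimate
\[
  \|f''(x) - f''(x^{*})\| \;\le\; L\,\|x - x^{*}\|,
\]
where $x^{*}$ is the stationary point nearest to the point $x$; this is weaker than demanding the same bound for all pairs of points in a neighbourhood.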