Abstract
In this paper we present a new class of memory gradient methods for unconstrained optimization problems and establish their global convergence under mild conditions. In the new algorithms, a trust region approach is used to guarantee global convergence. Numerical results show that the proposed memory gradient methods are stable and efficient in practical computation. In particular, in some special cases they reduce to the Barzilai-Borwein (BB) method.
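The BB method mentioned above chooses the step size from the two most recent iterates and gradients. A minimal sketch of this step-size rule, applied to an illustrative quadratic objective (the objective, function names, and parameters here are assumptions for illustration, not the paper's algorithm):

```python
import numpy as np

def bb_gradient_method(grad, x0, max_iter=200, tol=1e-8):
    """Gradient descent with the Barzilai-Borwein step size
    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1}
    and y = g_k - g_{k-1}.  A sketch, not the paper's method."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - 1e-4 * g_prev          # small initial gradient step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s, y = x - x_prev, g - g_prev
        alpha = s.dot(s) / s.dot(y)     # BB step size (long form)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Illustrative quadratic f(x) = 0.5 x^T A x - b^T x; minimizer is A^{-1} b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_star = bb_gradient_method(lambda x: A @ x - b, np.zeros(2))
```

For a strictly convex quadratic, s^T y = s^T A s > 0, so the step size is well defined at every iteration.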
Acknowledgements
The authors would like to thank the three anonymous referees and the editor for their comments and suggestions, which greatly improved the paper. This work was supported in part by NSF grant CNS-0521142, USA.