ABSTRACT
Given the inability to foresee all possible scenarios, it is desirable to have an efficient trust-region subproblem solver capable of delivering any requested level of accuracy on demand; that is, the accuracy attainable for a given trust-region subproblem should not depend on the particular subproblem at hand. Current state-of-the-art iterative eigensolvers all belong to the class of restarted Lanczos methods, whereas current iterative trust-region solvers at best reduce to unrestarted Lanczos methods, which in this context are well known to be numerically unstable and to have impractical memory requirements. In this paper, we present the first iterative trust-region subproblem solver that contains, at its core, a robust and practical eigensolver. Our solver leverages the recently announced work of Stathopoulos and Orginos, which has gone unnoticed by the optimization community; it can be exploited here because, unlike other restarted Lanczos methods, its restarts do not necessarily modify the current Lanczos sequence generated by the conjugate gradient (CG) method. This innovative strategy can therefore be utilized in the context of trust-region solvers as well. Moreover, our trust-region subproblem solver adds negligible computational overhead compared with existing iterative trust-region approaches.
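To make the connection concrete, the sketch below shows the basic unrestarted Lanczos tridiagonalization that underlies both CG-based trust-region solvers and restarted eigensolvers. This is a generic textbook illustration in NumPy, not the paper's implementation; the function name and tolerances are our own. In finite precision the Lanczos vectors gradually lose orthogonality, which is precisely the instability that motivates restarted variants.

```python
import numpy as np

def lanczos(A, b, k):
    """Unrestarted Lanczos tridiagonalization of a symmetric matrix A.

    Builds up to k orthonormal Lanczos vectors Q and the tridiagonal
    matrix T with Q.T @ A @ Q ~= T. Note the memory cost: all Lanczos
    vectors must be stored, which is what makes the unrestarted method
    impractical for large problems.
    """
    n = b.size
    Q = np.zeros((n, k))
    alphas, betas = [], []
    q = b / np.linalg.norm(b)
    beta, q_prev = 0.0, np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        # Three-term recurrence: w = A q_j - beta_{j-1} q_{j-1}
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:  # invariant subspace found; stop early
            break
        q_prev, q = q, w / beta
    m = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[: m - 1], 1)
         + np.diag(betas[: m - 1], -1))
    return Q[:, :m], T
```

In a GLTR-style trust-region solver, the subproblem is then solved in the Krylov subspace spanned by the columns of Q using the small tridiagonal T, which is why the stability of the underlying Lanczos sequence matters.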
Acknowledgments
The authors profusely thank Manoj Chari, Yan Xu, Katya Scheinberg, and Jon W. Tolle for many insightful discussions and for their continual support of the research and development of this work.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
I. G. Akrotirianakis
Dr I. G. Akrotirianakis was an operations research specialist at SAS Institute Inc. His research interests primarily lie at the intersection of Mathematical Optimization (Nonlinear, Mixed Integer and Global Optimization) and Machine Learning (Deep Neural Networks and Reinforcement Learning). He obtained his BSc in Mathematics from the Aristotle University of Thessaloniki, Greece, and earned his Ph.D. in Computer Science from Imperial College London, UK. He is currently a research scientist at Siemens Corporate Technology.
M. Gratton
Dr M. Gratton worked in the Advanced Analytics Division at SAS Institute Inc. She received her Ph.D. degree in Statistics and Operations Research from the University of North Carolina at Chapel Hill in 2012.
J. D. Griffin
Dr J. D. Griffin manages a nonlinear optimization team supporting external and internal optimization solver development for SAS products. His team works closely with the statistics, ETS, machine learning, and deep learning teams at SAS. His academic background is in large-scale nonconvex optimization. He received both a master's degree and a Ph.D. in mathematics from UCSD.
S. Yektamaram
Dr S. Yektamaram is an operations research specialist at SAS Institute, working on mathematical optimization research and development. His background is in optimization methods for machine learning; he received his Ph.D. in Industrial and Systems Engineering from Lehigh University, Pennsylvania.
W. Zhou
Dr W. Zhou is an operations research specialist at SAS Institute Inc. His research interests are primarily in numerical optimization and machine learning. He received his Ph.D. from the Numerical Optimization Centre of the University of Hertfordshire, UK.