Abstract
In this paper, we explore hybrid parallel global optimization using Dividing Rectangles (DIRECT) and asynchronous generating set search (GSS). Both DIRECT and GSS are derivative-free and so require only objective function values; this makes these methods applicable to a wide variety of science and engineering problems. DIRECT is a global search method that strategically divides the search space into ever-smaller rectangles, sampling the objective function at the centre point for each rectangle. GSS is a local search method that samples the objective function at trial points around the current best point, i.e. the point with the lowest function value. Latin hypercube sampling can be used to seed GSS with a good starting point. Using a set of global optimization test problems, we compare the parallel performance of DIRECT and GSS with hybrids that combine the two methods. Our experiments suggest that the hybrid methods are much faster than DIRECT and scale better when more processors are added. This improvement in performance is achieved without any sacrifice in the quality of the solution – the hybrid methods find the global optimum whenever DIRECT does.
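To make the local-search idea concrete, the following is a minimal serial sketch of generating set search in its simplest (compass search) form: poll the objective along the signed coordinate directions around the current best point, move on improvement, and contract the step otherwise. This is an illustration only; it is not the asynchronous parallel GSS implementation (APPSPACK/HOPSPACK) studied in the paper, and the function and parameter names are illustrative.

```python
import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-6, max_evals=10000):
    """Serial compass-search sketch of GSS (illustrative, not the paper's code).

    Polls f at trial points x + step*d for each signed coordinate
    direction d; accepts the first improving trial, otherwise halves
    the step. Stops when the step falls below tol.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    n = x.size
    # Generating set: the 2n signed coordinate directions +/- e_i.
    directions = np.vstack([np.eye(n), -np.eye(n)])
    while step > tol and evals < max_evals:
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            evals += 1
            if ft < fx:          # accept the first improving trial point
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= 0.5          # no improvement: contract around best point
    return x, fx

# Example: minimize a shifted quadratic from the origin.
xbest, fbest = compass_search(lambda x: np.sum((x - 1.0) ** 2), np.zeros(3))
```

In a hybrid scheme as described above, the starting point `x0` would come from DIRECT or from Latin hypercube sampling rather than a fixed guess, and the trial-point evaluations would be dispatched asynchronously across processors.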
Acknowledgements
We gratefully acknowledge the users of APPSPACK for presenting us with new and challenging scenarios that motivated us to extend the code and create the new HOPSPACK framework presented here. We are indebted to Genetha Gray for her input on the design of HOPSPACK. Noam Goldberg was kind enough to proofread sections of this manuscript. Finally, we thank the two anonymous referees for their valuable suggestions and also Mihai Anitescu for his handling of the manuscript. This work was funded by Sandia National Laboratories, a multiprogramme laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.