Abstract
We introduce a new random, non-gradient-based optimization scheme that is consistent for every measurable function with a finite essential supremum. Theorem 1 establishes an exponential rate of convergence for a large class of functions. Performance comparisons with Pure Random Search (PRS), three quasi-Newton-type optimization routines, and numerous other non-gradient-based procedures are reported.
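The new scheme itself is specified in the body of the paper; as a point of reference, the following is a minimal sketch of the Pure Random Search baseline named above: sample points uniformly at random from the feasible box and keep the best objective value seen so far. The function name, bounds, iteration budget, and test objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pure_random_search(f, bounds, n_iter=1000, rng=None):
    """Pure Random Search (PRS): draw uniform samples in the box
    `bounds` and return the best point found (maximization)."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds: sequence of (low, high) pairs
    best_x, best_val = None, -np.inf
    for _ in range(n_iter):
        x = rng.uniform(lo, hi)        # one uniform draw per coordinate
        val = f(x)
        if val > best_val:             # keep the running maximum,
            best_x, best_val = x, val  # matching the essential-supremum framing
    return best_x, best_val

if __name__ == "__main__":
    # Hypothetical test problem: maximize -||x||^2 on [-5, 5]^2
    # (global maximum 0 at the origin).
    f = lambda x: -np.sum(x**2)
    x_star, f_star = pure_random_search(f, [(-5, 5), (-5, 5)], n_iter=5000, rng=0)
    print(x_star, f_star)
```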