Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators

This paper focuses on the optimization of hyper-parameters for function approximators. We describe a racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximator with less computation time. We demonstrate the algorithm on the problem of finding good parameters for a memory-based learner and show the tradeoffs involved in choosing the right amount of computation to spend on each evaluation.
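
To make the racing idea concrete, here is a minimal Python sketch of one such elimination scheme, not the paper's exact algorithm: the names (`race`, `evaluate`), the Hoeffding-style confidence bounds, and the assumption that scores lie in [0, 1] are illustrative choices. The key behavior matches the abstract's description: each surviving candidate receives one more noisy evaluation per round, and candidates whose confidence interval falls entirely below the best candidate's lower bound are dropped, so computation concentrates on the most promising settings.

```python
import math

def race(candidates, evaluate, max_evals_per_candidate=100, confidence=0.95):
    """Racing-style search over parameter settings.

    candidates: list of parameter settings
    evaluate(params) -> noisy scalar score in [0, 1] (higher is better),
        e.g. accuracy of a memory-based learner on one random
        cross-validation split
    """
    stats = {i: [] for i in range(len(candidates))}  # observed scores
    alive = set(stats)

    for _ in range(max_evals_per_candidate):
        if len(alive) == 1:
            break
        # Spend one noisy evaluation on every surviving candidate.
        for i in alive:
            stats[i].append(evaluate(candidates[i]))

        # Hoeffding-style bound on the mean (assumes scores in [0, 1]).
        def bound(scores):
            n = len(scores)
            eps = math.sqrt(math.log(2.0 / (1.0 - confidence)) / (2.0 * n))
            mean = sum(scores) / n
            return mean - eps, mean + eps

        best_lower = max(bound(stats[i])[0] for i in alive)
        # Eliminate candidates whose upper bound cannot reach the best
        # candidate's lower bound (i.e., provably worse w.h.p.).
        alive = {i for i in alive if bound(stats[i])[1] >= best_lower}

    best = max(alive, key=lambda i: sum(stats[i]) / len(stats[i]))
    return candidates[best]
```

For example, `candidates` might be a grid of kernel bandwidths for a memory-based (locally weighted) learner, with `evaluate` returning the score on a single held-out split; averaging over many cheap, noisy splits rather than one expensive full validation is what lets the race cut off poor settings early.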