Learning Parameters of Linear Models in Compressed Parameter Space

We present a novel method for reducing training time by learning the parameters of a model in a compressed parameter space, in which the model's parameters are represented by a smaller number of parameters, so that training is faster. After training, the full parameters of the model can be reconstructed from the compressed ones. For supervised learning, we show that learning a model's parameters in compressed parameter space is equivalent to learning them in a compressed input space. Applied to a supervised learning task, our method obtains a solution much faster than learning in the uncompressed parameter space. For reinforcement learning, we show empirically that directly searching for policy parameters in compressed parameter space accelerates learning.
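As a rough illustration of the idea (a minimal sketch, not the paper's own code), the following example fits a linear regression model by optimizing a small coefficient vector c and mapping it to the full weight vector through a fixed projection matrix Phi. The names X, Phi, c, and the random-projection choice are assumptions made for illustration; the key point is that X @ (Phi @ c) equals (X @ Phi) @ c, so fitting in the compressed parameter space amounts to fitting against compressed inputs.

```python
import numpy as np

# Sketch: linear regression with weights represented in a compressed parameter space.
#
# Full model:      y ~ X w,       w in R^d
# Compressed form: w = Phi c,     c in R^k,  k << d
# Since X w = (X Phi) c, learning c against the compressed inputs X Phi
# is equivalent to learning w restricted to the span of Phi.

rng = np.random.default_rng(0)

n, d, k = 200, 1000, 20              # samples, full dimension, compressed dimension
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Fixed random projection mapping the k compressed parameters to d weights
# (one possible choice of compression; the paper's specific construction may differ).
Phi = rng.standard_normal((d, k)) / np.sqrt(k)

# Learn c in the compressed parameter space via least squares on the compressed inputs.
Z = X @ Phi                          # compressed input space, shape (n, k)
c, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Reconstruct the full d-dimensional weight vector from the compressed parameters.
w_hat = Phi @ c

print("training MSE:", np.mean((X @ w_hat - y) ** 2))
```

Because the least-squares problem is solved over k rather than d parameters, the fit is considerably cheaper when k << d, at the cost of restricting the solution to the subspace spanned by Phi.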
