A Generalized Representer Theorem for Hilbert Space-Valued Functions

Necessary and sufficient conditions for the existence of a generalized representer theorem are presented for learning Hilbert space-valued functions. Representer theorems involving explicit basis functions and reproducing kernels appear throughout machine learning, in algorithms such as generalized least squares, support vector machines, Gaussian process regression, and kernel-based deep neural networks. Because the underlying variational problems have a more general structure, the theory is also relevant to other application areas such as optimal control, signal processing, and decision making. We present the generalized representer theorem as a unified view of supervised and semi-supervised learning methods, using the theory of linear operators and subspace-valued maps. The implications of the theorem are illustrated with examples of multi-input multi-output regression, kernel-based deep neural networks, stochastic regression, and sparsity-learning problems, each a special case of this unified view.
