Learning Gradients with Gaussian Processes

The problems of variable selection and inference of statistical dependence have been addressed by modeling in the gradient learning framework based on the representer theorem. In this paper, we propose a new gradient learning algorithm in the Bayesian framework, called the Gaussian Processes Gradient Learning (GPGL) model, which achieves higher accuracy while also returning credible intervals for the estimated gradients, something existing methods cannot provide. Simulation examples verify the proposed algorithm, and its advantages are evident in the experimental results.
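The abstract does not spell out the GPGL construction, but the core idea it relies on is standard: differentiation is a linear operator, so the derivative of a Gaussian process is again a Gaussian process whose covariances are kernel derivatives, which yields both a posterior mean gradient and credible intervals. The sketch below illustrates that generic mechanism in one dimension; it is not the authors' GPGL algorithm, and the kernel choice, hyperparameters (s2, ell, noise), and function names are all assumptions for illustration.

```python
import numpy as np

def rbf(a, b, s2=1.0, ell=0.5):
    """RBF kernel k(a, b) = s2 * exp(-(a - b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-d**2 / (2 * ell**2))

def drbf_dx(a, b, s2=1.0, ell=0.5):
    """Derivative of the kernel w.r.t. its first argument: cov(f'(a), f(b))."""
    d = a[:, None] - b[None, :]
    return -(d / ell**2) * rbf(a, b, s2, ell)

def gp_gradient(X, y, Xs, s2=1.0, ell=0.5, noise=1e-2):
    """Posterior mean and variance of f'(x*) under a GP prior on f.

    Because differentiation is linear, f' is a GP with cross-covariance
    drbf_dx and prior variance d^2 k / dx dx' = s2 / ell^2 at x = x'.
    """
    K = rbf(X, X, s2, ell) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    dKs = drbf_dx(Xs, X, s2, ell)                          # cov(f'(x*), f(X))
    mean = dKs @ alpha
    v = np.linalg.solve(L, dKs.T)
    var = s2 / ell**2 - np.sum(v**2, axis=0)
    return mean, var

# Usage: recover the derivative of sin(x), i.e. cos(x), with 95% bands.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, 40)
y = np.sin(X) + 0.05 * rng.standard_normal(40)
Xs = np.linspace(0, 2 * np.pi, 100)
mean, var = gp_gradient(X, y, Xs)
lo, hi = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
print(np.max(np.abs(mean - np.cos(Xs))))  # small where the data are dense
```

The credible band (lo, hi) is exactly the kind of uncertainty output the abstract says regularization-based gradient learning methods cannot provide.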
