Learning Gaussian Process Models from Uncertain Data

The traditional formulation of supervised learning generally assumes that only the output data are uncertain. However, this assumption may be too strong for some learning tasks. This paper investigates the use of a Gaussian process prior to infer consistent models given uncertain data. By assuming a Gaussian distribution with known variances over the inputs and a Gaussian covariance function, it is possible to marginalize out the input uncertainty and retain an analytical posterior distribution over functions. We demonstrate the properties of the method on a synthetic problem and on a more realistic one, which consists in learning the dynamics of the well-known cart-pole system, and compare its performance with that of a classic Gaussian process. We report a large improvement in mean squared error as well as improved consistency of the regression results.
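
To make the abstract's central idea concrete, the following is a minimal sketch (not the authors' code) of how the expectation of a squared-exponential (Gaussian) covariance over Gaussian-distributed inputs with known variances can be computed in closed form and plugged into the standard GP posterior equations. The function names (`expected_se_kernel`, `gp_posterior`), hyperparameter choices, and the toy data are illustrative assumptions, not taken from the paper.

    import numpy as np

    def expected_se_kernel(mu1, S1, mu2, S2, lengthscales, signal_var):
        """Closed-form E[k(x1, x2)] of a squared-exponential kernel for
        independent x1 ~ N(mu1, S1) and x2 ~ N(mu2, S2)."""
        L = np.diag(lengthscales ** 2)      # squared lengthscales (Lambda)
        S = S1 + S2                         # combined input-noise covariance
        diff = mu1 - mu2
        scale = 1.0 / np.sqrt(np.linalg.det(np.eye(len(mu1)) + np.linalg.solve(L, S)))
        quad = diff @ np.linalg.solve(L + S, diff)
        return signal_var * scale * np.exp(-0.5 * quad)

    def gp_posterior(X, X_var, y, x_star, lengthscales, signal_var, noise_var):
        """GP posterior mean/variance at a noise-free test point x_star, using
        the expected kernel over the uncertain training inputs (X, X_var)."""
        n, d = X.shape
        K = np.array([[expected_se_kernel(X[i], np.diag(X_var[i]),
                                          X[j], np.diag(X_var[j]),
                                          lengthscales, signal_var)
                       for j in range(n)] for i in range(n)])
        k_star = np.array([expected_se_kernel(X[i], np.diag(X_var[i]),
                                              x_star, np.zeros((d, d)),
                                              lengthscales, signal_var)
                           for i in range(n)])
        A = K + noise_var * np.eye(n)       # add output-noise variance
        alpha = np.linalg.solve(A, y)
        mean = k_star @ alpha
        var = signal_var - k_star @ np.linalg.solve(A, k_star)
        return mean, var

    # Toy usage: 1-D inputs observed with a known input variance of 0.05.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(20, 1))
    X_var = np.full((20, 1), 0.05)
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
    m, v = gp_posterior(X, X_var, y, np.array([0.5]), np.ones(1), 1.0, 0.01)

Because the expectation is taken analytically, the resulting covariance matrix can be used in the usual GP equations without sampling over the uncertain inputs, which is what keeps the posterior over functions analytical.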