Gaussian processes are a powerful tool for regression problems. Besides computing regression curves via predictive means, the uncertainty of the estimates can be quantified in terms of predictive variances. However, the complexity of learning and testing the model is often too large for practical use. We present an efficient approximation of the Gaussian process regression framework that reduces both runtime and memory complexity. The idea is to approximate the Gaussian process predictive mean and variance using a special diagonal matrix in place of the full kernel matrix. We show that this simple diagonal approximation of the Gaussian process predictive variance is a true upper bound for the exact variance. Experimental results are presented for a standard regression task.
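The paper's specific diagonal construction is not given in the abstract; the following is a minimal numerical sketch of the general idea, assuming an RBF kernel and a Gershgorin row-sum diagonal as the surrogate. That choice satisfies D ⪰ K in the Loewner order when all kernel entries are nonnegative, which implies D⁻¹ ⪯ K⁻¹ and hence that the approximate predictive variance upper-bounds the exact one:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Squared-exponential kernel; all entries are nonnegative.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Xs = rng.uniform(-3, 3, size=(5, 1))           # test inputs

noise = 0.1
K = rbf_kernel(X, X) + noise * np.eye(len(X))  # regularized kernel matrix
ks = rbf_kernel(X, Xs)                         # cross-kernel, shape (n, m)
kss = np.diag(rbf_kernel(Xs, Xs))              # prior variances k(x*, x*)

# Exact GP prediction: the solve costs O(n^3).
mean_exact = ks.T @ np.linalg.solve(K, y)
var_exact = kss - np.einsum('ij,ij->j', ks, np.linalg.solve(K, ks))

# Diagonal surrogate (hypothetical choice): Gershgorin row sums,
# so D - K is diagonally dominant with nonnegative diagonal, i.e. PSD.
D = K.sum(axis=1)                              # O(n^2) once, O(n) to store
mean_approx = ks.T @ (y / D)
var_approx = kss - np.einsum('ij,ij->j', ks, ks / D[:, None])

# D >= K (PSD order) implies the diagonal variance is an upper bound.
assert np.all(var_approx >= var_exact - 1e-12)
```

With the diagonal surrogate, "inverting" the matrix is an elementwise division, so prediction needs only O(n) memory for D and no cubic-cost solve.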