Kernel Learning Method on Riemannian Manifold with Geodesic Distance Preservation

Riemannian manifolds have recently been widely exploited in pattern recognition, data analysis, and machine learning. In this paper, a novel kernel learning method based on preserving geodesic distance on a Riemannian manifold is proposed. In our approach, the features of the data are first extracted by a covariance descriptor and represented as covariance matrices (SPD matrices). A logarithm operation is then used to transform these matrices into column vectors. Unlike common practice, we introduce a parameterized Mahalanobis distance to define the distance between two column vectors, aiming to make it equal to the geodesic distance between the two corresponding points on the Riemannian manifold. Furthermore, an initial kernel matrix is obtained from the learned Mahalanobis distance matrix by a linear transformation, and a Bregman optimization algorithm is then applied with it to find the optimal kernel matrix, in which the distance in the kernel space equals the geodesic distance on the Riemannian manifold. Experiments on texture data sets demonstrate the benefits of the proposed method.
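
To make the steps concrete, here is a minimal Python sketch of the quantities the abstract refers to: the region covariance descriptor, the affine-invariant geodesic distance on the SPD manifold, the log-map vectorization, and the parameterized Mahalanobis distance. All function names are hypothetical, the metric matrix M is taken as given rather than learned, the Bregman refinement step is omitted, and the kernel construction K_ij = v_i^T M v_j is only one plausible reading of the paper's "linear transformation".

```python
# Minimal sketch, assuming standard definitions of the log-Euclidean map and
# the affine-invariant geodesic distance on the SPD manifold. The learning of
# the Mahalanobis matrix M and the Bregman optimization are NOT implemented.
import numpy as np
from scipy.linalg import logm, fractional_matrix_power


def covariance_descriptor(features):
    """features: (n_pixels, d) array of per-pixel features for one region.
    Returns the d x d region covariance, jittered to stay positive definite."""
    cov = np.cov(features, rowvar=False)
    return cov + 1e-6 * np.eye(cov.shape[0])


def geodesic_distance(X, Y):
    """Affine-invariant geodesic distance between SPD matrices:
    d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F."""
    X_inv_sqrt = fractional_matrix_power(X, -0.5)
    middle = X_inv_sqrt @ Y @ X_inv_sqrt
    return np.linalg.norm(logm(middle), ord="fro")


def log_vectorize(X):
    """Map an SPD matrix to a column vector via the matrix logarithm."""
    return logm(X).flatten()


def mahalanobis_distance(x, y, M):
    """Parameterized Mahalanobis distance between log-vectors x and y; the
    paper learns M so that this matches the geodesic distance above."""
    diff = x - y
    return np.sqrt(diff @ M @ diff)


def initial_kernel(vectors, M):
    """One plausible linear transformation of the learned metric into an
    initial kernel matrix: K_ij = v_i^T M v_j."""
    V = np.stack(vectors)
    return V @ M @ V.T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = covariance_descriptor(rng.normal(size=(200, 5)))
    B = covariance_descriptor(rng.normal(size=(200, 5)))
    print("geodesic distance:", geodesic_distance(A, B))
    # Sanity check: with M = I the Mahalanobis distance on log-vectors
    # reduces to the log-Euclidean distance ||log(A) - log(B)||_F.
    x, y = log_vectorize(A), log_vectorize(B)
    print("log-Euclidean (M = I):", mahalanobis_distance(x, y, np.eye(x.size)))
```

With M fixed to the identity, the Mahalanobis distance on log-vectors reduces to the log-Euclidean distance; the learning step described in the abstract instead chooses M so that the distances agree with the geodesic distances on the manifold.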
