Riemannian kernel based Nyström method for approximate infinite-dimensional covariance descriptors with application to image set classification

In pattern recognition, representing data with CovDs (Covariance Descriptors) and taking into account the geometry of the resulting Riemannian manifold have been widely adopted for image set classification. It has recently been shown that infinite-dimensional CovDs are more discriminative than their low-dimensional counterparts; however, they are only defined implicitly and are computationally expensive. We propose a novel framework for representing image sets by approximating infinite-dimensional CovDs within the paradigm of the Nyström method based on a Riemannian kernel. We first model the images via CovDs, which lie on the Riemannian manifold of SPD (Symmetric Positive Definite) matrices. We then extend the Nyström method to the SPD manifold to obtain explicit approximations of the CovDs in an RKHS (Reproducing Kernel Hilbert Space), and finally construct approximate infinite-dimensional CovDs from these representations. We apply the framework to image set classification; experimental results on three benchmark datasets show that the proposed approximate infinite-dimensional CovDs outperform the original CovDs.
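
The pipeline described above can be sketched concretely. The following is a minimal NumPy sketch, assuming a Log-Euclidean Gaussian kernel as the Riemannian kernel on the SPD manifold; the function names, the regularisation constant eps, the bandwidth sigma, and the choice of landmark SPD matrices are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def spd_logm(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition (always real)."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def covariance_descriptor(features, eps=1e-6):
    """d x n feature matrix -> regularised d x d SPD covariance descriptor."""
    C = np.cov(features)
    return C + eps * np.eye(C.shape[0])

def log_euclidean_kernel(X, Y, sigma=1.0):
    """Log-Euclidean Gaussian kernel between two SPD matrices."""
    d = np.linalg.norm(spd_logm(X) - spd_logm(Y), 'fro')
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def nystrom_feature_map(landmarks, sigma=1.0):
    """Nystrom approximation: m landmark SPD matrices -> explicit feature map z(.)."""
    m = len(landmarks)
    W = np.array([[log_euclidean_kernel(landmarks[i], landmarks[j], sigma)
                   for j in range(m)] for i in range(m)])
    eigval, eigvec = np.linalg.eigh(W)
    keep = eigval > 1e-10                            # drop near-zero eigenvalues
    proj = eigvec[:, keep] / np.sqrt(eigval[keep])   # m x r projection

    def z(X):
        kx = np.array([log_euclidean_kernel(X, L, sigma) for L in landmarks])
        return proj.T @ kx                           # r-dim approximate RKHS embedding
    return z

def approximate_infinite_covd(image_set, landmarks, sigma=1.0, eps=1e-6):
    """Image set (list of d x n feature matrices) -> approximate
    infinite-dimensional CovD, realised as an r x r SPD matrix."""
    z = nystrom_feature_map(landmarks, sigma)
    covds = [covariance_descriptor(f, eps) for f in image_set]
    Z = np.stack([z(C) for C in covds], axis=1)      # r x (set size)
    return covariance_descriptor(Z, eps)
```

Under these assumptions, the resulting r x r SPD matrices can be compared with any standard SPD metric (e.g. the Log-Euclidean distance) and passed to a conventional classifier for image set classification.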
