Element Rearrangement for Tensor-Based Subspace Learning

The success of tensor-based subspace learning depends heavily on reducing correlations along the column vectors of the mode-k flattened matrix. In this work, we study the problem of rearranging the elements within a tensor in order to maximize these correlations, so that the information redundancy in tensor data can be more extensively removed by existing tensor-based dimensionality reduction algorithms. An efficient iterative algorithm is proposed to tackle this essentially integer optimization problem. In each step, the tensor structure is refined with a spatially constrained Earth Mover's Distance procedure that incrementally rearranges each tensor's elements to make it more similar to its low-rank approximation, which has high correlation among features along certain tensor dimensions. Monotonic convergence of the algorithm is proved using an auxiliary function, analogous to the one used in proving the convergence of the Expectation-Maximization algorithm. In addition, we present an extension of the algorithm for supervised subspace learning with tensor data. Experiments on both unsupervised and supervised subspace learning demonstrate the effectiveness of the proposed algorithms in improving data compression performance and classification accuracy.
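The abstract describes the approach only at a high level, so the following is a minimal NumPy sketch of the general idea rather than the paper's actual method: the low-rank target is obtained with a truncated HOSVD-style projection, and the spatially constrained Earth Mover's Distance rearrangement step is replaced by a simple, unconstrained sorting-based assignment. The function names (multilinear_lowrank, rearrange_toward, rearrange_for_subspace_learning), the chosen multilinear ranks, and the iteration count are all illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: alternate between (1) computing a low multilinear-rank
# approximation of the tensor and (2) permuting the tensor's elements so that the
# tensor moves closer to that approximation. The permutation step here is an
# unconstrained stand-in for the paper's spatially constrained EMD procedure.
import numpy as np


def multilinear_lowrank(X, ranks):
    """Truncated HOSVD-style projection: multiply X along every mode by U_k U_k^T,
    where U_k holds the leading left singular vectors of the mode-k unfolding."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-k unfolding: rows index mode k, columns index the remaining modes.
        unfolded = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    approx = X
    for mode, U in enumerate(factors):
        # Mode-k product with the projector U U^T, then restore the axis order.
        approx = np.moveaxis(
            np.tensordot(U @ U.T, np.moveaxis(approx, mode, 0), axes=(1, 0)),
            0, mode)
    return approx


def rearrange_toward(X, target):
    """Permute the entries of X so that their sorted order matches the sorted
    order of the target's entries (unconstrained surrogate for the EMD step)."""
    flat = X.ravel()
    order_x = np.argsort(flat)
    order_t = np.argsort(target.ravel())
    rearranged = np.empty_like(flat)
    rearranged[order_t] = flat[order_x]  # k-th smallest value of X goes where the
    return rearranged.reshape(X.shape)   # k-th smallest target value sits


def rearrange_for_subspace_learning(X, ranks=(2, 2, 2), n_iters=10):
    """Iteratively refine the element arrangement toward a low-rank structure."""
    for _ in range(n_iters):
        target = multilinear_lowrank(X, ranks)
        X = rearrange_toward(X, target)
    return X


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((8, 8, 8))
    Y = rearrange_for_subspace_learning(X)
    # The rearranged tensor should be better captured by a low-rank approximation.
    print(np.linalg.norm(X - multilinear_lowrank(X, (2, 2, 2))))
    print(np.linalg.norm(Y - multilinear_lowrank(Y, (2, 2, 2))))
```

The sorting-based assignment used here is the optimal permutation of elements when no spatial constraint is imposed (by the rearrangement inequality, matching sorted orders minimizes the squared distance to the target), whereas the paper restricts admissible rearrangements through a spatially constrained Earth Mover's Distance formulation.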
