Lateral enhancement in adaptive metric learning for functional data

The amount of available functional data, such as time series and hyperspectra in remote sensing, is growing rapidly and requires efficient processing that takes the specific characteristics of these data into account. Typically, such data are high-dimensional, with inherent correlations between neighboring vector dimensions that reflect their functional nature. For high-dimensional data in particular, metric adaptation is an important tool in several learning methods for data discrimination and sparse representation. An important group of metric learning approaches is relevance and matrix learning in vector quantization. In this paper we consider functional variants of relevance and matrix learning. For an efficient learning of these functional relevance and matrix weights, we propose to exploit spatial neighborhood correlations between the vector dimensions. We show that this efficient enhancement scheme can be interpreted as a new dissimilarity measure in standard generalized learning vector quantization, emphasizing the functional nature of the data, so that theoretical results such as margin analysis remain valid.
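
The abstract does not spell out the underlying formulas, but as a rough, hedged illustration of the idea, the following Python sketch shows a relevance-weighted squared Euclidean distance of the kind used in relevance learning vector quantization, together with a hypothetical lateral-enhancement step that smooths the relevance profile over neighboring vector dimensions with a Gaussian kernel. The kernel shape, the width parameter, and all function names here are assumptions for illustration only, not details taken from the paper.

```python
import numpy as np

def lateral_kernel(dim, width=3.0):
    """Hypothetical Gaussian coupling between neighboring vector dimensions
    (the actual lateral-enhancement kernel is not specified in the abstract)."""
    idx = np.arange(dim)
    return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / width) ** 2)

def enhanced_relevance(lam, K):
    """Smooth the raw relevance profile over neighboring dimensions
    and renormalize, mimicking a lateral-enhancement step."""
    lam_smooth = K @ lam
    return lam_smooth / lam_smooth.sum()

def weighted_sq_distance(x, w, lam):
    """Relevance-weighted squared Euclidean distance between a data vector x
    and a prototype w, as used in relevance-LVQ-type models."""
    return np.sum(lam * (x - w) ** 2)

# Toy usage on random "functional" vectors.
dim = 100
rng = np.random.default_rng(0)
x, w = rng.normal(size=dim), rng.normal(size=dim)
lam = np.full(dim, 1.0 / dim)          # uniform initial relevance profile
K = lateral_kernel(dim, width=3.0)
lam_enh = enhanced_relevance(lam, K)
print(weighted_sq_distance(x, w, lam_enh))
```

Because the smoothed profile depends linearly on the raw relevance weights, the enhanced distance can equivalently be read as a modified dissimilarity measure plugged into standard generalized LVQ, which is consistent with the claim that margin-based guarantees carry over.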
