Fast Multilevel Transduction on Graphs

Recent years have witnessed a surge of interest in graph-based semi-supervised learning methods. The common denominator of these methods is that the data are represented by the nodes of a graph, whose edges encode the pairwise similarities of the data. Despite their theoretical and empirical success, these methods share a major bottleneck: high computational complexity, since they usually require the inversion of a large matrix. In this paper, we propose a multilevel scheme for speeding up traditional graph-based semi-supervised learning methods. Unlike other acceleration approaches based on purely mathematical derivations, our method has an explicit physical interpretation grounded in graph intuitions. We also analyze the relationship between our method and multigrid methods, and provide a theoretical guarantee on its performance. Finally, experimental results are presented to demonstrate the effectiveness of our method.
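To make the bottleneck concrete, the following is a minimal sketch of a standard graph-based transduction baseline in the style of the local-and-global-consistency formulation, not the multilevel method proposed here. The function name, the Gaussian kernel choice, and the parameters `sigma` and `alpha` are illustrative assumptions; the point is that the closed-form solution requires solving a dense n-by-n linear system, which is the cost the paper aims to reduce.

```python
import numpy as np

def graph_transduction(X, y, labeled_idx, sigma=1.0, alpha=0.99):
    """Illustrative baseline (not the paper's method): build a similarity
    graph over all points, then solve a dense linear system whose O(n^3)
    cost is the bottleneck that multilevel schemes target."""
    n = X.shape[0]
    # Pairwise Gaussian similarities (dense affinity matrix W, zero diagonal).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized graph: S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot label matrix Y; unlabeled rows remain zero.
    classes = np.unique(y[labeled_idx])
    Y = np.zeros((n, classes.size))
    for j, c in enumerate(classes):
        Y[labeled_idx[y[labeled_idx] == c], j] = 1.0
    # Closed-form solution F = (I - alpha * S)^{-1} Y: the expensive step.
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return classes[np.argmax(F, axis=1)]

# Tiny usage example: two Gaussian blobs, one labeled point per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
pred = graph_transduction(X, y, labeled_idx=np.array([0, 30]))
print((pred == y).mean())
```

Because the affinity matrix is dense and the solve scales cubically with the number of points, this baseline quickly becomes impractical for large datasets, which motivates the multilevel coarsening strategy studied in the paper.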
