Scaling Active Search using Linear Similarity Functions

Active Search has become an increasingly useful tool in information retrieval problems where the goal is to discover as many target elements as possible using only limited label queries. With the advent of big data, there is growing emphasis on the scalability of such techniques to very large and complex datasets. In this paper, we consider the problem of Active Search where we are given a similarity function between data points. We build on an algorithm introduced by Wang et al. [2013] for Active Search over graphs, which selects points by minimizing an energy function over the graph induced by the similarity function on the data, and propose modifications that allow it to scale significantly. Our modifications require the similarity function to be a dot product between feature vectors of data points, equivalent to having a linear kernel for the adjacency matrix. Under this assumption, the gains are substantial: for $n$ data points with $r$-dimensional features, the original algorithm runs in $O(n^2)$ time per iteration while ours runs in only $O(nr + r^2)$. We also describe a simple alternative using a weighted-neighbor predictor, which likewise scales well. Our experiments show that our method is competitive with existing semi-supervised approaches, and we briefly discuss conditions under which our algorithm performs well.
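The speedup described above comes from the low-rank structure a linear kernel gives the adjacency matrix: when $A = XX^\top$ for an $n \times r$ feature matrix $X$, matrix-vector products with $A$ can be computed in $O(nr)$ time without ever materializing the $n \times n$ matrix. The following is a minimal sketch of that idea only, not the authors' algorithm; all variable names are illustrative.

```python
import numpy as np

# Illustration of the linear-kernel trick: with similarity matrix
# A = X @ X.T for an (n x r) feature matrix X, a product A @ v
# costs O(nr) when factored, versus O(n^2) time and memory if
# A is formed explicitly.
rng = np.random.default_rng(0)
n, r = 1000, 16
X = rng.standard_normal((n, r))
v = rng.standard_normal(n)

# Naive: materializes the n x n similarity matrix.
naive = (X @ X.T) @ v

# Factored: X.T @ v is an r-vector, then one more O(nr) product.
fast = X @ (X.T @ v)

assert np.allclose(naive, fast)
```

The same associativity argument is why iterative updates over the similarity graph can avoid quadratic cost entirely when features are available.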

[1] Claudio Gentile, et al. Active Learning on Trees and Graphs, 2010, COLT.

[2] Yasuhiro Fujiwara, et al. Efficient Label Propagation, 2014, ICML.

[4] Xiaojin Zhu, et al. Harmonic mixtures: combining mixture models and graph-based methods for inductive and scalable semi-supervised learning, 2005, ICML.

[5] J. Lafferty, et al. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions, 2003, ICML.

[6] Mikhail Belkin, et al. Laplacian Support Vector Machines Trained in the Primal, 2009, J. Mach. Learn. Res.

[7] J. Varah. A lower bound for the smallest singular value of a matrix, 1975.

[8] Jeff A. Bilmes, et al. Label Selection on Graphs, 2009, NIPS.

[9] Roman Garnett, et al. Σ-Optimality for Active Learning on Gaussian Random Fields, 2013, NIPS.

[10] Jeff G. Schneider, et al. Active Search and Bandits on Graphs using Sigma-Optimality, 2015, UAI.

[13] Wei Liu, et al. Large Graph Construction for Scalable Semi-Supervised Learning, 2010, ICML.

[15] Dan Kushnir, et al. Active-transductive learning with label-adapted kernels, 2014, KDD.

[16] Roman Garnett, et al. Active search on graphs, 2013, KDD.

[17] Benjamin Recht, et al. Random Features for Large-Scale Kernel Machines, 2007, NIPS.

[18] Roman Garnett, et al. Bayesian Optimal Active Search and Surveying, 2012, ICML.