Generalized Min-Max Kernel and Generalized Consistent Weighted Sampling

We propose the "generalized min-max" (GMM) kernel as a measure of data similarity for vectors that may have both positive and negative entries. GMM is positive definite, as there is an associated hashing method, "generalized consistent weighted sampling" (GCWS), which linearizes this (nonlinear) kernel. A natural competitor of GMM is the radial basis function (RBF) kernel, whose corresponding hashing method is known as "random Fourier features" (RFF). An extensive experimental study of classification on 50 publicly available datasets demonstrates that both the GMM and RBF kernels can often substantially improve over linear classifiers, and that GCWS typically requires substantially fewer samples than RFF to achieve similar classification accuracies. To understand the properties of RFF, we derive its theoretical variance, which reveals a term that does not vanish at any level of similarity; in comparison, the variance of GCWS approaches zero at certain similarities. Overall, the variance of RFF relative to its expectation is substantially larger than the relative variance of GCWS, which helps explain the strong empirical performance of GCWS compared to RFF. We expect GMM and GCWS to be adopted in practice for large-scale statistical machine learning and for efficient near neighbor search (as GCWS generates discrete hash values).
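To make the definitions above concrete, here is a minimal NumPy sketch (not code from the paper) of the GMM kernel, computed through its sign-split transformation, and of GCWS hashing, sketched via Ioffe's consistent weighted sampling (2010) applied to the transformed vector. The function names and the per-hash seeding scheme that shares random draws across inputs are our own assumptions.

```python
import numpy as np

def transform(u):
    # Sign-split: positive parts in the first half, magnitudes of the
    # negative parts in the second half, giving a nonnegative vector.
    return np.concatenate([np.maximum(u, 0.0), np.maximum(-u, 0.0)])

def gmm_kernel(u, v):
    # GMM(u, v) = sum_i min(u~_i, v~_i) / sum_i max(u~_i, v~_i)
    ut, vt = transform(u), transform(v)
    denom = np.maximum(ut, vt).sum()
    return np.minimum(ut, vt).sum() / denom if denom > 0 else 0.0

def gcws_hash(u, num_hashes=128, seed=0):
    # One (k*, t*) sample per hash from Ioffe's CWS, applied to the
    # transformed vector; with shared random draws, two inputs collide
    # on a given hash with probability GMM(u, v).
    ut = transform(u)
    D = len(ut)
    nz = ut > 0
    if not nz.any():
        raise ValueError("all-zero vector has no GCWS hash")
    hashes = []
    for j in range(num_hashes):
        rng = np.random.default_rng(seed + j)  # identical draws for every input
        r = rng.gamma(2.0, 1.0, size=D)
        c = rng.gamma(2.0, 1.0, size=D)
        beta = rng.uniform(0.0, 1.0, size=D)
        t = np.floor(np.log(ut[nz]) / r[nz] + beta[nz])
        y = np.exp(r[nz] * (t - beta[nz]))
        a = np.full(D, np.inf)
        a[nz] = c[nz] / (y * np.exp(r[nz]))
        k = int(np.argmin(a))
        t_k = int(np.floor(np.log(ut[k]) / r[k] + beta[k]))
        hashes.append((k, t_k))
    return hashes

# The fraction of matching hashes estimates the GMM kernel.
u = np.array([1.0, -2.0, 0.5])
v = np.array([0.8, -2.0, 0.0])
print(gmm_kernel(u, v))                                           # exact: 0.8
print(np.mean([a == b for a, b in zip(gcws_hash(u), gcws_hash(v))]))
```

Separately, the non-vanishing variance term mentioned above can be made explicit with a short computation, sketched here for the standard RFF construction (Gaussian projections $w \sim N(0, I)$, offset $b \sim \mathrm{Uniform}[0, 2\pi]$, unit-bandwidth Gaussian kernel $\rho = e^{-\|x-y\|^2/2}$); the paper's normalization may differ:

$$\hat\rho = 2\cos(w^\top x + b)\cos(w^\top y + b), \qquad E[\hat\rho] = \rho, \qquad \mathrm{Var}[\hat\rho] = \frac{(1-\rho^2)^2}{2} + \frac{1}{2}.$$

The constant $1/2$ persists even at $\rho = 1$. By contrast, a single GCWS hash collision is a Bernoulli trial with success probability GMM, so its variance $\mathrm{GMM}(1-\mathrm{GMM})$ vanishes as the similarity approaches 0 or 1.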
