The recently proposed "generalized min-max" (GMM) kernel can be efficiently linearized, with direct applications in large-scale statistical learning and fast near neighbor search. The linearized GMM kernel has been extensively compared in prior work with the linearized radial basis function (RBF) kernel. On a large number of classification tasks, the tuning-free GMM kernel performs (surprisingly) well compared to the best-tuned RBF kernel. Nevertheless, one would naturally expect that the GMM kernel could be further improved by introducing tuning parameters.
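For readers unfamiliar with the kernel, the definition used in the referenced GMM-kernel papers can be recapped as follows: each data vector x in R^D is first expanded into a nonnegative vector in R^{2D} by splitting every coordinate into a positive part and a negated negative part, i.e., x~_{2i-1} = x_i and x~_{2i} = 0 if x_i > 0, while x~_{2i-1} = 0 and x~_{2i} = -x_i otherwise. The tuning-free GMM kernel is then the min-over-max ratio

\[
\mathrm{GMM}(x, y) \;=\; \frac{\sum_{j=1}^{2D} \min(\tilde{x}_j, \tilde{y}_j)}{\sum_{j=1}^{2D} \max(\tilde{x}_j, \tilde{y}_j)} .
\]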
In this paper, we study three simple constructions of tunable GMM kernels: (i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM) kernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel can still be efficiently linearized by modifying the original hashing procedure for the GMM kernel. On about 60 publicly available classification datasets, we verify that the proposed tunable GMM kernels typically improve over the original GMM kernel. On some datasets, the improvements can be astonishingly significant.
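To make the constructions concrete, the following minimal Python sketch computes the exact GMM value and one plausible form of the powered variant. The placement of the tuning parameter p (applied elementwise to the expanded nonnegative data before taking the min/max ratio, which is what would let the original hashing procedure still apply) is an illustrative assumption rather than a restatement of the paper's exact eGMM/pGMM/epGMM definitions, and the function names are hypothetical.

import numpy as np

def expand_nonnegative(x):
    """Split each coordinate of x into a positive part and a negated negative part,
    producing a nonnegative vector of length 2D (ordering does not affect the sums)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

def gmm_kernel(x, y):
    """Exact (tuning-free) GMM kernel: sum of elementwise minima divided by
    sum of elementwise maxima of the expanded nonnegative vectors."""
    xt, yt = expand_nonnegative(x), expand_nonnegative(y)
    denom = np.sum(np.maximum(xt, yt))
    return np.sum(np.minimum(xt, yt)) / denom if denom > 0 else 0.0

def pgmm_kernel(x, y, p=1.0):
    """Illustrative powered-GMM (pGMM): the tuning parameter p is applied
    elementwise to the expanded data before the min/max ratio.
    This placement of p is an assumption made for illustration only."""
    xt, yt = expand_nonnegative(x) ** p, expand_nonnegative(y) ** p
    denom = np.sum(np.maximum(xt, yt))
    return np.sum(np.minimum(xt, yt)) / denom if denom > 0 else 0.0

if __name__ == "__main__":
    x = np.array([0.5, -1.2, 3.0])
    y = np.array([0.4, -0.7, 2.5])
    print("GMM :", gmm_kernel(x, y))
    print("pGMM:", pgmm_kernel(x, y, p=0.5))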
For example, on 11 popular datasets which were used for testing deep learning algorithms and tree methods, our experiments show that the proposed tunable GMM kernels are strong competitors to trees and deep nets. Previous studies developed tree methods, including "abc-robust-logitboost", and demonstrated their excellent performance on those 11 datasets (and other datasets) by establishing the second-order tree-split formula and new derivatives of the multi-class logistic loss. Compared to tree methods such as "abc-robust-logitboost" (which are slow and require substantial model sizes), the tunable GMM kernels produce largely comparable results.
[1] Ping Li, et al. Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost. UAI, 2010.
[2] Ping Li, et al. 0-Bit Consistent Weighted Sampling. KDD, 2015.
[3] Y. Freund, et al. Discussion of the Paper "Additive Logistic Regression: A Statistical View of Boosting". 2000.
[4] Yoshua Bengio, et al. An empirical evaluation of deep architectures on problems with many factors of variation. ICML, 2007.
[5] Ping Li, et al. Generalized Min-Max Kernel and Generalized Consistent Weighted Sampling. arXiv, 2016.
[6] Ping Li. Nystrom Method for Approximating the GMM Kernel. arXiv, 2016.
[7] Ping Li. Adaptive Base Class Boost for Multi-class Classification. arXiv, 2008.
[8] Sergey Ioffe, et al. Improved Consistent Sampling, Weighted Minhash and L1 Sketching. IEEE International Conference on Data Mining (ICDM), 2010.
[9] Ping Li, et al. ABC-boost: adaptive base class boost for multi-class classification. ICML, 2009.
[10] Kunal Talwar, et al. Consistent Weighted Sampling. 2007.
[11] Ping Li. Linearized GMM Kernels and Normalized Random Fourier Features. KDD, 2017.
[12] J. Friedman. Greedy function approximation: A gradient boosting machine. 2001.