A Witness Two-Sample Test

The Maximum Mean Discrepancy (MMD) has been the state-of-the-art nonparametric test for tackling the two-sample problem. Its test statistic is the difference in expectations of the witness function, a real-valued function defined as a weighted sum of kernel evaluations on a set of basis points. Typically the kernel is optimized on a training set, and hypothesis testing is performed on a separate test set to avoid overfitting (i.e., to control the type-I error). That is, the test set is used to simultaneously estimate the expectations and define the basis points, while the training set serves only to select the kernel and is then discarded. In this work, we argue that this data splitting scheme is overly conservative, and propose to use the training data to also define the weights and the basis points for better data efficiency. We show that 1) the new test is consistent and has a well-controlled type-I error; 2) the optimal witness function is given by a precision-weighted mean in the reproducing kernel Hilbert space associated with the kernel, and is closely related to kernel Fisher discriminant analysis; and 3) the power of the proposed test is comparable to or exceeds that of the MMD and other modern tests, as verified empirically on challenging synthetic and real problems (e.g., the Higgs dataset).
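The following is a minimal sketch of the scheme the abstract describes, not the authors' implementation: the witness's weights and basis points are fit on the training split via one standard regularized kernel-Fisher-discriminant form, and only the two expectations are estimated on the held-out split, with a one-sided z-test under the asymptotic normal approximation. The Gaussian kernel, the bandwidth, the regularizer `lam`, and all function names here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.spatial.distance import cdist

def gaussian_kernel(A, B, bandwidth):
    """k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * bandwidth ** 2))

def fit_witness(X_tr, Y_tr, bandwidth, lam=1e-3):
    """Fit witness h(z) = sum_i alpha_i k(z_i, z) on the training split.

    alpha solves a regularized KFDA system ((1/n) H K + lam I) alpha = d,
    where H centers within each sample (so (1/n) Phi H Phi^T is the pooled
    covariance) and Phi d = mu_hat_P - mu_hat_Q is the mean-embedding gap.
    """
    n1, n2 = len(X_tr), len(Y_tr)
    Z = np.vstack([X_tr, Y_tr])                 # basis points from training data
    K = gaussian_kernel(Z, Z, bandwidth)
    # block-diagonal within-class centering matrix
    H = np.zeros((n1 + n2, n1 + n2))
    H[:n1, :n1] = np.eye(n1) - 1.0 / n1
    H[n1:, n1:] = np.eye(n2) - 1.0 / n2
    d = np.concatenate([np.full(n1, 1.0 / n1), np.full(n2, -1.0 / n2)])
    alpha = np.linalg.solve(H @ K / (n1 + n2) + lam * np.eye(n1 + n2), d)
    return Z, alpha

def witness_test(X, Y, bandwidth=1.0, lam=1e-3, train_frac=0.5, seed=0):
    """Return a p-value: one-sided z-test on held-out witness evaluations."""
    rng = np.random.default_rng(seed)
    X, Y = rng.permutation(X), rng.permutation(Y)
    nx, ny = int(train_frac * len(X)), int(train_frac * len(Y))
    Z, alpha = fit_witness(X[:nx], Y[:ny], bandwidth, lam)
    hx = gaussian_kernel(X[nx:], Z, bandwidth) @ alpha   # witness on test split
    hy = gaussian_kernel(Y[ny:], Z, bandwidth) @ alpha
    tau = hx.mean() - hy.mean()                          # test statistic
    se = np.sqrt(hx.var(ddof=1) / len(hx) + hy.var(ddof=1) / len(hy))
    return norm.sf(tau / se)   # small p-value => reject H0: P = Q

# Example: two Gaussians with shifted means should yield a small p-value.
p_value = witness_test(np.random.randn(500, 2), np.random.randn(500, 2) + 0.3)
```

Note the division of labor: the held-out split never influences the basis points or the weights, so the witness is a fixed function at test time and its studentized mean difference is asymptotically normal under the null, which is what licenses the z-test above.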
