KSD Aggregated Goodness-of-fit Test
[1] G. Reinert et al. On RKHS Choices for Assessing Graph Generators via Kernel Stein Statistics, 2022, arXiv.
[2] F. Briol et al. Towards Healing the Blindness of Score Matching, 2022, arXiv.
[3] Tamara Fernández et al. A general framework for the analysis of kernel-based tests, 2022, arXiv:2209.00124.
[4] A. Gretton et al. Efficient Aggregated Kernel Tests using Incomplete U-statistics, 2022, NeurIPS.
[5] A. Duncan et al. A Fourier representation of kernel Stein discrepancy with application to Goodness-of-Fit tests for measures on infinite dimensional Hilbert spaces, 2022, arXiv:2206.04552.
[6] G. Reinert et al. A Kernelised Stein Statistic for Assessing Implicit Generative Models, 2022, NeurIPS.
[7] G. Reinert et al. AgraSSt: Approximate Graph Stein Statistics for Interpretable Assessment of Implicit Graph Generators, 2022, NeurIPS.
[8] A. Gretton et al. Composite Goodness-of-fit Tests with Kernels, 2021, arXiv.
[9] B. Laurent et al. MMD Aggregated Two-Sample Test, 2021, J. Mach. Learn. Res.
[10] Wenkai Xu. Standardisation-function Kernel Stein Discrepancy: A Unifying View on Kernel Stein Discrepancy Tests for Goodness-of-fit, 2021, AISTATS.
[11] Pierre-Cyril Aubin-Frankowski et al. Kernel Stein Discrepancy Descent, 2021, ICML.
[12] Takeru Matsuda et al. Interpretable Stein Goodness-of-fit Tests on Riemannian Manifold, 2021, ICML.
[13] Jonas M. Kübler et al. A Witness Two-Sample Test, 2021, AISTATS.
[14] M. Yamada et al. Post-selection inference with HSIC-Lasso, 2020, ICML.
[15] A. Duncan et al. A Kernel Two-Sample Test for Functional Data, 2020, J. Mach. Learn. Res.
[16] Heishiro Kanagawa et al. Blindness of score-based methods to isolated components and mixing proportions, 2020, arXiv:2008.10087.
[17] Arthur Gretton et al. Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data, 2020, ICML.
[18] Jonas M. Kübler et al. Learning Kernel Tests Without Data Splitting, 2020, NeurIPS.
[19] L. Wasserman et al. Minimax optimality of permutation tests, 2020, The Annals of Statistics.
[20] Bernhard Schölkopf et al. Testing Goodness of Fit of Conditional Density Models with Kernels, 2020, UAI.
[21] Feng Liu et al. Learning Deep Kernels for Non-Parametric Two-Sample Tests, 2020, ICML.
[22] Takeru Matsuda et al. A Stein Goodness-of-fit Test for Directional Distributions, 2020, AISTATS.
[23] Richard Zemel et al. Learning the Stein Discrepancy for Training and Evaluating Energy-Based Models without Sampling, 2020, ICML.
[24] Bernhard Schölkopf et al. Kernel Stein Tests for Multiple Model Comparison, 2019, NeurIPS.
[25] Jen Ning Lim et al. More Powerful Selective Kernel Tests for Feature Selection, 2019, AISTATS.
[26] B. Laurent et al. Adaptive test of independence based on HSIC measures, 2019, The Annals of Statistics.
[27] M. Yuan et al. On the Optimality of Gaussian Kernel Based Nonparametric Tests against Smooth Alternatives, 2019, arXiv:1909.03302.
[28] Kenji Fukumizu et al. A Kernel Stein Test for Comparing Latent Variable Models, 2019, Journal of the Royal Statistical Society Series B: Statistical Methodology.
[29] Alessandro Barp et al. Minimum Stein Discrepancy Estimators, 2019, NeurIPS.
[30] Bernhard Schölkopf et al. Informative Features for Model Comparison, 2018, NeurIPS.
[31] Arthur Gretton et al. A maximum-mean-discrepancy goodness-of-fit test for censored data, 2018, AISTATS.
[32] Prafulla Dhariwal et al. Glow: Generative Flow with Invertible 1x1 Convolutions, 2018, NeurIPS.
[33] Lester Mackey et al. Random Feature Stein Discrepancies, 2018, NeurIPS.
[34] Kenji Fukumizu et al. Post Selection Inference with Incomplete Maximum Mean Discrepancy Estimator, 2018, ICLR.
[35] Krishnakumar Balasubramanian et al. On the Optimality of Kernel-Embedding Based Goodness-of-Fit Tests, 2017, J. Mach. Learn. Res.
[36] Kenji Fukumizu et al. A Linear-Time Kernel Goodness-of-Fit Test, 2017, NIPS.
[37] Lester W. Mackey et al. Measuring Sample Quality with Kernels, 2017, ICML.
[38] Bernhard Schölkopf et al. Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels, 2016, NIPS.
[39] Alexander J. Smola et al. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy, 2016, ICLR.
[40] Samy Bengio et al. Density estimation using Real NVP, 2016, ICLR.
[41] M. Girolami et al. Convergence rates for a class of estimators based on Stein's method, 2016, Bernoulli.
[42] Qiang Liu et al. A Kernelized Stein Discrepancy for Goodness-of-fit Tests, 2016, ICML.
[43] Arthur Gretton et al. A Kernel Test of Goodness of Fit, 2016, ICML.
[44] Lester W. Mackey et al. Measuring Sample Quality with Stein's Method, 2015, NIPS.
[45] M. Girolami et al. Control Functionals for Quasi-Monte Carlo Integration, 2015, AISTATS.
[46] Arthur Gretton et al. A Wild Bootstrap for Degenerate Kernel Tests, 2014, NIPS.
[47] Anne Leucht et al. Dependent wild bootstrap for degenerate U- and V-statistics, 2013, J. Multivar. Anal.
[48] Sivaraman Balakrishnan et al. Optimal kernel choice for large-scale two-sample tests, 2012, NIPS.
[49] Matthieu Lerasle et al. Kernels Based Tests with Non-asymptotic Bootstrap Approaches for Two-sample Problems, 2012, COLT.
[50] A. Leucht et al. Degenerate U- and V-statistics under weak dependence: Asymptotic theory and bootstrap consistency, 2012, arXiv:1205.1892.
[51] B. Laurent et al. The two-sample problem for Poisson processes: adaptive tests with a non-asymptotic wild bootstrap approach, 2012, arXiv:1203.3572.
[52] Bernhard Schölkopf et al. A Kernel Two-Sample Test, 2012, J. Mach. Learn. Res.
[53] Kenji Fukumizu et al. Universality, Characteristic Kernels and RKHS Embedding of Measures, 2010, J. Mach. Learn. Res.
[54] X. Shao. The Dependent Wild Bootstrap, 2010.
[55] C. Carmeli et al. Vector valued reproducing kernel Hilbert spaces and universality, 2008, arXiv:0807.1659.
[56] Bernhard Schölkopf et al. Measuring Statistical Dependence with Hilbert-Schmidt Norms, 2005, ALT.
[57] Joseph P. Romano et al. Exact and Approximate Stepdown Methods for Multiple Hypothesis Testing, 2003.
[58] Y. Baraud. Non-asymptotic minimax rates of testing in signal detection, 2002.
[59] Yu. I. Ingster. Minimax testing of the hypothesis of independence for ellipsoids in $l_p$, 1996.
[60] W. Stute et al. Bootstrap based goodness-of-fit-tests, 1993.
[61] Stefun D. Leigh. U-Statistics: Theory and Practice, 1992.
[62] P. Massart. The Tight Constant in the Dvoretzky-Kiefer-Wolfowitz Inequality, 1990.
[63] Yu. I. Ingster. Minimax Testing of Nonparametric Hypotheses on a Distribution Density in the $L_p$ Metrics, 1987.
[64] J. Kiefer et al. Asymptotic Minimax Character of the Sample Distribution Function and of the Classical Multinomial Estimator, 1956.
[65] N. Aronszajn. Theory of Reproducing Kernels, 1950.
[66] W. Hoeffding. A Class of Statistics with Asymptotically Normal Distribution, 1948.
[67] Wenkai Xu et al. Generalised Kernel Stein Discrepancy (GKSD): A Unifying Approach for Non-parametric Goodness-of-fit Testing, 2021.
[68] Gesine Reinert et al. A Stein Goodness-of-fit Test for Exponential Random Graph Models, 2021, AISTATS.
[69] Gesine Reinert et al. Stein's Method Meets Statistics: A Review of Some Recent Developments, 2021.
[70] M. Frei. Decoupling: From Dependence to Independence, 2016.
[71] P. Diaconis et al. Use of exchangeable pairs in the analysis of simulations, 2004.
[72] Yoshua Bengio et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[73] C. Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables, 1972.