[1] Francis R. Bach, et al. On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions, 2015, J. Mach. Learn. Res.
[2] Francis R. Bach, et al. Breaking the Curse of Dimensionality with Convex Neural Networks, 2014, J. Mach. Learn. Res.
[3] Yoshua Bengio, et al. Mutual Information Neural Estimation, 2018, ICML.
[4] Sebastian Nowozin, et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016, NIPS.
[5] Léon Bottou, et al. Wasserstein Generative Adversarial Networks, 2017, ICML.
[6] Lucas Theis, et al. Amortised MAP Inference for Image Super-resolution, 2016, ICLR.
[7] Sebastian Nowozin, et al. Which Training Methods for GANs do actually Converge?, 2018, ICML.
[8] Robert Krauthgamer, et al. Navigating nets: simple algorithms for proximity search, 2004, SODA '04.
[9] E. J. McShane, et al. Extension of range of functions, 1934.
[10] Alexander A. Alemi, et al. On Variational Bounds of Mutual Information, 2019, ICML.
[11] Roger B. Grosse, et al. Isolating Sources of Disentanglement in Variational Autoencoders, 2018, NeurIPS.
[12] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[13] Sebastian Nowozin, et al. Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks, 2017, ICML.
[14] Benjamin Recht, et al. Random Features for Large-Scale Kernel Machines, 2007, NIPS.
[15] Oriol Vinyals, et al. Representation Learning with Contrastive Predictive Coding, 2018, arXiv.
[16] Lee-Ad Gottlieb, et al. Efficient Regression in Metric Spaces via Approximate Lipschitz Extension, 2011, IEEE Transactions on Information Theory.
[17] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[18] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[19] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[20] H. Whitney. Analytic Extensions of Differentiable Functions Defined in Closed Sets, 1934.
[21] Aryeh Kontorovich, et al. Maximum Margin Multiclass Nearest Neighbors, 2014, ICML.
[22] Arthur Gretton, et al. Demystifying MMD GANs, 2018, ICLR.
[23] H. Triebel, et al. Function Spaces, Entropy Numbers, Differential Operators: Function Spaces, 1996.
[24] Stefano Ermon, et al. Understanding the Limitations of Variational Mutual Information Estimators, 2020, ICLR.
[25] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[26] J. Cima, et al. On weak* convergence in H¹, 1996.
[27] Felipe Cucker, et al. On the mathematical foundations of learning, 2001.
[28] Martin J. Wainwright, et al. Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization, 2008, IEEE Transactions on Information Theory.
[29] Anthony Widjaja. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, 2003, IEEE Transactions on Neural Networks.
[30] Kartik Ahuja. Estimating Kullback-Leibler Divergence Using Kernel Machines, 2019, 53rd Asilomar Conference on Signals, Systems, and Computers.