Generative adversarial networks (GANs) offer an alternative to conventional algorithms for density estimation, which use data to assess how likely new samples are to have been drawn from the same distribution. Rather than computing these probabilities explicitly, GANs learn a generator that matches the given probabilistic source. This paper examines that matching capability in the context of problems with one-dimensional outputs. We identify a class of function decompositions with properties that make them well suited to the critic role in a leading GAN variant, the Wasserstein GAN. We show that Taylor and Fourier series decompositions belong to this class, present examples in which these critics outperform standard GAN approaches, and suggest how they can be scaled to higher-dimensional problems in the future.
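To make the critic idea concrete, the sketch below (not the paper's algorithm; the sample distributions, harmonic count, and Lipschitz normalization are all illustrative assumptions) parameterizes a 1-D Wasserstein-style critic as a truncated Fourier series. Because such a critic is linear in its coefficients, the objective E_real[f] - E_fake[f] is maximized over the coefficients without any iterative training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D sample sets standing in for "real" and "generated" data
# (illustrative only; the distributions are assumptions for this sketch).
real = rng.normal(0.0, 1.0, size=2000)
fake = rng.normal(0.5, 1.0, size=2000)

K = 8                     # number of harmonics in the truncated series
omega = 2 * np.pi / 8.0   # base frequency over an assumed support width

def fourier_features(x):
    """Feature map [cos(k*w*x)..., sin(k*w*x)...] for k = 1..K."""
    k = np.arange(1, K + 1)
    phase = omega * np.outer(x, k)                  # shape (n, K)
    return np.concatenate([np.cos(phase), np.sin(phase)], axis=1)

# The critic f(x) = sum_k a_k cos(k*w*x) + b_k sin(k*w*x) is linear in
# (a, b), so the objective  E_real[f] - E_fake[f]  grows along the mean
# feature difference; point the coefficients that way ...
diff = fourier_features(real).mean(axis=0) - fourier_features(fake).mean(axis=0)

# ... then rescale so the derivative bound |f'(x)| <= sum_k k*w*(|a_k|+|b_k|)
# equals 1, a crude stand-in for the 1-Lipschitz constraint of WGANs.
k_weights = np.tile(np.arange(1, K + 1), 2)   # k for the cos block, then sin
coef = diff / np.sum(k_weights * omega * np.abs(diff))

gap = (fourier_features(real) @ coef).mean() - (fourier_features(fake) @ coef).mean()
print(f"critic's estimate of the distributional gap: {gap:.4f}")
```

The estimated gap is positive whenever the two sample sets differ in their Fourier feature means, which is what lets the critic supply a useful training signal to a generator.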