Generative Trees: Adversarial and Copycat

While Generative Adversarial Networks (GANs) achieve spectacular results on unstructured data like images, a gap remains on tabular data, for which state-of-the-art supervised learning still largely favours decision tree (DT)-based models. This paper proposes a new path forward for the generation of tabular data, exploiting decades-old understanding of the supervised task's best components for DT induction, from losses (properness) and models (tree-based) to algorithms (boosting). The properness condition on the supervised loss – which postulates the optimality of Bayes rule – leads us to a variational GAN-style loss formulation that is tight when discriminators meet a calibration property trivially satisfied by DTs and, under common assumptions about the supervised loss, yields "one loss to train against them all" for the generator: the χ² divergence. We then introduce tree-based generative models, generative trees (GTs), meant to mirror on the generative side the good properties of DTs for classifying tabular data, together with a boosting-compliant adversarial training algorithm for GTs. We also introduce copycat training, in which the generator copies at run time the underlying tree (graph) of the discriminator DT and completes it for the hardest discriminative task, with boosting-compliant convergence. We test our algorithms on tasks including fake/real distinction, training from fake data, and missing data imputation. On each of these tasks, GTs prove to be comparatively simple – and interpretable – contenders to sophisticated state-of-the-art methods for data generation (using neural network models) or missing data imputation (relying on multiple imputation by chained equations with complex tree-based modeling).
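For intuition, the sketch below shows how a generative tree could draw one tabular observation under the setup described above: each leaf carries a uniform density over the axis-parallel box carved out by the splits on its root-to-leaf path, and arcs carry branching probabilities (in copycat training, the tree graph itself is copied from the discriminator DT). The Node class and sample function are illustrative names, not the authors' implementation, and only continuous features are handled for brevity. For reference, the χ² (Pearson) divergence mentioned above is χ²(P‖Q) = ∫ (p(x) − q(x))²/q(x) dx.

    # Minimal sketch of a generative tree (assumption: not the paper's code).
    # Internal nodes hold a branching probability; leaves hold the axis-parallel
    # box induced by their root-to-leaf splits. Sampling = stochastic
    # root-to-leaf walk, then a uniform draw inside the reached leaf's box.
    import random

    class Node:
        def __init__(self, p_left=None, left=None, right=None, box=None):
            self.p_left = p_left           # P(go left) at an internal node
            self.left, self.right = left, right
            self.box = box                 # leaf support: [(lo, hi)] per feature

        def is_leaf(self):
            return self.left is None

    def sample(node):
        """Draw one synthetic example from the generative tree."""
        while not node.is_leaf():
            node = node.left if random.random() < node.p_left else node.right
        return [random.uniform(lo, hi) for lo, hi in node.box]

    # Toy GT on [0, 1]^2: the left leaf (x0 < 0.5) gets 70% of the mass.
    leaf_l = Node(box=[(0.0, 0.5), (0.0, 1.0)])
    leaf_r = Node(box=[(0.5, 1.0), (0.0, 1.0)])
    root = Node(p_left=0.7, left=leaf_l, right=leaf_r)
    fakes = [sample(root) for _ in range(1000)]  # ~700 samples with x0 < 0.5

Adversarial or copycat training would then adjust the splits and branching probabilities so that such samples become hard for the DT discriminator to distinguish from real data.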
