Inefficiency of Data Augmentation for Large Sample Imbalanced Data

Many modern applications collect data that are large in sample size and highly imbalanced across categories, with some categories relatively rare. Bayesian hierarchical models are well motivated in such settings: they provide a principled way to borrow information across categories to combat data sparsity while quantifying uncertainty in estimation. A fundamental problem, however, is scaling posterior computation to massive sample sizes. In categorical data models, posterior computation commonly relies on data augmentation Gibbs sampling. In this article, we study the computational efficiency of such algorithms in the large sample imbalanced regime, showing that mixing is extremely poor, with a spectral gap that converges to zero at a rate inversely proportional to the square root of the sample size, or faster. This theoretical result is verified empirically in simulations and in an application to a computational advertising data set. In contrast, algorithms that bypass data augmentation mix rapidly on the same data set.

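To make the setup concrete, the sketch below implements one common data augmentation Gibbs sampler of this kind, the Albert–Chib sampler for Bayesian probit regression, and runs it on a deliberately imbalanced example (a single success out of many trials). This is an illustration under assumptions, not the exact model or data from the article: the intercept-only design, the N(0, 100) prior, the sample size, and the function name probit_da_gibbs are all choices made for the demo.

```python
# Illustrative sketch (assumed setup, not the article's exact algorithm or data):
# Albert-Chib data-augmentation Gibbs sampler for Bayesian probit regression.
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=2000, prior_var=100.0, seed=0):
    """Data-augmentation Gibbs sampler for probit regression.

    Latent z_i ~ N(x_i' beta, 1) with y_i = 1{z_i > 0};
    beta has a N(0, prior_var * I) prior.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(np.eye(p) / prior_var + X.T @ X)  # posterior covariance of beta | z
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # 1) Sample latent utilities z | beta, y from truncated normals.
        mu = X @ beta
        lower = np.where(y == 1, -mu, -np.inf)   # z > 0 when y = 1
        upper = np.where(y == 1, np.inf, -mu)    # z < 0 when y = 0
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # 2) Sample beta | z from its Gaussian full conditional N(V X'z, V).
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws

# Imbalanced intercept-only example: one success in n = 10_000 trials.
n = 10_000
X = np.ones((n, 1))
y = np.zeros(n, dtype=int)
y[0] = 1
samples = probit_da_gibbs(X, y)
# Slow mixing shows up as high autocorrelation in samples[:, 0].
```

Standard diagnostics, such as the lag-one autocorrelation or effective sample size of the intercept draws, should reveal the poor mixing worsening as the sample size grows and the imbalance becomes more extreme.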