On the inductive bias of dropout

Dropout is a simple but effective technique for learning in neural networks and other settings. A sound theoretical understanding of dropout is needed to determine when dropout should be applied and how to use it most effectively. In this paper we continue the exploration of dropout as a regularizer pioneered by Wager et al. [17]. We focus on linear classification, where a convex proxy to the misclassification loss (such as the logistic loss used in logistic regression) is minimized. We show: (a) when the dropout-regularized criterion has a unique minimizer, (b) when the dropout-regularization penalty goes to infinity with the weights, and when it remains bounded, (c) that the dropout regularization penalty can be non-monotonic as individual weights increase from 0, and (d) that the dropout regularization penalty may not be convex. This last point is particularly surprising because the combination of dropout regularization with any convex loss proxy is always a convex function. To contrast dropout regularization with $L_2$ regularization, we formalize the notion of when different sources are more compatible with different regularizers. We then exhibit distributions that are provably more compatible with dropout regularization than with $L_2$ regularization, and vice versa. These sources provide additional insight into how the inductive biases of dropout and $L_2$ regularization differ. We provide some similar results for $L_1$ regularization.
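
To make the linear-classification setting concrete, the minimal Python sketch below computes the dropout criterion for a single example: the logistic loss averaged over all random dropout masks, where each coordinate is kept independently with probability 1 - q and surviving coordinates are rescaled by 1/(1 - q). The function names, the brute-force enumeration over masks, and the choice q = 0.5 in the usage line are illustrative assumptions rather than the paper's notation; the penalty is taken here to be the gap between the dropout criterion and the plain logistic loss, following the adaptive-regularization view of Wager et al. [17].

```python
import itertools
import numpy as np

def logistic_loss(margin):
    # Numerically stable log(1 + exp(-margin)).
    return np.logaddexp(0.0, -margin)

def dropout_criterion(w, x, y, q):
    # Exact expected logistic loss of the linear classifier w on example (x, y)
    # when each coordinate of x is zeroed independently with probability q and
    # surviving coordinates are rescaled by 1 / (1 - q).
    d = len(x)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=d):
        mask = np.array(bits, dtype=float)
        prob = np.prod(np.where(mask == 1.0, 1.0 - q, q))
        margin = y * np.dot(w, mask * x / (1.0 - q))
        total += prob * logistic_loss(margin)
    return total

def dropout_penalty(w, x, y, q):
    # Dropout regularization penalty: the gap between the dropout criterion
    # and the plain (undropped) logistic loss.
    return dropout_criterion(w, x, y, q) - logistic_loss(y * np.dot(w, x))

# Example usage on a single 3-dimensional example with dropout probability 0.5.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 1.0, -0.7])
print(dropout_penalty(w, x, y=1.0, q=0.5))
```

The enumeration costs 2^d loss evaluations for d features, so this is only a tool for inspecting the penalty in low dimensions, for instance to observe how it can be non-monotonic as a single weight grows from 0.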

[1] Carlos S. Kubrusly et al. Stochastic approximation algorithms and applications, CDC 1973.

[2] E. Slud. Distribution Inequalities for the Binomial Law, 1977.

[3] L. Breiman. Some Infinity Theory for Predictor Ensembles, 2000.

[4] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization, 2003.

[5] L. Breiman. Population theory for boosting ensembles, 2003.

[6] Claudio Gentile et al. A Second-Order Perceptron Algorithm, SIAM J. Comput., 2002.

[7] Michael I. Jordan et al. Convexity, Classification, and Risk Bounds, 2006.

[8] Rocco A. Servedio et al. Random classification noise defeats all convex potential boosters, ICML 2008.

[9] A. Dasgupta. Asymptotic Theory of Statistics and Probability, 2008.

[10] Yoram Singer et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, J. Mach. Learn. Res., 2011.

[11] Philip M. Long et al. On the Necessity of Irrelevant Variables, ICML 2011.

[12] Nitish Srivastava et al. Improving neural networks by preventing co-adaptation of feature detectors, arXiv, 2012.

[13] Tara N. Sainath et al. Improving deep neural networks for LVCSR using rectified linear units and dropout, IEEE ICASSP 2013.

[14] Yann LeCun et al. Regularization of Neural Networks using DropConnect, ICML 2013.

[15] Geoffrey Zweig et al. Recent advances in deep learning for speech research at Microsoft, IEEE ICASSP 2013.

[16] Pierre Baldi et al. Understanding Dropout, NIPS 2013.

[17] Stefan Wager et al. Dropout Training as Adaptive Regularization, NIPS 2013.

[18] Christopher D. Manning et al. Fast dropout training, ICML 2013.

[19] Philip Bachman et al. Learning with Pseudo-Ensembles, NIPS 2014.

[20] Pierre Baldi et al. The dropout learning algorithm, Artif. Intell., 2014.

[21] Nitish Srivastava et al. Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res., 2014.

[22] Ambuj Tewari et al. Online Linear Optimization via Smoothing, COLT 2014.

[23] Wojciech Kotlowski et al. Follow the Leader with Dropout Perturbations, COLT 2014.

[24] Stefan Wager et al. Altitude Training: Strong Bounds for Single-Layer Dropout, NIPS 2014.

[25] Guigang Zhang et al. Deep Learning, Int. J. Semantic Comput., 2016.