Why flatness does and does not correlate with generalization for deep neural networks

The intuition that local flatness of the loss landscape is correlated with better generalization for deep neural networks (DNNs) has been explored for decades, spawning many different flatness measures. Recently, this link with generalization has been called into question by the demonstration that many measures of flatness are vulnerable to parameter re-scalings that arbitrarily change their value without changing the network's outputs. Here we show that, in addition, some popular variants of SGD such as Adam and Entropy-SGD can also break the flatness-generalization correlation. As an alternative to flatness measures, we adopt a function-based picture and propose the log of the Bayesian prior upon initialization, log P(f), as a predictor of generalization when a DNN converges on function f after training to zero error. The prior is directly proportional to the Bayesian posterior for functions that give zero error on the training set. For image classification, we show that log P(f) is a significantly more robust predictor of generalization than flatness measures are. Whereas local flatness measures fail under parameter re-scaling, the prior/posterior, being a global quantity, remains invariant under re-scaling. Moreover, its correlation with generalization as a function of data complexity remains good for different variants of SGD.
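To make the re-scaling argument concrete, here is a minimal NumPy sketch (an illustration, not code from the paper): in a two-layer ReLU network, multiplying the first-layer weights by alpha and dividing the second-layer weights by alpha leaves the network function unchanged, yet a simple perturbation-based sharpness proxy changes by orders of magnitude. The toy data, network sizes, and the sharpness proxy below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))            # toy inputs
y = rng.normal(size=(64, 1))             # toy regression targets
W1 = 0.1 * rng.normal(size=(10, 32))     # first-layer weights
W2 = 0.1 * rng.normal(size=(32, 1))      # second-layer weights

def forward(W1, W2, X):
    return np.maximum(X @ W1, 0.0) @ W2  # two-layer ReLU network

def loss(W1, W2):
    return np.mean((forward(W1, W2, X) - y) ** 2)

def sharpness_proxy(W1, W2, eps=1e-3, n=200):
    # crude flatness proxy: average loss increase under small random
    # isotropic perturbations of the parameters (illustrative only)
    base = loss(W1, W2)
    return np.mean([
        loss(W1 + eps * rng.normal(size=W1.shape),
             W2 + eps * rng.normal(size=W2.shape)) - base
        for _ in range(n)
    ])

alpha = 10.0
W1s, W2s = alpha * W1, W2 / alpha        # re-scaled parameters, same function

print(np.allclose(forward(W1, W2, X), forward(W1s, W2s, X)))  # True: identical outputs
print(sharpness_proxy(W1, W2), sharpness_proxy(W1s, W2s))     # sharpness proxy changes dramatically
```

Any flatness measure built from local derivatives in parameter space inherits this sensitivity, which is why such measures can be manipulated without affecting generalization. By contrast, the prior P(f) in the function-based picture is the probability that a DNN at random initialization expresses function f, i.e. a particular labelling of a fixed set of inputs. The brute-force estimator below simply samples initializations and counts label patterns; it is a sketch of the definition only, since realistic networks and datasets require more efficient estimators (for example Gaussian-process approximations), and the architecture and sample counts here are arbitrary choices.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))              # 8 fixed inputs -> at most 2^8 labellings

def random_net_labels(X, width=32):
    # sample a fresh two-layer ReLU net and return its binary labelling of X
    W1 = rng.normal(size=(X.shape[1], width)) / np.sqrt(X.shape[1])
    W2 = rng.normal(size=(width, 1)) / np.sqrt(width)
    out = np.maximum(X @ W1, 0.0) @ W2
    return tuple((out.ravel() > 0).astype(int))

n_samples = 20_000
counts = Counter(random_net_labels(X) for _ in range(n_samples))

most, least = counts.most_common()[0], counts.most_common()[-1]
print("most frequent labelling:  log P(f) ~", np.log(most[1] / n_samples))
print("rarest sampled labelling: log P(f) ~", np.log(least[1] / n_samples))
```

Because P(f) is defined over functions rather than over a particular parameterization, the alpha re-scaling in the first sketch leaves it, and the associated posterior, unchanged.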
