Deep neural networks are often trained in the over-parametrized regime (i.e. with far more parameters than training examples), and understanding why training converges to solutions that generalize remains an open problem. Several studies have highlighted the fact that the training procedure, i.e. mini-batch Stochastic Gradient Descent (SGD), leads to solutions with specific properties in the loss landscape. However, even plain Gradient Descent (GD) finds solutions in the over-parametrized regime that generalize surprisingly well, and this phenomenon is poorly understood.
We propose an analysis of this behavior for feedforward networks with a ReLU activation function under the assumption of small initialization and learning rate, and we uncover a quantization effect: the weight vectors tend to concentrate along a small number of directions determined by the input data. As a consequence, we show that for given input data there are only finitely many "simple" functions that can be obtained, independently of the network size. This puts these functions in analogy with linear interpolations (for given input data there are finitely many triangulations, each of which determines a function by linear interpolation). We ask whether this analogy extends to the generalization properties: while the usual distribution-independent generalization bound does not hold, it could be that, e.g., for smooth functions with a bounded second derivative, an approximation property holds which could "explain" the generalization of networks (of unbounded size) to unseen inputs.
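The quantization effect can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it trains a one-hidden-layer, bias-free ReLU network with full-batch gradient descent from a small random initialization on a hypothetical 2D toy dataset, then prints the angles of the hidden weight vectors, which tend to cluster around a few data-dependent directions. The dataset, network width, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n points in 2D with scalar targets.
n, d, m = 20, 2, 200                      # samples, input dim, hidden units
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Small initialization, as assumed in the analysis.
scale = 1e-3
W = scale * rng.standard_normal((m, d))   # hidden-layer weight vectors
a = scale * rng.standard_normal(m)        # output weights

lr, steps = 1e-2, 20000                   # small learning rate, full-batch GD
for _ in range(steps):
    Z = X @ W.T                           # pre-activations, shape (n, m)
    H = np.maximum(Z, 0.0)                # ReLU
    err = H @ a - y                       # residuals of the squared loss
    grad_a = H.T @ err / n
    grad_W = ((err[:, None] * a[None, :]) * (Z > 0)).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

# Directions of the hidden weight vectors: after training they concentrate
# around a few angles determined by the input data (the quantization effect).
angles = np.arctan2(W[:, 1], W[:, 0])
norms = np.linalg.norm(W, axis=1)
active = norms > 10 * scale               # ignore units that barely moved
print(np.round(np.sort(angles[active]), 2))
```

Under this setup, the printed angles of the units that grew in norm typically form a handful of tight clusters rather than spreading over all of the initial random directions, which is the behavior the analysis above refers to.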