Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training

The mean field theory of multilayer neural networks centers on a particular infinite-width scaling, under which the learning dynamics is closely tracked by the mean field limit. A random fluctuation around this infinite-width limit is expected from a large-width expansion to the next order. This fluctuation has been studied only for shallow networks, where previous works employ heavily technical notions or formulation ideas that apply only to that case. A treatment of the multilayer case has been missing, the chief difficulty being to find a formulation that captures the stochastic dependency not only across time but also across depth. In this work, we initiate the study of the fluctuation for multilayer networks of any depth. Leveraging the neuronal embedding framework recently introduced by Nguyen and Pham [17], we systematically derive a system of dynamical equations, called the second-order mean field limit, that captures the limiting fluctuation distribution. Through this framework, we demonstrate the complex interaction among neurons in the second-order mean field limit, the stochasticity with cross-layer dependency, and the nonlinear time evolution inherent in the limiting fluctuation. A limit theorem is proven that quantitatively relates this limit to the fluctuation realized by large-width networks. We apply the result to show a stability property of gradient descent mean field training: in the large-width regime, along the training trajectory, it progressively biases towards a solution with “minimal fluctuation” (in fact, vanishing fluctuation) in the learned output function, even after the network has been initialized at, or has converged sufficiently fast to, a global optimum. This extends a similar phenomenon, previously shown only for shallow networks with a squared loss in the empirical risk minimization setting, to multilayer networks in a more general setting with a loss function that is not necessarily convex.
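
As a schematic illustration of the expansion behind this fluctuation (the notation below is illustrative only and not taken from the paper), for a network of width $n$ with output $f_n(x; t)$ at training time $t$, a central-limit-type expansion in the spirit of [3] reads

$$ f_n(x; t) \;=\; \bar f(x; t) \;+\; \frac{1}{\sqrt{n}}\, \tilde f(x; t) \;+\; o\!\left(n^{-1/2}\right), $$

where $\bar f$ denotes the (first-order) mean field limit and $\tilde f$ the random second-order correction, whose limiting law is what the second-order mean field limit is meant to describe.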

[1] Phan-Minh Nguyen, et al. A Note on the Global Convergence of Multilayer Neural Networks in the Mean Field Regime, 2020, arXiv.

[2] Phan-Minh Nguyen, et al. Analysis of feature learning in weight-tied autoencoders via the mean field lens, 2021, arXiv.

[3] Justin A. Sirignano, et al. Mean field analysis of neural networks: A central limit theorem, 2018, Stochastic Processes and their Applications.

[4] Jianfeng Lu, et al. A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth, 2020, ICML.

[5] Francis Bach, et al. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport, 2018, NeurIPS.

[6] Weinan E, et al. Stochastic Modified Equations and Dynamics of Stochastic Gradient Algorithms I: Mathematical Foundations, 2018, J. Mach. Learn. Res.

[7] Arthur Jacot, et al. Neural Tangent Kernel: Convergence and Generalization in Neural Networks, 2018, NeurIPS.

[8] Phan-Minh Nguyen, et al. Global Convergence of Three-layer Neural Networks in the Mean Field Regime, 2021, ICLR.

[9] École d'été de probabilités de Saint-Flour XIX, 1989, Lecture Notes in Mathematics, Springer, 1991.

[10] Grant M. Rotskoff, et al. Neural Networks as Interacting Particle Systems: Asymptotic Convexity of the Loss Landscape and Universal Scaling of the Approximation Error, 2018, arXiv.

[11] Jianfeng Lu, et al. Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime, 2020, ICLR.

[12] Adel Javanmard, et al. Analysis of a Two-Layer Neural Network via Displacement Convexity, 2019, The Annals of Statistics.

[13] Taiji Suzuki, et al. Stochastic Particle Gradient Descent for Infinite Ensembles, 2017, arXiv.

[14] Phan-Minh Nguyen, et al. A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks, 2020, arXiv.

[15] Andrea Montanari, et al. A mean field view of the landscape of two-layer neural networks, 2018, Proceedings of the National Academy of Sciences.

[16] Cong Fang, et al. Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks, 2020, COLT.

[17] Taiji Suzuki, et al. Particle dual averaging: optimization of mean field neural network with global convergence rate analysis, 2020, NeurIPS.

[18] Marco Mondelli, et al. Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks, 2020, ICML.

[19] Phan-Minh Nguyen, et al. Mean Field Limit of the Learning Dynamics of Multilayer Neural Networks, 2019, arXiv.

[20] A. Sznitman. Topics in propagation of chaos, 1991.

[21] Colin Wei, et al. Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel, 2018, NeurIPS.

[22] Eugene A. Golikov, et al. Dynamically Stable Infinite-Width Limits of Neural Classifiers, 2020, arXiv.