Mean Field Analysis of Neural Networks: A Law of Large Numbers

Machine learning, and in particular neural network models, has revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis, and develops approaches for addressing some of the technical challenges that arise. We analyze one-layer neural networks in the asymptotic regime where both (A) the network size and (B) the number of stochastic gradient descent training iterations grow large. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property commonly called "propagation of chaos".
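To make the asymptotic regime concrete, the following minimal sketch (not the paper's exact setup; the tanh activation, Gaussian data, toy target, and constant learning rate are all illustrative assumptions) trains a 1/N-scaled one-layer network by stochastic gradient descent for on the order of N iterations and prints summary statistics of the empirical distribution of the parameters. Under the law of large numbers described above, these statistics should stabilize toward deterministic values as N grows.

```python
import numpy as np

# Sketch of the mean-field regime: a one-layer network
#     f_N(x) = (1/N) * sum_{i=1}^N c_i * sigma(w_i . x)
# trained by SGD.  The 1/N factor makes each per-step parameter update
# O(1/N), so roughly N*T SGD iterations correspond to O(1) "mean-field
# time" T.  All modeling choices below are hypothetical illustrations.

rng = np.random.default_rng(0)
sigma = np.tanh
dsigma = lambda z: 1.0 - np.tanh(z) ** 2

def train(N, d=2, T=2.0, lr=1.0):
    c = rng.normal(size=N)              # i.i.d. initialization of the "particles"
    w = rng.normal(size=(N, d))
    for _ in range(int(N * T)):         # O(N) SGD steps
        x = rng.normal(size=d)          # one fresh sample per step
        y = np.sin(2.0 * x[0])          # toy regression target (assumption)
        z = w @ x                       # pre-activations w_i . x, shape (N,)
        f = c @ sigma(z) / N            # network output with 1/N scaling
        err = f - y
        grad_c = err * sigma(z) / N     # d/dc_i of 0.5*(f - y)^2
        grad_w = err * (c * dsigma(z))[:, None] * x[None, :] / N
        c -= lr * grad_c
        w -= lr * grad_w
    return c, w

for N in (100, 1000, 10000):
    c, w = train(N)
    # summary statistics of the empirical parameter distribution at time T;
    # their fluctuations across N should shrink as N grows
    print(f"N={N:6d}  mean(c)={c.mean():+.4f}  std(c)={c.std():.4f}  "
          f"mean|w|={np.linalg.norm(w, axis=1).mean():.4f}")
```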
