Asymptotic properties of one-layer artificial neural networks with sparse connectivity

A law of large numbers for the empirical distribution of the parameters of a one-layer artificial neural network with sparse connectivity is derived as the number of neurons and the number of stochastic gradient descent training iterations increase simultaneously.
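A minimal sketch of the objects such a result concerns, in the usual mean-field scaling; all notation here (N, c_i, w_i, xi_i, sigma, mu^N_k) is chosen for illustration and is not taken from the paper:

\[
  g^N_k(x) \;=\; \frac{1}{N}\sum_{i=1}^{N} c_i(k)\,\xi_i\,\sigma\bigl(w_i(k)\cdot x\bigr),
  \qquad
  \mu^N_k \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{\bigl(c_i(k),\,w_i(k)\bigr)},
\]

where \(\xi_i \in \{0,1\}\) encodes the sparse connectivity (which neurons are actually present in the network), the parameters \((c_i, w_i)\) are updated by stochastic gradient descent, and the law of large numbers asserts that the empirical measure \(\mu^N_{\lfloor Nt \rfloor}\), evaluated after a number of iterations proportional to the number of neurons, converges as \(N \to \infty\) to a deterministic measure-valued limit \(\bar{\mu}_t\).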
