Statistical physics of learning in high-dimensional chaotic systems

Recurrent neural network models are high-dimensional dynamical systems in which the degrees of freedom are coupled through synaptic connections and evolve according to a set of differential equations. When the synaptic connections are sufficiently strong and random, such models display a chaotic phase, which can be exploited to perform a task if the network is carefully trained. It is fair to say that this setting applies to many other complex systems, from biological organisms (from cells to individuals and populations) to financial markets. In all these out-of-equilibrium systems, elementary units live in a chaotic environment and must adapt their strategies to survive, extracting information from the environment and controlling their feedback on it. In this work, we consider a prototypical high-dimensional chaotic system as a simplified model for a recurrent neural network and for more complex learning systems. We study the model under two training strategies: Hebbian driving and FORCE training. In the first case, we show that Hebbian training can be used to tune the level of chaos in the dynamics, reproducing some results recently obtained in the study of standard RNN models. In the second case, we show that the dynamical system can be trained to reproduce a simple periodic function, and that the longer the training time, the closer the FORCE algorithm drives the dynamics to an asymptotic attractor. We also discuss possible extensions of our setup to other learning strategies.
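
As a concrete illustration of the second strategy, the sketch below implements FORCE training in its standard recursive-least-squares form (as in Sussillo and Abbott's scheme) for a generic random tanh rate network learning a sine wave. It is a minimal sketch, not the specific model studied in this work: all parameter values (N, g, dt, alpha, the target period) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumed, not taken from the paper)
    N = 500            # number of units
    g = 1.5            # coupling gain; g > 1 puts the random network in the chaotic regime
    dt, tau = 0.1, 1.0 # Euler step and single-unit time constant
    alpha = 1.0        # RLS regularizer: P starts as I / alpha

    J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent couplings
    w_fb = 2.0 * rng.uniform(size=N) - 1.0            # fixed feedback weights
    w = np.zeros(N)                                   # trainable readout weights
    P = np.eye(N) / alpha                             # running inverse-correlation estimate

    x = 0.5 * rng.standard_normal(N)
    r = np.tanh(x)
    z = w @ r

    target = lambda t: np.sin(2 * np.pi * t * dt / 10.0)  # simple periodic target

    for t in range(10000):
        # Euler step of the rate dynamics, with the readout z fed back into the network
        x += (dt / tau) * (-x + J @ r + w_fb * z)
        r = np.tanh(x)
        z = w @ r

        # FORCE / recursive-least-squares update of the readout weights
        k = P @ r
        c = 1.0 / (1.0 + r @ k)
        P -= c * np.outer(k, k)
        e = z - target(t)
        w -= e * c * k   # equals e * (P_new @ r): keeps the output error small at every step

Under this scheme the readout error is suppressed from the very first steps, and the longer the training phase, the closer the feedback-driven dynamics settle onto an attractor that reproduces the target; freezing w after the loop and letting the network run autonomously is the usual test of convergence.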