Long-Range Out-of-Sample Properties of Autoregressive Neural Networks

We consider already-trained discrete autoregressive neural networks in their most general representation, excluding time-varying inputs, and we provide tight sufficient conditions, with elementary proofs, for the existence of an attractor, its uniqueness, and global convergence. These conditions can be used as easy-to-check criteria whenever convergence (or non-convergence) of long-range predictions is desired.

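For readers who want to experiment with such criteria numerically, the Python sketch below illustrates the general idea on a toy, already-trained one-hidden-layer tanh autoregressive network. The network, its weights, and the specific contraction-type bound used here (sum of per-lag Lipschitz constants below one) are illustrative assumptions, not the paper's exact conditions; the sketch only shows how an easy-to-check criterion can be evaluated and how the closed-loop (out-of-sample) iteration then behaves.

import numpy as np

# Toy one-hidden-layer tanh autoregressive network (all names and weights are
# hypothetical, for illustration only):
#   x_t = W2 @ tanh(W1 @ [x_{t-1}, ..., x_{t-p}] + b1) + b2
rng = np.random.default_rng(0)
p, h = 3, 8                                   # number of lags, hidden units
W1 = 0.1 * rng.standard_normal((h, p))
b1 = 0.1 * rng.standard_normal(h)
W2 = 0.3 * rng.standard_normal((1, h))
b2 = np.array([0.2])

def step(lags):
    # One-step prediction from the last p values (most recent first).
    return float((W2 @ np.tanh(W1 @ lags + b1) + b2)[0])

def lipschitz_sum():
    # Per-lag Lipschitz bounds L_j <= sum_k |W2[0,k]| * |W1[k,j]| (since |tanh'| <= 1);
    # sum_j L_j < 1 is a generic contraction-type sufficient condition for a unique,
    # globally attractive fixed point of the closed-loop iteration (not necessarily
    # the exact criterion derived in the paper).
    return float(np.sum(np.abs(W2) @ np.abs(W1)))

def iterate(init_lags, n_steps=200):
    # Closed-loop (out-of-sample) iteration: each prediction is fed back as input.
    lags = np.array(init_lags, dtype=float)
    for _ in range(n_steps):
        x_next = step(lags)
        lags = np.concatenate(([x_next], lags[:-1]))
    return lags[0]

L = lipschitz_sum()
print(f"contraction bound: {L:.3f}",
      "(< 1: unique attractor, global convergence)" if L < 1 else "(>= 1: criterion inconclusive)")
# Two very different initial histories should converge to the same value when L < 1.
print(iterate([1.0, -2.0, 0.5]), iterate([-5.0, 3.0, 10.0]))

When the bound is below one, the two long-range prediction paths started from unrelated histories agree in the limit, which is exactly the kind of behavior the easy-to-check criteria are meant to certify in advance.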