Optimal connectivity in hardware-targeted MLP networks

In large neural networks, partial connectivity is both biologically plausible and a practical necessity when targeting a hardware implementation. We use the SpiNNaker neural chip multiprocessor to model such networks as a drop-in replacement for the Lens network simulator. For the popular MLP network, a theoretical model of the relation between connectivity, network size and gain in the activation function provides a method for setting these parameters to near-optimal values. Using the model, we run a series of network simulations in Lens, varying the parameters to explore their effects in two networks of different size and application. Initial test results show a clear connectivity-gain relation and a benefit to partial connectivity in both networks, with optimal hidden-output connectivity values ranging from ∼10% to ∼30% depending on the network type. We show that optimal connectivity-gain settings reduce training time by minimising error oscillations during learning. Preliminary analysis also suggests that while very low connectivities may improve error, they may also reduce adaptivity to new inputs or tolerance of component failure. Combined with the theoretical relation, these results give a method for determining reasonable initial connectivity and gain values at design time for an MLP network, allowing more efficient use of hardware resources such as SpiNNaker and faster simulations in any software environment. They also suggest a different way of framing the problem of MLP network design: rather than specifying a fixed number of neurons, specify a fixed number of connections and vary the number of neurons to reach optimal connectivity.
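The two quantities the abstract trades off, connectivity (the fraction of potential connections that actually exist) and gain (the slope of the activation function), can be made concrete with a minimal sketch. The NumPy code below is a hypothetical illustration, not the paper's Lens or SpiNNaker setup: the function names, the 20% connectivity and the gain of 2.0 are assumptions chosen only to show how the two parameters enter a partially connected hidden-output layer.

```python
import numpy as np

rng = np.random.default_rng(42)

def logistic(x, gain=1.0):
    """Logistic activation with an explicit gain (slope) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def make_sparse_weights(n_in, n_out, connectivity, scale=0.1):
    """Weight matrix in which each potential connection exists
    independently with probability `connectivity` (0 < connectivity <= 1).
    Absent connections are fixed at zero via a boolean mask."""
    mask = rng.random((n_in, n_out)) < connectivity
    weights = rng.normal(0.0, scale, size=(n_in, n_out))
    return weights * mask, mask

# Hypothetical example: a hidden-output layer at ~20% connectivity,
# inside the ~10%-30% optimum range the abstract reports for that layer.
n_hidden, n_out = 100, 10
w_ho, mask = make_sparse_weights(n_hidden, n_out, connectivity=0.2)

hidden = rng.random(n_hidden)           # stand-in hidden activations
output = logistic(hidden @ w_ho, gain=2.0)

print(f"realised connectivity: {mask.mean():.2%}")
print(f"output activations:    {np.round(output, 3)}")
```

Under this sketch, the abstract's proposed design inversion amounts to fixing the total number of nonzero entries in the mask (the hardware's connection budget) and varying `n_hidden` until `mask.mean()` lands in the near-optimal connectivity range.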

[1] Luis A. Plana et al., "SpiNNaker: Mapping neural networks onto a massively-parallel chip multiprocessor," 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 2008.

[2] D. Simard et al., "Fastest learning in small-world neural networks," arXiv:physics/0402076, 2004.

[3] T. Rohde, "LENS: The light, efficient network simulator," 1999.

[4] Amir Ayali et al., "Morphological characterization of in vitro neuronal networks," Physical Review E, 2002.

[5] Kalmanje KrishnaKumar, "Optimization of the neural net connectivity pattern using a backpropagation algorithm," Neurocomputing, 1993.

[6] Neil Davey et al., "High capacity, small world associative memory models," Connection Science, 2006.

[7] Duncan J. Watts et al., "Collective dynamics of 'small-world' networks," Nature, 1998.

[8] Joaquín J. Torres et al., "Influence of topology on the performance of a neural network," Neurocomputing, 2004.

[9] Ammar Belatreche et al., "Challenges for large-scale implementations of spiking neural networks on FPGAs," Neurocomputing, 2007.

[10] Neil Davey et al., "Efficient architectures for sparsely-connected high capacity associative memory models," Connection Science, 2007.

[11] H. Berendse et al., "The application of graph theoretical analysis to complex networks in the brain," Clinical Neurophysiology, 2007.

[12] Stephen B. Furber et al., "Virtual synaptic interconnect using an asynchronous network-on-chip," 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 2008.

[13] Matthew A. Lambon Ralph et al., "Using Parallel Distributed Processing Models to Simulate Phonological Dyslexia: The Key Role of Plasticity-related Recovery," Journal of Cognitive Neuroscience, 2007.