Artificial neural network performance degradation under network damage: Stuck-at faults

Biological neural networks are far more energy-efficient than currently available man-made, transistor-based information processing units. In addition, biological systems do not suffer catastrophic failures when subjected to physical damage; instead, their performance degrades in proportion to the damage. Hardware neural networks therefore promise significant advantages in information processing tasks that are inherently parallel or that are deployed in environments where the processing unit may be physically damaged. This paper, aimed at hardware neural network applications, presents an analysis of the performance degradation of various artificial neural network architectures subjected to ‘stuck-at-0’ and ‘stuck-at-1’ faults. The study aims to determine whether a fixed number of neurons should be placed in a single hidden layer or distributed across multiple hidden layers. Faults are administered to the input and hidden layer(s), and the analysis covers unoptimized and optimized, feedforward and recurrent networks trained with uncorrelated and correlated data sets. Networks with one, two, three, and four hidden layers are compared quantitatively. The main finding is that ‘stuck-at-0’ faults administered to the input layer cause the least performance degradation in networks with multiple hidden layers. However, for ‘stuck-at-0’ faults affecting cells in the hidden layer(s), the architecture that sustains the least damage is the one with a single hidden layer. When ‘stuck-at-1’ faults are applied to either the input or the hidden layers, the most resilient networks are those with multiple hidden layers. The study suggests that a hardware neural network architecture should be chosen based on the type of damage the system is most likely to sustain, namely damage to the sensors or to the neural network itself.
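
The fault-injection procedure described above lends itself to a compact simulation. The sketch below is a minimal illustration, not the paper's actual experimental code: it builds a small NumPy feedforward network (the class name, layer sizes, and fault fractions are all illustrative assumptions), clamps a random fraction of neurons in a chosen layer to 0 or 1 to emulate stuck-at faults, and compares outputs before and after injection as a simple proxy for performance degradation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FaultyFeedforwardNet:
    """Fully connected network whose input/hidden neurons can be clamped
    to a fixed value, emulating 'stuck-at-0' / 'stuck-at-1' hardware faults.
    (Illustrative sketch; not the architecture used in the paper.)"""

    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(0.0, 0.5, size=(n_in, n_out))
                        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
        # One mask per non-output layer: NaN = healthy neuron,
        # 0.0 = stuck-at-0, 1.0 = stuck-at-1.
        self.fault_masks = [np.full(n, np.nan) for n in layer_sizes[:-1]]

    def inject_faults(self, layer, fraction, stuck_value, seed=0):
        """Clamp a random fraction of neurons in `layer` to `stuck_value`."""
        rng = np.random.default_rng(seed)
        n = self.fault_masks[layer].size
        hit = rng.choice(n, size=int(round(fraction * n)), replace=False)
        self.fault_masks[layer][hit] = stuck_value

    def forward(self, x):
        a = np.asarray(x, dtype=float)
        for mask, w in zip(self.fault_masks, self.weights):
            stuck = ~np.isnan(mask)
            a = np.where(stuck, mask, a)  # faulty cells emit their stuck value
            a = sigmoid(a @ w)
        return a

# Degradation probe: compare outputs before and after fault injection.
rng = np.random.default_rng(1)
x = rng.random((100, 8))

net = FaultyFeedforwardNet([8, 16, 16, 4], seed=1)
healthy = net.forward(x)
net.inject_faults(layer=1, fraction=0.25, stuck_value=0.0)  # stuck-at-0 in first hidden layer
damaged = net.forward(x)
print("mean output shift:", np.mean(np.abs(healthy - damaged)))
```

Setting `layer=0` in `inject_faults` corresponds to the paper's sensor-damage case (faults on the input layer), while `layer >= 1` corresponds to damage within the network itself; sweeping `fraction` and `stuck_value` over architectures with different hidden-layer counts reproduces the kind of comparison the study reports.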
