A fault-value injection approach for multiple-weight-fault tolerance of MNNs

Many studies on making multilayer neural networks (MNNs) fault-tolerant have intentionally injected link snapping or noise into links during the learning process, and have therefore addressed fault tolerance to link snapping only. We consider fault tolerance to weight faults, which include link snapping as a special case, taking a pattern recognition problem as an example. To make an MNN fault-tolerant to any single or double weight fault within a certain interval or range, we intentionally inject the two extreme points of the single or double fault values in that interval or range during learning. By simulation, we investigate how fault-tolerant the MNN becomes to weight faults depending on the injected values. The degree of fault tolerance to an n-multiple weight fault is estimated by the number of essential multiple links. An interesting result is obtained: if only the two extreme fault values of the interval are injected, the number of essential links becomes zero for single faults of all weights in the interval. This means that the MNN becomes fault-tolerant to any single weight fault in the interval. Expecting a similar result for double faults, we inject the extreme points of the two-dimensional range; as expected, the number of 2-multiple essential links becomes zero over the range, which means that the MNN becomes fault-tolerant to any double weight fault in the range. Finally, we analyse the internal structure of the MNN through the distribution of the covariance between any two inputs of a neuron in the output layer.
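The injection scheme described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' exact learning rule): during each training step, one randomly chosen weight is temporarily replaced by one of the two extreme fault values of an assumed fault interval [W_MIN, W_MAX], backpropagation is performed with the faulted network, and the faulted weight is then restored. All names, the network shape, and the squared-error update are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed fault interval: a faulty weight may take any value in
# [W_MIN, W_MAX]; per the paper's idea, only the two extreme fault
# values of the interval are injected during learning.
W_MIN, W_MAX = -1.0, 1.0

def forward(W1, W2, x):
    """Two-layer MLP with sigmoid hidden and output units."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))
    y = 1.0 / (1.0 + np.exp(-(W2 @ h)))
    return h, y

def train_step(W1, W2, x, t, lr=0.5):
    """One backprop step on (x, t) with a single injected weight fault."""
    # Pick one output-layer weight at random and temporarily replace it
    # with one of the two extreme fault values.
    i = rng.integers(W2.shape[0])
    j = rng.integers(W2.shape[1])
    saved = W2[i, j]
    W2[i, j] = W_MIN if rng.random() < 0.5 else W_MAX

    h, y = forward(W1, W2, x)
    # Standard backprop for squared error (illustrative only).
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)

    # Restore the faulted weight; the update computed for it under the
    # fault is discarded in this simple sketch.
    W2[i, j] = saved
    return float(np.sum((y - t) ** 2))
```

Repeating such steps over the training set forces the network to keep its outputs correct even when any single weight is pinned to an extreme of the interval, which is what drives the number of essential links toward zero.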