Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes

In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading the representations apart. Average and minimum distances between representations increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.
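The following is a minimal sketch, not the authors' original code, of the kind of experiment the abstract describes: an "encoder" network is trained by backpropagation while each hidden unit misfires (outputs a random bit) with some probability, and the binarized hidden codes are then compared by average and minimum Hamming distance. The architecture (one-hot auto-association), learning rate, misfire model, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_encoder(n_patterns=8, n_hidden=6, misfire_p=0.0,
                  epochs=5000, lr=1.0):
    """Train an n_patterns -> n_hidden -> n_patterns auto-association
    ("encoder") net; during training, each hidden unit's output is
    replaced by a random bit with probability misfire_p."""
    X = np.eye(n_patterns)                       # one-hot inputs = targets
    W1 = rng.normal(0, 0.5, (n_patterns, n_hidden))
    W2 = rng.normal(0, 0.5, (n_hidden, n_patterns))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                      # hidden activations
        # Inject misfires: a misfiring unit emits a random 0/1 value.
        mask = rng.random(H.shape) < misfire_p
        H_noisy = np.where(mask, rng.integers(0, 2, H.shape).astype(float), H)
        Y = sigmoid(H_noisy @ W2)                # output layer
        # Backpropagate squared error; misfiring units pass no gradient
        # because their output did not depend on their net input.
        dY = (Y - X) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        dH[mask] = 0.0
        W2 -= lr * H_noisy.T @ dY
        W1 -= lr * X.T @ dH
    # Return the binarized (thresholded) hidden code for each pattern.
    return (sigmoid(X @ W1) > 0.5).astype(int)

def distance_stats(codes):
    """Average and minimum pairwise Hamming distance between hidden codes."""
    d = [np.sum(codes[i] != codes[j])
         for i in range(len(codes)) for j in range(i + 1, len(codes))]
    return np.mean(d), np.min(d)

for p in (0.0, 0.1, 0.3):
    avg, mn = distance_stats(train_encoder(misfire_p=p))
    print(f"misfire p={p:.1f}: avg Hamming distance {avg:.2f}, min {mn}")
```

Under the abstract's account, the reported average and minimum distances should tend to grow with the misfire probability, since representations that sit far apart in Hamming space are less likely to be confused when a hidden unit fails.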