Configurational entropy stabilizes pattern formation in a hetero-associative neural network

The authors report a prototype implementation and preliminary studies of a new class of computational engine. The engine introduces statistical-mechanical considerations into a simple neural-network design, affording greater stability in the pattern classes generated in response to different input stimuli. The current instantiation consists of two 1-D layers, with feedforward connections from the input layer to the computational layer. The computational layer reaches its total configuration in response to several factors, including input activations received from the input layer and minimization of a Gibbs free-energy function. A Hamming-distance metric is used to assess the difference between intraclass patterns and interclass patterns. The interclass distance between prototype patterns produced in response to different inputs is an order of magnitude greater than the intraclass distance among the computational-layer patterns produced in response to a given input.
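The intraclass/interclass comparison can be illustrated with a minimal sketch. The abstract does not give the layer size, the noise model, or the actual patterns, so everything below (the pattern length `n`, the 3-bit perturbation, and the random prototypes) is a hypothetical stand-in chosen only to show the shape of the measurement: small Hamming distances within a class, much larger distances between class prototypes.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary patterns."""
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
n = 100  # hypothetical computational-layer size; not given in the abstract

# Hypothetical prototype patterns evoked by two different inputs.
proto_a = rng.integers(0, 2, n)
proto_b = rng.integers(0, 2, n)

def noisy(proto, flips):
    """Perturb a prototype by flipping `flips` distinct, randomly chosen bits."""
    p = proto.copy()
    idx = rng.choice(n, size=flips, replace=False)
    p[idx] ^= 1
    return p

# Intraclass: repeated responses to the same input differ by a few bits.
intra = [hamming(proto_a, noisy(proto_a, 3)) for _ in range(10)]

# Interclass: prototypes for different inputs differ in roughly half the bits.
inter = hamming(proto_a, proto_b)

print(np.mean(intra), inter)
```

Under these toy assumptions the interclass distance (about n/2 bits for independent random prototypes) exceeds the intraclass distance (exactly 3 bits here) by roughly an order of magnitude, mirroring the separation the abstract reports.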