Autoassociative memory with 'inverted pyramid' logic networks

Probabilistic logic nodes (PLNs) arranged in pyramids can become autoassociative when a noise-training procedure is applied. The author describes the behavior of pyramidal PLNs when the recall procedure is inverted: nodes estimate their most probable inputs and pass these values to their precursor nodes. Empirical comparison of standard PLN pyramid networks, the Hopfield model, and inverted PLN pyramid networks (IPs) reveals that the IPs achieve autoassociation with much higher probability, even under substantial amounts of noise. The excellent results achieved by this algorithm are further evidence of the fruitfulness of the RAM-based neural network paradigm.
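The inverted recall step described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a 2-input PLN whose RAM locations hold 0, 1, or 'u' (undefined), and shows a single node estimating its most probable input pattern for a desired output, as would be passed back to precursor nodes in a pyramid:

```python
import random

class PLN:
    """Probabilistic logic node: a small RAM addressed by the node's
    input bits. Each location stores 0, 1, or 'u' (undefined, which
    yields a random output on forward recall)."""
    def __init__(self, n_inputs=2):
        self.n = n_inputs
        self.mem = ['u'] * (2 ** n_inputs)

    def forward(self, bits):
        # Ordinary recall: use the input bits as a RAM address.
        addr = int(''.join(map(str, bits)), 2)
        v = self.mem[addr]
        return random.randint(0, 1) if v == 'u' else v

    def invert(self, target):
        """Inverted recall: pick an input address whose stored content
        matches the desired output, and return its bits so they can be
        handed to precursor nodes as their target outputs."""
        matches = [a for a, v in enumerate(self.mem) if v == target]
        if not matches:  # fall back to undefined locations, then to all
            matches = ([a for a, v in enumerate(self.mem) if v == 'u']
                       or list(range(len(self.mem))))
        addr = random.choice(matches)
        return [(addr >> (self.n - 1 - i)) & 1 for i in range(self.n)]

# Teach one node two associations, then invert it:
node = PLN()
node.mem = [0, 'u', 'u', 1]   # taught: inputs 00 -> 0, inputs 11 -> 1
print(node.invert(1))          # -> [1, 1]
print(node.invert(0))          # -> [0, 0]
```

In a full inverted pyramid, the output node's estimated input bits become the target outputs of the nodes in the layer below, and the process repeats down to the retina layer.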