Evaluation of Two Connectionist Approaches to Stack Representation

This study empirically compares two distributed connectionist learning models trained to represent an arbitrarily deep stack. The first is Pollack's Recursive Auto-Associative Memory (RAAM), a recurrent neural network trained by backpropagation that encodes the stack in a hidden intermediate representation. The second is the Exponential Decay Model, a novel architecture proposed here, which learns an explicit representation that treats the stack as an exponentially decaying entity. We show that although the concept of a stack is learnable by both approaches, neither model achieves arbitrary depth in practice. Ultimately, both suffer from the rapid error propagation inherent in their recursive structures.
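To make the exponential-decay idea concrete, here is a minimal non-learned sketch of such a stack encoding. The decay factor `D` and the symbol values are illustrative assumptions, not the paper's trained parameters: each push scales the old contents by `D` and adds the new symbol's value, so deeper items occupy exponentially smaller magnitude bands.

```python
D = 0.1  # decay factor (assumed for illustration; must keep residuals separable)
VAL = {"a": 1.0, "b": 2.0, "c": 3.0}  # hypothetical symbol codes

def push(s, sym):
    """New stack state: top symbol's value plus the decayed old contents."""
    return VAL[sym] + D * s

def pop(s):
    """Recover the top symbol (nearest code), then rescale the remainder."""
    top = min(VAL, key=lambda k: abs(VAL[k] - s))
    return top, (s - VAL[top]) / D

s = 0.0  # empty stack
for sym in "abc":
    s = push(s, sym)

top1, s = pop(s)  # "c"
top2, s = pop(s)  # "b"
top3, s = pop(s)  # "a"
```

The sketch also shows why arbitrary depth fails: deep items shrink geometrically toward the noise floor of finite precision, mirroring the error-propagation limit the abstract identifies.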