Adaptive Neural Networks

The neural networks studied in previous chapters had a simple feedforward flow of information from input to hidden to output neurons, with no loops or feedback in their connections. They were trained by supervised learning techniques, in which the desired system output was provided during training. Another interesting type of neural network performs unsupervised learning: the system is shown inputs with no desired outputs, and it searches for similar features in the training inputs in order to group them into categories whose members share common features. These networks are called adaptive resonance theory (ART) networks (Carpenter and Grossberg, 1987).

Carpenter and Grossberg have since extended their theory to process inputs with a dynamic range. The sample program listed in this chapter uses this newer gray-scale version, ART2, which allows inputs to take any value between 0.0 and 1.0 (hence the name gray-scale). The program follows the variable-naming conventions used by Gail Carpenter and Stephen Grossberg in the paper that introduced ART2 (Carpenter and Grossberg, 1987). The real-time dynamic behavior of ART2 networks is fascinating to watch; they are especially interesting when they have few output neurons and are shown a large number of input patterns, because the network will reshuffle existing categorizations when necessary.
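The core idea of vigilance-based categorization can be illustrated with a deliberately simplified sketch: each gray-scale input vector is compared against stored category prototypes, and a new category is created whenever no prototype matches closely enough. This is a toy illustration of the general ART principle, not Carpenter and Grossberg's full ART2 dynamics; the function name, the cosine-style match score, the averaging update, and the `vigilance` and `max_categories` parameters are all illustrative assumptions, not part of the chapter's sample program.

```python
import math


def art_cluster(patterns, vigilance=0.9, max_categories=4):
    """Simplified ART-style clustering (illustrative only, not ART2).

    Assigns each gray-scale input vector (components in [0.0, 1.0])
    to the best-matching stored prototype; if no prototype passes the
    vigilance test, a new category is created, up to a fixed limit
    standing in for a fixed number of output neurons.
    """
    prototypes = []    # one prototype vector per category
    assignments = []   # category index chosen for each input pattern
    for p in patterns:
        # Find the best-matching prototype using a cosine-like score.
        best, best_score = None, -1.0
        for i, proto in enumerate(prototypes):
            dot = sum(a * b for a, b in zip(p, proto))
            norm = (math.sqrt(sum(a * a for a in p)) *
                    math.sqrt(sum(b * b for b in proto)))
            score = dot / norm if norm else 0.0
            if score > best_score:
                best, best_score = i, score
        if best is not None and best_score >= vigilance:
            # "Resonance": blend the winning prototype toward the input.
            prototypes[best] = [(a + b) / 2.0
                                for a, b in zip(prototypes[best], p)]
            assignments.append(best)
        elif len(prototypes) < max_categories:
            # No adequate match and capacity remains: new category.
            prototypes.append(list(p))
            assignments.append(len(prototypes) - 1)
        else:
            # Out of output neurons: force the input into the closest
            # category (the full ART2 model handles this case by
            # reorganizing its existing categories).
            assignments.append(best)
    return assignments, prototypes
```

For example, two nearly identical patterns fall into one category while a dissimilar pattern opens a second: `art_cluster([[1, 0, 0], [0.9, 0.1, 0], [0, 0, 1]])` yields the assignments `[0, 0, 1]`. The real ART2 equations replace the simple match score and averaging update with coupled short-term and long-term memory dynamics.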