Matthew Zeidenberg, Neural Networks in Artificial Intelligence

There is now a substantial body of results in connectionist AI, treating problems in representation, learning, inference, speech, vision, and language. Zeidenberg's book is a collection of one- to five-page summaries of various pieces of work in these areas. If one needs a quick overview of, say, Kohonen's self-organizing feature maps, Grossberg's adaptive resonance theory, or Sejnowski and Rosenberg's NETtalk model, this is a good place to look. But beyond these old standards there is quite a bit more material, such as Hinton and Plaut's work on learning with fast and slow weights, Servan-Schreiber, Cleeremans, and McClelland's experiments in learning finite-state automata from examples, and several early efforts in connectionist parsing. Over sixty pieces of work are summarized.

The book is somewhat dated now; it covers publications only through 1989, and so in some cases major results have been overlooked: the chapter on speech recognition, for example, makes no mention of time-delay neural networks or radial basis function networks.

Many of the references are to Cognitive Science (both the conference and the journal), the 1988 Connectionist Models Summer School proceedings, and various university technical reports. A completely up-to-date bibliography would have to include citations to the NIPS (Neural Information Processing Systems) proceedings and to the journals Neural Networks and Neural Computation, but these were just starting up at the time this book was published.