Neural networks from idea to implementation

An artificial neural network is an exciting project for anyone with an interest in computer programming and a little background in mathematics. This paper explains a popular neural network model, from the motivating concepts through a computer implementation. Our APL code is written for readers familiar with any programming language that offers convenient matrix manipulation, and it should translate readily into any such language. We seek to understand not only how but why the algorithm is designed as it is. Along the way we use a little linear algebra and multivariable differentiation, as well as array-oriented programming, making this an excellent project in the undergraduate mathematical sciences.

The result is a very different kind of program, one that is becoming increasingly important in modern applications. A neural network program is designed to imitate the processing capability of a physiological network of neurons. This contrasts with typical computer programming, where the human programmer focuses on accomplishing a particular, well-specified task. Here the program should be able to perform many different, unspecified tasks; it learns each new task as it is presented. A task is given as input patterns (vectors) paired with desired output patterns (vectors), and it is learned as the computer runs repeated trials.

Our model is a two-layer feed-forward network trained by backpropagation, as discussed in [RM], [RWL], and [W]. The excellent introductory article [RWL] claims that "The backpropagation learning procedure has become the single most popular method to train networks. The procedure has been used to train networks in problem domains including character recognition, speech recognition, sonar detection, and many more." The classic reference [RM] was used heavily in the student project that culminated in this paper.
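To make this training scheme concrete before the derivation, here is a minimal sketch of such a training loop. It is an illustration, not the paper's code, and every specific choice in it is an assumption: Dyalog APL (for the :For control structure and the ?0 random floats), a 2-3-1 network with a constant bias input, the logistic activation function, a learning rate of 0.5, 5000 trials, and the exclusive-or task as the patterns to be learned.

    ∇ TRAIN;X;T;W1;W2;ETA;I;H;HB;O;E2;E1
      X←(4 2⍴0 0 0 1 1 0 1 1),1     ⍝ XOR input patterns plus a constant bias column
      T←4 1⍴0 1 1 0                 ⍝ desired output patterns
      W1←0.5-⍨?3 3⍴0                ⍝ random weights: 2 inputs + bias → 3 hidden units
      W2←0.5-⍨?4 1⍴0                ⍝ random weights: 3 hidden + bias → 1 output
      ETA←0.5                       ⍝ learning rate (an illustrative choice)
      :For I :In ⍳5000              ⍝ repeated trials over the whole task
          H←÷1+*-X+.×W1             ⍝ forward pass: logistic hidden activations
          HB←H,1                    ⍝ append the bias unit to the hidden layer
          O←÷1+*-HB+.×W2            ⍝ forward pass: logistic network outputs
          E2←(T-O)×O×1-O            ⍝ output-layer error signal
          E1←(E2+.×⍉W2[⍳3;])×H×1-H  ⍝ error propagated back to the hidden units
          W2+←ETA×(⍉HB)+.×E2        ⍝ adjust weights to reduce the output error
          W1+←ETA×(⍉X)+.×E1
      :EndFor
      ⎕←O                           ⍝ trained outputs, which should approach T
    ∇

After a successful run, the displayed outputs are typically near 0 1 1 0; with random initial weights a run can occasionally settle into a poor local minimum, a point taken up when the algorithm is derived in detail.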