Introduction to Neural Networks

Parallel computation and neural networks are computing paradigms that are attracting increasing attention among computer scientists and artificial intelligence researchers. The key element of these paradigms is a novel computational structure composed of a large number of highly interconnected processing elements (neurons) working in parallel. Many operations can therefore be performed simultaneously, in contrast to traditional serial processing, in which computations must be carried out in sequential order.

Simple neural networks were built in the 1950s, but little progress followed, due both to the lack of suitable technology and to breakthroughs in other areas of artificial intelligence that drew attention away from the field. The increasing power of available computers in the 1970s and the development of efficient parallel computing techniques renewed interest in neural networks. Today, neural networks have proven successful at solving hard problems that at first seem intractable and difficult to formulate using conventional computing techniques. Examples of such problems arise in a great variety of domains, such as pattern recognition (see Ripley (1996) and Bishop (1997), and the references therein), vision and speech recognition (see Allen (1995) and Skrzypek and Karplus (1992)), time series prediction and forecasting (see Azoff (1994) and Myers (1992)), process control (Miller et al. (1995)), and signal processing (Cichocki et al. (1993)).
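To make the idea of many simple processing elements working side by side concrete, here is a minimal sketch of one neuron and one layer. The weighted-sum-plus-activation form is a standard textbook model; the sigmoid activation, function names, and numbers below are illustrative choices, not taken from the text.

```python
import math

def neuron(inputs, weights, bias):
    """One processing element: a weighted sum of its inputs passed
    through a sigmoid activation (a common illustrative choice)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weight_matrix, biases):
    """A layer of neurons. Each output depends only on the shared
    inputs, not on the other outputs, so all of them could in
    principle be computed simultaneously -- the parallelism the
    paradigm exploits."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Hypothetical weights and inputs, for illustration only.
outputs = layer([1.0, 0.5],
                [[0.2, -0.4], [0.7, 0.1]],
                [0.0, -0.3])
```

In a serial machine the list comprehension in `layer` runs one neuron at a time; on parallel hardware each neuron could be assigned to its own processing unit, which is the contrast with sequential computation drawn above.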