From the Publisher:
As book review editor of the IEEE Transactions on Neural Networks, Mohamad Hassoun has had the opportunity to assess the multitude of books on artificial neural networks that have appeared in recent years. Now, in Fundamentals of Artificial Neural Networks, he provides the first systematic account of artificial neural network paradigms, clearly identifying the fundamental concepts and major methodologies underlying most of the current theory and practice employed by neural network researchers.
Such a systematic and unified treatment, sadly lacking in most recent texts on neural networks, makes the subject more accessible to students and practitioners. Here, important results are integrated to more fully explain a wide range of existing empirical observations and commonly used heuristics. There are numerous illustrative examples, over 200 end-of-chapter analytical and computer-based problems to aid in developing neural network analysis and design skills, and a bibliography of nearly 700 references.
Proceeding in a clear and logical fashion, the first two chapters present the basic building blocks and concepts of artificial neural networks and analyze the computational capabilities of the basic network architectures involved. Supervised, reinforcement, and unsupervised learning rules in simple nets are brought together in a common framework in chapter three. The convergence and solution properties of these learning rules are then treated mathematically in chapter four, using the "average learning equation" analysis approach. This organization of material leads naturally to the learning of multilayer nets using backpropagation and its variants, described in chapter five. Chapter six covers most of the major neural network paradigms, while associative memories and energy-minimizing nets are given detailed coverage in the next chapter. The final chapter takes up Boltzmann machines and Boltzmann learning, along with other global search and optimization algorithms such as stochastic gradient search, simulated annealing, and genetic algorithms.