Chapters 2–7 make up Part II of the book, on artificial neural networks. After introducing the basic concepts of neurons and artificial neuron learning rules in Chapter 2, Chapter 3 describes a particular formalism, based on signal-plus-noise, for the learning problem in general. After presenting the basic neural network types, this chapter reviews the principal algorithms for error-function minimization and shows how these learning issues are addressed in various supervised models. Chapter 4 deals with issues in unsupervised learning networks, such as the Hebbian learning rule, principal component learning, and learning vector quantization. Various techniques and learning paradigms are covered in Chapters 3–6; in particular, the properties and relative merits of multilayer perceptron networks, radial basis function networks, self-organizing feature maps, and reinforcement learning are discussed in these four chapters. Chapter 7 presents an in-depth examination of performance issues in supervised learning, such as accuracy, complexity, convergence, weight initialization, architecture selection, and active learning.

Part III (Chapters 8–15) offers an extensive presentation of techniques and issues in evolutionary computing. Besides introducing the basic concepts of evolutionary computing, it elaborates on the most important and most frequently used techniques in the evolutionary computing paradigm, such as genetic algorithms, genetic programming, evolutionary programming, evolution strategies, differential evolution, cultural evolution, and co-evolution, including the design aspects, representation, operators, and performance issues of each paradigm. The differences between evolutionary computing and classical optimization are also explained.

Part IV (Chapters 16 and 17) introduces swarm intelligence. It provides a representative selection of recent literature on swarm intelligence in a coherent and readable form, and it illustrates the similarities and differences between swarm optimization and evolutionary computing. Both particle swarm optimization and ant colony optimization are discussed in these two chapters, which serve as a guide that brings together existing work to enlighten readers and to lay a foundation for further study.

Part V (Chapters 18–21) presents fuzzy systems, with topics ranging from fuzzy sets, fuzzy inference systems, and fuzzy controllers to rough sets. The basic terminology, underlying motivation, and key mathematical models used in the field are covered to illustrate how these mathematical tools can be used to handle vagueness and uncertainty.

This book is clearly written, and it brings together the latest concepts in computational intelligence in a friendly and complete format for undergraduate/postgraduate students as well as professionals new to the field. With about 250 pages covering such a wide variety of topics, it would be impossible to treat everything at great length. Nonetheless, this book is an excellent choice for readers who wish to familiarize themselves with computational intelligence techniques or for an overview/introductory course in the field of computational intelligence.

Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, by Bernhard Schölkopf and Alexander Smola (MIT Press, Cambridge, MA, 2002, ISBN 0-262-19475-9). Reviewed by Amir F. Atiya.