Multi-Objective Training of Neural Networks

Traditionally, applying a neural network (Haykin, 1999) to solve a problem has required following several steps before the desired network is obtained: data preprocessing, model selection, topology optimization, and finally training. Each of these tasks, particularly topology optimization and network training, usually demands a large amount of computational time and human interaction. Many proposals have been made to reduce the effort these tasks require and to provide experts with a robust methodology. For example, Giles et al. (1995) provide a constructive method that iteratively optimizes the topology of a recurrent network. Other methods attempt to reduce the complexity of the network structure by removing unnecessary nodes and connections, as in (Morse, 1994).

In recent years, evolutionary algorithms have emerged as promising tools for this problem, and many competitive approaches exist in the literature. For example, Blanco et al. (2001) proposed a master-slave genetic algorithm in which the master algorithm trains the network and the slave algorithm optimizes its size. For a general view of the problem and of the use of evolutionary algorithms for neural network training and optimization, we refer the reader to (Yao, 1999).

Although the literature on genetic algorithms and neural networks is very extensive, we would like to highlight the recent popularity of multi-objective optimization (Coello et al., 2002; Jin, 2006), especially for the simultaneous training and topology optimization of neural networks. These methods have been shown to perform well for this task in previous works, although most of them are proposed for feedforward models. They attempt to optimize the structure of the network (number of connections, hidden units, or layers) while training it at the same time. Multi-objective algorithms may provide important advantages in the simultaneous training and optimization of neural networks: they may force the search to return a set of optimal networks instead of a single one; they can speed up the optimization process; they may be preferred over a weight-aggregation procedure to address the regularization problem in neural networks; and they are more suitable when the designer wishes to combine different error measures during training. A recent review of these techniques may be found in (Jin, 2006).
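To make these ideas concrete, the sketch below evolves a small feedforward network under two competing objectives, training error and number of active connections, keeping the nondominated (Pareto) individuals at each generation. It is a minimal illustration, not the method of any paper cited above: the toy sine-regression task, the mask-based genome encoding, the mutation scale, and the crude truncation selection are all assumptions made for this example.

```python
"""Minimal sketch: Pareto-based simultaneous training and pruning.

Everything here is illustrative; the task, genome layout, and selection
scheme are assumptions for this example, not a cited paper's algorithm.
"""
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

N_HIDDEN = 8                  # upper bound on hidden units
GENOME_LEN = 4 * N_HIDDEN     # 2*N_HIDDEN weights + 2*N_HIDDEN mask genes
POP_SIZE = 30


def decode(genome):
    """Split a flat genome into masked weight matrices of a 1-8-1 net."""
    w1 = genome[:N_HIDDEN].reshape(1, N_HIDDEN)               # input -> hidden
    w2 = genome[N_HIDDEN:2 * N_HIDDEN].reshape(N_HIDDEN, 1)   # hidden -> output
    mask = genome[2 * N_HIDDEN:] > 0.0                        # active connections
    return w1 * mask[:N_HIDDEN], w2 * mask[N_HIDDEN:, None], mask


def objectives(genome):
    """Objective 1: training MSE (error). Objective 2: number of active
    connections (complexity). Both objectives are minimized."""
    w1, w2, mask = decode(genome)
    pred = np.tanh(X @ w1) @ w2
    return float(np.mean((pred - Y) ** 2)), int(mask.sum())


def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere and
    strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


pop = [rng.normal(size=GENOME_LEN) for _ in range(POP_SIZE)]

for generation in range(200):
    # Variation: one Gaussian-mutated child per parent (no crossover).
    children = [g + rng.normal(scale=0.1, size=GENOME_LEN) for g in pop]
    union = pop + children
    scores = [objectives(g) for g in union]
    # Environmental selection: nondominated individuals first, then fill
    # the population with the remainder (crude truncation).
    front = [i for i, s in enumerate(scores)
             if not any(dominates(scores[j], s)
                        for j in range(len(union)) if j != i)]
    rest = [i for i in range(len(union)) if i not in front]
    pop = [union[i] for i in (front + rest)[:POP_SIZE]]

# The outcome is a set of error/complexity trade-offs, not a single net.
for mse, n_conn in sorted({objectives(g) for g in pop})[:10]:
    print(f"connections={n_conn:2d}  mse={mse:.4f}")
```

A full Pareto-based method such as NSGA-II [2] would replace the crude truncation above with nondominated sorting and crowding-distance selection, and would typically add crossover; the essential point, returning a front of error/complexity trade-offs rather than a single network, is the same.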

[1] Maria del Carmen Pegalajar Jiménez et al. A multiobjective genetic algorithm for obtaining the optimal size of a recurrent neural network for grammatical inference, 2005, Pattern Recognit.

[2] Kalyanmoy Deb et al. A fast and elitist multiobjective genetic algorithm: NSGA-II, 2002, IEEE Trans. Evol. Comput.

[3] Klaus Jantke et al. Wrapper Induction Programs as Information Extraction Assistants, 2007.

[4] Xin Yao et al. Evolving artificial neural networks, 1999, Proc. IEEE.

[5] Xuan F. Zha et al. Artificial Intelligence and Integrated Intelligent Information Systems: Emerging Technologies and Applications, 2006.

[6] Hussein A. Abbass et al. A Memetic Pareto Evolutionary Approach to Artificial Neural Networks, 2001, Australian Joint Conference on Artificial Intelligence.

[7] Armando Blanco et al. A genetic algorithm to obtain the optimal recurrent neural network, 2000, Int. J. Approx. Reason.

[8] Gorka Guardiola et al. System Support for Smart Spaces, 2011.

[9] Kevin Curran et al. Ubiquitous Developments in Ambient Computing and Intelligence: Human-Centered Applications, 2011.

[10] André L. V. Coelho et al. An Evolutionary Framework for Nonlinear Time-Series Prediction with Adaptive Gated Mixtures of Experts, 2007.

[11] Alejandro Pazos Sierra et al. Encyclopedia of Artificial Intelligence, 2008.

[12] C. Lee Giles et al. Constructive learning of recurrent neural networks: limitations of recurrent cascade correlation and a simple solution, 1995, IEEE Trans. Neural Networks.

[13] Manuel P. Cuéllar et al. Topology Optimization and Training of Recurrent Neural Networks with Pareto-Based Multi-objective Algorithms: A Experimental Study, 2007, IWANN.

[14] Xin Yao et al. Evolving hybrid ensembles of learning machines for better generalisation, 2006, Neurocomputing.

[15] Bernhard Sendhoff et al. Evolutionary Multi-objective Optimization for Simultaneous Generation of Signal-Type and Symbol-Type Representations, 2005, EMO.

[16] Alex Aussem et al. Dynamical recurrent neural networks towards prediction and modeling of dynamical systems, 1999, Neurocomputing.

[17] Milan Stankovic et al. Intelligent Software Agents with Applications in Focus, 2009, Encyclopedia of Artificial Intelligence.

[18] Fulvio Mastrogiovanni et al. Proactive Assistance in Ecologies of Physically Embedded Intelligent Systems: A Constraint-Based Approach, 2011.

[19] Andrew Vande Moere et al. Beyond Ambient Display: A Contextual Taxonomy of Alternative Information Display, 2009, Int. J. Ambient Comput. Intell.

[20] Roland H. Kaschek et al. Intelligent assistant systems - concepts, techniques and technologies, 2006.

[21] Héctor Pomares et al. Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation, 2003, IEEE Trans. Neural Networks.