Incremental learning using sensitivity analysis

A new incremental learning algorithm for function approximation problems is presented, in which the neural network learner dynamically selects, during training, the most informative patterns from a candidate training set. The algorithm uses the network's current knowledge of the function being approximated, in the form of output sensitivity information, to incrementally grow the training set with the patterns that have the highest influence on the learning objective.
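A minimal sketch of the idea follows. It assumes a single-hidden-layer MLP with sigmoid hidden units and a linear output; the network, its plain-SGD training loop, the target function, the hyperparameters, and the use of the sensitivity-vector norm as the selection score are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init(n_in, n_hid):
    """Single-hidden-layer MLP: sigmoid hidden units, linear output (assumed)."""
    return {"W1": rng.normal(0.0, 0.5, (n_hid, n_in)),
            "b1": np.zeros(n_hid),
            "w2": rng.normal(0.0, 0.5, n_hid),
            "b2": 0.0}

def forward(p, x):
    h = sigmoid(p["W1"] @ x + p["b1"])
    return h, float(h @ p["w2"] + p["b2"])

def output_sensitivity(p, x):
    # Output sensitivity dy/dx for this architecture:
    # dy/dx = W1^T (w2 * h * (1 - h)).
    h, _ = forward(p, x)
    return p["W1"].T @ (p["w2"] * h * (1.0 - h))

def select_most_informative(p, candidates, k=1):
    # Rank candidate patterns by the norm of the output sensitivity
    # vector (an assumed informativeness score) and return the
    # indices of the k highest-ranked patterns.
    scores = [np.linalg.norm(output_sensitivity(p, x)) for x in candidates]
    return np.argsort(scores)[-k:]

# Incremental training loop: start from a small training set, then
# repeatedly move the highest-sensitivity candidates into it and retrain.
f = lambda x: np.sin(3.0 * x[0])              # illustrative target function
candidates = list(rng.uniform(-1.0, 1.0, (200, 1)))
train = [candidates.pop() for _ in range(5)]  # small initial training set
p = init(n_in=1, n_hid=8)

for _ in range(10):                           # outer incremental steps
    for _ in range(200):                      # inner epochs of plain SGD
        for x in train:
            h, y = forward(p, x)
            e = y - f(x)                      # output error
            g = e * p["w2"] * h * (1.0 - h)   # backprop to hidden layer
            p["W1"] -= 0.1 * np.outer(g, x)
            p["b1"] -= 0.1 * g
            p["w2"] -= 0.1 * e * h
            p["b2"] -= 0.1 * e
    # Grow the training set with the most sensitive candidates,
    # popping in descending index order so earlier indices stay valid.
    for i in sorted(select_most_informative(p, candidates, k=2), reverse=True):
        train.append(candidates.pop(i))
```

Under these assumptions the selection score is cheap to compute (one forward pass per candidate plus a matrix-vector product), which is what makes growing the training set inside the training loop practical.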
