An Algorithm for Building Regularized Piecewise Linear Discrimination Surfaces: The Perceptron Membrane

The perceptron membrane is a new connectionist model for solving discrimination (classification) problems with piecewise linear surfaces. Its discrimination surfaces are defined as unions of convex polyhedra. Starting from a single convex polyhedron, new facets and new polyhedra are added during learning, and the positions and orientations of the facets are continuously adapted to the training examples. Treating each facet as a perceptron cell, a geometric credit assignment provides a local training domain for each perceptron in the network. This makes it possible to apply statistical theorems on the probability of good generalization to each unit on its learning domain, and yields a reliable criterion for perceptron elimination (based on the Vapnik-Chervonenkis dimension). A regularization procedure is also implemented. The model's efficiency is demonstrated on well-known problems such as the two-spirals and waveform benchmarks.
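The decision rule described above can be sketched in a few lines: a point is classified as positive if it lies inside at least one convex polyhedron, where each polyhedron is the intersection of half-spaces defined by its perceptron facets. This is a minimal illustrative sketch, not the paper's algorithm; the function names and the example facets are hypothetical, and the learning, credit-assignment, and regularization steps are omitted.

```python
import numpy as np

def inside_polyhedron(x, facets):
    """True if x satisfies w . x + b >= 0 for every facet (w, b),
    i.e. x lies in the intersection of the facet half-spaces."""
    return all(np.dot(w, x) + b >= 0 for w, b in facets)

def membrane_predict(x, polyhedra):
    """Positive class iff x falls inside the union of the polyhedra."""
    return any(inside_polyhedron(x, facets) for facets in polyhedra)

# Hypothetical example: the unit square [0, 1]^2 as one polyhedron
# bounded by four perceptron facets.
square = [
    (np.array([1.0, 0.0]), 0.0),    # x1 >= 0
    (np.array([-1.0, 0.0]), 1.0),   # x1 <= 1
    (np.array([0.0, 1.0]), 0.0),    # x2 >= 0
    (np.array([0.0, -1.0]), 1.0),   # x2 <= 1
]
print(membrane_predict(np.array([0.5, 0.5]), [square]))  # True
print(membrane_predict(np.array([2.0, 0.5]), [square]))  # False
```

In the full model, new facets and polyhedra would be appended to these lists during learning, and facets whose local training domains carry too few examples (relative to their VC dimension) would be pruned.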
