Résumé: ACC is a self-organizing neural network that produces a map of the submanifold of a cloud of high-dimensional, nonlinearly dependent data. The principle is to build a relation between an input space (the data) and an output space (the map) by means of a set of neurons, each holding two weight vectors: one for the input and one for the output. After the distribution has been quantized by the input vectors, the distances between these vectors are copied into the output space, while favouring short output distances. One then obtains the unfolding of the data manifold together with a dimension reduction. After learning, the same algorithm can be used to project any point of the distribution continuously, with excellent interpolation and extrapolation characteristics. ACC can be applied in several domains such as data fusion, graph matching, analysis and monitoring of industrial processes, fault detection in machines, concept mapping, and adaptive routing in telecommunications.

Abstract: CCA is a self-organizing neural network which gives a revealing low-dimensional mapping of the submanifold of a high-dimensional, nonlinearly related data set. The principle is to build a relation between an input space (the data) and an output space (the expected mapping) through a set of neurons, each having two weight vectors: one for the input and the other for the output. After driving the input vectors to a vector quantization of the input data set, the distances between input vectors are copied into the output space, while favouring short-range output distances. One then obtains the unfolding of the data submanifold together with a dimension reduction. After learning, the same projection algorithm can be used to map any point of the distribution continuously, leading to excellent interpolation and extrapolation properties, which is an original result. CCA can be used in several domains such as data fusion, graph matching, industrial process monitoring and analysis, fault detection in devices, concept mapping, and adaptive routing in telecommunications.
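The distance-matching principle summarized above can be illustrated with a short NumPy sketch. The function name `cca_unfold`, the step-function neighbourhood weighting and the annealing schedules below are illustrative assumptions, not the authors' implementation; the sketch covers only the output-space unfolding stage and omits the vector quantization and the continuous projection of new points mentioned in the abstract.

```python
import numpy as np

def cca_unfold(codebook, out_dim=2, n_epochs=50, alpha0=0.5, seed=0):
    """Sketch of a CCA-style unfolding step (hypothetical helper).

    `codebook` is an (n, d) array of input weight vectors, typically
    obtained beforehand by vector quantization of the data set.
    Returns an (n, out_dim) array of output weight vectors.
    """
    rng = np.random.default_rng(seed)
    n = codebook.shape[0]
    # Inter-unit distances in the input space; these are the distances
    # to be reproduced in the output space.
    Dx = np.linalg.norm(codebook[:, None, :] - codebook[None, :, :], axis=-1)
    Y = 1e-2 * rng.standard_normal((n, out_dim))    # random initial output vectors

    for epoch in range(n_epochs):
        t = epoch / max(n_epochs - 1, 1)
        alpha = alpha0 * (0.01 ** t)                # decreasing adaptation gain
        lam = Dx.max() * (0.01 ** t)                # shrinking neighbourhood radius
        for i in rng.permutation(n):
            diff = Y - Y[i]                         # vectors from unit i to all units
            Dy = np.linalg.norm(diff, axis=1)
            Dy[i] = 1.0                             # dummy value, masked out below
            # Weighting that favours short output distances: only units whose
            # output distance to i is below lam are corrected.
            w = (Dy <= lam).astype(float)
            w[i] = 0.0
            # Move every unit j so that its output distance to i approaches Dx[i, j].
            Y += (alpha * w * (Dx[i] - Dy) / Dy)[:, None] * diff
    return Y
```

On a simple test such as a helix embedded in three dimensions, this loop flattens the codebook onto the plane; the short-range weighting is what allows the manifold to unfold rather than being crushed by long-range distance constraints.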