Blind signal flattening using warping neural modules

This paper presents a new blind transformation algorithm that flattens (makes uniform) the probability density function of a random process. The same algorithm can also be used to find a uniform hashing map between a set of source symbols and a set of associated ones. The transformation is a flexible non-linear parametric function whose parameters are continuously adapted over time so as to maximize the entropy of the transformed random process. In a neural context, this function represents the input-output mapping performed by a single neuron endowed with functional links.
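As a rough illustration of the principle (not the authors' exact warping module), the sketch below adapts a single sigmoid-shaped unit y = sigmoid(w*x + b) with an infomax-style stochastic gradient on the output entropy. Because the output is bounded, maximizing its entropy drives it toward a uniform density, so the learned map approximates the cumulative distribution of the input and the transformed signal comes out flat. The function name, learning rate, and Gaussian test source are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def flatten_online(x_stream, lr=0.01):
    """Adapt (w, b) sample by sample to maximize output entropy.

    Uses the single-unit infomax rule: stochastic gradient ascent on
    log|dy/dx| = log|w| + log(y) + log(1 - y).
    Returns the flattened outputs and the final parameters.
    (Hypothetical sketch; not the paper's exact adaptation scheme.)
    """
    w, b = 1.0, 0.0
    ys = []
    for x in x_stream:
        y = sigmoid(w * x + b)
        # Gradient of log|dy/dx| with respect to w and b
        w += lr * (1.0 / w + x * (1.0 - 2.0 * y))
        b += lr * (1.0 - 2.0 * y)
        ys.append(y)
    return np.array(ys), (w, b)

# Example: a Gaussian source; after adaptation the histogram of y is roughly flat.
x = rng.normal(loc=2.0, scale=1.5, size=20000)
y, (w, b) = flatten_online(x)
hist, _ = np.histogram(y[-5000:], bins=10, range=(0.0, 1.0), density=True)
print("learned w, b:", round(w, 3), round(b, 3))
print("density over 10 bins (close to 1 when flat):", np.round(hist, 2))
```

The paper's warping module is richer than a plain sigmoid (a flexible parametric function with functional links), but the same entropy-maximization logic applies: the density of the output is flat exactly when the map matches the input's cumulative distribution.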
