A Pre-Processing Technique Based on the Wavelet Transform for Linear Autoassociators with Applications to Face Recognition

To improve the performance of a linear autoassociator (a neural network model), we explore several pre-processing techniques. The gist of our approach is to store, in addition to the original patterns, one or several pre-processed (i.e., filtered) versions of the patterns in the network. First, we compare the performance of several pre-processing techniques (a plain autoassociator as a control, a Sobel operator, a Canny-Deriche operator, and a multiscale Canny-Deriche operator) on a pattern completion task using a noise-degraded version of a face stored in the autoassociator. The multiscale Canny-Deriche operator gives the best performance of all models. Second, we compare the multiscale Canny-Deriche operator with the control on a pattern completion task involving noise-degraded versions (at several noise levels) of learned faces and of new faces of the same race as, or a different race than, the learned faces. In all cases, the multiscale Canny-Deriche operator performs significantly better than the control.
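The storage scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a pseudoinverse-trained linear autoassociator, uses a Sobel gradient filter as a stand-in for the edge detectors compared in the paper, and substitutes random arrays for face images; all sizes and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Hypothetical stand-ins for face images: random 16x16 "patterns".
faces = [rng.random((16, 16)) for _ in range(5)]

def sobel_filter(img):
    # Gradient magnitude via Sobel operators (one of the
    # pre-processing filters compared in the paper).
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return np.hypot(gx, gy)

# Store each original face plus its filtered version, flattened to vectors.
patterns = []
for f in faces:
    patterns.append(f.ravel())
    patterns.append(sobel_filter(f).ravel())
X = np.stack(patterns, axis=1)  # columns are the stored patterns

# Pseudoinverse (Widrow-Hoff-style) learning: W projects any input
# onto the subspace spanned by the stored patterns.
W = X @ np.linalg.pinv(X)

# Pattern completion: recall from a noise-degraded learned face.
noisy = faces[0].ravel() + 0.3 * rng.standard_normal(faces[0].size)
recalled = W @ noisy
```

Because `W` is a projector onto the span of the stored patterns, most of the added noise (which lies largely outside that low-dimensional subspace) is removed at recall, so `recalled` is closer to the original face than `noisy` is.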