The subspace learning algorithm as a formalism for pattern recognition and neural networks

Vector subspaces have been suggested as representations of structured information. In the theory of associative memory and associative information processing, the projection principle and subspaces are used to explain the optimality of associative mappings and novelty filters. These formalisms also seem highly pertinent to neural networks. Building on these operations, the subspace method has been developed into a practical pattern-recognition algorithm. The method is reviewed, and some recent results on image analysis are given.
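The classification rule underlying the subspace method can be sketched as follows: each class is represented by a low-dimensional subspace spanned by the leading principal directions of its training vectors, and a new vector is assigned to the class whose subspace yields the largest squared projection norm. This is a minimal illustrative sketch, not the paper's exact algorithm; the function names and the `dim` hyperparameter are assumptions for the example.

```python
import numpy as np

def fit_class_subspaces(X_by_class, dim):
    """For each class, compute an orthonormal basis of the leading
    principal subspace of its training vectors (sketch of the
    classical subspace method; `dim` is an assumed hyperparameter)."""
    bases = {}
    for label, X in X_by_class.items():
        # Rows of X are sample vectors; SVD gives the principal directions.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases[label] = Vt[:dim]  # (dim, n_features), orthonormal rows
    return bases

def classify(x, bases):
    """Assign x to the class whose subspace captures the largest
    squared projection norm ||P_j x||^2 = sum_i (u_i . x)^2."""
    return max(bases, key=lambda label: np.sum((bases[label] @ x) ** 2))

# Toy usage: two classes lying along near-orthogonal directions.
rng = np.random.default_rng(0)
Xa = np.outer(rng.normal(size=20), [1.0, 0.0, 0.0])
Xb = np.outer(rng.normal(size=20), [0.0, 1.0, 0.0])
bases = fit_class_subspaces({"a": Xa, "b": Xb}, dim=1)
print(classify(np.array([0.9, 0.1, 0.0]), bases))
```

A vector close to the first axis projects almost entirely onto class "a"'s subspace, so it is assigned to "a"; the projection principle thus turns classification into a comparison of projection lengths.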