Unsupervised Learning Of Invariant Representations Of Faces Through Temporal Association

The appearance of an object or a face changes continuously as the observer moves through the environment or as a face changes expression or pose. Recognizing an object or a face despite these image changes is a challenging problem for computer vision systems, yet we perform the task quickly and easily. This simulation investigates the ability of an unsupervised learning mechanism to acquire representations that are tolerant to such changes in the image. The learning mechanism finds these representations by capturing temporal relationships between 2-D patterns. Previous models of temporal association learning have used idealized input representations. The input to this model consists of gray-level images of faces. A two-layer network learned face representations that incorporated changes of pose up to ±30°. A second network learned representations that were independent of facial expression.
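To make the idea of capturing temporal relationships concrete, the following is a minimal sketch of a temporal-association (trace-rule) Hebbian update of the kind often used for this purpose; the paper's exact learning rule, network architecture, and parameters are not given in this abstract, so every name and value below is an illustrative assumption rather than the authors' method.

```python
# Minimal sketch of temporal association learning via an activity trace.
# Assumptions (not from the abstract): a single linear layer, a Foldiak-style
# trace rule, and the image size, unit count, learning rate, and decay below.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 64 * 64      # assumed size of a flattened gray-level face image
n_outputs = 16          # assumed number of output units
eta = 0.01              # assumed learning rate
delta = 0.5             # assumed trace decay: mixes current and past activity

W = rng.normal(scale=0.01, size=(n_outputs, n_inputs))
trace = np.zeros(n_outputs)

def present_sequence(images):
    """Update weights over a temporal sequence of images of one face.

    Successive frames (e.g. a face rotating in depth or changing expression)
    usually show the same identity, so the activity trace associates the
    different views with the same output units.
    """
    global W, trace
    for x in images:                                  # x: flattened image
        y = W @ x                                     # output activity
        trace = (1 - delta) * y + delta * trace      # temporally smoothed activity
        W += eta * np.outer(trace, x)                 # Hebbian update driven by the trace
        W /= np.linalg.norm(W, axis=1, keepdims=True) # keep weight vectors bounded

# Toy usage: one pass over a short sequence of random "frames" of a single face
frames = [rng.random(n_inputs) for _ in range(5)]
present_sequence(frames)
```

The key design point this sketch illustrates is that the weight update is driven by a temporally smoothed output activity rather than the instantaneous response, so views that occur close together in time come to activate the same units.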