Teacher-Directed Learning with Mixture of Experts for View-Independent Face Recognition

We propose two new models for view-independent face recognition, which falls under the category of multi-view approaches. We use the so-called "mixture of experts" (MOE), in which the problem space is divided into several subspaces for the experts, and the outputs of the experts are then combined by a gating network to form the final output. Our focus is on how the face space is partitioned by the MOE. In our first model, the experts of the MOE structure are not biased in any way to prefer one class of faces over another; instead, the gating network learns a partition of the input face space and trusts one expert in each of these partitions. We call this method "self-directed partitioning". In our second model, we direct the experts to specialize in predetermined areas of the face space by developing teacher-directed learning methods for MOE. In this model, by including teacher information about the pose of the input face image in the training phase of the networks, each expert is directed to learn faces of a specific pose class; we therefore refer to this as "teacher-directed partitioning". Thus, in our second model, instead of allowing the MOE to partition the face space in its own way, the space is quantized according to a number of predetermined views, and the MOE is trained to adapt to this partitioning. The experimental results support our claim that directing the mixture of experts toward a predetermined partitioning of the face space is a more beneficial way of using MOE for view-independent face recognition.
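
To make the two partitioning schemes concrete, the sketch below is a minimal NumPy mixture of experts written under our own assumptions (linear experts, a linear softmax gating network, a squared-error surrogate for the expert updates, and hypothetical layer sizes); it is not the paper's implementation. The only difference between the two schemes is where the expert responsibilities come from: in self-directed partitioning they are inferred from how well each expert explains the identity label, while in teacher-directed partitioning the pose label itself supplies them, forcing each expert to specialize in one pose class.

```python
# Minimal sketch of self-directed vs. teacher-directed MOE training.
# Illustrative assumptions only: linear experts, linear gating,
# squared-error expert updates, and hypothetical dimensions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class MixtureOfExperts:
    def __init__(self, n_features, n_classes, n_experts, lr=0.1):
        # One linear expert per (assumed) pose class; linear gating network.
        self.W = rng.normal(0.0, 0.01, (n_experts, n_features, n_classes))
        self.V = rng.normal(0.0, 0.01, (n_features, n_experts))
        self.lr = lr

    def forward(self, x):
        # Expert outputs: (n_experts, n_classes); gate weights: (n_experts,)
        expert_out = softmax(np.einsum('f,efc->ec', x, self.W), axis=-1)
        gate = softmax(x @ self.V)
        # Final output is the gate-weighted combination of expert outputs.
        return gate @ expert_out, expert_out, gate

    def train_step(self, x, y_onehot, pose_onehot=None):
        y_hat, expert_out, gate = self.forward(x)
        if pose_onehot is None:
            # Self-directed partitioning: credit each expert by how well
            # it explains the identity target (posterior responsibilities).
            resp = gate * np.exp(-np.sum((expert_out - y_onehot) ** 2, axis=1))
            resp = resp / (resp.sum() + 1e-12)
        else:
            # Teacher-directed partitioning: the pose label fixes which
            # expert must take responsibility for this face image.
            resp = pose_onehot
        # Responsibility-weighted gradient steps (squared-error surrogate).
        for e in range(self.W.shape[0]):
            err = expert_out[e] - y_onehot
            self.W[e] -= self.lr * resp[e] * np.outer(x, err)
        # Pull the gate toward the responsibilities (cross-entropy gradient).
        self.V -= self.lr * np.outer(x, gate - resp)
```

A usage example under the same hypothetical sizes (64-dimensional face features, 10 identities, 3 pose classes):

```python
moe = MixtureOfExperts(n_features=64, n_classes=10, n_experts=3)
x = rng.normal(size=64)                   # a face feature vector
y = np.eye(10)[4]                         # one-hot identity label
pose = np.eye(3)[1]                       # one-hot pose class (teacher signal)
moe.train_step(x, y, pose_onehot=pose)    # teacher-directed update
moe.train_step(x, y)                      # self-directed update
```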