Two-channel convolutional neural network for facial expression recognition using facial parts

This paper proposes the design of a facial expression recognition system based on a deep convolutional neural network that uses facial parts. The work presents a solution for facial expression recognition that combines algorithms for face detection, feature extraction and classification. The proposed method builds a two-channel convolutional neural network model in which facial parts are used as inputs: the extracted eye region is the input to the first channel, while the mouth region is the input to the second channel. Feature information from both channels converges in a fully connected layer, which learns global information from these local features and is then used for expression classification. Experiments are carried out on the Japanese Female Facial Expression (JAFFE) dataset and the extended Cohn-Kanade (CK+) dataset to determine the recognition accuracy of the proposed facial-parts-based expression recognition system. The results show that the system achieves state-of-the-art classification accuracy of 97.6% on JAFFE and 95.7% on CK+ when compared to other methods.
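To make the two-channel architecture concrete, the following is a minimal sketch in PyTorch, assuming grayscale eye and mouth crops of size 32x64 and seven expression classes; the layer counts, filter sizes and crop dimensions are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class TwoChannelCNN(nn.Module):
    """Hypothetical two-channel CNN: one convolutional branch for the
    eye-region crop and one for the mouth-region crop; their features
    are concatenated and classified by fully connected layers."""

    def __init__(self, num_classes=7):
        super().__init__()
        # Channel 1: eye-region crop (assumed 1 x 32 x 64 grayscale input)
        self.eye_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Channel 2: mouth-region crop (assumed 1 x 32 x 64 grayscale input)
        self.mouth_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layers learn global information from the
        # concatenated local features and output expression scores.
        feat_dim = 64 * 8 * 16  # per branch: 32x64 input after two 2x2 poolings
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, eyes, mouth):
        f_eyes = self.eye_branch(eyes).flatten(1)
        f_mouth = self.mouth_branch(mouth).flatten(1)
        fused = torch.cat([f_eyes, f_mouth], dim=1)  # channel fusion
        return self.classifier(fused)


# Example forward pass with dummy eye and mouth crops
model = TwoChannelCNN(num_classes=7)
eyes = torch.randn(4, 1, 32, 64)   # batch of eye-region crops
mouth = torch.randn(4, 1, 32, 64)  # batch of mouth-region crops
logits = model(eyes, mouth)
print(logits.shape)  # torch.Size([4, 7])

The key design point mirrored here is that each facial part is processed by its own convolutional channel, and only the fully connected stage sees the combined (global) feature vector used for classification.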