Emotion Detection through Facial Feature Recognition

People share a common and fundamental set of emotions that are exhibited through consistent facial expressions. An algorithm that detects, extracts, and evaluates these facial expressions allows for automatic recognition of human emotion in images. Faces and facial features are extracted from images using Viola-Jones cascade object detectors and Harris corner keypoints, and a multi-class predictor for the seven basic human facial expressions is trained using principal component analysis (PCA), linear discriminant analysis (LDA), histogram-of-oriented-gradients (HOG) feature extraction, and support vector machines (SVMs). This approach first attempts classification by projecting a test image onto the eigenvectors of a basis calculated specifically to separate one emotion from the others. This initial step works well for five of the seven emotions, the ones that are easier to distinguish. For the remaining cases, computationally slower HOG feature extraction is performed and a class prediction is made with a trained SVM. Depending on the testing set and test emotions, the predictor achieves reasonable accuracy. Contempt, a very difficult-to-distinguish emotion, is included as a target emotion, and the run-time of the hybrid approach is 20% faster than using the HOG approach exclusively.
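
As a rough sketch of how such a hybrid pipeline can be assembled, the Python fragment below uses OpenCV's bundled Haar cascade for the Viola-Jones stage and scikit-learn/scikit-image for the PCA, LDA, HOG, and SVM stages; it is illustrative only, not the paper's implementation. The 64x64 crop size, 50 PCA components, and the 0.8 confidence threshold for falling back to the slower HOG path are assumptions, and the Harris-corner facial-feature step is omitted for brevity.

    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    # Stage 1: Viola-Jones face detection (OpenCV ships a pre-trained Haar cascade).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_face(gray_img, size=(64, 64)):
        faces = cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]                  # use the first detected face
        return cv2.resize(gray_img[y:y + h, x:x + w], size)

    # Stage 2 models: PCA reduces dimensionality and LDA finds projections that
    # separate the emotion classes; an SVM over HOG features is the slow path.
    pca = PCA(n_components=50)
    lda = LinearDiscriminantAnalysis()
    svm = SVC(kernel="linear")

    def fit(X_train, y_train):
        # X_train: (n_samples, 64*64) flattened face crops; y_train: emotion labels.
        Z = pca.fit_transform(X_train)
        lda.fit(Z, y_train)
        H = np.array([hog(x.reshape(64, 64)) for x in X_train])
        svm.fit(H, y_train)

    def predict(face, confidence=0.8):
        # Fast path: project onto the PCA/LDA basis and accept a confident vote.
        z = pca.transform(face.reshape(1, -1))
        probs = lda.predict_proba(z)[0]
        if probs.max() >= confidence:
            return lda.classes_[probs.argmax()]
        # Slow path: compute HOG features and classify with the trained SVM.
        return svm.predict(hog(face).reshape(1, -1))[0]

Under these assumptions, the speed advantage comes from the routing: the cheap eigenspace projection handles the confident cases, and the expensive HOG-plus-SVM path runs only when needed, which is consistent with the reported 20% run-time reduction over using HOG alone.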