Model of Facial Emotion Expressions Based on Grouping Classes of Feature Vectors

The characteristic forms of facial expressions of human emotional states generalize well across individuals because they arise from the shared physiological structure and arrangement of the muscles that form the human face. This commonality is one of the main reasons emotions are displayed on the face in similar ways across people. From the nature and form of facial expressions it is therefore possible, with high probability, to determine a person's emotional state, subject to some correction for the cultural characteristics and traditions of particular groups. Building on the existence of these common mimic forms of emotional display, an approach is proposed for constructing a model that recognizes emotional expressions on the human face with relatively low requirements on the photo and video capture equipment. The model is based on hyperplane classification of the mimic displays of the major emotional states. One of the main advantages of the proposed approach is its low computational complexity, which makes it possible to implement a system for recognizing changes in a person's emotional state from facial expressions without specialized equipment. In addition, the resulting model achieves adequate recognition accuracy even with low requirements on image quality, which considerably broadens its scope of practical application. Examples include monitoring a vehicle driver, an operator of complex production equipment, and other automatic visual surveillance systems. The set of recognized emotional states is defined according to the task at hand, which makes it possible to focus facial-expression recognition on grouping the characteristic structural displays according to the selected set of distinguishing features.
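As an illustration only (not the authors' implementation), a minimal sketch of hyperplane classification of facial-expression feature vectors is shown below. It assumes feature vectors have already been extracted elsewhere (e.g. as normalized distances between facial landmarks); the emotion labels, vector dimensionality, and the use of scikit-learn's LinearSVC are all illustrative assumptions, and synthetic random data stands in for real measurements.

```python
# Hypothetical sketch: hyperplane (linear) classification of facial-expression
# feature vectors. Real feature extraction (e.g. landmark distances) is assumed
# to happen elsewhere; synthetic data stands in for it here.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Assumed setup: 200 samples per emotion class, each described by a
# 20-dimensional feature vector (e.g. normalized landmark distances).
emotions = ["neutral", "happiness", "surprise"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(200, 20))
               for i, _ in enumerate(emotions)])
y = np.repeat(np.arange(len(emotions)), 200)

# One-vs-rest linear classifier: each emotion class is separated from the
# others by a hyperplane w·x + b = 0, which keeps inference cost low.
clf = LinearSVC(dual=False).fit(X, y)

# Classify a new feature vector.
sample = rng.normal(loc=1.0, scale=1.0, size=(1, 20))
print(emotions[clf.predict(sample)[0]])
```

Because inference reduces to a handful of dot products per class, a classifier of this kind can run on modest hardware, which is consistent with the low-computational-complexity claim above.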
