Emotional computing based on cross-modal fusion and edge network data incentive

In large-scale emotional-event and complex emotion-recognition applications, the primary challenge is improving recognition accuracy, computational efficiency, and quality of user experience. To address these problems, this paper proposes an emotional computing algorithm based on cross-modal fusion and edge-network data incentives. To improve the efficiency of emotional data collection and the accuracy of emotion recognition, a deep cross-modal data fusion method is designed: through non-linear cross-layer mapping, the deep fusion captures the semantic deviation between modalities and uses it to guide the fusion. To improve computational efficiency and quality of user experience, a data incentive algorithm for edge networks is designed, based on the overlapping delay gaps and incentive weights of large-scale data collection and error detection. Finally, the edge network is mapped from the set of emotional data elements incentivized by heterogeneous emotional events into a finite data-set space in which all emotional events and emotional data elements are balanced, and on this basis an emotional computing algorithm based on cross-modal data fusion is constructed. Simulation experiments and theoretical analysis show that the proposed algorithm outperforms a stand-alone edge-network data incentive algorithm and a stand-alone cross-modal data fusion algorithm in recognition accuracy, complex-emotion recognition efficiency, computational efficiency, and delay.
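The two mechanisms above can be sketched in miniature. The paper does not give the concrete formulas, so the following is a minimal illustrative sketch under stated assumptions: the non-linear cross-layer mapping is modeled as stacked `tanh` projections where each modality's next layer also receives the other modality, "semantic deviation" is taken to be the cosine distance between the final per-modality representations, and the incentive weight of an edge node is a toy heuristic that rewards data accuracy and penalizes a larger overlapping delay gap. The weight matrices, dimensions, and the `incentive_weights` formula are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_layer_fusion(x_a, x_v, dims=(32, 16)):
    """Fuse two modality feature vectors (e.g. audio and video) via
    non-linear cross-layer mappings: at each layer, each modality is
    projected together with the other modality and passed through tanh.
    Returns the fused vector and the semantic deviation (cosine
    distance) between the two final representations. The random weight
    matrices are placeholders for learned parameters."""
    h_a, h_v = x_a, x_v
    for d in dims:
        W_a = rng.standard_normal((d, h_a.size)) / np.sqrt(h_a.size)
        U_a = rng.standard_normal((d, h_v.size)) / np.sqrt(h_v.size)
        W_v = rng.standard_normal((d, h_v.size)) / np.sqrt(h_v.size)
        U_v = rng.standard_normal((d, h_a.size)) / np.sqrt(h_a.size)
        # cross-layer mapping: each modality also sees the other one
        h_a, h_v = (np.tanh(W_a @ h_a + U_a @ h_v),
                    np.tanh(W_v @ h_v + U_v @ h_a))
    cos = h_a @ h_v / (np.linalg.norm(h_a) * np.linalg.norm(h_v))
    deviation = 1.0 - cos  # semantic deviation between modalities
    fused = np.concatenate([h_a, h_v])
    return fused, deviation

def incentive_weights(accuracy, delay_gap):
    """Toy edge-network incentive weights: reward accurate data sources
    and penalize a larger overlapping delay gap; normalized to sum to 1."""
    raw = np.asarray(accuracy) / (1.0 + np.asarray(delay_gap))
    return raw / raw.sum()

fused, dev = cross_layer_fusion(rng.standard_normal(64), rng.standard_normal(48))
w = incentive_weights(accuracy=[0.9, 0.8, 0.6], delay_gap=[0.1, 0.5, 0.2])
```

The balanced finite data-set space the abstract mentions would correspond here to applying `incentive_weights` over all contributing edge nodes before fusion, so that no single emotional event or data element dominates the fused representation.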
