An interactive system for humanoid robot SHFR-III

Natural interaction between humans and robots is challenging but indispensable. In this article, a human–robot interactive system is designed for the humanoid robot SHFR-III. The system consists of three subsystems: a multi-sensor positioning subsystem, an emotional interaction subsystem, and a dialogue subsystem. The multi-sensor positioning subsystem is designed to improve positioning accuracy; the emotional interaction subsystem uses a bimodal emotion recognition model and a fuzzy emotional decision-making model to recognize the emotions of interaction partners and return expressive feedback; and the dialogue subsystem, equipped with preset personal information, produces responses consistent with that persona and avoids contradictory answers. The experimental results show that the multi-sensor positioning subsystem has good environmental adaptability and positioning accuracy, the emotional interaction subsystem achieves human-like emotional feedback, and the dialogue subsystem produces more natural, logical, and consistent responses.
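The abstract does not reproduce the models themselves. Purely as an illustration of the bimodal pipeline it describes, the sketch below fuses hypothetical speech and facial emotion probabilities by weighted late fusion and applies a simple threshold rule as a toy stand-in for the fuzzy decision-making model; all names, weights, and the label set are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the article gives no implementation details, so
# every function name, weight, and label here is a hypothetical assumption.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

def fuse_bimodal(p_speech: np.ndarray, p_face: np.ndarray,
                 w_speech: float = 0.4, w_face: float = 0.6) -> np.ndarray:
    """Late fusion of per-modality emotion probabilities (weights assumed)."""
    p = w_speech * p_speech + w_face * p_face
    return p / p.sum()

def decide_feedback(p_fused: np.ndarray, threshold: float = 0.5) -> str:
    """Toy stand-in for the fuzzy decision rule: fall back to a neutral
    expression when no single emotion dominates the fused distribution."""
    idx = int(np.argmax(p_fused))
    return EMOTIONS[idx] if p_fused[idx] >= threshold else "neutral"

# Example: hypothetical classifier outputs for one interaction frame
p_speech = np.array([0.6, 0.1, 0.1, 0.2])
p_face = np.array([0.5, 0.2, 0.1, 0.2])
print(decide_feedback(fuse_bimodal(p_speech, p_face)))  # -> "happy"
```

In practice the fuzzy model would map the fused distribution through membership functions and rules rather than a single threshold; the sketch only shows where such a decision stage sits between recognition and expression feedback.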
