Recent advancements in multimodal human–robot interaction

Robotics has advanced significantly over the years, and human–robot interaction (HRI) now plays an important role in delivering a good user experience, reducing laborious tasks, and increasing public acceptance of robots. New HRI approaches are needed to drive the evolution of robots, and a more natural and flexible interaction style is clearly the most crucial. As a newly emerging approach, multimodal HRI enables people to communicate with a robot through multiple modalities, including voice, image, text, eye movement, and touch, as well as bio-signals such as EEG and ECG. It is a broad field closely related to cognitive science, ergonomics, multimedia technology, and virtual reality, with numerous applications emerging each year. However, little research has summarized the current development and future trends of multimodal HRI. To this end, this paper systematically reviews the state of the art of multimodal HRI and its applications by summarizing the latest research articles in the field. The development of research on both input and output signals is also covered.

[1]  Álvaro Castro González,et al.  Active learning based on computer vision and human-robot interaction for the user profiling and behavior personalization of an autonomous social robot , 2023, Eng. Appl. Artif. Intell..

[2]  Guoqian Jiang,et al.  Multimodal Multitask Neural Network for Motor Imagery Classification With EEG and fNIRS Signals , 2022, IEEE Sensors Journal.

[3]  S. Paszkiel,et al.  BCI Wheelchair Control Using Expert System Classifying EEG Signals Based on Power Spectrum Estimation and Nervous Tics Detection , 2022, Applied Sciences.

[4]  M. Múnera,et al.  Biomechanical Effects of Adding an Ankle Soft Actuation in a Unilateral Exoskeleton , 2022, Biosensors.

[5]  Archan Misra,et al.  COSM2IC: Optimizing Real-Time Multi-Modal Instruction Comprehension , 2022, IEEE Robotics and Automation Letters.

[6]  A. Kakarountas,et al.  Methodology for Selecting the Appropriate Electric Motor for Robotic Modular Systems for Lower Extremities , 2022, Healthcare.

[7]  B. Yang,et al.  Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition , 2022, IEEE Transactions on Cognitive and Developmental Systems.

[8]  M. Hoffmann,et al.  A connectionist model of associating proprioceptive and tactile modalities in a humanoid robot , 2022, 2022 IEEE International Conference on Development and Learning (ICDL).

[9]  M. Mocan,et al.  Home-Based Robotic Upper Limbs Cardiac Telerehabilitation System , 2022, International journal of environmental research and public health.

[10]  Óscar Martínez Mozos,et al.  The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized , 2022, ArXiv.

[11]  Xiaohui Yang,et al.  STMMI: A Self-Tuning Multi-Modal Fusion Algorithm Applied in Assist Robot Interaction , 2022, Scientific Programming.

[12]  Md. Zair Hussain,et al.  Collaborative analysis of audio-visual speech synthesis with sensor measurements for regulating human–robot interaction , 2022, International Journal of System Assurance Engineering and Management.

[13]  K. Sasaki,et al.  Assessment of Socket Pressure during Walking in Rapid Fit Prosthetic Sockets , 2022, Sensors.

[14]  B. R. Barricelli,et al.  A Multi-Modal Approach to Creating Routines for Smart Speakers , 2022, AVI.

[15]  Changjoo Nam,et al.  Generation of co-speech gestures of robot based on morphemic analysis , 2022, Robotics Auton. Syst..

[16]  A. Al-Hamadi,et al.  Face Recognition and Tracking Framework for Human–Robot Interaction , 2022, Applied Sciences.

[17]  M. Haseyama,et al.  Human Emotion Recognition Using Multi-Modal Biological Signals Based On Time Lag-Considered Correlation Maximization , 2022, ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[18]  Zhaozheng Yin,et al.  Real-Time Multi-modal Human-Robot Collaboration Using Gestures and Speech , 2022, Journal of Manufacturing Science and Engineering.

[19]  U. Maniscalco,et al.  Bidirectional Multi-modal Signs of Checking Human-Robot Engagement and Interaction , 2022, International Journal of Social Robotics.

[20]  Luis F. C. Figueredo,et al.  Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers , 2022, 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[21]  Lihui Wang,et al.  Multimodal data driven robot control for human-robot collaborative assembly , 2022, Journal of Manufacturing Science and Engineering.

[22]  A. Al-Hamadi,et al.  Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction , 2022, Sensors.

[23]  Mingchuan Zhang,et al.  Intelligent Perception Recognition of Multi-modal EMG Signals Based on Machine Learning , 2022, BIC.

[24]  Wendong Wang,et al.  Motion intensity modeling and trajectory control of upper limb rehabilitation exoskeleton robot based on multi-modal information , 2022, Complex & Intelligent Systems.

[25]  A. Pedrocchi,et al.  XAI for myo-controlled prosthesis: Explaining EMG data for hand gesture classification , 2022, Knowl. Based Syst..

[26]  Cynthia Breazeal,et al.  Beyond the Words: Analysis and Detection of Self-Disclosure Behavior during Robot Positive Psychology Interaction , 2021, 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021).

[27]  Hao Tang,et al.  Multi-Modal Perception Attention Network with Self-Supervised Learning for Audio-Visual Speaker Tracking , 2021, AAAI.

[28]  A. Kappas,et al.  How does Modality Matter? Investigating the Synthesis and Effects of Multi-modal Robot Behavior on Social Intelligence , 2021, International Journal of Social Robotics.

[29]  Luka Peternel,et al.  Model Predictive Control with Gaussian Processes for Flexible Multi-Modal Physical Human Robot Interaction , 2021, 2022 International Conference on Robotics and Automation (ICRA).

[30]  Hui Zeng,et al.  Construction of multi-modal perception model of communicative robot in non-structural cyber physical system environment based on optimized BT-SVM model , 2021, Comput. Commun..

[31]  Fazel Ansari,et al.  Knowledge-Based Digital Twin for Predicting Interactions in Human-Robot Collaboration , 2021, 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA ).

[32]  G. Cheng,et al.  Interactive Force Control Based on Multimodal Robot Skin for Physical Human−Robot Collaboration , 2021, Adv. Intell. Syst..

[33]  Ruth Stock-Homburg,et al.  Survey of Emotions in Human–Robot Interactions: Perspectives from Robotic Psychology on 20 Years of Research , 2021, International Journal of Social Robotics.

[34]  Jongbaeg Kim,et al.  Recent Progress in Flexible Tactile Sensors for Human‐Interactive Systems: From Sensors to Advanced Applications , 2021, Advanced materials.

[35]  Fuchun Sun,et al.  Multi-modal broad learning for material recognition , 2021, Cognitive Computation and Systems.

[36]  Ali Akbar Shaikh,et al.  A review of multimodal human activity recognition with special emphasis on classification, applications, challenges and future directions , 2021, Knowl. Based Syst..

[37]  E. Mayo-Wilson,et al.  The PRISMA 2020 statement: an updated guideline for reporting systematic reviews , 2021, Systematic Reviews.

[38]  Giorgos Tziafas,et al.  Few-Shot Visual Grounding for Natural Human-Robot Interaction , 2021, 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC).

[39]  Afsaneh Doryab,et al.  Adaptive Humanoid Robots for Pain Management in Children , 2021, HRI.

[40]  Thomas R. Groechel,et al.  Kinesthetic Curiosity: Towards Personalized Embodied Learning with a Robot Tutor Teaching Programming in Mixed Reality , 2021, ISER.

[41]  Gokhan Ince,et al.  An audiovisual interface-based drumming system for multimodal human–robot interaction , 2020, Journal on Multimodal User Interfaces.

[42]  Xiaogang Liu,et al.  Recent Developments in Prosthesis Sensors, Texture Recognition, and Sensory Stimulation for Upper Limb Prostheses , 2020, Annals of biomedical engineering.

[43]  Abolfazl Mohebbi,et al.  Human-Robot Interaction in Rehabilitation and Assistance: a Review , 2020, Current Robotics Reports.

[44]  Holger Voos,et al.  A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles , 2020, J. Imaging.

[45]  Janne M. Hahne,et al.  Longitudinal Case Study of Regression-Based Hand Prosthesis Control in Daily Life , 2020, Frontiers in Neuroscience.

[46]  Beno Benhabib,et al.  User Affect Elicitation with a Socially Emotional Robot , 2020, Robotics.

[47]  Chris Chesher,et al.  Preparing for smart voice assistants: Cultural histories and media innovations , 2020, New Media Soc..

[48]  Nicolas Vignais,et al.  Controlling an upper-limb exoskeleton by EMG signal while carrying unknown load , 2020, 2020 IEEE International Conference on Robotics and Automation (ICRA).

[49]  Shuang Lu,et al.  Review of Interfaces for Industrial Human-Robot Interaction , 2020, Current Robotics Reports.

[50]  Yadong Liu,et al.  Computer vision technology in agricultural automation —A review , 2020 .

[51]  Jinguo Liu,et al.  Hand gesture recognition using multimodal data fusion and multiscale parallel convolutional neural network for human–robot interaction , 2020, Expert Syst. J. Knowl. Eng..

[52]  Silvia Rossi,et al.  Emotional and Behavioural Distraction by a Social Robot for Children Anxiety Reduction During Vaccination , 2020, Int. J. Soc. Robotics.

[53]  Ho Seok Ahn,et al.  Hospital Receptionist Robot v2: Design for Enhancing Verbal Interaction with Social Skills , 2019, 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).

[54]  Ping Luo,et al.  PolarMask: Single Shot Instance Segmentation With Polar Representation , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[55]  Sharifalillah Nordin,et al.  Voice Control Intelligent Wheelchair Movement Using CNNs , 2019, 2019 1st International Conference on Artificial Intelligence and Data Sciences (AiDAS).

[56]  T. Velnar,et al.  The Importance and Role of Proprioception in the Elderly: a Short Review , 2019, Materia socio-medica.

[57]  Xiangpeng Liu,et al.  Common Sensors in Industrial Robots: A Review , 2019, Journal of Physics: Conference Series.

[58]  Bin Fang,et al.  Skill learning for human-robot interaction using wearable device , 2019, Tsinghua Science and Technology.

[59]  Ram Avtar Jaswal,et al.  Development of EMG Controlled Electric Wheelchair Using SVM and kNN Classifier for SCI Patients , 2019, Communications in Computer and Information Science.

[60]  Jihong Zhu,et al.  A collaborative robot for the factory of the future: BAZAR , 2019, The International Journal of Advanced Manufacturing Technology.

[61]  Mehmet Erkan Kutuk,et al.  Design of a robot-assisted exoskeleton for passive wrist and forearm rehabilitation , 2019, Mechanical Sciences.

[62]  Oscar Chuy,et al.  Control and Evaluation of a Motorized Attendant Wheelchair With Haptic Interface , 2019 .

[63]  Walid Zgallai,et al.  Deep Learning AI Application to an EEG driven BCI Smart Wheelchair , 2019, 2019 Advances in Science and Engineering Technology International Conferences (ASET).

[64]  Goldie Nejat,et al.  How Robots Influence Humans: A Survey of Nonverbal Communication in Social Human–Robot Interaction , 2019, International Journal of Social Robotics.

[65]  H. Gunes,et al.  Computational Analysis of Affect, Personality, and Engagement in Human–Robot Interactions ⁎ ⁎The research reported in this chapter was completed while O. Celiktutan and E. Sariyanidi were with the Computer Laboratory, University of Cambridge, United Kingdom. , 2018 .

[66]  Rahim Mutlu,et al.  Reusable Flexible Concentric Electrodes Coated With a Conductive Graphene Ink for Electrotactile Stimulation , 2018, Front. Bioeng. Biotechnol..

[67]  Xin Huang,et al.  Research on multimodal human-robot interaction based on speech and gesture , 2018, Comput. Electr. Eng..

[68]  Farhat Fnaiech,et al.  A facial expression controlled wheelchair for people with disabilities , 2018, Comput. Methods Programs Biomed..

[69]  Erik Scheme,et al.  Real-time, simultaneous myoelectric control using a convolutional neural network , 2018, PloS one.

[70]  Agnès Roby-Brami,et al.  Movement-Based Control for Upper-Limb Prosthetics: Is the Regression Technique the Key to a Robust and Accurate Control? , 2018, Front. Neurorobot..

[71]  Carlos Carrascosa,et al.  A new emotional robot assistant that facilitates human interaction and persuasion , 2018, Knowledge and Information Systems.

[72]  Dilek Z. Hakkani-Tür,et al.  Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems , 2018, NAACL.

[73]  Siddhartha S. Srinivasa,et al.  Natural language instructions for human–robot collaborative manipulation , 2018, Int. J. Robotics Res..

[74]  Petros Maragos,et al.  Far-Field Audio-Visual Scene Perception of Multi-Party Human-Robot Interaction for Children and Adults , 2018, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[75]  Paolo Dario,et al.  Emotion Modelling for Social Robotics Applications: A Review , 2018 .

[76]  Farhat Fnaiech,et al.  Intelligent Control Wheelchair Using a New Visual Joystick , 2018, Journal of healthcare engineering.

[77]  Stefan Kopp,et al.  Guidelines for Designing Social Robots as Second Language Tutors , 2018, International Journal of Social Robotics.

[78]  Bin Yang,et al.  Accurate calibration of a multi-camera system based on flat refractive geometry. , 2017, Applied optics.

[79]  Areej Al-Wabil,et al.  Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review , 2017 .

[80]  Jochen J. Steil,et al.  A User Study on Personalized Stiffness Control and Task Specificity in Physical Human–Robot Interaction , 2017, Front. Robot. AI.

[81]  Petros Maragos,et al.  Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot , 2017, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[82]  Sukhdev Singh,et al.  Natural language processing: state of the art, current trends and challenges , 2017, Multimedia Tools and Applications.

[83]  Dmitry Popov,et al.  Collision detection, localization & classification for industrial robots with joint torque sensors , 2017, 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).

[84]  P. Maurage,et al.  Preserved Crossmodal Integration of Emotional Signals in Binge Drinking , 2017, Front. Psychol..

[85]  Maja J. Mataric,et al.  Autonomous human–robot proxemics: socially aware navigation based on interaction potential , 2016, Autonomous Robots.

[86]  B. Scassellati,et al.  Social eye gaze in human-robot interaction , 2017, J. Hum. Robot Interact..

[87]  Dingguo Zhang,et al.  Toward Multimodal Human–Robot Interaction to Enhance Active Participation of Users in Gait Rehabilitation , 2017, IEEE Transactions on Neural Systems and Rehabilitation Engineering.

[88]  David Whitney,et al.  Reducing errors in object-fetching interactions through social feedback , 2017, 2017 IEEE International Conference on Robotics and Automation (ICRA).

[89]  Seth Hutchinson,et al.  Customizing haptic and visual feedback for assistive human-robot interface and the effects on performance improvement , 2017, Robotics Auton. Syst..

[90]  John R. Spletzer,et al.  A smart wheelchair ecosystem for autonomous navigation in urban environments , 2017, Auton. Robots.

[91]  Wolfram Burgard,et al.  Navigating blind people with walking impairments using a smart walker , 2017, Auton. Robots.

[92]  Xiaoli Zhang,et al.  Implicit Intention Communication in Human–Robot Interaction Through Visual Behavior Studies , 2017, IEEE Transactions on Human-Machine Systems.

[93]  Hao He,et al.  Building an EEG-fMRI Multi-Modal Brain Graph: A Concurrent EEG-fMRI Study , 2016, Front. Hum. Neurosci..

[94]  Lu Yang,et al.  Survey on 3D Hand Gesture Recognition , 2016, IEEE Transactions on Circuits and Systems for Video Technology.

[95]  Sayali Rawat,et al.  Pick and place industrial robot controller with computer vision , 2016, 2016 International Conference on Computing Communication Control and automation (ICCUBEA).

[96]  Min Wu,et al.  A multimodal emotional communication based humans-robots interaction system , 2016, 2016 35th Chinese Control Conference (CCC).

[97]  A. Khasnobish,et al.  Emotion recognition employing ECG and GSR signals as markers of ANS , 2016, 2016 Conference on Advances in Signal Processing (CASP).

[98]  Andrej Gams,et al.  On-line coaching of robots through visual and physical interaction: Analysis of effectiveness of human-robot interaction strategies , 2016, 2016 IEEE International Conference on Robotics and Automation (ICRA).

[99]  Cristina P. Santos,et al.  Development of a Biofeedback Approach Using Body Tracking with Active Depth Sensor in ASBGo Smart Walker , 2016, 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC).

[100]  Cristina P. Santos,et al.  Considerations and Mechanical Modifications on a Smart Walker , 2016, 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC).

[101]  Petros Maragos,et al.  Multimodal human action recognition in assistive human-robot interaction , 2016, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[102]  José Saenz,et al.  A large scale tactile sensor for safe mobile robot manipulation , 2016, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[103]  Manuel Giuliani,et al.  Ghost-in-the-Machine reveals human social signals for human–robot interaction , 2015, Front. Psychol..

[104]  Seemal Asif,et al.  The integration of contactless static pose recognition and dynamic hand motion tracking control system for industrial human and robot collaboration , 2015, Ind. Robot.

[105]  Ali Farhadi,et al.  You Only Look Once: Unified, Real-Time Object Detection , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[106]  Jason Lu,et al.  Practical, stretchable smart skin sensors for contact-aware robots in safe and collaborative interactions , 2015, 2015 IEEE International Conference on Robotics and Automation (ICRA).

[107]  Daniel J. Barber,et al.  Toward a Tactile Language for Human–Robot Interaction , 2015, Hum. Factors.

[108]  James Philbin,et al.  FaceNet: A unified embedding for face recognition and clustering , 2015, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[109]  Michael Goldfarb,et al.  A Robotic Leg Prosthesis: Design, Control, and Implementation , 2014, IEEE Robotics & Automation Magazine.

[110]  Alexander Duschau-Wicke,et al.  Feedback control of arm movements using Neuro-Muscular Electrical Stimulation (NMES) combined with a lockable, passive exoskeleton for gravity compensation , 2014, Front. Neurosci..

[111]  Roberto Basili,et al.  Effective and Robust Natural Language Understanding for Human-Robot Interaction , 2014, ECAI.

[112]  Subhasis Bhaumik,et al.  A Bioinspired 10 DOF Wearable Powered Arm Exoskeleton for Rehabilitation , 2013, J. Robotics.

[113]  Kai-Tai Song,et al.  Robotic Emotional Expression Generation Based on Mood Transition and Personality Model , 2013, IEEE Transactions on Cybernetics.

[114]  Alberto J. Palma,et al.  Noise Suppression in ECG Signals through Efficient One-Step Wavelet Processing Techniques , 2013, J. Appl. Math..

[115]  Hui Wang,et al.  Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction: , 2012 .

[116]  Nick Campbell,et al.  Investigating the use of Non-verbal Cues in Human-Robot Interaction with a Nao robot , 2012, 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom).

[117]  Adriana Tapus,et al.  Prosody-driven robot arm gestures generation in human-robot interaction , 2012, 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[118]  Maja J. Mataric,et al.  A probabilistic framework for autonomous proxemic control in situated and mobile human-robot interaction , 2012, 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[119]  Stefan Kopp,et al.  Generation and Evaluation of Communicative Robot Gesture , 2012, Int. J. Soc. Robotics.

[120]  Chien-Chieh Huang,et al.  Human robot interactions using speech synthesis and recognition with lip synchronization , 2011, IECON 2011 - 37th Annual Conference of the IEEE Industrial Electronics Society.

[121]  Matthias Scheutz,et al.  Actions Speak Louder than Words: Evaluating Parsers in the Context of Natural Language Understanding Systems for Human-Robot Interaction , 2011, RANLP.

[122]  Norman I. Badler,et al.  Defining Next-Generation Multi-Modal Communication in Human Robot Interaction , 2011 .

[123]  Stefan Kopp,et al.  Towards an integrated model of speech and gesture production for multi-modal robot behavior , 2010, 19th International Symposium in Robot and Human Interactive Communication.

[124]  Gerhard Rigoll,et al.  Real-time framework for multimodal human-robot interaction , 2009, 2009 2nd Conference on Human System Interactions.

[125]  Jeff A. Bilmes,et al.  The VoiceBot: a voice controlled robot arm , 2009, CHI.

[126]  Matt Huenerfauth,et al.  Evaluation of a psycholinguistically motivated timing model for animations of american sign language , 2008, Assets '08.

[127]  Cynthia Breazeal,et al.  Achieving fluency through perceptual-symbol practice in human-robot collaboration , 2008, 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[128]  Stefan Kopp,et al.  Multimodal Communication from Multimodal Thinking - towards an Integrated Model of Speech and Gesture Production , 2008, Int. J. Semantic Comput..

[129]  Gordon E. Legge,et al.  Blind Navigation and the Role of Technology , 2008 .

[130]  Sumi Helal,et al.  The Engineering Handbook of Smart Technology for Aging, Disability, and Independence , 2008 .

[131]  Alexander H. Waibel,et al.  Enabling Multimodal Human–Robot Interaction for the Karlsruhe Humanoid Robot , 2007, IEEE Transactions on Robotics.

[132]  Tao Zhang,et al.  Adaptive visual gesture recognition for human-robot interaction using a knowledge-based software platform , 2007, Robotics Auton. Syst..

[133]  S. Mitra,et al.  Gesture Recognition: A Survey , 2007, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).

[134]  Marjorie Skubic,et al.  Spatial language for human-robot dialogs , 2004, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).

[135]  Stefan Kopp,et al.  Synthesizing multimodal utterances for conversational agents , 2004, Comput. Animat. Virtual Worlds.

[136]  M. Covington Building Natural Language Generation Systems (review) , 2001 .

[137]  Janne Heikkilä,et al.  Geometric Camera Calibration Using Circular Control Points , 2000, IEEE Trans. Pattern Anal. Mach. Intell..

[138]  Maja Pantic,et al.  Expert system for automatic analysis of facial expressions , 2000, Image Vis. Comput..

[139]  Neville Hogan,et al.  Impedance Control: An Approach to Manipulation , 1984, 1984 American Control Conference.

[140]  U. Tariq,et al.  Dynamic Hand Gesture Recognition Using 3D-CNN and LSTM Networks , 2022, Computers, Materials & Continua.

[141]  M. Hrúz,et al.  Multi-modal communication system for mobile robot , 2022, IFAC-PapersOnLine.

[142]  Xiaoming Zhao,et al.  Contextual and Cross-Modal Interaction for Multi-Modal Speech Emotion Recognition , 2022, IEEE Signal Processing Letters.

[143]  W. Pedrycz,et al.  Multimodal Emotion Recognition and Intention Understanding in Human-Robot Interaction , 2021 .

[144]  Jörg Franke,et al.  Human-robot-interaction using cloud-based speech recognition systems , 2021 .

[145]  Sotiris Makris,et al.  Multi-modal interfaces for natural Human-Robot Interaction , 2021, Procedia Manufacturing.

[146]  H. Gunes,et al.  Computational Analysis of Affect, Personality, and Engagement in Human–Robot Interactions ⁎ ⁎The research reported in this chapter was completed while O. Celiktutan and E. Sariyanidi were with the Computer Laboratory, University of Cambridge, United Kingdom. , 2018 .

[147]  S. S. Mantha,et al.  Advances in smart wheelchair technology , 2017, 2017 International Conference on Nascent Technologies in Engineering (ICNTE).

[148]  Pedro Núñez Trujillo,et al.  A Novel Multimodal Emotion Recognition Approach for Affective Human Robot Interaction , 2015, MuSRobS@IROS.

[149]  Heinz Wörn,et al.  Capacitive Tactile Proximity Sensing: From Signal Processing to Applications in Manipulation and Safe Human-Robot Interaction , 2015 .

[150]  Luís Paulo Reis,et al.  Invited Paper: Multimodal Interface for an Intelligent Wheelchair , 2015 .

[151]  Qingmei Yao,et al.  Multi-Sensory Emotion Recognition with Speech and Facial Expression , 2014 .

[152]  Cini Kurian A Review on Technological Development of Automatic Speech Recognition , 2014 .

[153]  S. Rautaray,et al.  Vision based hand gesture recognition for human computer interaction: a survey , 2012, Artificial Intelligence Review.

[154]  R. Gunderman,et al.  Emotional intelligence. , 2011, Journal of the American College of Radiology : JACR.

[155]  Sérgio Miguel Fontes de Vasconcelos,et al.  Multimodal interface for an intelligent wheelchair , 2011 .

[156]  David J. Reinkensmeyer,et al.  Rehabilitation and Health Care Robotics , 2008, Springer Handbook of Robotics, 2nd Ed..