Fusing Hand Postures and Speech Recognition for Tasks Performed by an Integrated Leg–Arm Hexapod Robot
Zhonghua Han | Xilun Ding | Weiwei Li | Kun Xu | Jing Qi
[1] Rainer Stiefelhagen, et al. Visual recognition of pointing gestures for human-robot interaction, 2007, Image Vis. Comput.
[2] Haitham Sabah Badi, et al. Hand posture and gesture recognition technology, 2014, Neural Computing and Applications.
[3] Sebastian van Delden, et al. Pick-and-place application development using voice and visual commands, 2012, Ind. Robot.
[4] Vladimir Pavlovic, et al. Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review, 1997, IEEE Trans. Pattern Anal. Mach. Intell.
[5] Frédéric Lerasle, et al. Two-handed gesture recognition and fusion with speech to command a robot, 2011, Autonomous Robots.
[6] Lihui Wang, et al. Deep Learning-based Multimodal Control Interface for Human-Robot Collaboration, 2018.
[7] Alexander H. Waibel, et al. Enabling Multimodal Human–Robot Interaction for the Karlsruhe Humanoid Robot, 2007, IEEE Transactions on Robotics.
[8] Toshi Takamori, et al. Multi-Modal Interaction of Human and Home Robot in the Context of Room Map Generation, 2002, Auton. Robots.
[9] Rainer Stiefelhagen, et al. Real-Time Person Tracking and Pointing Gesture Recognition for Human-Robot Interaction, 2004, ECCV Workshop on HCI.
[10] Chin-Chen Chang, et al. New Approach for Static Gesture Recognition, 2006, J. Inf. Sci. Eng.