Task execution based on human-robot dialogue and deictic gestures