Nadine: A Social Robot that Can Localize Objects and Grasp Them in a Human Way

What makes a social humanoid robot behave like a human? It needs to understand and show emotions, and it needs a chatbot, a memory, and a decision-making process. Beyond that, it needs to recognize objects and be able to grasp them in a human way. To become a close companion, a social robot must behave as real humans do in all areas and understand real situations so that it can react properly. In this chapter, we describe our ongoing research on social robotics: the making of the articulated hands of the Nadine robot, the recognition of objects and their meaning, and how to grasp objects in a human way. We present the state of the art along with some early results.
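
Since the object-localization step described above typically relies on a deep-learning detector, the following minimal sketch illustrates it with an off-the-shelf pretrained model (torchvision's Faster R-CNN). This is an assumption for illustration, not the chapter's actual implementation; the function name `localize_objects` and the score threshold are likewise illustrative.

```python
# Minimal object-localization sketch, assuming a COCO-pretrained Faster R-CNN
# from torchvision stands in for the detector used in the chapter.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained detector once; any 2D object detector could be substituted.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def localize_objects(image_path: str, score_threshold: float = 0.8):
    """Return (class_id, score, [x1, y1, x2, y2]) for confident detections."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]  # dict with "boxes", "labels", "scores"
    results = []
    for label, score, box in zip(
        detections["labels"], detections["scores"], detections["boxes"]
    ):
        if score >= score_threshold:
            results.append((label.item(), score.item(), box.tolist()))
    return results
```

The bounding boxes returned here would then feed a grasp-planning stage that chooses hand pose and finger configuration; that stage is robot-specific and not sketched here.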
