Autonomous 3D Shape Modeling and Grasp Planning for Handling Unknown Objects

Handling a hand-sized object is one of the fundamental abilities for a robot working in home and office environments. This ability enables the robot to perform various tasks, for instance, carrying an object from one place to another. Conventional research that coped with such challenging tasks has taken several approaches. One approach is to define detailed object models in advance (Miura et al., 2003), (Nagatani & Yuta, 1997) and (Okada et al., 2006): 3D geometrical models or photometric models are used to recognize target objects with vision sensors, and the robot grasps its target at handling points specified manually. Other researchers took the approach of attaching information to the target objects themselves, by means of ID tags (Chong & Tanie, 2003) or QR codes (Katsuki et al., 2003). These studies focused mainly on what kind of object information should be defined. They share an essential problem: a new target object cannot be added without heavy programming or special tools. Because there are plenty of objects in the real world, robots should be able to extract the information needed for picking up objects autonomously.
Motivated by this way of thinking, this chapter describes an approach different from conventional research. Our approach follows two policies for autonomous operation. The first is to create a dense 3D shape model from image streams (Yamazaki et al., 2004). The second is to plan various grasp poses from the dense shape of the target object (Yamazaki et al., 2006). By combining the two, the robot is expected to be capable of handling objects in daily environments even when the target is unknown.
To summarize these characteristics, the following conditions are assumed in our framework:
- The position of the target object is given.
- No additional information on the object or the environment is given.
- No information about the shape of the object is given.
- No information about how to grasp the object is given.
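As a toy illustration of the second stage, the sketch below plans a parallel-jaw grasp from nothing but a dense point model, with no prior object knowledge, matching the conditions above. The width heuristic (grasp across the object's narrowest axis-aligned extent) is a simplification standing in for the chapter's actual planner, and all names are hypothetical:

```python
# Toy grasp selection from a dense point model (illustrative only,
# not the chapter's algorithm): choose the axis-aligned direction
# along which the object is narrowest, so a parallel-jaw gripper
# spans the smallest width, and center the grasp on the object.

def grasp_axis(points):
    """points: list of (x, y, z) tuples sampled from the dense model.
    Returns (axis, width, center): the index (0=x, 1=y, 2=z) of the
    narrowest axis, the object's extent along it, and a grasp center
    (midpoint along the grasp axis, mean along the other axes)."""
    widths, midpoints = [], []
    for axis in range(3):
        coords = [p[axis] for p in points]
        lo, hi = min(coords), max(coords)
        widths.append(hi - lo)
        midpoints.append((lo + hi) / 2.0)
    axis = widths.index(min(widths))
    center = tuple(
        midpoints[a] if a == axis
        else sum(p[a] for p in points) / len(points)
        for a in range(3)
    )
    return axis, widths[axis], center
```

For a box-like cloud that is 4 units long in x, 1 unit in y, and 2 units in z, the planner picks the y axis with a jaw opening of 1 unit. A real planner would of course also check reachability and surface normals, as the cited grasp-planning work does.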

[1] Masayuki Inaba et al., "Vision based behavior verification system of humanoid robot for daily environment tasks," 2006 6th IEEE-RAS International Conference on Humanoid Robots, 2006.

[2] Takeo Kanade et al., "A Paraperspective Factorization Method for Shape and Motion Recovery," IEEE Trans. Pattern Anal. Mach. Intell., 1994.

[3] J. Brian Burns et al., "Path planning using Laplace's equation," Proceedings, IEEE International Conference on Robotics and Automation, 1990.

[4] Kimitoshi Yamazaki et al., "A grasp planning for picking up an unknown object for a mobile manipulator," Proceedings 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), 2006.

[5] Kimitoshi Yamazaki et al., "3-D Object Modeling by a Camera Mounted on a Mobile Robot," 2005.

[6] Katsushi Ikeuchi et al., "Determining Grasp Configurations using Photometric Stereo and the PRISM Binocular Stereo System," 1986.

[7] Nak Young Chong et al., "Object Directive Manipulation Through RFID," 2003.

[8] Yasushi Makihara et al., "Development of a Personal Service Robot with User-Friendly Interfaces," FSR, 2003.

[9] Lars Petersson et al., "Systems integration for real-world manipulation tasks," Proceedings 2002 IEEE International Conference on Robotics and Automation, 2002.

[10] Jun Ota et al., "Design of an artificial mark to determine 3D pose by monocular vision," 2003 IEEE International Conference on Robotics and Automation, 2003.