Clothes handling using visual recognition in cooperation with actions

In this paper, we propose a method of visual recognition in cooperation with actions for the automatic handling of clothing by a robot. The difficulty of visually recognizing clothing depends largely on the observed shape of the clothing; therefore, a strategy of actively reshaping the clothing into a form that is easier to recognize should be effective. First, the clothing is observed by a trinocular stereo vision system, and the system checks whether the observation provides enough information to recognize the clothing's shape robustly. If not, appropriate "recognition-aid" actions, such as rotating and/or spreading the clothing, are planned automatically based on a visual analysis of the current shape. After the planned action is executed, the clothing is observed again for recognition. The effectiveness of the spreading action was demonstrated through experiments with an actual humanoid robot.
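The observe → check → act → re-observe cycle described above can be sketched as a simple control loop. This is a minimal illustration only: the function names, the toy shape states, the confidence values, and the threshold are all hypothetical stand-ins, not the paper's actual recognition or planning algorithms.

```python
def observe(state):
    """Stand-in for trinocular stereo observation: returns a recognition
    confidence for the current clothing shape (toy values, not real data)."""
    return {"crumpled": 0.3, "rotated": 0.6, "spread": 0.9}[state]

def plan_aid_action(state):
    """Choose a hypothetical 'recognition-aid' action for the current shape."""
    return "rotate" if state == "crumpled" else "spread"

def apply_action(state, action):
    """Deterministic toy transition modelling the effect of the action."""
    return {"rotate": "rotated", "spread": "spread"}[action]

def recognize_with_actions(state, threshold=0.8, max_steps=5):
    """Observe; if confidence is too low, execute a recognition-aid
    action and observe again, up to max_steps attempts."""
    for _ in range(max_steps):
        confidence = observe(state)
        if confidence >= threshold:
            return state, confidence
        state = apply_action(state, plan_aid_action(state))
    return state, observe(state)

final_state, conf = recognize_with_actions("crumpled")
print(final_state, conf)  # crumpled -> rotated -> spread
```

In this toy trace, a crumpled item is rotated, then spread, at which point the (simulated) observation is confident enough to accept the recognized shape, mirroring the paper's idea of acting only when passive observation is insufficient.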
