Vision-based manipulation of non-rigid objects

Because analytical representations of non-rigid object structure and motion are severely underconstrained, current techniques for non-rigid object manipulation rely on physical object models known prior to sensing. Recently, however, psychophysical studies have revealed that humans can discover appropriate motor coordination skills from sensory input alone, without recourse to previously known physical models. In this paper, a robust, discovery-driven, vision-based robotic manipulation algorithm for non-rigid objects is developed, based on the novel concept of relative elasticity, which requires no a priori physical models. The manipulation technique is experimentally verified on several different flexible linear objects.
