Task-Oriented Generation of Visual Sensing Strategies in Assembly Tasks

This paper describes a method of systematically generating visual sensing strategies based on knowledge of the assembly task to be performed. Since visual sensing is usually performed with limited resources, visual sensing strategies should be planned so that only the necessary information is obtained efficiently. Generating an appropriate visual sensing strategy entails knowing what information to extract, where to get it, and how to get it. This is facilitated by knowledge of the task, which describes which objects are involved in the operation and how they are assembled. In the proposed method, the information necessary for the current operation is first extracted using a task analysis based on face-contact relations between objects. Then, the visual features to be observed are determined using knowledge of the sensor, which describes the relationship between a visual feature and the information it provides. Finally, feasible visual sensing strategies are evaluated based on their predicted success probability, and the best strategy is selected. Our method has been implemented using a laser range finder as the sensor. Experimental results show the feasibility of the method and point out the importance of task-oriented evaluation of visual sensing strategies.
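The abstract outlines a three-step pipeline: derive the needed information from the operation's face-contact relations, map that information to observable visual features via a sensor model, and rank feasible strategies by predicted success probability. Below is a minimal sketch of that flow, assuming hypothetical class and function names (Operation, SensingStrategy, required_information, candidate_strategies, select_best), symbolic sensor placements, and made-up probabilities; it is an illustration under those assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of task-oriented sensing-strategy selection:
# (1) information needed for the operation, (2) features that can provide it
# per a sensor model, (3) rank by predicted success probability.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Operation:
    """One assembly operation: objects involved and the face contacts formed."""
    moving_object: str
    fixed_object: str
    contact_faces: List[Tuple[str, str]]  # (moving-object face, fixed-object face) pairs


@dataclass
class SensingStrategy:
    """A candidate strategy: a visual feature observed from a sensor placement."""
    feature: str                 # e.g. "planar_face", "edge_pair"
    sensor_pose: str             # symbolic placement, e.g. "overhead"
    success_probability: float   # predicted reliability of the measurement


def required_information(op: Operation) -> List[str]:
    """Step 1 (assumed): infer which quantities must be verified for this
    operation from its face-contact relations."""
    return [f"offset_along_normal({m},{f})" for m, f in op.contact_faces]


def candidate_strategies(info: List[str],
                         sensor_model: Dict[str, List[SensingStrategy]]
                         ) -> List[SensingStrategy]:
    """Step 2 (assumed): the sensor model relates each needed quantity to the
    visual features (and placements) that can provide it."""
    out: List[SensingStrategy] = []
    for item in info:
        out.extend(sensor_model.get(item, []))
    return out


def select_best(strategies: List[SensingStrategy]) -> SensingStrategy:
    """Step 3 (assumed): evaluate strategies by predicted success probability
    and select the best one."""
    if not strategies:
        raise ValueError("no feasible sensing strategy for this operation")
    return max(strategies, key=lambda s: s.success_probability)


if __name__ == "__main__":
    op = Operation("peg", "block", [("peg_bottom", "block_top")])
    # Toy sensor model with invented probabilities, for illustration only.
    sensor_model = {
        "offset_along_normal(peg_bottom,block_top)": [
            SensingStrategy("planar_face", "overhead", 0.82),
            SensingStrategy("edge_pair", "oblique_left", 0.91),
        ],
    }
    best = select_best(candidate_strategies(required_information(op), sensor_model))
    print(f"observe {best.feature} from {best.sensor_pose} "
          f"(predicted success {best.success_probability:.2f})")
```

Running the toy example selects the edge-pair observation from the oblique placement, since its assumed success probability is highest; the real system would derive these probabilities from the sensor and task models rather than from hand-set constants.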