Robotics with Perception and Action Nets

In this chapter, an intelligent robotic system architecture, referred to as the perception-action net (PAN), is presented. The PAN architecture provides a formal mechanism for integrating sensing, knowledge, and action in real time for intelligent robots. It emphasizes uncertainty management as well as error monitoring and recovery, so that the system can endow robots with goal-oriented yet robust and fault-tolerant behaviors. A PAN is composed of a perception net and an action net interconnected in closed loops. The perception net connects features of various levels of abstraction, or logical sensors, in a hierarchy. The net is capable of calibrating itself by maintaining the consistency of logical sensors, based on the forward propagation of sensor outputs and uncertainties as well as the backward propagation of errors from constraints. The action net consists of a hierarchy of state-transition networks over multiresolution time scales. It embeds all feasible system behaviors at various levels of abstraction, so that the system can replan and control its behavior toward the set goals under errors and faults. A novel geometric method is presented as a unified framework for computing the forward and backward propagations, through which the net achieves self-reduction of uncertainties and self-calibration of biases. The proposed method is applied to the self-calibration of the eye-hand system of a JPL-NASA planetary rover, and simulation and experimental results are presented.
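The forward/backward propagation idea described above can be sketched in a few lines: forward propagation pushes a state estimate and its covariance up the logical-sensor hierarchy via first-order (Jacobian) uncertainty propagation, while backward propagation projects the estimate onto a consistency constraint with a minimum-variance correction, shrinking the covariance. This is an illustrative sketch, not the chapter's exact geometric method; the function names, the constraint `g(x) = x[0] - x[1]`, and the numbers are assumptions chosen for the example.

```python
import numpy as np

def forward_propagate(f, jacobian, x, P):
    """Forward propagation: map a lower-level estimate x with covariance P
    through a feature map f to the next level of the perception net."""
    J = jacobian(x)
    return f(x), J @ P @ J.T

def backward_propagate(g, jacobian, x, P):
    """Backward propagation: correct x so it satisfies the constraint
    g(x) = 0, using a minimum-variance (Kalman-style) projection."""
    J = jacobian(x)
    S = J @ P @ J.T                   # constraint-space covariance
    K = P @ J.T @ np.linalg.inv(S)    # gain
    x_new = x - K @ g(x)              # corrected estimate
    P_new = P - K @ J @ P             # reduced covariance
    return x_new, P_new

# Example: two logical sensors report the same distance. The consistency
# constraint x[0] - x[1] = 0 fuses them and reduces both uncertainties.
x = np.array([1.00, 1.10])            # two independent readings
P = np.diag([0.04, 0.01])             # their variances
g = lambda x: np.array([x[0] - x[1]])
Jg = lambda x: np.array([[1.0, -1.0]])

x_f, P_f = backward_propagate(g, Jg, x, P)
# Both fused estimates agree, and each variance is smaller than before.
```

After the correction, both components of `x_f` coincide (the constraint is satisfied exactly for this linear case) and the diagonal of `P_f` is strictly smaller than that of `P` — the "self-reduction of uncertainties" the net is designed to achieve.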
