Sensor Fusion and Planning with Perception–Action Network

Robot intelligence requires a real-time connection between sensing and action. A new computational principle of robotics that efficiently implements such a connection is of utmost importance for the next generation of robots. In this paper, a perception–action network is presented as a means of efficiently integrating sensing, knowledge, and action for sensor fusion and planning. The network consists of a number of heterogeneous computational units, representing feature transformation and decision-making for action, which are interconnected as a dynamic system. New input stimuli to the network invoke the evolution of network states toward a new equilibrium, through which a real-time integration of sensing, knowledge, and action can be accomplished. The network provides a formal, yet general and efficient, method of achieving sensor fusion and planning, because the uncertainties of signals propagated through the network can be controlled by modifying sensing parameters and robot actions. Algorithms for sensor planning based on the proposed network are established and applied to robot self-localization. Simulation and experimental results are presented.
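The abstract describes controlling the uncertainty of signals propagated through the network. A standard building block for such uncertainty propagation, and a plausible primitive for the fusion nodes described here, is minimum-variance fusion of independent Gaussian estimates. The sketch below is an illustrative assumption, not the paper's actual formulation: the function name `fuse` and the scalar Gaussian model are hypothetical.

```python
def fuse(m1, v1, m2, v2):
    """Minimum-variance fusion of two independent Gaussian estimates.

    Each estimate is a (mean, variance) pair; the fused estimate
    weights each mean by its inverse variance (information weight),
    so the fused variance is never larger than either input variance.
    """
    v = 1.0 / (1.0 / v1 + 1.0 / v2)   # fused variance (information add)
    m = v * (m1 / v1 + m2 / v2)       # precision-weighted mean
    return m, v

# Two sensors observe the same quantity with different confidence.
m, v = fuse(2.0, 4.0, 3.0, 1.0)
```

In a perception–action network, a planner could evaluate such fused variances at each node and choose the sensing parameters or robot actions that drive the uncertainty of the quantity of interest (e.g., the robot's pose in self-localization) below a task threshold.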
