Model-based strategies for high-level robot vision

Abstract The higher levels of a sensory system for a robot manipulator are described. The sensory system constructs and maintains a representation of the world in a form suited to fast responses to questions posed by other robot subsystems. This is achieved by separating the sensing processes from the descriptive processes, so that questions can be answered without waiting for the sensors to respond. Four groups of processes are described. Predictive processes (world modellers) set up initial expectations about the world and generate predictions of sensor responses. A second group of processes analyzes the sensory input, guided by these predictions. The third essential function is matching, which compares the sensed data with the expectations and yields errors that serve to servo the models to the world. Finally, the descriptive process constructs and maintains the internal representation of the world; it builds this representation from the sensed information and the expectations, and at all times contains everything known about the world. The sensory system is responsive to changes in the world, can tolerate interruptions in sensing, and can supply information that may not be obtainable by sensing the world directly.
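The predict-sense-match-describe cycle outlined in the abstract can be illustrated with a minimal sketch. All names here (WorldModel, sense, match, the gain parameter) are illustrative assumptions, not the paper's implementation; the point is only to show how the match errors servo the model to the world while queries are answered from the model, not from the sensors.

```python
# Minimal sketch (assumed names, not from the paper) of the four-process
# architecture: predict -> sense -> match -> update the description.

class WorldModel:
    """Descriptive process: holds everything currently known about the world."""

    def __init__(self, state):
        self.state = dict(state)

    def predict(self):
        # Predictive process: expected sensor readings derived from the model.
        return dict(self.state)

    def update(self, errors, gain=0.5):
        # Servo the model toward the world using the match errors.
        for key, err in errors.items():
            self.state[key] += gain * err


def sense(world):
    # Stand-in for the sensory analysis processes; here it simply reads
    # a simulated world. In the real system this would interpret raw
    # sensor data, guided by the predictions.
    return dict(world)


def match(predicted, sensed):
    # Matching process: compare expectations with sensed data and
    # return per-feature errors.
    return {k: sensed[k] - predicted[k] for k in predicted}


# Sensing and description are decoupled: a query against the model is
# answered immediately, even if sensing were interrupted mid-cycle.
model = WorldModel({"object_x": 0.0})
true_world = {"object_x": 1.0}
for _ in range(20):
    errors = match(model.predict(), sense(true_world))
    model.update(errors)

print(round(model.state["object_x"], 3))  # converges toward 1.0
```

Because other subsystems query `model.state` rather than the sensors, the loop can run asynchronously and the model continues to supply answers between sensor updates.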
