Spinoza: a stereoscopic visually guided mobile robot

Our mobile robot, Spinoza, embodies a sophisticated real-time vision system for controlling a mobile robot in a dynamic environment. The complexity of the architecture arises from the wide variety of tasks to be performed and the resulting challenge of coordinating multiple distributed, concurrent processes across a diverse range of processor architectures, including transputers, digital signal processors and a workstation host. The system distributes the sensing, reasoning and action components of the robot over these architectures and responds to unpredictable events in an unknown, dynamic environment. Spinoza relies heavily on real-time vision processing to perform tasks such as mapping, navigation, exploration, tracking and simple manipulation.
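The pipeline of distributed, concurrent sensing, reasoning and action processes described above can be sketched in miniature. This is a hypothetical illustration only, not Spinoza's actual implementation: the transputer and DSP processes are modelled here as threads passing messages through queues, and the sensor data, commands and stage names are all invented for the example.

```python
import queue
import threading

def sensor(out_q, frames):
    # "Sensing" stage: emit observations (here, pre-recorded stand-ins
    # for real-time vision output) downstream to the reasoner.
    for f in frames:
        out_q.put(f)
    out_q.put(None)  # sentinel: no more data

def reasoner(in_q, out_q):
    # "Reasoning" stage: map each observation to a motor command.
    while True:
        obs = in_q.get()
        if obs is None:
            out_q.put(None)  # propagate shutdown to the actuator
            break
        out_q.put("avoid obstacle" if obs == "obstacle" else "forward")

def actuator(in_q, log):
    # "Action" stage: record the commands that would drive the robot.
    while True:
        cmd = in_q.get()
        if cmd is None:
            break
        log.append(cmd)

frames = ["clear", "obstacle", "clear"]
q1, q2, log = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=sensor, args=(q1, frames)),
    threading.Thread(target=reasoner, args=(q1, q2)),
    threading.Thread(target=actuator, args=(q2, log)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)  # → ['forward', 'avoid obstacle', 'forward']
```

Each stage runs concurrently and communicates only through its queues, so any stage could in principle be moved to a different processor, which is the coordination problem the Spinoza architecture addresses at much larger scale.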
