Every digital consumer camera today can read images from a sensor chip and (optionally) display them on a screen. However, our goal is to implement an embedded vision system, so reading and perhaps displaying image data is only the necessary first step. We want to extract information from an image in order to steer a robot, for example to follow a colored object. Since both the robot and the object may be moving, the system has to be fast: ideally, we want to achieve a frame rate of 10 fps (frames per second) for the complete perception-action cycle. Of course, given the limited processing power of an embedded controller, this restricts our choice of both image resolution and the complexity of the image processing operations.