Occlusions as a Guide for Planning the Next View

A strategy for acquiring 3-D data of an unknown scene from range images obtained by a light stripe range finder is addressed. The foci of attention are occluded regions; that is, only the scene at the borders of the occlusions is modeled to compute the next move. Since the system has knowledge of the sensor geometry, it can resolve the appearance of occlusions by analyzing them. The problem of 3-D data acquisition is divided into two subproblems corresponding to two types of occlusion: an occlusion arises either when the reflected laser light does not reach the camera or when the directed laser light does not reach the scene surface. After the range image of a scene is taken, the regions of no data due to the first kind of occlusion are extracted. The missing data are acquired by rotating the sensor system in the scanning plane, which is defined by the first scan. Once a complete image of the surface illuminated from the first scanning plane has been built, the regions of missing data due to the second kind of occlusion are located, and the directions of the next scanning planes for further 3-D data acquisition are computed.
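The first stage of the strategy above can be illustrated with a minimal sketch: mark the no-data pixels in a range image, then choose a rotation of the sensor within the current scanning plane. The function names, the no-data sentinel value, and the width-proportional rotation heuristic are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def find_occluded_regions(range_image, no_data=-1.0):
    """Return a boolean mask of pixels where the range finder recorded no data."""
    return range_image == no_data

def next_scan_rotation(occlusion_mask, fov_deg=60.0):
    """Stand-in heuristic: rotate in the scanning plane by an angle
    proportional to the horizontal extent of the occluded columns."""
    cols = occlusion_mask.any(axis=0)          # columns containing any occlusion
    if not cols.any():
        return 0.0                             # nothing occluded: no move needed
    width = np.count_nonzero(cols)
    return fov_deg * width / occlusion_mask.shape[1]

# Example: a 4x8 range image with two fully occluded columns
img = np.full((4, 8), 2.0)
img[:, 2:4] = -1.0
mask = find_occluded_regions(img)
angle = next_scan_rotation(mask)               # 60 * 2/8 = 15 degrees
```

In the paper's setting the second stage would then analyze the remaining gaps (laser light not reaching the surface) to pick new scanning-plane directions; that step depends on the sensor geometry and is not sketched here.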
