Three-dimensional registration using range and intensity information

The determination of the relative pose between two range images, also called registration, is a ubiquitous problem in computer vision, arising in geometric model building as well as dimensional inspection. The method presented in this paper takes advantage of the ability of many active optical range sensors to record intensity or even color in addition to the range data. This information improves the registration procedure by constraining potential matches between pairs of points through a similarity measure derived from the intensity. One difficulty in using intensity is its dependence on measuring conditions such as distance and orientation, so the intensity or color must first be converted into a viewpoint-independent feature. This can be achieved by inverting an illumination model, by differential feature measurements, or by simple clustering. A robust iterative closest point method is then used to determine the pose. Using intensity can speed up convergence or, where degrees of freedom remain unconstrained by geometry alone (e.g. on images of a sphere), further constrain the match. The paper describes the algorithmic framework and provides examples using range-and-color images.
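To make the idea of intensity-constrained matching concrete, the following is a minimal sketch, not the authors' implementation: a basic iterative closest point loop in which candidate closest-point correspondences are rejected when the (assumed already viewpoint-independent) intensity features of the two points differ by more than a threshold. The function name, the tolerance parameter, and the SciPy-based nearest-neighbour search are illustrative assumptions; the paper's specific robust ICP variant and similarity measure are not reproduced here.

```python
# Sketch: ICP with an intensity-similarity gate on correspondences.
# Assumes intensities have already been converted to a viewpoint-
# independent feature (e.g. by inverting an illumination model).
import numpy as np
from scipy.spatial import cKDTree

def intensity_constrained_icp(src_xyz, src_int, dst_xyz, dst_int,
                              int_tol=0.1, n_iter=30):
    """Align src to dst.  *_xyz: (N,3) points, *_int: (N,) features."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst_xyz)
    cur = src_xyz.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)                         # closest points
        keep = np.abs(src_int - dst_int[idx]) < int_tol  # intensity gate
        if keep.sum() < 3:
            break
        p, q = cur[keep], dst_xyz[idx[keep]]
        # Closed-form rigid transform via SVD (Besl & McKay style).
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = q.mean(0) - dR @ p.mean(0)
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt                       # accumulate pose
    return R, t
```

In this toy form, the intensity gate simply discards geometrically nearest neighbours whose features disagree, which is one plausible way such a constraint can both prune bad matches early (speeding convergence) and break symmetries that range data alone cannot resolve.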
