Solution to the next best view problem for automated CAD model acquisition of free-form objects using range cameras

To acquire the complete surface description of a nontrivial object using range cameras, several range images from different viewpoints are needed. We present a complete system to automatically acquire a surface model of an arbitrary part and outline the constraints this system places on a solution to the problem of where to position the range camera to take the next range image, i.e. the next best view (NBV) problem. We present a solution which uses no a priori knowledge about the part and which addresses the most crucial of these constraints: each new range image must contain range data from part of the object's surface already scanned, so that it can be registered with the previously taken range images. A novel representation, positional space, is presented which is capable of representing both those hypothetical sampling directions which could scan the unseen portions of the viewing volume and those which could rescan parts of the object. In addition, positional space makes explicit the actual sampling directions available given a particular range camera and the set of relative motions possible between it and the object. A solution of the NBV problem is achieved by aligning the positional space representation of the range camera with the positional space representations of the scanned portions of the object and the unseen portions of the viewing volume using simple translations. Since complex motions of the range camera in its workspace are represented by translations in positional space, the search for the next best view is computationally inexpensive. No assumptions are made about the geometry or topology of the object being scanned. Occlusions and impossible sensing configurations are easily addressed within this framework. The algorithm is complete in the sense that all surfaces that can be scanned will be scanned.
In addition, confidence values for range samples can be used to instruct the algorithm to position the range camera so that all surfaces of the object are scanned with at least a minimum confidence wherever possible. The algorithm can determine when all scannable surfaces have been sampled and can be used with any range camera provided a model of it exists. The algorithm can also accommodate nearly any set of relative motions possible between the range camera and the object.
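The core selection step described above can be illustrated with a minimal sketch. The encoding here is an assumption for illustration, not the paper's exact representation: positional space is discretized into a 1-D ring of sampling directions, `seen` and `unseen` are masks over those directions, and the camera's `footprint` marks the directions it can sample from one placement. The NBV is the translation of the footprint that covers the most unseen cells while still rescanning enough seen cells for registration.

```python
import numpy as np

def next_best_view(seen, unseen, footprint, min_overlap=1):
    """Hypothetical sketch of NBV selection in a discretized positional space.

    seen, unseen, footprint: 0/1 integer arrays over sampling directions.
    Returns the footprint translation (a shift in positional space) that
    maximizes coverage of unseen cells, subject to rescanning at least
    `min_overlap` seen cells so the new view can be registered.
    """
    n = len(seen)
    best_shift, best_score = None, -1
    for shift in range(n):
        fp = np.roll(footprint, shift)      # translate the sensor footprint
        overlap = int(np.sum(fp & seen))    # rescanned (registrable) surface
        if overlap < min_overlap:
            continue                        # view cannot be registered; skip
        score = int(np.sum(fp & unseen))    # new, previously unscanned cells
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score

# Example: 6 sampling directions; cells 0-1 already scanned, 2-4 unseen.
seen = np.array([1, 1, 0, 0, 0, 0])
unseen = np.array([0, 0, 1, 1, 1, 0])
footprint = np.array([1, 1, 1, 0, 0, 0])   # camera samples 3 adjacent cells
shift, gain = next_best_view(seen, unseen, footprint)
```

Because candidate camera placements reduce to translations of one array against another, the search is a simple linear scan, which mirrors the paper's observation that the NBV search in positional space is computationally inexpensive.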
