A Solution to the Next Best View Problem for Automated CAD Model Acquisition of Free-form Objects Using Range Cameras
Richard Pito, GRASP Laboratory

To acquire the complete surface description of a nontrivial object using range cameras, several range images from different viewpoints are needed. We present a complete system to automatically acquire a surface model of an arbitrary part, and outline the constraints this system places on a solution to the problem of where to position the range camera to take the next range image, i.e., the next best view (NBV) problem. We present a solution which uses no a priori knowledge about the part and which addresses the most crucial of these constraints: each new range image must contain range data from part of the object's surface already scanned, so that it can be registered with the previously taken range images. A novel representation, positional space, is presented which is capable of representing both those hypothetical sampling directions which could scan the unseen portions of the viewing volume and those which could rescan parts of the object. In addition, positional space makes explicit the actual sampling directions available given a particular range camera and the set of relative motions possible between it and the object. A solution to the NBV problem is achieved by aligning the positional space representation of the range camera with the positional space representations of the scanned portions of the object and the unseen portions of the viewing volume using simple translations. Since complex motions of the range camera in its workspace are represented by translations in positional space, the search for the next best view is computationally inexpensive. No assumptions are made about the geometry or topology of the object being scanned. Occlusions and impossible sensing configurations are easily addressed within this framework. The algorithm is complete in the sense that all surfaces that can be scanned will be scanned.
In addition, confidence values for range samples can be used to instruct the algorithm to position the range camera so that all surfaces of the object are scanned with at least a minimum confidence wherever possible. The algorithm can determine when all scannable surfaces have been sampled and can be used with any range camera provided a model of it exists. The algorithm can also accommodate nearly any set of relative motions possible between the range camera and the object.
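The abstract gives no pseudocode, but the core idea — score candidate views by how much unseen territory they cover while requiring overlap with already-scanned surface for registration — can be illustrated with a deliberately simplified sketch. Here positional space is reduced to a one-dimensional ring of viewing directions, and the camera model, field of view, and scoring rule are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Toy 1-D "positional space": discretize viewing directions around the
# object into N angular bins. (The paper's positional space is richer;
# this only illustrates the overlap-vs-coverage scoring idea.)
N = 36                      # 10-degree bins around the object
seen = np.zeros(N, bool)    # directions already scanned
seen[0:8] = True            # pretend the first scan covered bins 0..7

FOV = 8  # a candidate view spans 8 contiguous bins (assumed camera model)

def score(start):
    """Score the candidate view covering bins [start, start+FOV)."""
    window = np.arange(start, start + FOV) % N
    overlap = int(seen[window].sum())   # bins that rescan known surface
    new = FOV - overlap                 # bins that sample unseen directions
    if overlap == 0:
        return -1                       # unregistrable: no shared surface
    return new                          # otherwise maximize new coverage

best = max(range(N), key=score)         # next best view in this toy model
```

A view disjoint from the scanned region scores -1 (it could not be registered), while the winning views straddle the boundary of the scanned region: one bin of overlap for registration, seven bins of new surface.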
