Depth from magnification and blurring

A new method for constructing 3D maps from the relative magnification and blurring of a pair of images is presented, where the two images are taken from camera positions separated by a small displacement. The method, referred to as depth from magnification and blurring, aims to generate a precise 3D map of local scenes or objects to be manipulated by a robot arm with a hand-eye camera. It uses a single standard camera with a telecentric lens and assumes neither active illumination nor active control of camera parameters. The proposed depth extraction algorithm is computationally simple. By fusing two disparate sources of depth information, magnification and blurring, the method provides more accurate and robust depth estimates than either cue alone. Experimental results demonstrate the effectiveness of the proposed method.
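
To make the geometry concrete, the following sketch (not from the paper) shows one way a depth estimate can be recovered from the measured magnification ratio between the two images, and how a defocus-based estimate might be fused with it. It assumes a thin-lens model with object distance much larger than the focal length (so magnification is approximately f/z), a purely axial camera displacement d of known size, and inverse-variance weighting as a stand-in for the paper's actual fusion rule; all function names and numerical values are hypothetical.

```python
# Minimal sketch of depth from magnification plus a simple fusion step.
# Assumptions (not from the paper): thin-lens model with z >> f, so
# magnification m ~ f / z; the camera moves a known axial displacement d
# toward the scene between the two shots; the defocus cue yields an
# independent depth estimate with a known variance.

def depth_from_magnification(scale_ratio: float, d: float) -> float:
    """Recover object distance z1 at the first camera position.

    With m ~ f / z, the image-to-image scale ratio is
        s = m2 / m1 = z1 / (z1 - d)   =>   z1 = s * d / (s - 1).
    scale_ratio: s, relative magnification of image 2 w.r.t. image 1.
    d: axial displacement toward the scene (same units as z1).
    """
    if scale_ratio <= 1.0:
        raise ValueError("moving toward the scene should give s > 1")
    return scale_ratio * d / (scale_ratio - 1.0)

def fuse_estimates(z_mag: float, var_mag: float,
                   z_blur: float, var_blur: float) -> float:
    """Combine the two depth cues by inverse-variance weighting
    (one simple fusion scheme; the paper's own rule may differ)."""
    w_mag, w_blur = 1.0 / var_mag, 1.0 / var_blur
    return (w_mag * z_mag + w_blur * z_blur) / (w_mag + w_blur)

if __name__ == "__main__":
    # Example: a 10 mm displacement producing a 2% scale change
    # implies the object is roughly half a metre away.
    z1 = depth_from_magnification(scale_ratio=1.02, d=10.0)  # ~510 mm
    z = fuse_estimates(z_mag=z1, var_mag=25.0,
                       z_blur=500.0, var_blur=100.0)
    print(f"magnification-only: {z1:.1f} mm, fused: {z:.1f} mm")
```

Note how the magnification cue degrades as s approaches 1 (distant objects or a tiny displacement make s - 1 vanish in the denominator), which is exactly where an independent defocus cue helps stabilize the fused estimate.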
