Determining Motion Directly from Normal Flows Upon the Use of a Spherical Eye Platform

We address the problem of recovering camera motion from video data directly from normal flows, without requiring the establishment of feature correspondences or the computation of full optical flow. We have designed an imaging system with a wide field of view by rigidly fixing a number of cameras together to form an approximate spherical eye. With the substantially widened visual field, we show that the directions of the translational and rotational components of the motion can be estimated separately and particularly efficiently. In addition, the inherent ambiguities between translation and rotation disappear. The magnitude of rotation is recovered subsequently. Experimental results on synthetic and real image data are provided. They show that the accuracy of motion estimation is comparable to that of state-of-the-art methods that require explicit feature correspondences or optical flows, while the computation is faster.
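
As background, the following is a standard formulation of the motion field on a spherical retina and of normal flow; it is a sketch of the setting the abstract assumes, not a derivation taken from this paper, and sign conventions vary across the literature. Let $\mathbf{p}$ be a unit viewing direction on the image sphere, $R(\mathbf{p})$ the scene depth along $\mathbf{p}$, and $(\mathbf{t}, \boldsymbol{\omega})$ the camera's translational and rotational velocities. The full motion field is
$$\dot{\mathbf{p}} \;=\; \frac{1}{R(\mathbf{p})}\bigl((\mathbf{t}\cdot\mathbf{p})\,\mathbf{p} - \mathbf{t}\bigr) \;-\; \boldsymbol{\omega}\times\mathbf{p},$$
and the normal flow is its projection onto the unit direction $\mathbf{g}$ of the local image intensity gradient (tangent to the sphere at $\mathbf{p}$):
$$u_n \;=\; \dot{\mathbf{p}}\cdot\mathbf{g}.$$
Only $u_n$ is directly measurable from spatiotemporal image derivatives; the component of $\dot{\mathbf{p}}$ orthogonal to $\mathbf{g}$ is unobservable (the aperture problem), which is why working from normal flows avoids computing full optical flow or correspondences. Note also that the rotational contribution $-(\boldsymbol{\omega}\times\mathbf{p})\cdot\mathbf{g}$ is independent of depth, while the translational contribution scales with $1/R(\mathbf{p})$; a wide (near-spherical) field of view makes this structural difference exploitable for separating the two components, which is the kind of decoupling the abstract refers to.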
