Calculation of the Location Coordinates of an Object Observed by a Camera

In this paper, the Camera algorithm for calculating the position and orientation coordinates of an object observed by a camera is presented. This algorithm is more accurate and faster than the algorithms reported in the literature, which are based on minimizing quadratic forms of the errors. The camera is mounted above the technological station on which the object appears. The coordinates are calculated relative to the station frame (the coordinate system associated with the technological station) or relative to the base frame (the coordinate system associated with the base of the robot). The orientation is described by the x-y-z fixed angles of rotation relative to the station or the base frame. The algorithm uses the perspective model of the camera. From the image of three characteristic points of the object on the camera's matrix sensor, the 2D coordinates of these points are obtained, and the location (position and orientation) of the object is calculated from them. The calculated location coordinates enable the robot to approach the object automatically and carry out technological operations; for example, the object can be a car body and the operation can be sealing or welding.
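The following is a minimal numpy sketch, not the paper's algorithm: it only illustrates the building blocks the abstract refers to, namely the perspective (pinhole) projection of a point onto the camera sensor, the extraction of x-y-z fixed angles from a rotation matrix, and the change of frames from the camera to the station and robot base frames via homogeneous transforms. All numerical values, frame transforms, and helper names are made-up placeholders for illustration.

```python
# Illustrative sketch only (not the paper's algorithm). It shows the standard
# pinhole perspective projection, the x-y-z fixed angles of a rotation matrix,
# and the composition of homogeneous transforms between the camera, station,
# and base frames. All numeric values below are placeholders.
import numpy as np


def project_perspective(p_cam, f):
    """Pinhole projection of a point [X, Y, Z] (camera frame) onto the sensor (u, v)."""
    X, Y, Z = p_cam
    return np.array([f * X / Z, f * Y / Z])


def xyz_fixed_angles(R):
    """x-y-z fixed angles (alpha, beta, gamma) of R, i.e. R = Rz(gamma) @ Ry(beta) @ Rx(alpha)."""
    beta = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    alpha = np.arctan2(R[2, 1], R[2, 2])
    gamma = np.arctan2(R[1, 0], R[0, 0])
    return alpha, beta, gamma  # rotations about the x, y, z axes of the reference frame


def homogeneous(R, t):
    """4x4 homogeneous transform built from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Pose of the object in the camera frame (taken as given here; the paper
# computes it from the images of three characteristic points of the object).
R_cam_obj = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
t_cam_obj = np.array([0.10, 0.05, 1.20])                     # metres
T_cam_obj = homogeneous(R_cam_obj, t_cam_obj)

# Assumed calibrated transforms: station -> camera and base -> station.
T_station_cam = homogeneous(np.eye(3), np.array([0.0, 0.0, 2.0]))
T_base_station = homogeneous(np.eye(3), np.array([1.5, 0.0, 0.0]))

# Object pose expressed in the station frame and in the robot base frame.
T_station_obj = T_station_cam @ T_cam_obj
T_base_obj = T_base_station @ T_station_obj

alpha, beta, gamma = xyz_fixed_angles(T_base_obj[:3, :3])
print("position in base frame:", T_base_obj[:3, 3])
print("x-y-z fixed angles [rad]:", alpha, beta, gamma)

# Perspective image of one characteristic point given in the object frame.
p_obj = np.array([0.02, 0.00, 0.00, 1.0])
p_cam = (T_cam_obj @ p_obj)[:3]
print("image coordinates:", project_perspective(p_cam, f=0.008))
```

In the paper's setting the chain runs in the opposite direction: the three projected 2D points are measured on the sensor and the object pose in the camera frame is recovered from them, after which the same frame composition yields the station- or base-frame coordinates used by the robot.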
