Calibrating a mobile camera's extrinsic parameters with respect to its platform

The problem of estimating the fixed rotation and translation between the camera's and the mobile robot's coordinate systems, given a sequence of monocular images and robot movements, is addressed. Existing hand/eye calibration algorithms are not directly applicable because they require the robot hand to have at least two rotational degrees of freedom, while a mobile robot can usually execute only planar motion. By using a proper representation for the camera rotation, the proposed algorithm decomposes the calibration task and is thus able to calibrate all three rotational degrees of freedom and the two translational degrees of freedom. The remaining translational degree of freedom is not needed for camera-centered robot vision applications. To recover the camera's rotational motion between two images, the algorithm uses inverse perspective geometry constraints on a rectangular corner. Complicated calibration patterns are not needed, so the algorithm can be easily implemented and used in structured environments.
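The core geometric idea of the rotation-recovery step can be illustrated with a minimal sketch. Under a pinhole model with the principal point at the image origin, the three mutually orthogonal edges of a rectangular corner define three vanishing points, and each vanishing point (u, v) back-projects to a ray (u, v, f) parallel to the corresponding 3D edge direction. Stacking the three normalized rays and projecting onto the nearest rotation matrix yields the camera's orientation relative to the corner. The function name, the SVD-based orthogonalization, and the sign convention (edge directions with positive depth) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rotation_from_corner_vanishing_points(vps, f):
    """Estimate a rotation from the vanishing points of the three
    mutually orthogonal edges of a rectangular corner.

    vps : three (u, v) vanishing points, principal point at the origin
    f   : focal length in pixels
    Returns a 3x3 rotation whose columns are the corner's edge
    directions expressed in the camera frame.
    """
    # Each vanishing point (u, v) back-projects to the ray (u, v, f),
    # which is parallel to its 3D edge direction (assumed here to have
    # positive z, i.e. pointing away from the camera).
    D = np.column_stack([np.array([u, v, f]) / np.linalg.norm([u, v, f])
                         for (u, v) in vps])
    # With noisy vanishing points D is only approximately orthonormal;
    # project onto the nearest proper rotation via the SVD.
    U, _, Vt = np.linalg.svd(D)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
```

In practice the vanishing points would come from the imaged corner edges, and the resulting rotation is the quantity the decomposition above separates from the planar translation.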