Vision-based ROV system
To relieve remotely operated vehicle (ROV) operators of low-level tasks such as station keeping and local navigation, a computer-vision-based ROV system is developed as part of this dissertation work. By exploiting the information in seafloor images, the system performs real-time automatic station keeping, trajectory following, seafloor mosaicking, and image data collection. These tasks are accomplished by using the readily measurable spatio-temporal image gradients directly to detect and estimate vehicle motion. Navigation drift is suppressed by registering incoming frames against the constructed seafloor visual map, i.e., a concurrent mapping and localization (CM&L) strategy. The vision system has been implemented on a Phantom XTL ROV and tested in the ocean.
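The core idea of direct motion estimation from spatio-temporal gradients can be illustrated with the standard brightness-constancy least-squares solution for a pure inter-frame translation. This is a minimal sketch of the general technique, not the dissertation's actual algorithm (which also handles richer motion models and systematic-error compensation); the function name and the translation-only motion model are assumptions for illustration.

```python
import numpy as np

def estimate_translation(I1, I2):
    """Estimate a global 2-D translation (u, v), in pixels, between two
    frames, directly from spatio-temporal image gradients.

    Brightness constancy linearized to first order gives, per pixel,
        Ix * u + Iy * v = -It,
    which is solved in the least-squares sense over all pixels.
    """
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    # Spatial gradients (central differences) and temporal gradient
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    # Stack the per-pixel constraints into one overdetermined system
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Because the derivation is a first-order linearization, such an estimator is accurate only for small (sub-pixel to a few pixels) displacements, which suits high-frame-rate station keeping where inter-frame motion is small.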
Further research has been conducted to enhance the performance of the vision-based positioning system. One enhancement is to identify and compensate for the systematic errors in the direct motion estimation algorithm, which involves the computation of second-order spatial gradients. Another direction is to incorporate covariance information into the CM&L strategy, to account for the errors in mapping. The strategy is implemented as a covariance-based data fusion procedure. For this purpose, a novel data fusion strategy, referred to as Extended Covariance Intersection (ECI), is proposed. It provides an estimate that is neither as over-optimistic as the Kalman filter solution nor as over-conservative as the Covariance Intersection (CI) solution.
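For context, the baseline CI algorithm that ECI extends fuses two estimates with unknown cross-correlation by a convex combination of their information matrices. The sketch below implements standard CI (not the dissertation's ECI), choosing the weight by a simple grid search over the trace of the fused covariance; the grid-search criterion is one common choice, assumed here for illustration.

```python
import numpy as np

def covariance_intersection(a, Pa, b, Pb, n_grid=101):
    """Fuse two estimates (a, Pa) and (b, Pb) with unknown
    cross-correlation via standard Covariance Intersection:
        P^-1 = w * Pa^-1 + (1 - w) * Pb^-1,
        x    = P (w * Pa^-1 a + (1 - w) * Pb^-1 b),
    with w in [0, 1] picked to minimize trace(P)."""
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
        x = P @ (w * Pa_inv @ a + (1 - w) * Pb_inv @ b)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

Because the fused covariance is a consistent upper bound for any cross-correlation, CI never reports over-confident uncertainty, but it can be overly conservative; this is exactly the gap between the Kalman filter (which assumes independence) and CI that the proposed ECI aims to occupy.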
Finally, this dissertation proposes a particular formulation for recovering the 3D shape of the underwater scene. It is based on the fusion of structure from motion (SFM) and shape from shading (SFS), together with an underwater illumination model. In this formulation, the commonly used constant-albedo assumption in the SFS computation is relaxed to piecewise constant.
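For reference, the classical SFS constraint assumes Lambertian reflectance with a single global albedo $\rho$; the relaxation above instead lets the albedo take a constant value per region. A sketch of the standard Lambertian brightness equation under this piecewise-constant assumption (omitting the dissertation's underwater illumination model, e.g. attenuation and artificial lighting) is:

$$
I(x, y) = \rho_k \,\big(\mathbf{n}(x, y) \cdot \mathbf{l}\big), \qquad (x, y) \in \Omega_k,
$$

where $I$ is the observed image brightness, $\mathbf{n}$ the surface normal, $\mathbf{l}$ the light direction, and $\rho_k$ the (unknown but constant) albedo of region $\Omega_k$.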