Towards a Visual Perception System for Pipe Inspection: Monocular Visual Odometry

Liquid Natural Gas (LNG) processing facilities contain large, complex networks of pipes of varying diameter and orientation, intermixed with control valves, processes, and sensors. Regular inspection of these pipes for corrosion, caused by impurities in the gas processing chain, is critical for safety. Popular non-destructive technologies used for corrosion inspection in LNG pipes include Magnetic Flux Leakage (MFL), radiography (X-rays), and ultrasound, among others. These methods can be used to obtain measurements of pipe wall thickness, and by monitoring changes in wall thickness over time the rate of corrosion can be estimated. For LNG pipes, unlike large mainstream gas pipelines, the complex infrastructure means that these sensors are currently employed external to the pipe itself, making comprehensive, regular coverage of the pipe network difficult or impossible. As a result, a sampling-based approach is taken in which parts of the pipe network are sampled regularly and the corrosion estimate is extrapolated to the remainder of the network using predictive corrosion models derived from metallurgical properties. We argue that a robot crawler that can move a suite of sensors inside the pipe network can provide a mechanism to achieve more comprehensive and effective coverage. In this technical report, we explore a vision-based system for building 2D registered appearance maps of the pipe surface whilst simultaneously localizing the robot in the pipe. Such a system is essential for providing a localization estimate for overlaying other non-destructive sensors and for registering changes over time, and the resulting 2D metric appearance maps may also be useful for corrosion detection. For this work, we restrict ourselves to linear pipe formations.
We investigate two distinct classes of algorithms that can be used to estimate this pose, both visual odometry systems that estimate motion by observing how the appearance of images changes between frames. The first is a class of dense algorithms that use the greyscale intensity values of all pixels in adjacent images, together with their derivatives. The second is a class of sparse algorithms that use the change in position (sparse optical flow) of salient point-feature correspondences between adjacent images. Pose estimates obtained using the dense and sparse algorithms are presented for a number of image sequences captured by different cameras as they moved through two pipes with diameters of 152.40 mm (6”) and 406.40 mm (16”) and lengths of 6 and 4 meters respectively. These results show that accurate pose estimates can be obtained, with errors consistently less than 1 percent of the distance traveled down the pipe. Examples of the stitched images are also presented, which highlight the accuracy of these pose estimates.
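To illustrate the dense class of algorithms, the following is a minimal single-level sketch (not the implementation evaluated in this report) of estimating a small inter-frame translation from the intensity values and derivatives of all pixels: the brightness-constancy constraint is linearized at every pixel and the resulting over-determined system is solved in the least-squares sense.

```python
import numpy as np

def estimate_translation(prev, curr):
    """Estimate the 2D translation between two greyscale frames by
    linearizing brightness constancy at every pixel (a single
    Lucas-Kanade-style step) and solving the stacked constraints
    Ix*dx + Iy*dy = -It by least squares."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    # Spatial intensity derivatives of the previous frame
    # (np.gradient returns derivatives along axis 0 then axis 1).
    Iy, Ix = np.gradient(prev)
    # Temporal derivative between the adjacent frames.
    It = curr - prev
    # One linear constraint per pixel; solve for (dx, dy).
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy
```

A single linearized step like this is only valid for sub-pixel to small motions; in practice dense methods embed such a step in a coarse-to-fine pyramid and iterate with image warping so that larger inter-frame motions can be recovered.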
