An RGB-D based Augmented Reality 3D Reconstruction System for Robotic Environmental Inspection of Radioactive Areas

Preparing human interventions in hazardous, unknown and unstructured environments is a difficult task. Intervention planning should focus on optimizing operations in order to reduce personnel exposure to hazards. Such optimization is not always possible because of a lack of information about the intervention environment; this information can be gathered through a robotic inspection before the intervention is prepared. The data collected during this inspection, such as radiation, temperature and oxygen levels, must be accurate and precisely positioned in the environment in order to optimize the human approach path and the time spent in the intervention area. In this paper we present a robotic system for collecting physical quantities precisely positioned in the environment; it is easy for the robot operator to use and is seamlessly integrated into the robot control. The system collects all sensor readings while building a 3D model of the environment, helping the operator locate the most dangerous zones. Preliminary results are presented using CERN's accelerator facilities as the testing area.
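As a rough illustration of the idea of positioning sensor readings in the reconstructed environment, the sketch below (not the authors' implementation; all names, units and parameters are assumptions) stamps each environmental reading with the camera pose estimated by an RGB-D reconstruction pipeline and accumulates the readings in a coarse voxel grid, so the most dangerous zones can later be looked up by position.

```python
# Minimal sketch, assuming the RGB-D reconstruction provides a 4x4 camera pose
# (world <- camera) per frame and the environmental sensor is rigidly mounted
# at the camera origin. All identifiers and values here are illustrative.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np


@dataclass
class LocalizedReading:
    position: np.ndarray   # sensor position in the world frame (metres)
    quantity: str          # e.g. "radiation", "temperature", "oxygen"
    value: float           # measured value, in the quantity's own unit


@dataclass
class MeasurementMap:
    """Accumulates readings in a coarse voxel grid over the reconstructed scene."""
    voxel_size: float = 0.25  # metres; coarser than the 3D model resolution
    voxels: Dict[Tuple[int, int, int], List[LocalizedReading]] = field(default_factory=dict)

    def add(self, camera_pose: np.ndarray, quantity: str, value: float) -> None:
        # Take the translation of the estimated pose as the reading's position
        # and bucket it into the voxel grid.
        position = camera_pose[:3, 3]
        key = tuple(np.floor(position / self.voxel_size).astype(int))
        self.voxels.setdefault(key, []).append(LocalizedReading(position, quantity, value))

    def hottest_voxels(self, quantity: str, top_k: int = 3):
        # Rank voxels by their maximum reading of the requested quantity,
        # e.g. to highlight the most dangerous zones for the operator.
        scored = []
        for key, readings in self.voxels.items():
            values = [r.value for r in readings if r.quantity == quantity]
            if values:
                scored.append((max(values), key))
        scored.sort(reverse=True)
        return scored[:top_k]


if __name__ == "__main__":
    world_map = MeasurementMap()
    pose = np.eye(4)
    pose[:3, 3] = [2.0, 0.0, 1.5]           # hypothetical pose from the reconstruction
    world_map.add(pose, "radiation", 12.7)  # hypothetical dose-rate reading
    print(world_map.hottest_voxels("radiation"))
```

In a real system the sensor would be extrinsically calibrated with respect to the camera rather than assumed co-located, but the same pose-stamping principle applies.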
