Multi-resolution point cloud generation based on heterogeneous sensor fusion system

Various techniques exist for representing three-dimensional environments, and the level of detail each provides depends on the characteristics of the sensor used. Our sensor system combines a three-dimensional laser scanner with a stereo camera. A rigid transformation matrix representing the three-dimensional relative pose between the two sensors is obtained through extrinsic calibration and is used to generate a multi-resolution point cloud. By combining a high-resolution stereo camera with a wide-range, low-resolution laser scanner, the two sensors complement each other and achieve better results than either sensor alone.
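The fusion step described above can be sketched as follows: a calibrated 4×4 rigid transform maps laser points into the camera frame, where they are merged with the stereo points into a single multi-resolution cloud. This is a minimal illustration, not the authors' implementation; the function names and the NumPy-array representation of the point clouds are assumptions for the example.

```python
import numpy as np

def make_rigid_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R
    and a 3-vector translation t (e.g. from extrinsic calibration)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, points):
    """Apply a rigid transform T (4x4) to an (N, 3) point cloud."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homogeneous.T).T[:, :3]

def fuse_clouds(laser_points, stereo_points, T_laser_to_cam):
    """Map wide-range, low-resolution laser points into the camera
    frame and merge them with the high-resolution stereo points
    into one multi-resolution point cloud."""
    laser_in_cam = transform_points(T_laser_to_cam, laser_points)
    return np.vstack([laser_in_cam, stereo_points])
```

In practice the merged cloud would typically be stored in a resolution-aware structure (e.g. an octree) so that the denser stereo points and sparser laser points can be queried at their native resolutions.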
