TrueView: A LIDAR Only Perception System for Autonomous Vehicle (Interactive Presentation)

Real-time perception and understanding of the environment is essential for an autonomous vehicle. To obtain the most accurate perception, existing solutions propose combining multiple sensors. However, embedding a large number of sensors in the vehicle implies processing a large amount of data, which increases system complexity and cost. In this work, we present a novel approach that uses only one LIDAR sensor. Our approach reduces the size and complexity of the machine learning algorithms used. We propose a novel method to generate multiple 2D representations from the 3D point cloud produced by the LIDAR sensor. The obtained representations address the sparsity and connectivity issues encountered with LIDAR-based solutions.

2012 ACM Subject Classification: Computing methodologies → Computer vision representations
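As a rough illustration of what "generating a 2D representation from a 3D point cloud" can look like, the sketch below projects raw LIDAR points onto a spherical range image, one common way to obtain a dense, connected 2D grid from sparse 3D returns. This is not the paper's implementation; the projection choice, field-of-view bounds, and image resolution are illustrative assumptions.

```python
# Minimal sketch (assumed approach, not the paper's method): spherical
# projection of an N x 3 LiDAR point cloud (x, y, z) into an h x w range
# image. FOV and resolution values below are illustrative assumptions.
import numpy as np

def spherical_projection(points, h=64, w=1024,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Map N x 3 LiDAR points to an h x w range image (0 marks empty cells)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)        # range of each point

    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-6))       # elevation angle

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates (out-of-FOV points are clamped).
    u = 0.5 * (1.0 - yaw / np.pi) * w                # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * h         # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    # When several points land in the same cell, keep the closest return
    # by writing points in order of decreasing range.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```

The same projection can be reused to build additional channels (e.g. height or intensity maps) by writing a different per-point attribute into the image, which is one way several 2D representations can be derived from a single scan.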
