mmWave Radar Point Cloud Segmentation using GMM in Multimodal Traffic Monitoring

In multimodal traffic monitoring, we gather traffic statistics for distinct transportation modes, such as pedestrians, cars, and bicycles, in order to analyze and improve people's daily mobility in terms of safety and convenience. Owing to its robustness to poor lighting and adverse weather conditions, and its inherent ability to measure speed, the radar sensor is a suitable option for this application. However, the sparse radar data produced by conventional commercial radars make transportation mode classification extremely challenging. We therefore propose to use a high-resolution millimeter-wave (mmWave) radar sensor to obtain a richer radar point cloud representation of the traffic monitoring scene. Based on a new feature vector, we use a multivariate Gaussian mixture model (GMM) to perform radar point cloud segmentation, i.e., point-wise classification, in an unsupervised learning setting. In our experiment, we collected radar point clouds for pedestrians and cars, which also contained the inevitable clutter from the surroundings. The experimental results of applying the GMM to the new feature vector demonstrate good segmentation performance in terms of the intersection-over-union (IoU) metric. The detailed methodology and validation metrics are presented and discussed.
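The sketch below illustrates the kind of pipeline the abstract describes: fit a multivariate GMM to per-point radar features, assign each point to a mixture component, and score the result with per-class IoU. It is a minimal example, not the paper's implementation; the feature layout (position, Doppler velocity, intensity), the component count, and the helper names segment_point_cloud and per_class_iou are illustrative assumptions, since the paper's actual "new feature vector" is not reproduced here.

```python
# Illustrative sketch (assumed setup, not the paper's code): unsupervised GMM
# segmentation of an mmWave radar point cloud, where each point carries a
# feature vector such as [x, y, Doppler velocity, intensity].
import numpy as np
from sklearn.mixture import GaussianMixture


def segment_point_cloud(features, n_components=3, seed=0):
    """Fit a multivariate GMM to per-point features and return point-wise labels.

    features : (N, D) array, one row per radar point.
    n_components : number of mixture components, e.g. pedestrian, car, clutter.
    """
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=seed)
    # Hard point-wise assignment; gmm.predict_proba would give soft labels.
    return gmm.fit_predict(features)


def per_class_iou(pred, truth, n_classes):
    """Per-class intersection-over-union, assuming cluster ids have already
    been matched to ground-truth class ids (e.g. by majority vote)."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        ious.append(inter / union if union > 0 else float("nan"))
    return ious


# Toy usage with random data standing in for one radar frame.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 4))              # N points x 4 features
labels = segment_point_cloud(feats, n_components=3)
```

Because the GMM is unsupervised, its component indices are arbitrary; before computing IoU against annotated frames, each component must be mapped to a semantic class, for instance by majority vote over the labeled points it contains.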
