The role of cloud computing in the development and application of ADAS

This work elaborates on the development cycles of ADAS applications that depend on resources not permanently available in the vehicle. For example, data are collected from LIDAR, video cameras, precise localization, and user interaction with the ADAS features, and then consumed by machine learning algorithms hosted either locally or in the cloud. This paper investigates the requirements for processing camera streams on the fly in the vehicle, and the possibility of offloading that processing to the cloud in order to reduce the cost of the in-vehicle hardware. We highlight some representative computer vision applications and assess numerically under which network conditions offloading to the cloud is feasible.
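The feasibility question above can be framed as a back-of-envelope check: offloading works only if the uplink can sustain the compressed camera stream and the per-frame round trip fits within the application's deadline. The sketch below illustrates this reasoning; all numeric values (frame size, frame rate, latencies, deadline) are illustrative assumptions, not figures from the paper.

```python
def offload_feasible(frame_kb, fps, uplink_mbps, rtt_ms, cloud_infer_ms, deadline_ms):
    """Rough check whether a compressed camera stream can be processed
    in the cloud within a per-frame deadline. All inputs are assumptions."""
    # Sustained uplink bandwidth needed for the stream, in Mbit/s.
    required_mbps = frame_kb * 8 * fps / 1000.0
    if required_mbps > uplink_mbps:
        return False  # the link cannot even carry the stream
    # Per-frame end-to-end latency: upload time + network RTT + cloud inference.
    # kbit / (Mbit/s) conveniently yields milliseconds.
    upload_ms = frame_kb * 8 / uplink_mbps
    total_ms = upload_ms + rtt_ms + cloud_infer_ms
    return total_ms <= deadline_ms

# Example: 100 KB JPEG frames at 10 fps over a 20 Mbit/s uplink,
# 50 ms RTT, 30 ms cloud inference, 150 ms per-frame deadline.
print(offload_feasible(100, 10, 20.0, 50.0, 30.0, 150.0))  # True: 40 + 50 + 30 = 120 ms
```

Raising the frame rate or RTT quickly flips the verdict, which is why such an analysis must be repeated per application and per network condition.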
