Open Source Platform for Extended Perception Using Communications and Machine Learning on a Small-Scale Vehicular Testbed

The aggregation of the data provided by a vehicle's beaconing services and sensors can be used to support safety applications, such as the dissemination of early warnings to drivers about potential dangers on the road. These safety applications can be further enhanced by having vehicles exchange information with each other about the surrounding environment. In this paper, we propose an open-source library for extended perception created by merging local maps. Moreover, we introduce a novel architecture comprising key components such as ego perception, environment mapping, and map merging. Each of these components is described and tested using physical vehicles from the ALIVE platform. In addition, the vehicle's perception is analysed by our own Autonomous physical Car Artificial Intelligence, named ACAI, an object detection convolutional neural network. Through simulations, we show that the delay in sending data from one vehicle to another is minimal, so the map merging system works as intended and we are able to create and communicate an accurate extended perception.
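The paper's actual merging algorithm is not reproduced here, but the core idea of combining local maps into an extended perception can be sketched as follows. This is a minimal illustration under assumed simplifications: each vehicle maintains a grid-based local occupancy map, the relative pose between vehicles is known (here reduced to a simple grid offset), and overlapping cells keep the higher occupancy estimate. The function name `merge_local_maps` is hypothetical, not part of the proposed library.

```python
import numpy as np

def merge_local_maps(ego_map, remote_map, offset):
    """Merge a remote vehicle's local occupancy grid into the ego grid.

    ego_map, remote_map: 2-D arrays of occupancy estimates in [0, 1].
    offset: (row, col) position of the remote map's origin inside the
            ego grid, assumed already derived from the vehicles' poses.
    Overlapping cells keep the higher occupancy estimate.
    """
    merged = ego_map.copy()
    r0, c0 = offset
    r1, c1 = r0 + remote_map.shape[0], c0 + remote_map.shape[1]
    merged[r0:r1, c0:c1] = np.maximum(merged[r0:r1, c0:c1], remote_map)
    return merged

# Two small local grids; the remote map is anchored at cell (1, 1) of the ego grid.
ego = np.zeros((4, 4))
ego[0, 0] = 0.9                  # obstacle seen only by the ego vehicle
remote = np.zeros((3, 3))
remote[2, 2] = 0.8               # obstacle seen only by the remote vehicle
extended = merge_local_maps(ego, remote, (1, 1))
# The extended map now contains both obstacles.
```

In practice the offset would come from the vehicles' communicated poses, and the cell-combination rule (max, average, Bayesian update) is a design choice; max is used here only for brevity.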
