Phantom of the ADAS: Phantom Attacks on Driver-Assistance Systems

The absence of deployed vehicular communication systems prevents the advanced driver-assistance systems (ADASs) and autopilots of semi- and fully autonomous cars from validating their virtual perception of the surrounding physical environment with a third party; researchers have suggested various attacks that exploit this limitation. Since applying these attacks carries a cost (exposure of the attacker’s identity), the delicate exposure-versus-application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi- and fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and shift the abovementioned balance, without needing to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on an Internet-facing hacked digital billboard located near a road. We show that the car industry has not considered this type of attack by demonstrating it on two of today’s most advanced ADAS and autopilot technologies: the Mobileye 630 PRO and the Tesla Model X (HW 2.5). Our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers them real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. To mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light and is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but will not eliminate them.
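The decision rule of the countermeasure described above — combining per-aspect evidence (context, surface, reflected light) into a single real-vs-phantom verdict — could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual model: the per-aspect scores stand in for the outputs of learned sub-models, and the weights, bias, and threshold are arbitrary assumed values.

```python
import math

def combine_scores(context_s, surface_s, light_s,
                   weights=(1.0, 1.0, 1.0), bias=-1.5):
    """Fuse three per-aspect scores (each in [0, 1]) into a probability
    that the detected object is real. A logistic combiner is one simple
    choice; the weights and bias here are illustrative, not trained."""
    z = (bias
         + weights[0] * context_s   # does the surrounding scene make sense?
         + weights[1] * surface_s   # does the object's surface look physical?
         + weights[2] * light_s)    # is the reflected light plausible?
    return 1.0 / (1.0 + math.exp(-z))

def is_phantom(scores, threshold=0.5):
    """Flag a detection as a phantom when the fused realness
    probability falls below the (assumed) decision threshold."""
    return combine_scores(*scores) < threshold
```

In this sketch, a detection that scores low on all three aspects (e.g., a flat projection with an implausible surface and lighting) is flagged as a phantom, while consistently high scores let the detection pass as a real object; tuning the threshold trades false alarms against missed phantoms, which is what the reported 0.99 AUC summarizes across all thresholds.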
