Abstract

The emergence of new autonomous driving systems and functions – in particular, systems that base their decisions on the output of machine learning subsystems responsible for environment perception – brings a significant change in the risks to the safety and security of transportation. Such Advanced Driver Assistance Systems are vulnerable to new types of malicious attacks, and their properties are often not well understood. This paper demonstrates the theoretical and practical feasibility of deliberate physical adversarial attacks against deep learning perception systems in general, with a focus on safety-critical driver assistance applications such as traffic sign classification in particular. Our newly developed traffic sign stickers differ from other similar methods in that they require no special knowledge or precision in their creation and deployment; they therefore present a realistic and severe threat to traffic safety and security. In this paper we preemptively point out the dangers and easily exploitable weaknesses that current and future systems are bound to face.
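To make the threat model concrete, the sketch below simulates a sticker-style occlusion on a traffic sign image and checks whether a classifier's prediction flips. This is only an illustrative approximation, not the paper's actual sticker design or target network: the model file `sign_classifier.pt`, the input image `stop_sign.png`, the 64x64 input size, and the patch position and size are all hypothetical placeholders.

```python
# Minimal sketch (not the authors' method): paste a solid "sticker" patch onto a
# traffic sign image and compare the classifier's prediction before and after.
# All file names, the model, and the patch geometry are assumed for illustration.
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image


def apply_sticker(img: torch.Tensor, top: int, left: int, size: int) -> torch.Tensor:
    """Paste a solid black square 'sticker' onto a CHW image tensor in [0, 1]."""
    patched = img.clone()
    patched[:, top:top + size, left:left + size] = 0.0
    return patched


def predict(model: nn.Module, img: torch.Tensor) -> int:
    """Return the argmax class index for a single CHW image."""
    with torch.no_grad():
        logits = model(img.unsqueeze(0))
    return int(logits.argmax(dim=1).item())


if __name__ == "__main__":
    # Assumed inputs: a saved nn.Module and a photo of a stop sign.
    model = torch.load("sign_classifier.pt").eval()
    to_tensor = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])

    clean = to_tensor(Image.open("stop_sign.png").convert("RGB"))
    patched = apply_sticker(clean, top=20, left=20, size=12)

    print("clean prediction:  ", predict(model, clean))
    print("patched prediction:", predict(model, patched))
```

A prediction change under such a crude occlusion already hints at the fragility the paper targets; the attacks described in the abstract go further by crafting stickers that are deliberately effective rather than merely occluding.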