Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks

Deep learning (DL) offers potential improvements throughout the CAD tool-flow, and lithographic hotspot detection is one promising application. However, DL techniques have been shown to be especially vulnerable to inference-time and training-time adversarial attacks. Recent work has demonstrated that a small fraction of malicious physical designers can stealthily "backdoor" a DL-based hotspot detector during its training phase so that it classifies regular layout clips accurately but predicts hotspots containing a specially crafted trigger shape as non-hotspots. We propose a novel training data augmentation strategy as a powerful defense against such backdooring attacks. The defense eliminates the intentional biases introduced into the training data, yet requires no knowledge of which training samples are poisoned or of the nature of the backdoor trigger. Our results show that the defense drastically reduces the attack success rate, from 84% to ~0%.
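To make the idea concrete, below is a minimal sketch of one plausible augmentation-based defense of the kind the abstract describes; it is not the paper's exact method, and all names (augment_clip, augmented_batch) are illustrative. The intuition: apply random label-preserving geometric transforms to every layout clip during training, so that a fixed trigger shape no longer appears with a consistent position or orientation and the spurious trigger-to-"non-hotspot" correlation planted by the attacker is diluted, without ever identifying which samples are poisoned.

```python
# Hypothetical sketch of a bias-busting augmentation defense for a
# hotspot-detection training set. Rotations by multiples of 90 degrees
# and axis flips preserve Manhattan layout geometry, so a hotspot clip
# remains a hotspot under every transform (label-preserving).
import random
import numpy as np

def augment_clip(clip: np.ndarray) -> np.ndarray:
    """Apply a random symmetry of the square to a 2-D layout clip."""
    k = random.randint(0, 3)        # number of 90-degree rotations
    out = np.rot90(clip, k)
    if random.random() < 0.5:       # random horizontal flip
        out = np.flip(out, axis=1)
    if random.random() < 0.5:       # random vertical flip
        out = np.flip(out, axis=0)
    return out.copy()

def augmented_batch(clips, labels):
    """Yield a freshly augmented view of each (clip, label) pair.

    Re-sampling the transform every epoch means a backdoor trigger is
    seen in many orientations, so the detector cannot latch onto it as
    a reliable shortcut feature for the "non-hotspot" class.
    """
    for clip, label in zip(clips, labels):
        yield augment_clip(clip), label
```

In this sketch the defense is purely data-side: the model architecture and loss are untouched, which matches the abstract's claim that no knowledge of the poisoned samples or the trigger is needed.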
