Can Optical Trojans Assist Adversarial Perturbations?
[1] Asaf Shabtai, et al. The Translucent Patch: A Physical and Universal Attack on Object Detectors, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[3] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[4] S. Nayar, et al. What are good apertures for defocus deblurring?, 2009 IEEE International Conference on Computational Photography (ICCP).
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Andreas Geiger, et al. Are we ready for autonomous driving? The KITTI vision benchmark suite, 2012 IEEE Conference on Computer Vision and Pattern Recognition.
[7] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[8] Ayan Chakrabarti, et al. Depth and Deblurring from a Spectrally-Varying Depth-of-Field, 2012, ECCV.
[9] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[10] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[11] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[12] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017 IEEE Symposium on Security and Privacy (SP).
[13] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[14] Lukasz Kaiser, et al. Depthwise Separable Convolutions for Neural Machine Translation, 2017, ICLR.
[15] Siddharth Garg, et al. BadNets: Evaluating Backdooring Attacks on Deep Neural Networks, 2019, IEEE Access.
[16] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[17] Ramesh Raskar, et al. Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing, 2007, ACM Trans. Graph.
[18] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[19] Bing-Yu Chen, et al. Extracting depth and matte using a color-filtered aperture, 2008, SIGGRAPH Asia '08.
[20] Frédo Durand, et al. Image and depth from a conventional camera with a coded aperture, 2007, ACM Trans. Graph.
[21] Xin He, et al. Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models, 2019 IEEE International Conference on Embedded Software and Systems (ICESS).
[22] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[23] E. E. Fenimore, et al. Coded Aperture Imaging: Many Holes Make Light Work, 1980.
[24] J. M. Pierre Langlois, et al. Camera intrinsic blur kernel estimation: A reliable framework, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[26] J. Doug Tygar, et al. Adversarial machine learning, 2011, AISec '11.
[28] Shree K. Nayar, et al. Programmable Aperture Camera Using LCoS, 2010, IPSJ Trans. Comput. Vis. Appl.
[29] Liang Tong, et al. Defending Against Physically Realizable Attacks on Image Classification, 2020, ICLR.
[30] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[31] Dawn Xiaodong Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, arXiv.
[32] J. Zico Kolter, et al. Adversarial camera stickers: A physical camera-based attack on deep learning systems, 2019, ICML.
[33] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).