[1] Ming Fan, et al. Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis, 2020, IEEE Transactions on Information Forensics and Security.
[2] Pascal Vincent, et al. Visualizing Higher-Layer Features of a Deep Network, 2009.
[3] Theodoros Spyridopoulos, et al. Efficient and Interpretable Real-Time Malware Detection Using Random-Forest, 2019, 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA).
[4] Gang Wang, et al. LEMNA: Explaining Deep Learning based Security Applications, 2018, CCS.
[5] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[6] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[7] Kenli Li, et al. MalFCS: An effective malware classification framework with automated feature extraction based on deep convolutional neural networks, 2020, J. Parallel Distributed Comput..
[8] Xiao Wang, et al. Defensive dropout for hardening deep neural networks under adversarial attacks, 2018, ICCAD.
[9] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv..
[10] Haytham Elmiligi, et al. The Curious Case of Machine Learning In Malware Detection, 2019, ICISSP.
[11] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res..
[12] George Panoutsos, et al. Interpretable Machine Learning: Convolutional Neural Networks with RBF Fuzzy Logic Classification Rules, 2018, 2018 International Conference on Intelligent Systems (IS).
[13] Qin Zheng, et al. IMCFN: Image-based malware classification using fine-tuned convolutional neural network architecture, 2020, Comput. Networks.
[14] Li Chen, et al. Deep Transfer Learning for Static Malware Classification, 2018, ArXiv.
[15] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, ArXiv.
[16] Kaushik Roy, et al. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures, 2018, Front. Neurosci..
[17] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[18] Danda B. Rawat, et al. Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS, 2021, IEEE Communications Surveys & Tutorials.
[19] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[20] Brendan J. Frey, et al. Adaptive dropout for training deep neural networks, 2013, NIPS.
[21] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[22] Klaus-Robert Müller, et al. Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models, 2018, Pattern Recognit..
[23] Alun D. Preece, et al. Interpretability of deep learning models: A survey of results, 2017, 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI).
[24] Markus H. Gross, et al. Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation, 2019, ICML.
[25] Alexander Binder, et al. Evaluating the Visualization of What a Deep Neural Network Has Learned, 2015, IEEE Transactions on Neural Networks and Learning Systems.
[26] Qin Zheng, et al. Image-Based malware classification using ensemble of CNN architectures (IMCEC), 2020, Comput. Secur..
[27] Heiner Stuckenschmidt, et al. iDropout: Leveraging Deep Taylor Decomposition for the Robustness of Deep Neural Networks, 2019, OTM Conferences.
[28] Michael R. Lyu, et al. Why an Android App is Classified as Malware? Towards Malware Classification Interpretation, 2020, ArXiv.
[29] B. S. Manjunath, et al. Malware images: visualization and automatic classification, 2011, VizSec '11.
[30] Andreas Nürnberger, et al. The Power of Ensembles for Active Learning in Image Classification, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[31] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[33] Songqing Yue, et al. Imbalanced Malware Images Classification: a CNN based Approach, 2017, ArXiv.
[34] Qi Jing, et al. SEdroid: A Robust Android Malware Detector using Selective Ensemble Learning, 2019, 2020 IEEE Wireless Communications and Networking Conference (WCNC).
[35] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[36] Fuxin Li, et al. Visualizing Deep Networks by Optimizing with Integrated Gradients, 2019, CVPR Workshops.
[37] Davide Bacciu, et al. Augmenting Recurrent Neural Networks Resilience by Dropout, 2020, IEEE Transactions on Neural Networks and Learning Systems.
[38] Steven H. H. Ding, et al. I-MAD: A Novel Interpretable Malware Detector Using Hierarchical Transformer, 2019, ArXiv.
[39] Mark Stamp, et al. Deep Learning versus Gist Descriptors for Image-based Malware Classification, 2018, ICISSP.
[40] Alexander Binder, et al. Explaining nonlinear classification decisions with deep Taylor decomposition, 2015, Pattern Recognit..
[41] Duen Horng Chau, et al. Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations, 2019, IEEE Transactions on Visualization and Computer Graphics.