How good Neural Networks interpretation methods really are? A quantitative benchmark
[1] Y. Moreau, et al. From genotype to phenotype in Arabidopsis thaliana: in-silico genome interpretation predicts 288 phenotypes from sequencing data, 2021, Nucleic Acids Research.
[2] O. Vinyals, et al. Highly accurate protein structure prediction with AlphaFold, 2021, Nature.
[3] E. Trucco, et al. Using machine learning approaches for multi-omics data analysis: A review, 2021, Biotechnology Advances.
[4] W.-Y. Loh, et al. Classification and regression trees, 2011, WIREs Data Mining Knowl. Discov.
[5] E. Costanza, et al. Evaluating saliency map explanations for convolutional neural networks: a user study, 2020, IUI.
[6] M. Yamada, et al. FsNet: Feature Selection Network on High-dimensional Biological Data, 2020, ArXiv.
[7] N. Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[8] Y. Moreau, et al. Exploring the limitations of biophysical propensity scales coupled with machine learning for protein sequence analysis, 2019, Scientific Reports.
[9] B. M. Greenwell, et al. Interpretable Machine Learning, 2019, Hands-On Machine Learning with R.
[10] G. Kasneci, et al. CancelOut: A Layer for Feature Selection in Deep Neural Networks, 2019, ICANN.
[11] R. Tibshirani, et al. LassoNet: A Neural Network with Feature Sparsity, 2019, J. Mach. Learn. Res.
[12] A. Binder, et al. Unmasking Clever Hans predictors and assessing what machines really learn, 2019, Nature Communications.
[13] J. Zou, et al. Concrete Autoencoders for Differentiable Feature Selection and Reconstruction, 2019, ArXiv.
[14] B. Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[15] W. S. Noble, et al. DeepPINK: reproducible feature selection in deep neural networks, 2018, NeurIPS.
[16] A. M. Tjoa, et al. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI, 2018, CD-MAKE.
[17] Y. Zhang, et al. A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations, 2018, ICML.
[18] M. Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[19] A. Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[20] A. Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[21] Y. Bengio, et al. Diet Networks: Thin Parameters for Fat Genomics, 2016, ICLR.
[22] S. Gomez Colmenarejo, et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.
[23] A. Vedaldi, et al. Salient Deconvolutional Networks, 2016, ECCV.
[24] L. Janson, et al. Panning for gold: ‘model‐X’ knockoffs for high dimensional controlled variable selection, 2016, arXiv:1610.02351.
[25] A. Shcherbina, et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, 2016, ArXiv.
[26] K. Thangadurai, et al. RELIEF: Feature Selection Approach, 2015.
[27] S. Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[28] J. Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[29] T. Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[30] A. Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[31] G. Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[32] D. Gómez, et al. Polynomial calculation of the Shapley value based on sampling, 2009, Comput. Oper. Res.
[33] S. P. Boyd, et al. An Interior-Point Method for Large-Scale $\ell_1$-Regularized Least Squares, 2007, IEEE Journal of Selected Topics in Signal Processing.
[34] R. Tibshirani, et al. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2004.
[35] F. Long, et al. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy, 2003, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[36] A. Kraskov, et al. Estimating mutual information, 2003, Physical Review E, Statistical, Nonlinear, and Soft Matter Physics.
[37] R. Tibshirani, et al. Diagnosis of multiple cancer types by shrunken centroids of gene expression, 2002, Proceedings of the National Academy of Sciences of the United States of America.
[38] G. Cybenko. Approximation by superpositions of a sigmoidal function, 1989, Math. Control. Signals Syst.
[39] L. Breiman. Random Forests, 2001, Machine Learning.