Interpretable and Differentially Private Predictions
[1] Dejing Dou, et al. Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction, 2016, AAAI.
[2] Avrim Blum, et al. The Johnson-Lindenstrauss Transform Itself Preserves Differential Privacy, 2012, IEEE 53rd Annual Symposium on Foundations of Computer Science (FOCS).
[3] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[4] Martín Abadi, et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2016, ICLR.
[5] Paul Voigt, et al. The EU General Data Protection Regulation (GDPR): A Practical Guide, 2017.
[6] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, arXiv.
[7] Alexander Binder, et al. Explaining nonlinear classification decisions with deep Taylor decomposition, 2015, Pattern Recognition.
[8] Li Zhang, et al. Learning Differentially Private Language Models Without Losing Accuracy, 2017, arXiv.
[9] Vitaly Shmatikov, et al. Machine Learning Models that Remember Too Much, 2017, CCS.
[10] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, arXiv.
[11] Chaoyang Zhang, et al. Deep learning architectures for multi-label classification of intelligent health risk prediction, 2017, BMC Bioinformatics.
[12] Vitaly Shmatikov, et al. Privacy-preserving deep learning, 2015, 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[13] Dejing Dou, et al. Preserving differential privacy in convolutional deep belief networks, 2017, Machine Learning.
[14] Nina Mishra, et al. Privacy via the Johnson-Lindenstrauss Transform, 2012, Journal of Privacy and Confidentiality.
[15] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.
[16] Chaoyang Zhang, et al. An Ensemble Multilabel Classification for Disease Risk Prediction, 2017, Journal of Healthcare Engineering.
[17] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017, IEEE International Conference on Computer Vision (ICCV).
[18] Úlfar Erlingsson, et al. The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets, 2018, arXiv.
[19] Yin Yang, et al. Functional Mechanism: Regression Analysis under Differential Privacy, 2012, Proceedings of the VLDB Endowment.
[20] Seth Flaxman, et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation", 2016, AI Magazine.
[21] Reza Ebrahimpour, et al. Mixture of experts: a literature survey, 2014, Artificial Intelligence Review.
[22] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[23] Úlfar Erlingsson, et al. Scalable Private Learning with PATE, 2018, ICLR.
[24] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[25] Geoffrey E. Hinton, et al. The EM algorithm for mixtures of factor analyzers, 1996.
[26] Markus H. Gross, et al. A unified view of gradient-based attribution methods for Deep Neural Networks, 2017, NIPS.
[27] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[28] Aaron Roth, et al. The Algorithmic Foundations of Differential Privacy, 2014, Foundations and Trends in Theoretical Computer Science.
[29] Ramprasaath R. Selvaraju, et al. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization, 2016.