Úlfar Erlingsson | Dawn Xiaodong Song | Chang Liu | Jernej Kos | Nicholas Carlini
[1] F. Massey. The Kolmogorov-Smirnov Test for Goodness of Fit, 1951.
[2] A. O'Hagan, et al. Bayes estimation subject to uncertainty about parameter constraints, 1976.
[3] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[4] Anders Krogh, et al. A Simple Weight Decay Can Improve Generalization, 1991, NIPS.
[5] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, Computational Linguistics.
[6] Yuichi Nakamura, et al. Approximation of dynamical systems by continuous time recurrent neural networks, 1993, Neural Networks.
[7] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[8] Yoshua Bengio, et al. A Neural Probabilistic Language Model, 2003, J. Mach. Learn. Res.
[9] Irit Dinur, et al. Revealing information while preserving privacy, 2003, PODS.
[10] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[11] Cynthia Dwork, et al. Calibrating Noise to Sensitivity in Private Data Analysis, 2006, TCC.
[12] Cynthia Dwork, et al. Differential Privacy: A Survey of Results, 2008, TAMC.
[13] Kamalika Chaudhuri, et al. Privacy-preserving logistic regression, 2008, NIPS.
[14] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res.
[15] Lukás Burget, et al. Recurrent neural network based language model, 2010, INTERSPEECH.
[16] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method, 2012, ArXiv.
[17] Klaus-Robert Müller, et al. Efficient BackProp, 2012, Neural Networks: Tricks of the Trade.
[18] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[19] Geoffrey E. Hinton, et al. On the importance of initialization and momentum in deep learning, 2013, ICML.
[20] Thomas H. Cormen, et al. Introduction to Algorithms, 2009, MIT Press.
[21] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[22] Somesh Jha, et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing, 2014, USENIX Security Symposium.
[23] Yoshua Bengio, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, 2014, ArXiv.
[24] Christopher D. Manning, et al. Effective Approaches to Attention-based Neural Machine Translation, 2015, EMNLP.
[25] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[26] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[27] Giovanni Felici, et al. Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers, 2013, Int. J. Secur. Networks.
[28] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[29] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.
[30] Christopher D. Manning, et al. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models, 2016, ACL.
[31] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[32] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.
[33] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.
[34] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, ArXiv.
[35] Vitaly Shmatikov, et al. Machine Learning Models that Remember Too Much, 2017, CCS.
[36] Yang You, et al. Large Batch Training of Convolutional Networks, 2017, ArXiv.
[37] Naftali Tishby, et al. Opening the Black Box of Deep Neural Networks via Information, 2017, ArXiv.
[38] Richard Socher, et al. Quasi-Recurrent Neural Networks, 2016, ICLR.
[39] Ben Y. Zhao, et al. Automated Crowdturfing Attacks and Defenses in Online Review Systems, 2017, CCS.
[40] Shiho Moriai, et al. Privacy-Preserving Deep Learning: Revisited and Enhanced, 2017, ATIS.
[41] Yann Dauphin, et al. Convolutional Sequence to Sequence Learning, 2017, ICML.
[42] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[43] Ilya Sutskever, et al. Learning to Generate Reviews and Discovering Sentiment, 2017, ArXiv.
[44] Elad Hoffer, et al. Train longer, generalize better: closing the generalization gap in large batch training of neural networks, 2017, NIPS.
[45] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[46] Ran El-Yaniv, et al. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations, 2016, J. Mach. Learn. Res.
[47] Yang You, et al. Scaling SGD Batch Size to 32K for ImageNet Training, 2017, ArXiv.
[48] Ilia Zaitsev, et al. Pre-Trained Word Vectors in RNN-Based Text Classifiers, 2017.
[49] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2017 IEEE Symposium on Security and Privacy (SP).
[50] Yann Dauphin, et al. A Convolutional Encoder Model for Neural Machine Translation, 2016, ACL.
[51] James Demmel, et al. 100-epoch ImageNet Training with AlexNet in 24 Minutes, 2017, ArXiv.
[52] Reza Shokri, et al. Machine Learning with Membership Privacy using Adversarial Regularization, 2018, CCS.
[53] Binghui Wang, et al. Stealing Hyperparameters in Machine Learning, 2018 IEEE Symposium on Security and Privacy (SP).
[54] Vitaly Shmatikov, et al. The Natural Auditor: How To Tell If Someone Used Your Words To Train Their Model, 2018, ArXiv.
[55] Richard Socher, et al. Regularizing and Optimizing LSTM Language Models, 2017, ICLR.
[56] Richard Socher, et al. An Analysis of Neural Language Modeling at Multiple Scales, 2018, ArXiv.
[57] Ling Liu, et al. Towards Demystifying Membership Inference Attacks, 2018, ArXiv.
[58] Somesh Jha, et al. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2018 IEEE 31st Computer Security Foundations Symposium (CSF).
[59] Seong Joon Oh, et al. Towards Reverse-Engineering Black-Box Neural Networks, 2017, ICLR.
[60] Andrew M. Dai, et al. Gmail Smart Compose: Real-Time Assisted Writing, 2019, KDD.
[61] David Evans, et al. Evaluating Differentially Private Machine Learning in Practice, 2019, USENIX Security Symposium.