[1] George D. Magoulas, et al. Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods, 1999, Neural Computation.
[2] Martín Abadi, et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2016, ICLR.
[3] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[4] Philip Bachman, et al. Deep Reinforcement Learning that Matters, 2017, AAAI.
[5] Jeffrey F. Naughton, et al. Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics, 2016, SIGMOD Conference.
[6] Yoshua Bengio, et al. Three Factors Influencing Minima in SGD, 2017, ArXiv.
[7] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[8] Praneeth Netrapalli, et al. Non-Gaussianity of Stochastic Gradient Noise, 2019, ArXiv.
[9] Christoph H. Lampert, et al. Data-Dependent Stability of Stochastic Gradient Descent, 2017, ICML.
[10] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[11] Denis J. Dean, et al. Comparison of neural networks and discriminant analysis in predicting forest cover types, 1998.
[12] Cynthia Dwork, et al. Differential Privacy, 2006, ICALP.
[13] Kobbi Nissim, et al. On the Generalization Properties of Differential Privacy, 2015, ArXiv.
[14] Levent Sagun, et al. A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks, 2019, ICML.
[15] Sofya Raskhodnikova, et al. Smooth sensitivity and sampling in private data analysis, 2007, STOC '07.
[16] Norma Zagaglia Salvi, et al. A Generalization of the 'Problème des Rencontres', 2018, J. Integer Seq.
[17] Somesh Jha, et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 2015, CCS.
[18] Amir Houmansadr, et al. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning, 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[19] Prateek Jain, et al. SGD without Replacement: Sharper Rates for General Smooth Convex Functions, 2019, ICML.
[20] Yoram Singer, et al. Train faster, generalize better: Stability of stochastic gradient descent, 2015, ICML.
[21] Léon Bottou, et al. Stochastic Gradient Descent Tricks, 2012, Neural Networks: Tricks of the Trade.
[22] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[23] Aaron Roth, et al. The Algorithmic Foundations of Differential Privacy, 2014, Found. Trends Theor. Comput. Sci.
[24] Úlfar Erlingsson, et al. Scalable Private Learning with PATE, 2018, ICLR.
[25] Ian Goodfellow, et al. Deep Learning with Differential Privacy, 2016, CCS.
[26] Matthias Hein, et al. The Loss Surface of Deep and Wide Neural Networks, 2017, ICML.
[27] Quoc V. Le, et al. A Bayesian Perspective on Generalization and Stochastic Gradient Descent, 2017, ICLR.
[28] John D. Hunter, et al. Matplotlib: A 2D Graphics Environment, 2007, Computing in Science & Engineering.
[29] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[30] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[31] Anand D. Sarwate, et al. Stochastic gradient descent with differentially private updates, 2013, 2013 IEEE Global Conference on Signal and Information Processing.
[32] Tassilo Klein, et al. Differentially Private Federated Learning: A Client Level Perspective, 2017, ArXiv.
[33] David M. Blei, et al. Stochastic Gradient Descent as Approximate Bayesian Inference, 2017, J. Mach. Learn. Res.
[34] Raef Bassily, et al. Differentially Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds, 2014, FOCS.
[35] Y. B. Wah, et al. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests, 2011.
[36] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[37] André Elisseeff, et al. Stability and Generalization, 2002, J. Mach. Learn. Res.
[38] Wes McKinney, et al. Data Structures for Statistical Computing in Python, 2010, SciPy.
[39] F. Bach, et al. Bridging the gap between constant step size stochastic gradient descent and Markov chains, 2017, The Annals of Statistics.
[40] Arun Rajkumar, et al. A Differentially Private Stochastic Gradient Descent Algorithm for Multiparty Classification, 2012, AISTATS.
[41] Quoc V. Le, et al. The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study, 2019, ICML.
[42] J. Schmidhuber, et al. The Sacred Infrastructure for Computational Research, 2017, SciPy.
[43] Razvan Pascanu, et al. Sharp Minima Can Generalize For Deep Nets, 2017, ICML.
[44] Ohad Shamir, et al. Without-Replacement Sampling for Stochastic Gradient Methods, 2016, NIPS.
[45] David Evans, et al. Evaluating Differentially Private Machine Learning in Practice, 2019, USENIX Security Symposium.
[46] Stelvio Cimato, et al. Encyclopedia of Cryptography and Security, 2005.
[47] Anand D. Sarwate, et al. Differentially Private Empirical Risk Minimization, 2009, J. Mach. Learn. Res.
[48] Marcus A. Badgeley, et al. Automated deep-neural-network surveillance of cranial images for acute neurologic events, 2018, Nature Medicine.
[49] Stephen P. Boyd, et al. Convex Optimization, 2004, Algorithms and Theory of Computation Handbook.