Federated Learning: Challenges, Methods, and Future Directions
Tian Li | Anit Kumar Sahu | Ameet Talwalkar | Virginia Smith
[1] Rich Caruana,et al. Multitask Learning , 1998, Encyclopedia of Machine Learning and Data Mining.
[2] Sebastian Thrun,et al. Learning to Learn , 1998, Springer US.
[3] Miguel Oom Temudo de Castro,et al. Practical Byzantine fault tolerance , 1999, OSDI '99.
[4] Yehuda Lindell,et al. Privacy Preserving Data Mining , 2000, Journal of Cryptology.
[5] Rakesh Agrawal,et al. Privacy-preserving data mining , 2000, SIGMOD 2000.
[6] Andrew S. Tanenbaum,et al. Distributed systems: Principles and Paradigms , 2001 .
[7] David Chaum,et al. The dining cryptographers problem: Unconditional sender and recipient untraceability , 1988, Journal of Cryptology.
[8] Massimiliano Pontil,et al. Regularized multi--task learning , 2004, KDD.
[9] Wei Hong,et al. TinyDB: an acquisitional query processing system for sensor networks , 2005, TODS.
[10] Wei Hong,et al. Model-based approximate querying in sensor networks , 2005, The VLDB Journal.
[11] Cynthia Dwork,et al. Calibrating Noise to Sensitivity in Private Data Analysis , 2006, TCC.
[12] Y. Yao,et al. On Early Stopping in Gradient Descent Learning , 2007 .
[13] C.H. van Berkel,et al. Multi-core for mobile phones , 2009, 2009 Design, Automation & Test in Europe Conference & Exhibition.
[14] Alexander J. Smola,et al. Parallelized Stochastic Gradient Descent , 2010, NIPS.
[15] Chris Clifton,et al. δ-Presence without Complete World Knowledge , 2010, IEEE Transactions on Knowledge and Data Engineering.
[16] Nikolaos G. Bourbakis,et al. A Survey on Wearable Sensor-Based Systems for Health Monitoring and Prognosis , 2010, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
[17] C. Dwork. A firm foundation for private data analysis , 2011, Commun. ACM.
[18] Stephen J. Wright,et al. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent , 2011, NIPS.
[19] Ameet Talwalkar,et al. Divide-and-Conquer Matrix Factorization , 2011, NIPS.
[20] Stephen P. Boyd,et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers , 2011, Found. Trends Mach. Learn..
[21] Anand D. Sarwate,et al. Differentially Private Empirical Risk Minimization , 2009, J. Mach. Learn. Res..
[22] Judy Kay,et al. Challenges and Solutions of Ubiquitous User Modeling , 2012, Ubiquitous Display Environments.
[23] Ohad Shamir,et al. Optimal Distributed Online Prediction Using Mini-Batches , 2010, J. Mach. Learn. Res..
[24] Stratis Ioannidis,et al. Privacy-Preserving Ridge Regression on Hundreds of Millions of Records , 2013, 2013 IEEE Symposium on Security and Privacy.
[25] Seunghak Lee,et al. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server , 2013, NIPS.
[26] Mark W. Schmidt,et al. Fast Convergence of Stochastic Gradient Descent under a Strong Growth Condition , 2013, 1308.6370.
[27] Feng Qian,et al. An in-depth study of LTE: effect of network protocol and application behavior on performance , 2013, SIGCOMM.
[28] Shai Shalev-Shwartz,et al. Accelerated Mini-Batch Stochastic Dual Coordinate Ascent , 2013, NIPS.
[29] Davide Anguita,et al. A Public Domain Dataset for Human Activity Recognition using Smartphones , 2013, ESANN.
[30] Dong Lin,et al. Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics , 2013 .
[31] Michael I. Jordan,et al. Estimation, Optimization, and Parallelism when Data is Sparse , 2013, NIPS.
[32] David Lillethun,et al. Mobile fog: a programming model for large-scale applications on the internet of things , 2013, MCC '13.
[33] Tianbao Yang,et al. Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent , 2013, NIPS.
[34] Aaron Roth,et al. The Algorithmic Foundations of Differential Privacy , 2014, Found. Trends Theor. Comput. Sci..
[35] Thomas Hofmann,et al. Communication-Efficient Distributed Dual Coordinate Ascent , 2014, NIPS.
[36] Dong Yu,et al. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs , 2014, INTERSPEECH.
[37] Martin J. Wainwright,et al. Privacy Aware Learning , 2012, JACM.
[38] Shucheng Yu,et al. Privacy Preserving Back-Propagation Neural Network Learning Made Practical with Cloud Computing , 2014, IEEE Transactions on Parallel and Distributed Systems.
[39] Ohad Shamir,et al. Communication-Efficient Distributed Optimization using an Approximate Newton-type Method , 2013, ICML.
[40] Ohad Shamir,et al. Distributed stochastic optimization and learning , 2014, 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[41] Raef Bassily,et al. Differentially Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds , 2014, 1405.7085.
[42] Martin J. Wainwright,et al. Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates , 2013, J. Mach. Learn. Res..
[43] Dan Roth,et al. Distributed Box-Constrained Quadratic Optimization for Dual Linear SVM , 2015, ICML.
[44] Michael I. Jordan,et al. Adding vs. Averaging in Distributed Primal-Dual Optimization , 2015, ICML.
[45] Shafi Goldwasser,et al. Machine Learning Classification over Encrypted Data , 2015, NDSS.
[46] Eric P. Xing,et al. High-Performance Distributed ML at Scale through Parameter Server Consistency Models , 2014, AAAI.
[47] Teruo Higashino,et al. Edge-centric Computing: Vision and Challenges , 2015, Comput. Commun. Rev..
[48] Peter Richtárik,et al. Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling , 2015, NIPS.
[49] Somesh Jha,et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures , 2015, CCS.
[50] Yann LeCun,et al. Deep learning with Elastic Averaging SGD , 2014, NIPS.
[51] Emiliano De Cristofaro,et al. Efficient Private Statistics with Succinct Sketches , 2015, NDSS.
[52] Peter Richtárik,et al. Federated Learning: Strategies for Improving Communication Efficiency , 2016, ArXiv.
[53] Peter Richtárik,et al. Distributed Coordinate Descent Method for Learning with Big Data , 2013, J. Mach. Learn. Res..
[54] Ali Farhadi,et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks , 2016, ECCV.
[55] Ian Goodfellow,et al. Deep Learning with Differential Privacy , 2016, CCS.
[56] Alexander J. Smola,et al. AIDE: Fast and Communication Efficient Distributed Optimization , 2016, ArXiv.
[57] Alexandros G. Dimakis,et al. Gradient Coding: Avoiding Stragglers in Distributed Learning , 2017, ICML.
[58] Michael I. Jordan,et al. CoCoA: A General Framework for Communication-Efficient Distributed Optimization , 2016, J. Mach. Learn. Res..
[59] Ameet Talwalkar,et al. Federated Multi-Task Learning , 2017, NIPS.
[60] Li Xiong,et al. A Comprehensive Comparison of Multiparty Secure Additions with Differential Privacy , 2017, IEEE Transactions on Dependable and Secure Computing.
[61] Farinaz Koushanfar,et al. Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications , 2018, IACR Cryptol. ePrint Arch..
[62] Wei Zhang,et al. Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent , 2017, NIPS.
[63] Venkatesh Saligrama,et al. Adaptive Neural Networks for Efficient Inference , 2017, ICML.
[64] Dan Alistarh,et al. ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning , 2017, ICML.
[65] Dimitris S. Papailiopoulos,et al. Approximate Gradient Coding via Sparse Random Graphs , 2017, ArXiv.
[66] Jeffrey F. Naughton,et al. Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics , 2016, SIGMOD Conference.
[67] Tassilo Klein,et al. Differentially Private Federated Learning: A Client Level Perspective , 2017, ArXiv.
[68] Blaise Agüera y Arcas,et al. Communication-Efficient Learning of Deep Networks from Decentralized Data , 2016, AISTATS.
[69] Sarvar Patel,et al. Practical Secure Aggregation for Privacy-Preserving Machine Learning , 2017, IACR Cryptol. ePrint Arch..
[70] Martín Abadi,et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data , 2016, ICLR.
[71] Vitaly Feldman,et al. Privacy Amplification by Iteration , 2018, 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS).
[72] Mehdi Bennis,et al. Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data , 2018, ArXiv.
[73] Hanlin Tang,et al. Communication Compression for Decentralized Training , 2018, NeurIPS.
[74] Walid Saad,et al. Federated Learning for Ultra-Reliable Low-Latency V2V Communications , 2018, 2018 IEEE Global Communications Conference (GLOBECOM).
[75] Kannan Ramchandran,et al. Speeding Up Distributed Machine Learning Using Codes , 2015, IEEE Transactions on Information Theory.
[76] Martin Jaggi,et al. COLA: Decentralized Linear Learning , 2018, NeurIPS.
[77] Farinaz Koushanfar,et al. DeepSecure: Scalable Provably-Secure Deep Learning , 2017, 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC).
[78] Neel Guha,et al. Knowledge Aggregation via Epsilon Model Spaces , 2018, ArXiv.
[79] Nathan Srebro,et al. Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization , 2018, NeurIPS.
[80] Hao Deng,et al. LoAdaBoost: Loss-Based AdaBoost Federated Machine Learning on medical Data , 2018, ArXiv.
[81] Virginia Smith,et al. Model Aggregation via Good-Enough Model Spaces , 2018 .
[82] Úlfar Erlingsson,et al. Scalable Private Learning with PATE , 2018, ICLR.
[83] Dimitris S. Papailiopoulos,et al. Gradient Coding Using the Stochastic Block Model , 2018, 2018 IEEE International Symposium on Information Theory (ISIT).
[84] Zhenguo Li,et al. Federated Meta-Learning with Fast Convergence and Efficient Communication , 2018, 1802.07876.
[85] Shusen Wang,et al. GIANT: Globally Improved Approximate Newton Method for Distributed Optimization , 2017, NeurIPS.
[86] Kannan Ramchandran,et al. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates , 2018, ICML.
[87] Yue Zhao,et al. Federated Learning with Non-IID Data , 2018, ArXiv.
[88] Peter Rindal,et al. ABY3: A Mixed Protocol Framework for Machine Learning , 2018, IACR Cryptol. ePrint Arch..
[89] Dimitris S. Papailiopoulos,et al. Gradient Diversity: a Key Ingredient for Scalable Distributed Learning , 2017, AISTATS.
[90] Gaurav Kapoor,et al. Protection Against Reconstruction and Its Applications in Private Federated Learning , 2018, ArXiv.
[91] Jianyu Wang,et al. Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms , 2018, ArXiv.
[92] Wei Shi,et al. Federated learning of predictive models from federated Electronic Health Records , 2018, Int. J. Medical Informatics.
[93] Sebastian Caldas,et al. LEAF: A Benchmark for Federated Settings , 2018, ArXiv.
[94] Zhenguo Li,et al. Federated Meta-Learning for Recommendation , 2018, ArXiv.
[95] Sanjiv Kumar,et al. cpSGD: Communication-efficient and differentially-private distributed SGD , 2018, NeurIPS.
[96] Spyridon Bakas,et al. Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation , 2018, BrainLes@MICCAI.
[97] Sebastian Caldas,et al. Expanding the Reach of Federated Learning by Reducing Client Resource Requirements , 2018, ArXiv.
[98] David Eckhoff,et al. Technical Privacy Metrics: a Systematic Survey , 2018, ACM Comput. Surv..
[99] Fan Zhou,et al. On the convergence properties of a K-step averaging stochastic gradient descent algorithm for nonconvex optimization , 2017, IJCAI.
[100] Dimitris S. Papailiopoulos,et al. ATOMO: Communication-efficient Learning via Atomic Sparsification , 2018, NeurIPS.
[101] Hubert Eichner,et al. Federated Learning for Mobile Keyboard Prediction , 2018, ArXiv.
[102] Ramesh Raskar,et al. Split learning for health: Distributed deep learning without sharing raw patient data , 2018, ArXiv.
[103] Peng Jiang,et al. A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication , 2018, NeurIPS.
[104] Shenghuo Zhu,et al. Parallel Restarted SGD for Non-Convex Optimization with Faster Convergence and Less Communication , 2018, ArXiv.
[105] H. Brendan McMahan,et al. Learning Differentially Private Recurrent Language Models , 2017, ICLR.
[106] Anit Kumar Sahu,et al. Communication-Efficient Distributed Strongly Convex Stochastic Optimization: Non-Asymptotic Rates , 2018, 1809.02920.
[107] Úlfar Erlingsson,et al. The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets , 2018, ArXiv.
[108] Mark W. Schmidt,et al. Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron , 2018, AISTATS.
[109] Ameet Talwalkar,et al. One-Shot Federated Learning , 2019, ArXiv.
[110] Mehryar Mohri,et al. Agnostic Federated Learning , 2019, ICML.
[111] Swaroop Ramaswamy,et al. Federated Learning for Emoji Prediction in a Mobile Keyboard , 2019, ArXiv.
[112] Rong Jin,et al. On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization , 2019, ICML.
[113] Dawn Song,et al. Towards Practical Differentially Private Convex Optimization , 2019, 2019 IEEE Symposium on Security and Privacy (SP).
[114] Vitaly Shmatikov,et al. Exploiting Unintended Feature Leakage in Collaborative Learning , 2018, 2019 IEEE Symposium on Security and Privacy (SP).
[115] S. H. Song,et al. Edge-Assisted Hierarchical Federated Learning with Non-IID Data , 2019, ArXiv.
[116] Nathan Srebro,et al. Semi-Cyclic Stochastic Gradient Descent , 2019, ICML.
[117] Tara Javidi,et al. Decentralized Bayesian Learning over Graphs , 2019, ArXiv.
[118] Dan Alistarh,et al. Distributed Learning over Unreliable Networks , 2018, ICML.
[119] Jianyu Wang,et al. Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD , 2018, MLSys.
[120] H. Brendan McMahan,et al. Differentially Private Learning with Adaptive Clipping , 2019, NeurIPS.
[121] Úlfar Erlingsson,et al. The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks , 2018, USENIX Security Symposium.
[122] Yasaman Khazaeni,et al. Bayesian Nonparametric Federated Learning of Neural Networks , 2019, ICML.
[123] Qiang Yang,et al. Federated Machine Learning , 2019, ACM Trans. Intell. Syst. Technol..
[124] Paul M. Thompson,et al. Federated Learning in Distributed Medical Databases: Meta-Analysis of Large-Scale Subcortical Brain Data , 2018, 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019).
[125] Gustavo Alonso,et al. SysML: The New Frontier of Machine Learning Systems , 2019, ArXiv.
[126] Li Huang,et al. Patient Clustering Improves Efficiency of Federated Machine Learning to predict mortality and hospital stay time using distributed Electronic Medical Records , 2019, J. Biomed. Informatics.
[127] Hubert Eichner,et al. Towards Federated Learning at Scale: System Design , 2019, SysML.
[128] Mariana Raykova,et al. Secure Computation for Machine Learning With SPDZ , 2019, ArXiv.
[129] Jun Zhao,et al. Mobile Edge Computing, Blockchain and Reputation-based Crowdsourcing IoT Federated Learning: A Secure, Decentralized and Privacy-preserving System , 2019, ArXiv.
[130] Sebastian U. Stich,et al. Local SGD Converges Fast and Communicates Little , 2018, ICLR.
[131] Nuwan S. Ferdinand,et al. Anytime Minibatch: Exploiting Stragglers in Online Distributed Optimization , 2020, ICLR.
[132] Maria-Florina Balcan,et al. Adaptive Gradient-Based Meta-Learning Methods , 2019, NeurIPS.
[133] Aryan Mokhtari,et al. Robust and Communication-Efficient Collaborative Learning , 2019, NeurIPS.
[134] Raja Lavanya,et al. Fog Computing and Its Role in the Internet of Things , 2019, Advances in Computer and Electrical Engineering.
[135] Ying-Chang Liang,et al. Incentive Design for Efficient Federated Learning in Mobile Networks: A Contract Theory Approach , 2019, 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS).
[136] R. Raskar,et al. Reducing Leakage in Distributed Deep Learning for Sensitive Health Data , 2019 .
[137] Joachim M. Buhmann,et al. Variational Federated Multi-Task Learning , 2019, ArXiv.
[138] Christopher De Sa,et al. MLSys: The New Frontier of Machine Learning Systems , 2019, 1904.03257.
[139] Eric P. Xing,et al. Fault Tolerance in Iterative-Convergent Machine Learning , 2018, ICML.
[140] Kin K. Leung,et al. Adaptive Federated Learning in Resource Constrained Edge Computing Systems , 2018, IEEE Journal on Selected Areas in Communications.
[141] Kuan Eeik Tan,et al. Federated Collaborative Filtering for Privacy-Preserving Personalized Recommendation System , 2019, ArXiv.
[142] Takayuki Nishio,et al. Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge , 2018, ICC 2019 - 2019 IEEE International Conference on Communications (ICC).
[143] Amir Salman Avestimehr,et al. Coded Computation Over Heterogeneous Clusters , 2019, IEEE Transactions on Information Theory.
[144] Badih Ghazi,et al. Scalable and Differentially Private Distributed Aggregation in the Shuffled Model , 2019, ArXiv.
[145] Aryan Mokhtari,et al. FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization , 2019, AISTATS.
[146] Klaus-Robert Müller,et al. Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data , 2019, IEEE Transactions on Neural Networks and Learning Systems.
[148] Tian Li,et al. Fair Resource Allocation in Federated Learning , 2019, ICLR.
[149] Tao Lin,et al. Don't Use Large Mini-Batches, Use Local SGD , 2018, ICLR.
[150] Xiang Li,et al. On the Convergence of FedAvg on Non-IID Data , 2019, ICLR.
[151] Anit Kumar Sahu,et al. Federated Optimization in Heterogeneous Networks , 2018, MLSys.