[1] Scott Shenker, et al. Spark: Cluster Computing with Working Sets, 2010, HotCloud.
[2] Abutalib Aghayev, et al. Litz: Elastic Framework for High-Performance Distributed Machine Learning, 2018, USENIX Annual Technical Conference.
[3] Martin Jaggi, et al. Primal-Dual Rates and Certificates, 2016, ICML.
[4] Michael I. Jordan, et al. CoCoA: A General Framework for Communication-Efficient Distributed Optimization, 2016, J. Mach. Learn. Res.
[5] Gregory R. Ganger, et al. Proteus: agile ML elasticity through tiered reliability in dynamic resource markets, 2017, EuroSys.
[6] Dimitrios Sarigiannis, et al. Snap ML: A Hierarchical Framework for Machine Learning, 2018, NeurIPS.
[7] Tao Lin, et al. Don't Use Large Mini-Batches, Use Local SGD, 2018, ICLR.
[8] Nikolas Ioannou, et al. Crail: A High-Performance I/O Architecture for Distributed Data Processing, 2017, IEEE Data Eng. Bull.
[9] Michael J. Freedman, et al. SLAQ: quality-driven scheduling for distributed machine learning, 2017, SoCC.
[10] Thomas Hofmann, et al. Communication-Efficient Distributed Dual Coordinate Ascent, 2014, NIPS.
[11] Chris Jermaine, et al. An experimental comparison of complex object implementations for big data systems, 2017, SoCC.