Training on the Edge: The why and the how
Paul D. Hovland | Olivier Beaumont | Gerard Gorman | Navjot Kukreja | Jan Hückelheim | Alena Shilova | Nicola Ferrier