GSLICE: controlled spatial sharing of GPUs for a scalable inference platform
Aditya Dhakal | Sameer G. Kulkarni | K. K. Ramakrishnan
[1] Amar Phanishayee,et al. Themis: Fair and Efficient GPU Cluster Scheduling , 2020, NSDI.
[2] Sue B. Moon,et al. NBA (network balancing act): a high-performance packet processing framework for heterogeneous processors , 2015, EuroSys.
[3] John Tran,et al. cuDNN: Efficient Primitives for Deep Learning , 2014, ArXiv.
[4] Pieter Hintjens,et al. ZeroMQ: Messaging for Many Applications , 2013 .
[5] Joel Emer,et al. Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks , 2016, ISCA.
[6] Ran El-Yaniv,et al. Binarized Neural Networks , 2016, NIPS.
[7] Bo Chen,et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications , 2017, ArXiv.
[8] Natalia Gimelshein,et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library , 2019, NeurIPS.
[9] Daniel Raumer,et al. MoonGen: A Scriptable High-Speed Packet Generator , 2014, Internet Measurement Conference.
[10] Peng Liu,et al. EdgeEye: An Edge Service Framework for Real-time Intelligent Video Analytics , 2018, EdgeSys@MobiSys.
[11] Song Han,et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding , 2015, ICLR.
[12] Forrest N. Iandola,et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size , 2016, ArXiv.
[13] Bingsheng He,et al. G-NET: Effective GPU Sharing in NFV Systems , 2018, NSDI.
[14] Japan Society for the Study of Hypertension in Pregnancy. Clinical practice guideline for hypertensive disorders of pregnancy: best practice guide , 2015 .
[15] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[16] Sangjin Han,et al. PacketShader: a GPU-accelerated software router , 2010, SIGCOMM '10.
[17] D. V. Glass. Diminishing returns , The Eugenics Review.
[18] Scott A. Mahlke,et al. Chimera: Collaborative Preemption for Multitasking on a Shared GPU , 2015, ASPLOS.
[19] Paolo Napoletano,et al. Benchmark Analysis of Representative Deep Neural Network Architectures , 2018, IEEE Access.
[20] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2016, CVPR.
[21] Paramvir Bahl,et al. Real-Time Video Analytics: The Killer App for Edge Computing , 2017, Computer.
[22] Haichen Shen,et al. TVM: An Automated End-to-End Optimizing Compiler for Deep Learning , 2018, OSDI.
[23] Xin Wang,et al. Clipper: A Low-Latency Online Prediction Serving System , 2016, NSDI.
[24] Torsten Hoefler,et al. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis , 2018 .
[25] Yuan Yu,et al. TensorFlow: A system for large-scale machine learning , 2016, OSDI.
[26] John E. Stone,et al. OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems , 2010, Computing in Science & Engineering.
[27] Wencong Xiao,et al. Gandiva: Introspective Cluster Scheduling for Deep Learning , 2018, OSDI.
[28] Sotiris Ioannidis,et al. GASPP: A GPU-Accelerated Stateful Packet Processing Framework , 2014, USENIX Annual Technical Conference.
[29] Clément Farabet,et al. Torch7: A Matlab-like Environment for Machine Learning , 2011, NIPS.
[30] Haichen Shen,et al. Nexus: a GPU cluster engine for accelerating DNN-based video analysis , 2019, SOSP.
[31] Mosharaf Chowdhury,et al. Salus: Fine-Grained GPU Sharing Primitives for Deep Learning Applications , 2019, MLSys.
[32] Andreas Gerstlauer,et al. DeepThings: Distributed Adaptive Deep Learning Inference on Resource-Constrained IoT Edge Clusters , 2018, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.
[33] Seungyeop Han,et al. SSLShader: Cheap SSL Acceleration with Commodity Processors , 2011, NSDI.
[34] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[35] George Kurian,et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation , 2016, ArXiv.
[36] KyoungSoo Park,et al. APUNet: Revitalizing GPU as Packet Processing Accelerator , 2017, NSDI.
[37] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[38] Nikhil R. Devanur,et al. PipeDream: generalized pipeline parallelism for DNN training , 2019, SOSP.
[39] David A. Patterson,et al. In-datacenter performance analysis of a tensor processing unit , 2017, ISCA.
[40] Boris Ginsburg,et al. Jasper: An End-to-End Convolutional Neural Acoustic Model , 2019, INTERSPEECH.
[41] Amit Agarwal,et al. CNTK: Microsoft's Open-Source Deep-Learning Toolkit , 2016, KDD.
[42] Ajay Jain,et al. Dynamic Space-Time Scheduling for GPU Inference , 2018, ArXiv.