DISC: A Dynamic Shape Compiler for Machine Learning Workloads
Kai Zhu | Wenyi Zhao | Zhen Zheng | Tianyou Guo | Pengzhan Zhao | Junjie Bai | Jun Yang | Xiaoyong Liu | Lansong Diao | Wei Lin