Kiyokuni Kawachiya | Tung D. Le | Haruki Imai | Yasushi Negishi
[1] Michaela Blott, et al. Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks, 2017, ICONIP.
[2] Atsushi Ike, et al. Memory reduction method for deep neural network training, 2016, IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP).
[3] Jungwon Lee, et al. Universal Deep Neural Network Compression, 2018, IEEE Journal of Selected Topics in Signal Processing.
[4] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[5] Martín Abadi, et al. A computational model for TensorFlow: an introduction, 2017, MAPL@PLDI.
[6] Natalia Gimelshein, et al. vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design, 2016, 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO).
[7] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Chen Meng, et al. Training Deeper Models by GPU Memory Optimization on TensorFlow, 2017.
[9] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[10] Zenglin Xu, et al. Superneurons: dynamic GPU memory management for training deep neural networks, 2018, PPoPP.
[11] Tianqi Chen, et al. Training Deep Nets with Sublinear Memory Cost, 2016, arXiv.
[12] Thomas Brox, et al. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation, 2016, MICCAI.