Lin Jie | Vijay Chandrasekhar | Manas Gupta | Siddharth Aravindan | Aleksandra Kalisz