Yuandong Tian | Haonan Yu | Sergey Edunov | Ari S. Morcos
[1] Yann LeCun, et al. Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks, 2018, arXiv.
[2] Quoc V. Le, et al. Do Better ImageNet Models Transfer Better?, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Jason Yosinski, et al. Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask, 2019, NeurIPS.
[4] Philip Bachman, et al. Deep Reinforcement Learning that Matters, 2017, AAAI.
[5] Taehoon Kim, et al. Quantifying Generalization in Reinforcement Learning, 2018, ICML.
[6] Jacek M. Zurada, et al. Redundant feature pruning for accelerated inference in deep neural networks, 2019, Neural Networks.
[7] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[8] Mingjie Sun, et al. Rethinking the Value of Network Pruning, 2018, ICLR.
[9] David Silver, et al. A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning, 2017, NIPS.
[10] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[11] Marc G. Bellemare, et al. The Arcade Learning Environment: An Evaluation Platform for General Agents (Extended Abstract), 2012, IJCAI.
[12] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.
[13] Yuandong Tian, et al. ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games, 2017, NIPS.
[14] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[15] Ryota Tomioka, et al. In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, 2014, ICLR.
[16] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[17] Myle Ott, et al. Scaling Neural Machine Translation, 2018, WMT.
[18] Yuanzhi Li, et al. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers, 2018, NeurIPS.
[19] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[20] Barnabás Póczos, et al. Gradient Descent Provably Optimizes Over-parameterized Neural Networks, 2018, ICLR.
[21] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[22] Jacob Andreas, et al. Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?, 2017, ICML.
[23] Stefan Carlsson, et al. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition, 2014, IEEE Conference on Computer Vision and Pattern Recognition Workshops.
[24] Pushmeet Kohli, et al. Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis, 2018.
[25] Chenchen Liu, et al. Interpretable Convolutional Filter Pruning, 2018, arXiv.
[26] Richard Socher, et al. Pointer Sentinel Mixture Models, 2016, ICLR.
[27] Gintare Karolina Dziugaite, et al. The Lottery Ticket Hypothesis at Scale, 2019, arXiv.
[28] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[29] Yuanzhi Li, et al. A Convergence Theory for Deep Learning via Over-Parameterization, 2018, ICML.
[30] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[31] Erich Elsen, et al. The State of Sparsity in Deep Neural Networks, 2019, arXiv.
[32] Jason D. Lee, et al. On the Power of Over-parametrization in Neural Networks with Quadratic Activation, 2018, ICML.