Jie Zhang | Dawei Li | Jingwen Zhu | Shalini Ghosh | Heming Zhang | Junting Zhang | Yalin Wang
[1] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[2] Ngoc Thang Vu, et al. Densely Connected Convolutional Networks for Speech Recognition, 2018, ITG Symposium on Speech Communication.
[3] Junmo Kim, et al. Less-Forgetful Learning for Domain Expansion in Deep Neural Networks, 2017, AAAI.
[4] Rich Caruana, et al. Do Deep Nets Really Need to Be Deep?, 2013, NIPS.
[5] Quoc V. Le, et al. Neural Architecture Search with Reinforcement Learning, 2016, ICLR.
[6] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NIPS.
[7] Joost van de Weijer, et al. Rotate Your Networks: Better Weight Consolidation and Less Catastrophic Forgetting, 2018, ICPR.
[8] Razvan Pascanu, et al. Overcoming Catastrophic Forgetting in Neural Networks, 2016, Proceedings of the National Academy of Sciences.
[9] Yann LeCun, et al. The MNIST Database of Handwritten Digits, 2005.
[10] Kuldip K. Paliwal, et al. Bidirectional Recurrent Neural Networks, 1997, IEEE Transactions on Signal Processing.
[11] Thomas G. Dietterich. What Is Machine Learning?, 2020, Archives of Disease in Childhood.
[12] Razvan Pascanu, et al. Progressive Neural Networks, 2016, ArXiv.
[13] Jieping Ye, et al. Multi-Task Feature Learning via Efficient ℓ2,1-Norm Minimization, 2009, UAI.
[14] Yee Whye Teh, et al. Progress & Compress: A Scalable Framework for Continual Learning, 2018, ICML.
[15] Tianqi Chen, et al. Net2Net: Accelerating Learning via Knowledge Transfer, 2015, ICLR.
[16] Christoph H. Lampert, et al. iCaRL: Incremental Classifier and Representation Learning, 2017, CVPR.
[17] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[18] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[19] Christian P. Robert, et al. Machine Learning, a Probabilistic Perspective, 2014.
[20] Pinghua Gong, et al. Multi-Stage Multi-Task Feature Learning, 2012.
[21] Sung Ju Hwang, et al. Lifelong Learning with Dynamically Expandable Networks, 2017, ICLR.
[22] Massimiliano Pontil, et al. Regularized Multi-Task Learning, 2004, KDD.
[23] Quoc V. Le, et al. Efficient Neural Architecture Search via Parameter Sharing, 2018, ICML.
[24] Yong Yu, et al. Efficient Architecture Search by Network Transformation, 2017, AAAI.
[25] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[26] Derek Hoiem, et al. Learning without Forgetting, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[27] Chrisantha Fernando, et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks, 2017, ArXiv.
[28] Yishay Mansour, et al. Policy Gradient Methods for Reinforcement Learning with Function Approximation, 1999, NIPS.
[29] Pietro Perona, et al. The Caltech-UCSD Birds-200-2011 Dataset, 2011.
[30] Fei-Fei Li, et al. ImageNet: A Large-Scale Hierarchical Image Database, 2009, CVPR.
[31] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[32] Geoffrey E. Hinton, et al. ImageNet Classification with Deep Convolutional Neural Networks, 2012, Communications of the ACM.
[33] Jieping Ye, et al. Multi-Stage Multi-Task Feature Learning, 2012, Journal of Machine Learning Research.