Transferring Knowledge to Smaller Network with Class-Distance Loss
[1] Tianqi Chen, et al. Net2Net: Accelerating Learning via Knowledge Transfer, 2015, ICLR.
[2] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[3] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[4] Michael I. Jordan, et al. Distance Metric Learning with Application to Clustering with Side-Information, 2002, NIPS.
[5] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[6] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[7] Misha Denil, et al. Predicting Parameters in Deep Learning, 2014.
[8] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[9] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, JMLR.
[10] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[11] Samy Bengio, et al. An Online Algorithm for Large Scale Image Similarity Learning, 2009, NIPS.
[12] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[13] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.