[1] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[2] Bernard Chazelle, et al. The Fast Johnson--Lindenstrauss Transform and Approximate Nearest Neighbors, 2009, SIAM J. Comput.
[3] Fang Liu, et al. Learning Intrinsic Sparse Structures within Long Short-Term Memory, 2017, ICLR.
[4] Satoshi Nakamura, et al. Compressing recurrent neural network with tensor train, 2017, 2017 International Joint Conference on Neural Networks (IJCNN).
[5] Zenglin Xu, et al. Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[6] Shih-Fu Chang, et al. An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[7] Alexander J. Smola, et al. Fastfood: Approximate Kernel Expansions in Loglinear Time, 2014, ArXiv.
[8] Misha Denil, et al. Predicting Parameters in Deep Learning, 2013, NIPS.
[9] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[10] Yann Chevaleyre, et al. Training compact deep learning models for video classification using circulant matrices, 2018, ECCV Workshops.
[11] Erich Elsen, et al. Exploring Sparsity in Recurrent Neural Networks, 2017, ICLR.
[12] Yoshua Bengio, et al. Unitary Evolution Recurrent Neural Networks, 2015, ICML.
[13] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[14] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[15] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[16] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[17] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[18] Le Song, et al. Deep Fried Convnets, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[19] Volker Tresp, et al. Tensor-Train Recurrent Neural Networks for Video Classification, 2017, ICML.
[20] Xuelong Li, et al. Towards Convolutional Neural Networks Compression via Global Error Reconstruction, 2016, IJCAI.
[21] Rongrong Ji, et al. Accelerating Convolutional Networks via Global & Dynamic Filter Pruning, 2018, IJCAI.
[22] Zhongfeng Wang, et al. Accelerating Recurrent Neural Networks: A Memory-Efficient Approach, 2017, IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
[23] Vladlen Koltun, et al. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling, 2018, ArXiv.
[24] Roberto Cipolla, et al. Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Ali Farhadi, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[26] Y. Meyer, et al. Wavelets and Filter Banks, 1991.
[27] Zenglin Xu, et al. Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition, 2018, AAAI.
[28] Alexander Novikov, et al. Tensorizing Neural Networks, 2015, NIPS.