Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations
Tri Dao | Albert Gu | Matthew Eichhorn | Atri Rudra | Christopher Ré
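As context for the butterfly factorizations the paper studies, the sketch below shows the classical special case they generalize: writing the n-point DFT matrix as a product of log2(n) sparse butterfly factors and a bit-reversal permutation (the Cooley–Tukey FFT in matrix form). This is a minimal NumPy illustration, not the paper's learning method; the helper names `butterfly_factor` and `bit_reversal` are ours.

```python
import numpy as np

def butterfly_factor(n, block):
    """One n-by-n butterfly factor: n // block diagonal copies of the
    block-sized butterfly [[I, D], [I, -D]], with D a diagonal of twiddles."""
    half = block // 2
    D = np.diag(np.exp(-2j * np.pi * np.arange(half) / block))
    I = np.eye(half)
    B = np.block([[I, D], [I, -D]])
    return np.kron(np.eye(n // block), B)

def bit_reversal(n):
    """Permutation matrix that reorders indices by reversed binary digits."""
    bits = n.bit_length() - 1
    idx = [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]
    return np.eye(n)[idx]

n = 8
M = bit_reversal(n)
for block in (2, 4, 8):               # butterflies of growing width
    M = butterfly_factor(n, block) @ M

# M now equals the dense n-point DFT matrix, built from O(n log n) nonzeros
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
assert np.allclose(M, F)
```

The paper's approach can be viewed as keeping this sparsity pattern (butterfly factors plus a permutation) but treating the nonzero entries as learnable parameters, so that fast transforms like the DFT, DCT, and Hadamard transform become points in one differentiable family.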
[1] Rémi Gribonval, et al. Chasing butterflies: In search of efficient dictionaries, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[2] Markus Püschel, et al. Algebraic Signal Processing Theory: Cooley–Tukey Type Algorithms for Real DFTs, 2008, IEEE Transactions on Signal Processing.
[3] Abbas Mehrabian, et al. Nearly-tight VC-dimension bounds for piecewise linear neural networks, 2017, COLT.
[4] T. Chihara, et al. An Introduction to Orthogonal Polynomials, 1979.
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] John G. Proakis, et al. Digital Signal Processing: Principles, Algorithms, and Applications, 1992.
[7] Yi Ma, et al. Robust principal component analysis?, 2009, JACM.
[8] Tara N. Sainath, et al. Structured Transforms for Small-Footprint Deep Learning, 2015, NIPS.
[9] R. Tibshirani, et al. Sparse Principal Component Analysis, 2006.
[10] M. Morf, et al. Displacement ranks of matrices and linear equations, 1979.
[11] Markus Püschel, et al. Automatic generation of fast discrete signal transforms, 2001, IEEE Transactions on Signal Processing.
[12] Rina Panigrahy, et al. Sparse Matrix Factorization, 2013, arXiv.
[13] Shih-Fu Chang, et al. An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections, 2015 IEEE International Conference on Computer Vision (ICCV).
[14] Atri Rudra, et al. A Two-pronged Progress in Structured Dense Matrix Vector Multiplication, 2018, SODA.
[15] Anima Anandkumar, et al. StrassenNets: Deep learning with a multiplication budget, 2017, ICML.
[16] Samy Bengio, et al. Neural Combinatorial Optimization with Reinforcement Learning, 2016, ICLR.
[17] Alexander J. Smola, et al. Fastfood - Computing Hilbert Space Expansions in loglinear time, 2013, ICML.
[18] Peter L. Bartlett, et al. Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks, 1998, Neural Computation.
[19] Evgeny Burnaev, et al. Quadrature-based features for kernel approximation, 2018, NeurIPS.
[20] Atri Rudra, et al. Learning Compressed Transforms with Low Displacement Rank, 2018, NeurIPS.
[21] Misha Denil, et al. Predicting Parameters in Deep Learning, 2014.
[22] Sanjiv Kumar, et al. Orthogonal Random Features, 2016, NIPS.
[23] V. Pan. Structured Matrices and Polynomials: Unified Superfast Algorithms, 2001.
[24] Yoshua Bengio, et al. Semi-supervised Learning by Entropy Minimization, 2004, CAP.
[25] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[26] Chao Wang, et al. CirCNN: Accelerating and Compressing Deep Neural Networks Using Block-Circulant Weight Matrices, 2017 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO).
[27] Yixin Chen, et al. Compressing Neural Networks with the Hashing Trick, 2015, ICML.
[28] Ameet Talwalkar, et al. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, 2016, Journal of Machine Learning Research.
[29] Rémi Gribonval, et al. Flexible Multilayer Sparse Approximations of Matrices and Applications, 2015, IEEE Journal of Selected Topics in Signal Processing.
[30] Jack J. Dongarra, et al. Guest Editors' Introduction to the Top 10 Algorithms, 2000, Computing in Science & Engineering.
[31] Stefano Ermon, et al. Stochastic Optimization of Sorting Networks via Continuous Relaxations, 2019, ICLR.
[32] Amin Shokrollahi, et al. Matrix-vector product for confluent Cauchy-like matrices with application to confluent rational interpolation, 2000, STOC '00.
[33] Le Song, et al. Deep Fried Convnets, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[34] Chris Dyer, et al. Neural Arithmetic Logic Units, 2018, NeurIPS.
[35] J. Makhoul. A fast cosine transform in one and two dimensions, 1980.
[36] Scott W. Linderman, et al. Learning Latent Permutations with Gumbel-Sinkhorn Networks, 2018, ICLR.
[37] Yann LeCun, et al. Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs, 2016, ICML.
[38] Shih-Fu Chang, et al. Compact Nonlinear Maps and Circulant Extensions, 2015, arXiv.
[39] José M. F. Moura, et al. Algebraic Signal Processing Theory, 2006, arXiv.
[40] Lek-Heng Lim, et al. Every Matrix is a Product of Toeplitz Matrices, 2013, Foundations of Computational Mathematics.
[41] Guy Van den Broeck, et al. A Semantic Loss Function for Deep Learning with Symbolic Knowledge, 2017, ICML.
[42] Guillermo Sapiro, et al. Supervised Dictionary Learning, 2008, NIPS.
[43] Dennis M. Healy, et al. Fast Discrete Polynomial Transforms with Applications to Data Analysis for Distance Transitive Graphs, 1997, SIAM Journal on Computing.
[44] Michael Clausen, et al. Algebraic complexity theory, 1997, Grundlehren der mathematischen Wissenschaften.
[45] Razvan Pascanu, et al. On the difficulty of training recurrent neural networks, 2012, ICML.
[46] Markus Püschel, et al. Symmetry-based matrix factorization, 2004, Journal of Symbolic Computation.