[1] Aaron R. Voelker, et al. Dynamical Systems in Spiking Neuromorphic Hardware, 2019.
[2] Edward De Brouwer, et al. GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series, 2019, NeurIPS.
[3] Ilya Sutskever, et al. Generating Long Sequences with Sparse Transformers, 2019, arXiv.
[4] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[5] Hui Xiong, et al. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting, 2020, AAAI.
[6] Quoc V. Le, et al. Learning Longer-term Dependencies in RNNs with Auxiliary Losses, 2018, ICML.
[7] Yann Dauphin, et al. Language Modeling with Gated Convolutional Networks, 2016, ICML.
[8] Liu Yang, et al. Long Range Arena: A Benchmark for Efficient Transformers, 2020, ICLR.
[9] Vladlen Koltun, et al. Trellis Networks for Sequence Modeling, 2018, ICLR.
[10] Thomas S. Huang, et al. Fast Generation for Convolutional Autoregressive Models, 2017, ICLR.
[11] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[12] Thomas S. Huang, et al. Dilated Recurrent Neural Networks, 2017, NIPS.
[13] Heiga Zen, et al. WaveNet: A Generative Model for Raw Audio, 2016, SSW.
[14] A. Tustin. A method of analysing the behaviour of linear systems in terms of time series, 1947.
[15] Razvan Pascanu, et al. On the difficulty of training recurrent neural networks, 2012, ICML.
[16] Mark Hoogendoorn, et al. CKConv: Continuous Kernel Convolution For Sequential Data, 2021, arXiv.
[17] Shuai Li, et al. Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN, 2018, CVPR.
[18] Yann Dauphin, et al. Pay Less Attention with Lightweight and Dynamic Convolutions, 2019, ICLR.
[19] V. Pan. Structured Matrices and Polynomials: Unified Superfast Algorithms, 2001.
[20] Siddhartha Mishra, et al. UnICORNN: A recurrent model for learning very long time dependencies, 2021, ICML.
[21] Pete Warden, et al. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition, 2018, arXiv.
[22] Victor Y. Pan, et al. Fast approximate computations with Cauchy matrices and polynomials, 2015, Math. Comput.
[23] Yoshua Bengio, et al. Unitary Evolution Recurrent Neural Networks, 2015, ICML.
[24] Omri Azencot, et al. Lipschitz Recurrent Neural Networks, 2020, ICLR.
[25] Alexander Kolesnikov, et al. MLP-Mixer: An all-MLP Architecture for Vision, 2021, NeurIPS.
[26] Georg Heigold, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, 2021, ICLR.
[27] Yuhong Guo, et al. Time-aware Large Kernel Convolutions, 2020, ICML.
[28] C. Ré, et al. HiPPO: Recurrent Memory with Optimal Polynomial Projections, 2020, NeurIPS.
[29] Xi Chen, et al. PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications, 2017, ICLR.
[30] Nikolaos Pappas, et al. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention, 2020, ICML.
[31] Alexei Baevski, et al. Adaptive Input Representations for Neural Language Modeling, 2018, ICLR.
[32] Mario Lezcano Casado, et al. Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group, 2019, ICML.
[33] Peter Dayan, et al. Fast Parametric Learning with Activation Memorization, 2018, ICML.
[34] Victor Y. Pan, et al. How Bad Are Vandermonde Matrices?, 2015, SIAM J. Matrix Anal. Appl.
[35] Chris Eliasmith, et al. Parallelizing Legendre Memory Unit Training, 2021, ICML.
[36] Chris Donahue, et al. Adversarial Audio Synthesis, 2018, ICLR.
[37] Lukasz Kaiser, et al. Rethinking Attention with Performers, 2020, arXiv.
[38] Vladlen Koltun, et al. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling, 2018, arXiv.
[39] David Duvenaud, et al. Latent Ordinary Differential Equations for Irregularly-Sampled Time Series, 2019, NeurIPS.
[40] Victor Y. Pan, et al. Transformations of Matrix Structures Work Again, 2013, arXiv:1303.0353.
[41] Chris Eliasmith, et al. Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks, 2019, NeurIPS.
[42] Matthew W. Hoffman, et al. Improving the Gating Mechanism of Recurrent Neural Networks, 2019, ICML.
[43] R. Socher, et al. Scalable Language Modeling: WikiText-103 on a Single GPU in 12 hours, 2018.
[44] Terry Lyons, et al. Neural Controlled Differential Equations for Irregular Time Series, 2020, NeurIPS.