Stan Matwin | Mohammad Havaei | Nicolas Chapados | Thomas Vincent | Xiang Jiang | Hassan Chouaib | Gabriel Chartrand | Andrew Jesson
[1] Joshua B. Tenenbaum, et al. One shot learning of simple visual concepts, 2011, CogSci.
[2] Minmin Chen, et al. Efficient Vector Representation for Documents through Corruption, 2017, ICLR.
[3] Sergey Levine, et al. Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm, 2017, ICLR.
[4] Yoshua Bengio, et al. On the Optimization of a Synaptic Learning Rule, 2007.
[5] Hang Li, et al. Meta-SGD: Learning to Learn Quickly for Few Shot Learning, 2017, ArXiv.
[6] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[7] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[8] John Pestian, et al. Effect of small sample size on text categorization with support vector machines, 2012, BioNLP@HLT-NAACL.
[9] Yiming Yang, et al. RCV1: A New Benchmark Collection for Text Categorization Research, 2004, J. Mach. Learn. Res.
[10] Vladlen Koltun, et al. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling, 2018, ArXiv.
[11] Ricardo Vilalta, et al. A Perspective View and Survey of Meta-Learning, 2002, Artificial Intelligence Review.
[12] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[13] Kuldip K. Paliwal, et al. Bidirectional recurrent neural networks, 1997, IEEE Trans. Signal Process.
[14] Sergey Bartunov, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016.
[15] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[16] Heiga Zen, et al. WaveNet: A Generative Model for Raw Audio, 2016, SSW.
[17] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[18] Sebastian Thrun, et al. Explanation-Based Neural Network Learning for Robot Control, 1992, NIPS.
[19] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[20] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[21] Joshua B. Tenenbaum, et al. Human-level concept learning through probabilistic program induction, 2015, Science.
[22] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[23] Sergey Levine, et al. One-Shot Visual Imitation Learning via Meta-Learning, 2017, CoRL.
[24] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[25] Pieter Abbeel, et al. Meta-Learning with Temporal Convolutions, 2017, ArXiv.
[26] Sebastian Thrun, et al. Lifelong Learning Algorithms, 1998, Learning to Learn.
[27] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[28] E. Shneidman, et al. Clues to Suicide, 1956, Public Health Reports.
[29] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[30] Eduardo P. Wiechmann, et al. Active learning for clinical text classification: is it better than random sampling?, 2012, J. Am. Medical Informatics Assoc.
[31] Gregory R. Koch, et al. Siamese Neural Networks for One-Shot Image Recognition, 2015.
[32] Martin G. Levine, et al. The Effect of Background Knowledge on the Reading Comprehension of Second Language Learners, 1985.
[33] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[34] Jakob Uszkoreit, et al. A Decomposable Attention Model for Natural Language Inference, 2016, EMNLP.
[35] Jürgen Schmidhuber, et al. Evolving Modular Fast-Weight Networks for Control, 2005, ICANN.
[36] Chengqi Zhang, et al. Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling, 2018, ICLR.
[37] Jason Weston, et al. End-To-End Memory Networks, 2015, NIPS.
[38] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[39] Jitendra Malik, et al. Learning to Optimize Neural Nets, 2017, ArXiv.
[40] Bowen Zhou, et al. A Structured Self-attentive Sentence Embedding, 2017, ICLR.
[41] Alex Graves, et al. Neural Turing Machines, 2014, ArXiv.
[42] Daniel P. W. Ellis, et al. Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems, 2015, ArXiv.
[43] Pietro Perona, et al. One-shot learning of object categories, 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.