[1] G. Marcus. Rethinking Eliminative Connectionism, 1998, Cognitive Psychology.
[2] Jürgen Schmidhuber, et al. Simplifying Neural Nets by Discovering Flat Minima, 1994, NIPS.
[3] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[4] Thomas Demeester, et al. Adversarial Sets for Regularising Neural Link Predictors, 2017, UAI.
[5] B. Silverman, et al. Spline Smoothing: The Equivalent Variable Kernel Method, 1984.
[6] Zenon W. Pylyshyn, et al. Connectionism and cognitive architecture, 1993.
[7] Marco Baroni, et al. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks, 2017, ICML.
[8] Eric R. Ziegel, et al. The Elements of Statistical Learning, 2003, Technometrics.
[9] Gary Marcus, et al. Innateness, AlphaZero, and Artificial Intelligence, 2018, ArXiv.
[10] Jakob Uszkoreit, et al. A Decomposable Attention Model for Natural Language Inference, 2016, EMNLP.
[11] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[12] Omer Levy, et al. Linguistic Regularities in Sparse and Explicit Word Representations, 2014, CoNLL.
[13] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[14] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[15] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[16] Pedro M. Domingos, et al. Symmetry-Based Semantic Parsing, 2015.
[17] Gary Marcus, et al. Deep Learning: A Critical Appraisal, 2018, ArXiv.
[18] A. Tikhonov. Solution of Incorrectly Formulated Problems and the Regularization Method, 1963.