Correlating Neural and Symbolic Representations of Language
[1] Alex Clarke, et al. Syntactic Computations in the Language Network: Characterizing Dynamic Network Properties Using Representational Similarity Analysis, 2013, Front. Psychol.
[2] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[3] Adam Lopez, et al. Understanding Learning Dynamics Of Language Models with SVCCA, 2018, NAACL.
[4] Grzegorz Chrupala, et al. Encoding of phonology in a recurrent neural model of grounded speech, 2017, CoNLL.
[5] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[6] Samuel R. Bowman, et al. A Gold Standard Dependency Corpus for English, 2014, LREC.
[7] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[8] Samuel R. Bowman, et al. Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis, 2018, BlackboxNLP@EMNLP.
[9] Holger Schwenk, et al. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data, 2017, EMNLP.
[10] Roberto Basili, et al. Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures, 2018, BlackboxNLP@EMNLP.
[11] Roberto Basili, et al. Deep Learning in Semantic Kernel Spaces, 2017, ACL.
[12] Willem H. Zuidema, et al. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure, 2017, J. Artif. Intell. Res.
[13] Honglak Lee, et al. An efficient framework for learning sentence representations, 2018, ICLR.
[14] Dietrich Klakow, et al. Closing Brackets with Recurrent Neural Networks, 2018, BlackboxNLP@EMNLP.
[15] J. DiCarlo, et al. Using goal-driven deep learning models to understand sensory cortex, 2016, Nature Neuroscience.
[16] Michael Collins, et al. Convolution Kernels for Natural Language, 2001, NIPS.
[17] Samuel R. Bowman, et al. Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning, 2017, ArXiv.
[18] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[19] Robert C. Berwick, et al. Evaluating the Ability of LSTMs to Learn Context-Free Grammars, 2018, BlackboxNLP@EMNLP.
[20] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[21] Tomas Mikolov, et al. Enriching Word Vectors with Subword Information, 2016, TACL.
[22] Grzegorz Chrupala, et al. Symbolic Inductive Bias for Visually Grounded Learning of Spoken Language, 2018, ACL.
[23] Guillaume Lample, et al. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties, 2018, ACL.
[24] Noah D. Goodman, et al. DisSent: Sentence Representation Learning from Explicit Discourse Relations, 2017, ArXiv.
[25] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[26] Marco Baroni, et al. How agents see things: On visual representations in an emergent language game, 2018, EMNLP.
[27] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[28] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[29] Nikolaus Kriegeskorte, et al. Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience, 2008, Front. Syst. Neurosci.
[30] Yonatan Belinkov, et al. Analysis Methods in Neural Language Processing: A Survey, 2018, TACL.
[31] Alessandro Moschitti, et al. Making Tree Kernels Practical for Natural Language Learning, 2006, EACL.
[32] Dick, et al. Limitations in learning an interpreted language with recurrent models.
[33] Jeffrey L. Elman, et al. Finding Structure in Time, 1990, Cogn. Sci.