Joshua B. Tenenbaum | Michael Henry Tessler | Brenden M. Lake | Maxwell Nye
[1] Percy Liang, et al. Learning executable semantic parsers for natural language understanding, 2016, Commun. ACM.
[2] Joshua B. Tenenbaum, et al. Building machines that learn and think like people, 2016, Behavioral and Brain Sciences.
[3] Pushmeet Kohli, et al. RobustFill: Neural Program Learning under Noisy I/O, 2017, ICML.
[4] Loizos Michael, et al. Neural-Symbolic Integration: A Compositional Perspective, 2020, AAAI.
[5] Jonathan Evans. In two minds: dual-process accounts of reasoning, 2003, Trends in Cognitive Sciences.
[6] Monica S. Lam, et al. AutoQA: From Databases to Q&A Semantic Parsers with Only Synthetic Training Data, 2020, EMNLP.
[7] J. Weston, et al. Recipes for Safety in Open-domain Chatbots, 2020, ArXiv.
[8] Yejin Choi, et al. Learning to Write with Cooperative Discriminators, 2018, ACL.
[9] Brenden M. Lake, et al. A Benchmark for Systematic Generalization in Grounded Language Understanding, 2020, NeurIPS.
[10] Artur S. d'Avila Garcez, et al. Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge, 2016, NeSy@HLAI.
[11] Armando Solar-Lezama, et al. DreamCoder: growing generalizable, interpretable knowledge with wake–sleep Bayesian program learning, 2020, Philosophical Transactions of the Royal Society A.
[12] Brenden M. Lake, et al. Word meaning in minds and machines, 2020, Psychological Review.
[13] Armando Solar-Lezama, et al. Learning Compositional Rules via Neural Program Synthesis, 2020, NeurIPS.
[14] Adam Wierman, et al. Thinking Fast and Slow, 2017, SIGMETRICS Perform. Evaluation Rev.
[15] Chin-Hui Lee, et al. A speech understanding system based on statistical representation of semantics, 1992, ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing.
[16] Nikolaj Bjørner, et al. Z3: An Efficient SMT Solver, 2008, TACAS.
[17] Christina Heinze-Deml, et al. Think before you act: A simple baseline for compositional generalization, 2020, ArXiv.
[18] Jason Weston, et al. Dialogue Natural Language Inference, 2018, ACL.
[19] Chuang Gan, et al. The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision, 2019, ICLR.
[20] Ilya Sutskever, et al. Zero-Shot Text-to-Image Generation, 2021, ICML.
[21] Kyunghyun Cho, et al. Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes' Rule, 2020, ICLR.
[22] Charles Blundell, et al. Neural Production Systems, 2021, ArXiv.
[23] Myle Ott, et al. Residual Energy-Based Models for Text Generation, 2020, ICLR.
[24] Frank Hutter, et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[25] Leslie G. Valiant, et al. A First Experimental Demonstration of Massive Knowledge Infusion, 2008, KR.
[26] Dan Klein, et al. Constrained Language Models Yield Few-Shot Semantic Parsers, 2021, EMNLP.
[27] Y-Lan Boureau, et al. Controlling Style in Generated Dialogue, 2020, ArXiv.
[28] Mark O. Riedl, et al. Event Representations for Automated Story Generation with Deep Neural Nets, 2017, AAAI.
[29] Katherine D. Kinzler, et al. Core knowledge, 2007, Developmental Science.
[30] Mohit Bansal, et al. I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling, 2020, ACL.
[31] Jacob Andreas, et al. Implicit Representations of Meaning in Neural Language Models, 2021, ACL.
[32] Chen Liang, et al. Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision, 2016, ACL.
[33] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[34] Dan Klein, et al. Pragmatically Informative Text Generation, 2019, NAACL.
[35] S. Frederick. Cognitive Reflection and Decision Making, 2005, Journal of Economic Perspectives, Volume 19, Number 4, Pages 25–42.
[36] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[37] Xifeng Yan, et al. Cross-domain Semantic Parsing via Paraphrasing, 2017, EMNLP.
[38] Hang Li, et al. Coupling Distributed and Symbolic Execution for Natural Language Queries, 2016, ICML.
[39] Jonathan Berant, et al. Value-based Search in Execution Space for Mapping Instructions to Programs, 2018, NAACL.
[40] Jürgen Schmidhuber, et al. Learning to Reason with Third-Order Tensor Products, 2018, NeurIPS.
[41] Xinyu Hua, et al. PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation, 2020, EMNLP.
[42] Jacob Andreas, et al. Unnatural Language Processing: Bridging the Gap Between Synthetic and Natural Language Data, 2020, ArXiv.
[43] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[44] Jacob Andreas, et al. Task-Oriented Dialogue as Dataflow Synthesis, 2020, Transactions of the Association for Computational Linguistics.
[45] Jonathan Berant, et al. Span-based Semantic Parsing for Compositional Generalization, 2020, ACL.
[46] Lei Li, et al. CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling, 2018, AAAI.
[47] Yoshua Bengio, et al. Inductive Biases for Deep Learning of Higher-Level Cognition, 2020, ArXiv.
[48] Joelle Pineau, et al. CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text, 2019, EMNLP.
[49] Xu Sun, et al. A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation, 2018, EMNLP.
[50] Yejin Choi, et al. NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints, 2020, NAACL.
[51] Jason Weston, et al. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks, 2015, ICLR.
[52] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.