[1] Thomas G. Dietterich. Steps Toward Robust Artificial Intelligence , 2017, AI Mag..
[2] Jürgen Schmidhuber,et al. World Models , 2018, ArXiv.
[3] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[4] Peter M. Vishton,et al. Rule learning by seven-month-old infants. , 1999, Science.
[5] Sergey Levine,et al. Recurrent Independent Mechanisms , 2019, ICLR.
[6] Marcin Andrychowicz,et al. Solving Rubik's Cube with a Robot Hand , 2019, ArXiv.
[7] Alan Fern,et al. Learning Finite State Representations of Recurrent Policy Networks , 2018, ICLR.
[8] Dan Roth,et al. Neural Module Networks for Reasoning over Text , 2020, ICLR.
[9] C. R. Gallistel. The Organization of Learning , 1990 .
[10] Jianfeng Gao,et al. Basic Reasoning with Tensor Product Representations , 2016, ArXiv.
[11] S. Pinker,et al. Overregularization in language acquisition. , 1992, Monographs of the Society for Research in Child Development.
[12] S. Carey. The Origin of Concepts , 2000 .
[13] Sameer Singh,et al. Memory Augmented Recursive Neural Networks , 2019, ArXiv.
[14] Masakazu Konishi,et al. Mechanisms of sound localization in the barn owl (Tyto alba) , 1979, Journal of comparative physiology.
[15] Richard Evans,et al. Learning Explanatory Rules from Noisy Data , 2017, J. Artif. Intell. Res..
[16] A. Tate. A measure of intelligence , 2012 .
[17] Matthew Richardson,et al. Markov logic networks , 2006, Machine Learning.
[18] Percy Liang,et al. Adversarial Examples for Evaluating Reading Comprehension Systems , 2017, EMNLP.
[19] S. Kolassa. Two Cheers for Rebooting AI: Building Artificial Intelligence We Can Trust , 2020 .
[20] Jason Weston,et al. Tracking the World State with Recurrent Entity Networks , 2016, ICLR.
[21] Allan R. Jones,et al. Comprehensive transcriptional map of primate brain development , 2016, Nature.
[22] Charles R. Gallistel,et al. Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience , 2009 .
[23] Ron Sun,et al. Hybrid Connectionist-Symbolic Modules: A Report from the IJCAI-95 Workshop on Connectionist-Symbolic Integration , 1996, AI Mag..
[24] Emily M. Bender,et al. Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics , 2019, Linguistic Fundamentals for Natural Language Processing II.
[25] M. Larkum,et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons , 2020, Science.
[26] Quoc V. Le,et al. Towards a Human-like Open-Domain Chatbot , 2020, ArXiv.
[27] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[28] Luc De Raedt,et al. Statistical Relational Artificial Intelligence: Logic, Probability, and Computation , 2016, Statistical Relational Artificial Intelligence.
[29] David Marr,et al. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information , 2009 .
[30] Ernest Davis. The Use of Deep Learning for Symbolic Integration: A Review of (Lample and Charton, 2019) , 2019, ArXiv.
[31] Noah D. Goodman,et al. Pyro: Deep Universal Probabilistic Programming , 2018, J. Mach. Learn. Res..
[32] Rolf Morel,et al. Learning higher-order logic programs , 2019, Machine Learning.
[33] Daniel Kunkle,et al. Solving Rubik's Cube , 2008 .
[34] Omar Fawzi,et al. Learning dynamic polynomial proofs , 2019, NeurIPS.
[35] Allen Newell,et al. Physical Symbol Systems , 1980, Cogn. Sci..
[36] Allan R. Jones,et al. Transcriptional Landscape of the Prenatal Human Brain , 2014, Nature.
[37] E. Spelke. Initial knowledge: six suggestions , 1994, Cognition.
[38] G. Marcus,et al. Roots, stems, and the universality of lexical representations: Evidence from Hebrew , 2007, Cognition.
[39] J. Mandler. How to build a baby: II. Conceptual primitives. , 1992, Psychological review.
[40] Shane Legg,et al. Human-level control through deep reinforcement learning , 2015, Nature.
[41] Philip Bachman,et al. Deep Reinforcement Learning that Matters , 2017, AAAI.
[42] Christos H. Papadimitriou,et al. Assembly pointers for variable binding in networks of spiking neurons , 2016 .
[43] Demis Hassabis,et al. Mastering Atari, Go, chess and shogi by planning with a learned model , 2019, Nature.
[44] Leon van der Torre,et al. Reasoning in Non-probabilistic Uncertainty: Logic Programming and Neural-Symbolic Computing as Examples , 2017, Minds and Machines.
[45] Bhaskara Marthi,et al. A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs , 2017, Science.
[46] P. Johnson-Laird,et al. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness , 1985 .
[47] Peter M. Aronow,et al. The Book of Why: The New Science of Cause and Effect , 2020, Journal of the American Statistical Association.
[48] David M. Sobel,et al. Detecting blickets: how young children use information about novel causal powers in categorization and induction. , 2000, Child development.
[49] Julie C. Sedivy,et al. Integration of Visual and Linguistic Information in Spoken Language Comprehension , 1995, Science.
[50] H. Francis Song,et al. Machine Theory of Mind , 2018, ICML.
[51] Demis Hassabis,et al. MEMO: A Deep Network for Flexible Combination of Episodic Memories , 2020, ICLR.
[52] Demis Hassabis,et al. Mastering the game of Go without human knowledge , 2017, Nature.
[53] Geoffrey E. Hinton. Preface to the Special Issue on Connectionist Symbol Processing , 1990 .
[54] Edward Grefenstette,et al. Differentiable Reasoning on Large Knowledge Bases and Natural Language , 2019, Knowledge Graphs for eXplainable Artificial Intelligence.
[55] Jianfeng Gao,et al. Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving , 2019, ArXiv.
[56] Steven M Frankland,et al. Concepts and Compositionality: In Search of the Brain's Language of Thought. , 2020, Annual review of psychology.
[57] G. Marcus. The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought , 2004 .
[58] Aaron van den Oord,et al. Shaping Belief States with Generative Environment Models for RL , 2019, NeurIPS.
[59] Mohit Bansal,et al. Adversarial NLI: A New Benchmark for Natural Language Understanding , 2020, ACL.
[60] Chuang Gan,et al. The Neuro-Symbolic Concept Learner: Interpreting Scenes Words and Sentences from Natural Supervision , 2019, ICLR.
[61] Peter Norvig. A Unified Theory of Inference for Text Understanding , 1986 .
[62] Jude W. Shavlik,et al. Combining Symbolic and Neural Learning , 1994, Machine Learning.
[63] Zhitao Gong,et al. Strike (With) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[64] Jason Weston,et al. Large-scale Simple Question Answering with Memory Networks , 2015, ArXiv.
[65] G. Marcus. Rethinking Eliminative Connectionism , 1998, Cognitive Psychology.
[66] Sumit Gulwani,et al. FlashMeta: a framework for inductive program synthesis , 2015, OOPSLA.
[67] Joel Z. Leibo,et al. Unsupervised Predictive Memory in a Goal-Directed Agent , 2018, ArXiv.
[68] Jiajun Wu,et al. A Comparative Evaluation of Approximate Probabilistic Simulation and Deep Neural Networks as Accounts of Human Physical Scene Understanding , 2016, CogSci.
[69] L. Gleitman,et al. Language and Experience: Evidence from the Blind Child , 1988 .
[70] Christopher Joseph Pal,et al. A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms , 2019, ICLR.
[71] Thomas L. Dean,et al. The atoms of neural computation , 2014, Science.
[72] Andreas K. Maier,et al. Precision Learning: Towards Use of Known Operators in Neural Networks , 2018, 2018 24th International Conference on Pattern Recognition (ICPR).
[73] Douglas B. Lenat,et al. CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks , 1986, AI Mag..
[74] J. Fodor,et al. Connectionism and cognitive architecture: A critical analysis , 1988, Cognition.
[75] A. Leslie. The Perception of Causality in Infants , 1982, Perception.
[76] Jiajun Wu,et al. Entity Abstraction in Visual Model-Based Reinforcement Learning , 2019, CoRL.
[77] F. Dyer,et al. Development of sun compensation by honeybees: how partially experienced bees estimate the sun's course. , 1994, Proceedings of the National Academy of Sciences of the United States of America.
[78] Fan Yang,et al. Differentiable Learning of Logical Rules for Knowledge Base Reasoning , 2017, NIPS.
[79] F. Keil. Concepts, Kinds, and Cognitive Development , 1989 .
[80] Yann LeCun,et al. Generalization and network design strategies , 1989 .
[81] Stephen Clark,et al. Emergent Systematic Generalization in a Situated Agent , 2019, ICLR 2020.
[82] Sergey Levine,et al. Reasoning About Physical Interactions with Object-Oriented Prediction and Planning , 2018, ICLR.
[83] Matthew Botvinick,et al. MONet: Unsupervised Scene Decomposition and Representation , 2019, ArXiv.
[84] Michael McCloskey,et al. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem , 1989 .
[85] Ernest Davis,et al. Commonsense reasoning about containers using radically incomplete information , 2017, Artif. Intell..
[86] Artur S. d'Avila Garcez,et al. Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge , 2016, NeSy@HLAI.
[87] Dov M. Gabbay,et al. Neural-Symbolic Cognitive Reasoning , 2008, Cognitive Technologies.
[88] Guillaume Lample,et al. Deep Learning for Symbolic Mathematics , 2019, ICLR.
[89] Gary Marcus,et al. Deep Learning: A Critical Appraisal , 2018, ArXiv.
[90] Oren Etzioni,et al. From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project , 2019, AI Mag..
[91] Roger C. Schank,et al. Scripts, plans, goals and understanding: an inquiry into human knowledge structures , 1978 .
[92] Percy Liang,et al. Compositional Semantic Parsing on Semi-Structured Tables , 2015, ACL.
[93] Terry Winograd,et al. Procedures As A Representation For Data In A Computer Program For Understanding Natural Language , 1971 .
[94] C. Welin. Review of Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures, by Roger C. Schank and Robert P. Abelson (Hillsdale: Lawrence Erlbaum Associates, 1977) , 1979 .
[95] G. Marcus. Kluge: The Haphazard Construction of the Human Mind , 2008 .
[96] Dileep George,et al. Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics , 2017, ICML.
[97] Shirley Ho,et al. Learning Symbolic Physics with Graph Networks , 2019, ArXiv.
[98] Matthew F. Glasser,et al. Parcellations and Connectivity Patterns in Human and Macaque Cerebral Cortex , 2016 .
[99] G. Marcus. The Algebraic Mind: Integrating Connectionism and Cognitive Science , 2001 .
[100] James L. McClelland. Integrating New Knowledge into a Neural Network without Catastrophic Interference: Computational and Theoretical Investigations in a Hierarchically Structured Environment , 2019 .
[101] G. Marcus,et al. The scope of linguistic generalizations: evidence from Hebrew word formation , 2002, Cognition.
[102] J. Pearl,et al. The Book of Why: The New Science of Cause and Effect , 2018 .
[103] Ingmar Posner,et al. GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations , 2019, ICLR.
[104] Marco Baroni,et al. Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks , 2017, ICLR 2018.