Artificial Neural Networks and Natural Language Processing
[1] Bart Selman. Connectionist systems for natural language understanding, 2004, Artificial Intelligence Review.
[2] Werner Winiwarter, et al. Knowledge acquisition in concept and document spaces by using self-organizing neural networks, 1995, Learning for Natural Language Processing.
[3] A. Prince, et al. Optimality: From Neural Networks to Universal Grammar, 1997, Science.
[4] S. Pinker, et al. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition, 1988, Cognition.
[5] J. Fodor. Connectionism and the problem of systematicity (continued): why Smolensky's solution still doesn't work, 1997, Cognition.
[6] Alexander Shustorovich, et al. Neural network positioning and classification of handwritten characters, 1996, Neural Networks.
[7] Padhraic Smyth, et al. Discrete recurrent neural networks for grammatical inference, 1994, IEEE Trans. Neural Networks.
[8] Jerome A. Feldman, et al. Structured connectionist models and language learning, 1993, Artificial Intelligence Review.
[9] James L. McClelland, et al. An interactive activation model of context effects in letter perception: Part 2. The contextual enhancement effect and some tests and extensions of the model, 1982, Psychological Review.
[10] Jerome A. Feldman, et al. Connectionist Models and Their Properties, 1982, Cogn. Sci.
[11] Dominic Palmer-Brown, et al. (S)RAAM: An Analytical Technique for Fast and Reliable Derivation of Connectionist Symbol Structure Representations, 1997, Connect. Sci.
[12] Michael Gasser, et al. Transfer in a Connectionist Model of the Acquisition of Morphology, 1995, ArXiv.
[13] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[14] Mark S. Seidenberg, et al. Language Acquisition and Use: Learning and Applying Probabilistic Constraints, 1997, Science.
[15] Alex Waibel, et al. Parsing with Connectionist Networks, 1991.
[16] V. Marchman, et al. From rote learning to system building: acquiring verb morphology in children and connectionist nets, 1993, Cognition.
[17] Garrison W. Cottrell, et al. Time-delay neural networks: representation and induction of finite-state machines, 1997, IEEE Trans. Neural Networks.
[18] Tim van Gelder, et al. Compositionality: A Connectionist Variation on a Classical Theme, 1990, Cogn. Sci.
[19] Samuel Kaski, et al. Self organization of a massive document collection, 2000, IEEE Trans. Neural Networks Learn. Syst.
[20] Stefan Wermter, et al. SCREEN: learning a flat syntactic and semantic spoken language analysis using artificial neural networks, 1997.
[21] Dieter Merkl, et al. Text classification with self-organizing maps: Some lessons learned, 1998, Neurocomputing.
[22] C. Lee Giles, et al. Learning a class of large finite state machines with a recurrent neural network, 1995, Neural Networks.
[23] Noel E. Sharkey, et al. Separating learning and representation, 1995, Learning for Natural Language Processing.
[24] Michael G. Dyer, et al. Perceptually Grounded Language Learning: Part 1 - A Neural Network Architecture for Robust Sequence Association, 1993.
[25] Paul Smolensky, et al. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems, 1990, Artif. Intell.
[26] Jeffrey L. Elman, et al. Finding Structure in Time, 1990, Cogn. Sci.
[27] Robert F. Hadley. Systematicity in Connectionist Language Learning, 1994.
[28] T. van Gelder, et al. On Being Systematically Connectionist, 1994.
[29] James Henderson, et al. A Connectionist Architecture for Learning to Parse, 1998, ACL.
[30] Nick Chater, et al. Connectionist natural language processing: the state of the art, 1999.
[31] B. MacWhinney, et al. Implementations are not conceptualizations: Revising the verb learning model, 1991, Cognition.
[32] G. Dell, et al. Language production and serial order: a functional analysis and a model, 1997, Psychological Review.
[33] Timo Honkela, et al. WEBSOM - Self-organizing maps of document collections, 1998, Neurocomputing.
[34] James L. McClelland, et al. Graded state machines: The representation of temporal contingencies in simple recurrent networks, 1991, Machine Learning.
[35] Michael J. Bennett, et al. The Historical Background, 1997.
[36] Sandiway Fong, et al. Natural language grammatical inference: a comparison of recurrent neural networks and machine learning methods, 1995, Learning for Natural Language Processing.
[37] Tony A. Plate, et al. Holographic reduced representations, 1995, IEEE Trans. Neural Networks.
[38] P. Frasconi, et al. Representation of Finite State Automata in Recurrent Radial Basis Function Networks, 1996, Machine Learning.
[39] Jordan B. Pollack, et al. Recursive Distributed Representations, 1990, Artif. Intell.
[40] G. Marcus. The acquisition of the English past tense in children and multilayered connectionist networks, 1995, Cognition.
[41] Stefan Wermter, et al. Neural Network Agents for Learning Semantic Text Classification, 2000, Information Retrieval.
[42] Igor Aleksander, et al. Successful naïve representation grounding, 2004, Artificial Intelligence Review.
[43] James L. McClelland, et al. An interactive activation model of context effects in letter perception: I. An account of basic findings, 1981.
[44] A. König. Interactive visualization and analysis of hierarchical neural projections for data mining, 2000.
[45] J. Elman, et al. Learning and morphological change, 1995, Cognition.
[46] Noel E. Sharkey, et al. Connectionist representation techniques, 1991, Artificial Intelligence Review.
[47] V. Marchman, et al. U-shaped learning and frequency effects in a multi-layered perceptron: Implications for child language acquisition, 1991, Cognition.
[48] Michael K. Tanenhaus, et al. Parsing in a Dynamical System: An Attractor-based Account of the Interaction of Lexical and Structural Constraints in Sentence Processing, 1997.
[49] Jung-Hsien Chiang, et al. A hybrid neural network model in handwritten word recognition, 1998, Neural Networks.
[50] Noel E. Sharkey, et al. Grounding computational engines, 1996, Artificial Intelligence Review.
[51] P. Smolensky. On the proper treatment of connectionism, 1988, Behavioral and Brain Sciences.
[52] T. Horgan, et al. Connectionism and the Philosophy of Mind, 1991.
[53] Peter Tiňo, et al. Finite State Machines and Recurrent Neural Networks - Automata and Dynamical Systems Approaches, 1995.
[54] J. Fodor, et al. Connectionism and cognitive architecture: A critical analysis, 1988, Cognition.
[55] Stefan Wermter, et al. A Novel Modular Neural Architecture for Rule-Based and Similarity-Based Reasoning, 1998, Hybrid Neural Systems.
[56] James L. McClelland, et al. Learning and Applying Contextual Constraints in Sentence Comprehension, 1990, Artif. Intell.
[57] C. Lee Giles, et al. Stable Encoding of Large Finite-State Automata in Recurrent Neural Networks with Sigmoid Discriminants, 1996, Neural Computation.
[58] Stefan Wermter, et al. A Hybrid Symbolic/Connectionist Model for Noun Phrase Understanding, 1989.
[59] Raymond L. Watrous, et al. Induction of Finite-State Languages Using Second-Order Recurrent Networks, 1992, Neural Computation.
[60] Ronan G. Reilly. A connectionist model of some aspects of anaphor resolution, 1984.
[61] Risto Miikkulainen, et al. Integrated connectionist models: building AI systems on subsymbolic foundations, 1994, Proceedings of the Sixth International Conference on Tools with Artificial Intelligence (TAI '94).
[62] Garrison W. Cottrell, et al. Acquiring the Mapping from Meaning to Sounds, 1994, Connect. Sci.
[63] J. Elman. Learning and development in neural networks: the importance of starting small, 1993, Cognition.
[64] Gabriele Scheler, et al. Generating English plural determiners from semantic representations: a neural network learning approach, 1995, Learning for Natural Language Processing.
[65] Michael G. Dyer, et al. Perceptually Grounded Language Learning: Part 2 - DETE: A Neural/Procedural Model, 1994, Connect. Sci.
[66] Sankar K. Pal, et al. A connectionist system for learning and recognition of structures: Application to handwritten characters, 1995, Neural Networks.
[67] Giovanni Soda, et al. Recurrent neural networks and prior knowledge for sequence processing: a constrained nondeterministic approach, 1995, Knowl. Based Syst.
[68] Allen Newell, et al. Physical Symbol Systems, 1980, Cogn. Sci.
[69] Rohini K. Srihari, et al. Computational models for integrating linguistic and visual information: A survey, 2004, Artificial Intelligence Review.
[70] Martin J. Adamson, et al. B-RAAM: A Connectionist Model which Develops Holistic Internal Representations of Symbolic Structures, 1999, Connect. Sci.
[71] Jordan B. Pollack, et al. No harm intended: Review of Marvin L. Minsky and Seymour A. Papert, Perceptrons: An Introduction to Computational Geometry, Expanded Edition (Cambridge, MA: MIT Press, 1988), 1989.
[72] Patrick Juola, et al. A connectionist model of English past tense and plural morphology, 1999.
[73] Andreas Stolcke, et al. L0 - The first five years of an automated language acquisition project, 1996, Artificial Intelligence Review.