Symbolic Artificial Intelligence, Connectionist Networks and Beyond

Vasant Honavar and Leonard Uhr
Iowa State University Computer Science Technical Report TR94-16
Available at http://lib.dr.iastate.edu/cs_techreports/76

The goal of Artificial Intelligence, broadly defined, is to understand and engineer intelligent systems. This entails building theories and models of embodied minds and brains, both natural and artificial. The advent of digital computers and the parallel development of the theory of computation since the 1950s provided a new set of tools with which to approach this problem: the analysis, design, and evaluation of computers and programs that exhibit aspects of intelligent behavior, such as the ability to recognize and classify patterns, to reason from premises to logical conclusions, and to learn from experience.

In the early years of artificial intelligence, some researchers wrote programs that they ran on serial stored-program computers (e.g., Newell, Shaw and Simon, 1963; Feigenbaum, 1963); others (e.g., Rashevsky, 1960; McCulloch and Pitts, 1943; Selfridge and Neisser, 1963; Uhr and Vossler, 1963) worked on more or less precise specifications of more parallel, brain-like networks of simple processors (reminiscent of today's connectionist networks) for modelling minds and brains; and a few took the middle ground (Uhr, 1973; Holland, 1975; Minsky, 1963; Arbib, 1972; Grossberg, 1982; Klir, 1985).

It is often suggested that two major approaches have emerged: symbolic artificial intelligence (SAI) and artificial neural networks or connectionist networks (CN). Some (Norman, 1986; Schneider, 1987) have even suggested that the two are fundamentally, and perhaps irreconcilably, different; others have argued that CN models have little to contribute to our efforts to understand cognitive processes (Fodor and Pylyshyn, 1988). A critical examination of the popular conceptions of SAI and CN models suggests that neither of these extreme positions is justified (Boden, 1994; Honavar and Uhr, 1990a; Honavar, 1994b; Uhr and Honavar, 1994). Recent attempts at reconciling SAI and CN approaches to modelling cognition and engineering intelligent systems (Honavar and Uhr, 1994; Sun and Bookman, 1994; Levine and Aparicio, 1994; Goonatilake and Khebbal, 1994; Medsker, 1994) strongly suggest the potential benefits of exploring computational models that judiciously integrate aspects of both. The rich and interesting space of designs that combine concepts, constructs, techniques, and technologies drawn from both SAI and CN invites systematic theoretical as well as experimental exploration in the context of a broad range of problems in perception, knowledge representation and inference, robotics, language, and learning, and ultimately, integrated systems that display what might be considered human-like general intelligence. This chapter examines how today's CN models can be extended to provide a framework for such an exploration.
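
To make the preceding discussion slightly more concrete, the short Python sketch below (not from the original report, and purely illustrative) implements McCulloch-Pitts style threshold units and wires a few of them into a tiny two-layer network. The function names, weights, and thresholds are hand-chosen assumptions for this example; the point is only that a network of simple processors can realize exactly the kind of discrete Boolean functions that a symbolic description takes as primitive, which is one sense in which the SAI and CN views can describe the same computation at different levels.

    # Minimal, illustrative sketch (not from the paper): McCulloch-Pitts
    # style threshold units, each a "simple processor" that fires when its
    # weighted input sum reaches a threshold. Weights and thresholds are
    # chosen by hand for this example.

    def threshold_unit(inputs, weights, theta):
        """Return 1 if the weighted sum of the inputs reaches theta, else 0."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

    def AND(x, y):
        return threshold_unit([x, y], [1, 1], theta=2)

    def OR(x, y):
        return threshold_unit([x, y], [1, 1], theta=1)

    def NOT(x):
        return threshold_unit([x], [-1], theta=0)

    def XOR(x, y):
        # A two-layer network of such units: fire iff at least one input is
        # on and not both are on.
        return AND(OR(x, y), NOT(AND(x, y)))

    if __name__ == "__main__":
        for x in (0, 1):
            for y in (0, 1):
                print(f"XOR({x}, {y}) = {XOR(x, y)}")

Running the sketch prints the XOR truth table; since XOR cannot be computed by any single threshold unit, the example also hints at why multi-layer networks, and learning procedures for them, became central to later connectionist work.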

[1]  N. Rashevsky Mathematical Biophysics , 1935, Nature.

[2]  Terrence J. Sejnowski,et al.  The Computational Brain , 1996, Artif. Intell..

[3]  H. R. Quillian In semantic information processing , 1968 .

[4]  Ronald R. Yager,et al.  Fuzzy sets, neural networks, and soft computing , 1994 .

[5]  Leonard Uhr Parallel-Serial Production Systems with Many Working Memories , 1979, IJCAI.

[6]  G. Shepherd The Synaptic Organization of the Brain , 1979 .

[7]  Larry R. Medsker,et al.  Hybrid Intelligent Systems , 1995, Springer US.

[8]  Lawrence A. Bookman A Framework for Integrating Relational and Associational Knowledge for Comprehension , 1995 .

[9]  Allan Gottlieb,et al.  Highly parallel computing , 1989, Benjamin/Cummings Series in computer science and engineering.

[10]  S. Tanimoto,et al.  Structured computer vision: Machine perception through hierarchical computation structures , 1980 .

[11]  Marvin Minsky,et al.  A framework for representing knowledge , 1974 .

[12]  King-Sun Fu,et al.  Syntactic Pattern Recognition And Applications , 1968 .

[13]  Carl Hewitt,et al.  Viewing Control Structures as Patterns of Passing Messages , 1977, Artif. Intell..

[14]  Jordan B. Pollack,et al.  Recursive Distributed Representations , 1990, Artif. Intell..

[15]  Edward P. K. Tsang,et al.  Foundations of constraint satisfaction , 1993, Computation in cognitive science.

[16]  Azriel Rosenfeld,et al.  Computer Vision , 1988, Adv. Comput..

[17]  Judea Pearl,et al.  Probabilistic reasoning in intelligent systems , 1988 .

[18]  Vasant G Honavar Perceptual Development and Learning: From Behavioral, Neurophysiological, and Morphological Evidence To Computational Models , 1989 .

[19]  Vasant Honavar,et al.  Brain-structured Connectionist Networks that Perceive and Learn , 1989 .

[20]  Roger R. Jenness Analog computation and simulation : laboratory approach , 1965 .

[21]  Stephen José Hanson,et al.  What connectionist models learn: Learning and representation in connectionist networks , 1990, Behavioral and Brain Sciences.

[22]  Vasant Honavar,et al.  Inductive learning using generalized distance measures , 1992, Defense, Security, and Sensing.

[23]  Charles L. Forgy,et al.  Rete: a fast algorithm for the many pattern/many object pattern match problem , 1991 .

[24]  Vasant Honavar,et al.  Generative learning structures and processes for generalized connectionist networks , 1993, Inf. Sci..

[25]  David C. Wilkins,et al.  Readings in Knowledge Acquisition and Learning: Automating the Construction and Improvement of Expert Systems , 1992 .

[26]  Vasant Honavar,et al.  Generative learning structures for generalized connectionist networks , 1990 .

[27]  George J. Klir,et al.  Architecture of Systems Problem Solving , 1985, Springer US.

[28]  Ryszard S. Michalski,et al.  Toward a unified theory of learning: multistrategy task-adaptive learning , 1993 .

[29]  Vasant Honavar,et al.  Generation, Local Receptive Fields and Global Convergence Improve Perceptual Learning in Connectionist Networks , 1989, IJCAI.

[30]  Leonard Uhr,et al.  A pattern recognition program that generates, evaluates, and adjusts its own operators , 1961, IRE-AIEE-ACM '61 (Western).

[31]  Vasant Honavar,et al.  Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards A Resolution of the Dichotomy , 1995 .

[32]  W. Daniel Hillis,et al.  The connection machine , 1985 .

[33]  W. Schneider Connectionism: Is it a paradigm shift for psychology? , 1987 .

[34]  Raju S. Bapi,et al.  Artificial intelligence and neural networks: Steps toward principled integration , 1996 .

[35]  Jude Shavlik,et al.  A Framework for Combining Symbolic and Neural Learning , 1992 .

[36]  Laurent Miclet,et al.  Structural Methods in Pattern Recognition , 1986 .

[37]  Bruce J. MacLennan,et al.  Continuous Computation and the Emergence of the Discrete , 1993, Origins.

[38]  Leonard M. Uhr Algorithm-structured computer arrays and networks , 1984 .

[39]  Jerome A. Feldman,et al.  Connectionist Models and Their Properties , 1982, Cogn. Sci..

[40]  Michael A. Arbib The metaphorical brain 2: Neural networks and beyond , 1989, Wiley-Interscience, New York.

[41]  Stephen Grossberg,et al.  Studies of mind and brain , 1982 .

[42]  Carver Mead,et al.  Analog VLSI and neural systems , 1989 .

[43]  Robert A. Kowalski,et al.  Predicate Logic as Programming Language , 1974, IFIP Congress.

[44]  B. Chandrasekaran,et al.  Architecture of intelligence : the problems and current approaches to solutions , 1993 .

[45]  Thomas G. Dietterich What is machine learning? , 2020, Archives of Disease in Childhood.

[46]  J J Hopfield,et al.  Neural networks and physical systems with emergent collective computational abilities. , 1982, Proceedings of the National Academy of Sciences of the United States of America.

[47]  Gordon M. Shepherd,et al.  The significance of real neuron architectures for neural network simulations , 1993 .

[48]  D. Signorini,et al.  Neural networks , 1995, The Lancet.

[49]  Gadi Pinkas A Fault Tolerant Connectionist Architecture for Construction of Logic Proofs , 1993 .

[50]  Risto Miikkulainen Integrated Connectionist Models: Building AI Systems On Subsymbolic Foundations , 1994 .

[51]  Larry R. Medsker,et al.  Hybrid Neural Network and Expert Systems , 1994, Springer US.

[52]  L. Shastri,et al.  A Connectionist System for Rule Based Reasoning With Multi-Place Predicates and Variables , 1989 .

[53]  J. Fodor,et al.  Connectionism and cognitive architecture: A critical analysis , 1988, Cognition.

[54]  Vasant Honavar,et al.  Books-Received - Artificial Intelligence and Neural Networks - Steps Toward Principled Integration , 1994 .

[55]  Donald A. Norman,et al.  Reflections on cognition and parallel distributed processing , 1986 .

[56]  Robert F. Port,et al.  Beyond Symbolic: Prolegomena to a Kama-Sutra of Compositionality , 1993 .

[57]  M. Arbib Brain theory and cooperative computation. , 1985, Human neurobiology.

[58]  Vasant Honavar,et al.  Toward learning systems that integrate multiple strategies and representations , 1994 .

[59]  Ron Sun,et al.  Computational Architectures Integrating Neural And Symbolic Processes , 1994 .

[60]  Vasant Honavar,et al.  Some Biases for Efficient Learning of Spatial, Temporal, and Spatio-Temporal Patterns , 1992 .

[61]  Vasant Honavar,et al.  Coordination and control structures and processes: possibilities for connectionist networks (CN) , 1990, J. Exp. Theor. Artif. Intell..

[62]  Paul Smolensky  Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems , 1991 .

[63]  Oliver G. Selfridge,et al.  Pattern recognition by machine , 1960 .

[64]  Frank Rosenblatt  Principles of neurodynamics , 1962 .

[66]  Charles P. Dolan,et al.  Tensor Product Production System: a Modular Architecture and Representation , 1989 .

[67]  Leonard Uhr  Pattern Recognition, Learning, and Thought , 1973 .

[68]  John H. Holland,et al.  Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence , 1992 .

[69]  Stephen Grossberg,et al.  Integrating Symbolic and Neural Processing in a Self-Organizing Architecture for Pattern Recognition and Prediction , 1993 .

[70]  Leonard Uhr,et al.  Increasing the Power of Connectionist Networks (CN) by Improving Structures, Processes, Learning , 1990 .

[71]  Alex Goodall,et al.  The guide to expert systems , 1985 .

[72]  John F. Sowa,et al.  Conceptual Structures: Information Processing in Mind and Machine , 1983 .

[73]  G. Reeke Marvin Minsky, The Society of Mind , 1991, Artif. Intell..

[74]  S. Hanson,et al.  Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding , 1995 .

[75]  C. H. Bailey,et al.  The anatomy of a memory: convergence of results across a diversity of tests , 1988, Trends in Neurosciences.

[76]  Leonard Uhr,et al.  Parallel computer vision , 1987 .

[77]  Sun-Yuan Kung,et al.  Digital neural networks , 1993, Prentice Hall Information and System Sciences Series.