Behavior of an Adaptive Self-organizing Autonomous Agent Working with Cues and Competing Concepts

A brain-model-based alternative to reinforcement learning is presented that integrates artificial neural networks and knowledge-based systems into a single unit, or agent, for goal-oriented problem solving. The agent may possess both inherited and learned artificial neural networks and knowledge-based subsystems. The agent maintains and develops ANN cues to the environment that perform dimensionality reduction (data compression), easing the problem of combinatorial explosion. A dynamical concept model is put forward that builds cue models of phenomena in the world, designs dynamical action sets (concepts), and makes them compete in a spreading-activation neural stage to reach a decision. The agent works under closed-loop control. We examine a simple robot-like agent in a two-dimensional, conditionally probabilistic space.
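The competition among concepts described above can be illustrated with a minimal winner-take-all sketch: each concept carries an activation level, and repeated self-excitation plus lateral inhibition drives all but one activation to zero. The update rule and all parameter values below (`excite`, `inhibit`, `cap`) are illustrative assumptions for exposition, not the model used in the paper.

```python
def compete(activations, excite=0.1, inhibit=0.15, cap=1.0, max_steps=500):
    """Winner-take-all competition sketch: each concept's activation is
    amplified by self-excitation and suppressed by the summed activation
    of its rivals; returns the index of the surviving concept."""
    a = list(activations)
    for _ in range(max_steps):
        alive = [i for i, x in enumerate(a) if x > 0]
        if len(alive) == 1:
            return alive[0]  # exactly one concept remains active
        total = sum(a)
        # self-excitation minus lateral inhibition, clamped to [0, cap]
        a = [min(cap, max(0.0, x + excite * x - inhibit * (total - x)))
             for x in a]
    # fallback for near-ties that do not resolve within max_steps
    return max(range(len(a)), key=a.__getitem__)

# Example: the concept with the strongest initial support wins.
print(compete([0.5, 0.45, 0.3]))  # index 0
print(compete([0.2, 0.8, 0.5]))   # index 1
```

Because inhibition scales with the rivals' total activation, weaker concepts are driven below zero (and clamped out) first, after which the remaining concept's self-excitation dominates.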
