Symbolic Learning for Adaptive Agents

This paper investigates an approach to designing and building adaptive agents. The main contribution is the use of a symbolic machine learning system for approximating the policy and Q functions that are at the heart of the agent. Under the assumption that sufficient knowledge of the application domain is available, it is shown how this knowledge can be provided to the agent in the form of symbolic hypothesis languages for the policy and Q functions, and the advantages of such an approach are discussed. A series of experiments concerning the performance of an agent employing this architecture in the blocks world domain is presented, and some general conclusions are drawn.
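To make the role of the Q function concrete, the sketch below implements standard Q-learning over a small blocks world in which states are represented as sets of symbolic on(x, y) facts. It is only a minimal illustration under assumed details: the paper's approach learns the policy and Q functions in a declarative hypothesis language, whereas this sketch simply stores a tabular Q function keyed on symbolic states; all identifiers (BLOCKS, GOAL, q_learn, and so on) are invented for the example.

```python
import random

# Hypothetical toy setup: three blocks and a table; all names here
# (BLOCKS, GOAL, q_learn, ...) are illustrative, not from the paper.
BLOCKS = ("a", "b", "c")
TABLE = "table"

def start_state():
    # A state is a frozenset of symbolic on(x, y) facts.
    return frozenset(("on", b, TABLE) for b in BLOCKS)

def clear(state, x):
    # x is clear if no block sits on it; the table is always clear.
    return x == TABLE or all(f[2] != x for f in state)

def actions(state):
    # move(b, dest): both b and dest must be clear.
    return [("move", b, d)
            for b in BLOCKS if clear(state, b)
            for d in BLOCKS + (TABLE,) if d != b and clear(state, d)]

def apply_action(state, action):
    # Replace b's current on-fact with the new support.
    _, b, dest = action
    return frozenset(f for f in state if f[1] != b) | {("on", b, dest)}

# Goal: the tower a on b on c on the table.
GOAL = frozenset([("on", "a", "b"), ("on", "b", "c"), ("on", "c", TABLE)])

def q_learn(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        s = start_state()
        for _ in range(30):  # cap episode length
            acts = actions(s)
            if random.random() < eps:
                a = random.choice(acts)                          # explore
            else:
                a = max(acts, key=lambda x: Q.get((s, x), 0.0))  # exploit
            s2 = apply_action(s, a)
            r = 1.0 if s2 == GOAL else 0.0
            best_next = max((Q.get((s2, a2), 0.0) for a2 in actions(s2)),
                            default=0.0)
            old = Q.get((s, a), 0.0)
            # Standard one-step Q-learning update.
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            if s2 == GOAL:
                break
            s = s2
    return Q
```

A greedy policy can then be read off by selecting, in each state, the action with the highest learned Q value; the symbolic approach discussed in the paper replaces the explicit table with hypotheses expressed in a hypothesis language over such state descriptions.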
