A Bayesian approach to conceptualization using reinforcement learning

Abstraction provides cognitive economy and generalization capability, in addition to facilitating knowledge communication, for learning agents situated in the real world. Concept learning introduces a form of abstraction that maps continuous state and action spaces onto entities called concepts. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror-neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on the Bayesian framework is proposed. This approach exploits and extends the role of mirror neurons in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, the agent learns concepts sequentially from both its successes and its failures through interaction with the environment. Together, these characteristics distinguish the proposed learning algorithm from purely positive-sample learning. Simulation results show the correct formation of concept distributions in the perceptual space, as well as the benefits of utilizing both successes and failures in terms of convergence speed and asymptotic behavior. Experimental results, in turn, demonstrate the applicability and effectiveness of our method on a real robotic task, namely wall-following.
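To make the idea of Bayesian concept formation from successes and failures concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm. It assumes each concept is modeled by a Gaussian over the perceptual space together with a Beta-style success/failure count; the class name `ConceptModel` and the example concepts (`near_wall`, `open_space`) are hypothetical and chosen only to echo the wall-following task.

```python
import numpy as np

class ConceptModel:
    """Hypothetical concept: a Gaussian over the perceptual space plus a
    Beta-style success/failure count, updated online from interaction outcomes."""

    def __init__(self, dim, alpha=1.0, beta=1.0):
        self.dim = dim
        self.n = 0                    # number of successful percepts seen
        self.mean = np.zeros(dim)     # running mean of successful percepts
        self.m2 = np.eye(dim)         # scatter matrix (prior regularizer included)
        self.alpha = alpha            # Beta prior: pseudo-successes
        self.beta = beta              # Beta prior: pseudo-failures

    def update(self, percept, success):
        """Online update from one interaction outcome (assumed update rule)."""
        if success:
            self.alpha += 1.0
            self.n += 1
            delta = percept - self.mean
            self.mean += delta / self.n                    # Welford mean update
            self.m2 += np.outer(delta, percept - self.mean)
        else:
            self.beta += 1.0          # failures only lower the concept's reliability here

    def log_score(self, percept):
        """log p(success | concept) + log N(percept | mean, cov)."""
        cov = self.m2 / max(self.n, 1) + 1e-3 * np.eye(self.dim)
        diff = percept - self.mean
        _, logdet = np.linalg.slogdet(cov)
        log_gauss = -0.5 * (diff @ np.linalg.solve(cov, diff)
                            + logdet + self.dim * np.log(2 * np.pi))
        log_reliab = np.log(self.alpha / (self.alpha + self.beta))
        return log_reliab + log_gauss


# Usage: two hypothetical concepts learned from noisy interaction outcomes.
rng = np.random.default_rng(0)
near_wall, open_space = ConceptModel(2), ConceptModel(2)
for _ in range(200):
    x = rng.normal([1.0, 0.2], 0.1)                 # percepts near a wall
    near_wall.update(x, success=rng.random() < 0.8)
    y = rng.normal([3.0, 2.5], 0.3)                 # percepts in open space
    open_space.update(y, success=rng.random() < 0.6)

test = np.array([1.05, 0.25])
print("near_wall" if near_wall.log_score(test) > open_space.log_score(test)
      else "open_space")
```

The sketch only illustrates how success samples shape a concept's distribution in perceptual space while failures temper its reliability; the paper's method additionally ties conceptualization to actions and to the reinforcement learning loop.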