Reinforcement learning-hierarchical neuro-fuzzy politree model for autonomous agents - evaluation in a multi-obstacle environment

This work presents an extension of the hybrid reinforcement learning-hierarchical neuro-fuzzy politree model (RL-HNFP) and evaluates its performance in a multi-obstacle environment. The main objective of the RL-HNFP model is to endow an agent with intelligence, making it capable of acquiring and retaining knowledge for reasoning (inferring an action) through interaction with its environment. The original RL-HNFP combines hierarchical partitioning methods with the reinforcement learning (RL) methodology, allowing the autonomous agent to learn both its structure and the appropriate action for each position in the environment automatically. The improved version of the RL-HNFP model implements a better defuzzification method, improving the agent's behaviour. The extended RL-HNFP model was evaluated in a multi-obstacle environment, where it performed well and demonstrated the agent's autonomy.
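The abstract mentions an improved defuzzification method without specifying it. As a purely illustrative sketch (not the paper's actual method), a common defuzzification scheme in neuro-fuzzy controllers is a weighted average of the actions proposed by the fired rules, weighted by their firing strengths; the function and values below are hypothetical:

```python
def defuzzify(firing_strengths, rule_actions):
    """Weighted-average defuzzification: each fired fuzzy rule i contributes
    its consequent action rule_actions[i], weighted by its firing strength
    firing_strengths[i]. This is a generic scheme, not the RL-HNFP-specific one."""
    total = sum(firing_strengths)
    if total == 0:
        raise ValueError("no rule fired for this input")
    return sum(mu * a for mu, a in zip(firing_strengths, rule_actions)) / total

# Hypothetical example: three rules vote on a steering angle (degrees).
angle = defuzzify([0.2, 0.5, 0.3], [-30.0, 0.0, 30.0])  # -> 3.0
```

In an RL setting, the consequent actions themselves would be selected and refined from the learned state-action values, so a sharper defuzzification directly smooths the agent's behaviour between neighbouring cells of the partitioned environment.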