Reinforcement Learning by Chaotic Exploration Generator in Target Capturing Task

Exploration, a process of trial and error, plays a very important role in reinforcement learning. A uniform pseudorandom number generator is commonly used to drive exploration. However, a chaotic source is also known to produce a random-like sequence, much like a stochastic source. Applying this random-like property of deterministic chaos to exploration, we previously found that a deterministic chaotic exploration generator based on the logistic map outperforms a stochastic random exploration generator in a nonstationary shortcut maze problem. In this research, in order to confirm this difference in performance, we examine target capturing as another nonstationary task. The simulation results for this task support the findings of our previous work.
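
The following is a minimal sketch, not the authors' implementation, of how a logistic-map sequence might stand in for a uniform pseudorandom generator in epsilon-greedy exploration. The map parameter a = 4.0 (the fully chaotic regime), the initial condition, and the way chaotic values are mapped to action indices are assumptions made only for illustration.

```python
import numpy as np

class LogisticMapExplorer:
    """Exploration generator driven by the logistic map x_{n+1} = a * x_n * (1 - x_n)."""

    def __init__(self, a=4.0, x0=0.3):
        self.a = a   # a = 4.0 puts the map in its fully chaotic regime (assumed here)
        self.x = x0  # initial condition; must avoid fixed points such as 0.0 or 0.75

    def next_value(self):
        # Iterate the map once and return a value in (0, 1), used in place of a
        # uniform pseudorandom draw.  Note the chaotic sequence is deterministic
        # and not uniformly distributed, which is the point of the comparison.
        self.x = self.a * self.x * (1.0 - self.x)
        return self.x

def epsilon_greedy(q_values, explorer, epsilon=0.1):
    """Select an action, with exploration decisions driven by the chaotic sequence."""
    if explorer.next_value() < epsilon:
        # Explore: map another chaotic value onto an action index.
        return int(explorer.next_value() * len(q_values)) % len(q_values)
    # Exploit: choose the greedy action.
    return int(np.argmax(q_values))

# Usage: pick an action for a 4-action state with Q-values [0.2, 0.5, 0.1, 0.3].
explorer = LogisticMapExplorer()
action = epsilon_greedy(np.array([0.2, 0.5, 0.1, 0.3]), explorer)
```

Replacing `LogisticMapExplorer` with calls to a standard uniform generator yields the stochastic baseline, so the two exploration sources can be swapped without touching the learning rule.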