Simultaneous Learning of Movement and Active Perception Policies for Autonomous Robots

Humans actively move their bodies and eyes to perceive the state of their surrounding environment. Autonomous robots need to learn this ability, called active perception, to behave as humans do. In this paper, we propose a reinforcement learning algorithm that gives robots this perceptual ability. Our algorithm simultaneously trains two agents: one controls the robot's movement, and the other controls a sensor mounted on the robot, so that together they achieve a given task. We conducted experiments on navigation tasks in a 3D environment where information useful for task achievement is partially occluded. The experimental results show that our algorithm obtains better perceptual behavior and achieves higher success rates than conventional reinforcement learning algorithms.
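The two-agent setup described above can be illustrated with a minimal joint training loop. The sketch below is an assumption for illustration only, not the paper's implementation: a toy 1D task (goal hidden at one end of a line) where a "sensor" agent chooses where to look and a "mover" agent steps based on the reading, with both tabular softmax policies updated by REINFORCE from the shared task reward.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical toy task: a goal sits at -3 or +3 on a line; the robot starts at 0.
# Each step the sensor agent looks left or right (reading: is the goal on that
# side?), then the mover agent steps left or right given (direction, reading).
T = 6                             # steps per episode
theta_sensor = np.zeros(2)        # logits over look direction {left, right}
theta_mover = np.zeros((4, 2))    # logits over move, per (direction, reading) pair

def run_episode(sample=True):
    goal = rng.choice([-3, 3])
    pos = 0
    traj = []
    for _ in range(T):
        ps = softmax(theta_sensor)
        d = rng.choice(2, p=ps) if sample else int(ps.argmax())  # 0=left, 1=right
        reading = int((goal < 0) == (d == 0))    # 1 if goal is on the looked side
        obs = d * 2 + reading
        pm = softmax(theta_mover[obs])
        m = rng.choice(2, p=pm) if sample else int(pm.argmax())  # 0=left, 1=right
        traj.append((d, obs, m))
        pos += -1 if m == 0 else 1
        if pos == goal:
            return traj, 1.0                     # shared reward for both agents
    return traj, 0.0

baseline = 0.0
for ep in range(4000):
    traj, R = run_episode()
    adv = R - baseline                           # advantage with a running baseline
    baseline += 0.01 * (R - baseline)
    for d, obs, m in traj:                       # joint REINFORCE update
        gs = -softmax(theta_sensor); gs[d] += 1.0
        theta_sensor += 0.1 * adv * gs
        gm = -softmax(theta_mover[obs]); gm[m] += 1.0
        theta_mover[obs] += 0.1 * adv * gm

success = np.mean([run_episode(sample=False)[1] for _ in range(200)])
print(f"greedy success rate: {success:.2f}")
```

Because both agents share one task reward, the sensor policy is shaped purely by how useful its readings are to the mover, which is the essence of learning perception and movement simultaneously.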