Leveraging Imitation Learning on Pose Regulation Problem of a Robotic Fish

In this article, the pose regulation problem of a robotic fish is investigated by formulating it as a Markov decision process (MDP). Such a task, which requires the robot to arrive at a desired position with a desired orientation, remains challenging because the two objectives (position and orientation) may conflict during optimization. To handle this challenge, we adopt a sparse reward scheme, i.e., the robot is rewarded if and only if it completes the pose regulation task. Although deep reinforcement learning (DRL) can solve such an MDP with sparse rewards, the absence of immediate rewards hinders efficient learning. To this end, we propose a novel imitation learning (IL) method that learns DRL-based policies from demonstrations with inverse reward shaping, overcoming the challenge posed by extremely sparse rewards. Moreover, we design a demonstrator that generates diverse trajectory demonstrations from a single simple example provided by a nonexpert helper, which greatly reduces the time required to collect robot samples. Simulation results demonstrate the effectiveness of our proposed demonstrator and the state-of-the-art (SOTA) performance of our proposed IL method. Furthermore, we deploy the trained IL policy on a physical robotic fish to perform pose regulation in a swimming tank, both with and without external disturbances. The experimental results verify the effectiveness and robustness of our proposed methods in the real world. We therefore believe this article is a step forward in the field of biomimetic underwater robot learning.
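To make the sparse reward scheme concrete, below is a minimal sketch in which the agent receives a reward of +1 only when both the position error and the heading error fall within tolerance, and 0 otherwise. The function name, the tolerances pos_tol and yaw_tol, and the +1/0 reward values are illustrative assumptions for exposition, not the paper's exact settings.

```python
import numpy as np

def sparse_pose_reward(position, yaw, goal_position, goal_yaw,
                       pos_tol=0.1, yaw_tol=0.1):
    """Sparse reward for pose regulation: +1 if and only if the robot
    is simultaneously within pos_tol of the goal position and within
    yaw_tol of the goal heading; 0 otherwise. Tolerances and reward
    values are hypothetical placeholders."""
    pos_err = np.linalg.norm(np.asarray(position) - np.asarray(goal_position))
    # Wrap the heading difference into [-pi, pi] before taking its magnitude.
    yaw_err = abs((yaw - goal_yaw + np.pi) % (2 * np.pi) - np.pi)
    return 1.0 if (pos_err < pos_tol and yaw_err < yaw_tol) else 0.0
```

Because this reward provides no gradient of progress toward the goal, a vanilla DRL agent rarely encounters a nonzero return during exploration, which is precisely the learning-efficiency problem the demonstrations and inverse reward shaping are meant to address.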