Cartesian To Joint Space Mapping Using Q-Learning

Inverse Kinematics (IK) is a mapping from a manipulator robot's end-effector (EE) space to its actuator space. Calculating the joint angles for given EE coordinates is a difficult task, and various geometrical and analytical methods have been proposed for it. However, such solutions are specific to a robot's kinematic structure, and obtaining analytical solutions often requires imposing Pieper's constraint, which places severe kinematic restrictions that may not be acceptable for humanoid social robots. To help address this problem, the current research proposes a methodology that uses Reinforcement Learning (RL) to compute the joint angles for given EE coordinates of an n-link planar manipulator. First, the workspace of the robot is evaluated using Monte Carlo methods, and the states and actions are converted from the continuous domain to the discrete domain. Subsequently, the Q-table is updated using the State-Action-Reward-State-Action (SARSA) algorithm. Conventional RL works well when the number of links is small, such as two or three. As the number of links increases, the computational cost grows exponentially and the conventional RL algorithm takes a long time to learn. In the proposed modified RL algorithm, the Q-table is significantly smaller than in the conventional RL algorithm, and the computational cost is therefore reduced. The proposed algorithm also provides the robot with obstacle avoidance capability for static obstacles present in the robot's workspace. The results show an encouraging trend toward substituting IK with learning-based models to design and develop social robots of various kinematic structures, free from Pieper's and other constraints [1].
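The pipeline described above (discretizing the joint space and updating a Q-table with SARSA so the arm reaches a target EE position) can be sketched for the simplest two-link case. This is a minimal illustrative sketch, not the paper's implementation: the bin count, reward shaping, learning rate, and episode budget are all assumptions chosen for a small example.

```python
import numpy as np

# Illustrative SARSA sketch for a 2-link planar arm (all parameters assumed).
L1, L2 = 1.0, 1.0                      # link lengths (assumed)
N_BINS = 12                            # bins per joint angle (discretization)
STEP = 2 * np.pi / N_BINS              # angular resolution of one bin
ACTIONS = [(+1, 0), (-1, 0), (0, +1), (0, -1)]  # +/- one bin per joint

def fk(bins):
    """Forward kinematics: discrete joint bins -> end-effector (x, y)."""
    t1, t2 = bins[0] * STEP, bins[1] * STEP
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def sarsa(target, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Learn a Q-table whose greedy policy drives the EE toward `target`."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_BINS, N_BINS, len(ACTIONS)))   # discrete Q-table

    def policy(s):
        if rng.random() < eps:                     # epsilon-greedy exploration
            return int(rng.integers(len(ACTIONS)))
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        s = (int(rng.integers(N_BINS)), int(rng.integers(N_BINS)))
        a = policy(s)
        for _ in range(100):
            da1, da2 = ACTIONS[a]
            s2 = ((s[0] + da1) % N_BINS, (s[1] + da2) % N_BINS)
            dist = np.linalg.norm(fk(s2) - target)
            done = dist < STEP                     # close enough to target
            r = 100.0 if done else -dist           # shaped reward (assumed)
            a2 = policy(s2)
            # SARSA (on-policy): TD target uses the *next chosen* action a2
            Q[s][a] += alpha * (r + gamma * (0.0 if done else Q[s2][a2]) - Q[s][a])
            s, a = s2, a2
            if done:
                break
    return Q

def greedy_solve(Q, start, target, max_steps=100):
    """Follow the greedy policy from `start`; return final joint bins."""
    s = start
    for _ in range(max_steps):
        if np.linalg.norm(fk(s) - target) < STEP:
            break
        da1, da2 = ACTIONS[int(np.argmax(Q[s]))]
        s = ((s[0] + da1) % N_BINS, (s[1] + da2) % N_BINS)
    return s
```

Note the scaling problem the abstract points out: the Q-table here has `N_BINS**2 * 4` entries, and for an n-link arm it would grow to `N_BINS**n * 2n`, which is why the paper's modified algorithm shrinks the Q-table.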