Wide Field of View Kinect Undistortion for Social Navigation Implementation

When planning navigation schemes for social robots, distinguishing between humans and other obstacles is crucial for achieving safe and comfortable motion. A Kinect camera can fulfil such a task but delivers only a limited field of view (FOV). Recently, a lens that widens the Kinect's FOV has become commercially available from Nyko. However, this lens distorts the RGB-D data, including the depth values. To address this issue, we propose a two-stage undistortion strategy. In the first stage, pixel locations in both the RGB and depth images are corrected using an inverse radial distortion model. In the second stage, the depth data are refined: they are post-filtered using 3D point-cloud analysis to suppress the noise introduced by the undistortion and to remove ground/ceiling information, and the depth values are then rectified using a neural network filter based on laser-assisted training. Experimental results demonstrate the feasibility of the proposed approach for correcting distorted RGB-D data.
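
The abstract does not specify the form of the inverse radial distortion model; below is a minimal sketch, assuming a standard two-coefficient polynomial model r_d = r_u(1 + k1·r_u² + k2·r_u⁴) inverted by fixed-point iteration. The function name `undistort_pixels`, the coefficients `k1`/`k2`, and the choice of the image centre as the distortion centre are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def undistort_pixels(pixels, center, k1, k2, iterations=5):
    """Correct radially distorted pixel locations by iteratively
    inverting the polynomial model r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4).

    pixels : (N, 2) array of distorted (x, y) pixel coordinates
    center : (2,) distortion centre, assumed here to be the image centre
    """
    xy_d = np.asarray(pixels, dtype=float) - np.asarray(center, dtype=float)
    xy_u = xy_d.copy()  # initial guess: undistorted == distorted
    for _ in range(iterations):
        r2 = np.sum(xy_u ** 2, axis=1, keepdims=True)
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2
        # Fixed-point update: divide the distorted coordinates by the
        # distortion factor evaluated at the current undistorted estimate.
        xy_u = xy_d / factor
    return xy_u + center
```

In such a scheme the same correction would be applied to the pixel grids of both the RGB and depth images before the depth values themselves are refined.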