Autonomous Depth Perception of Humanoid Robot Using Binocular Vision System Through Sensorimotor Interaction with Environment

In this paper, we explore how a humanoid robot equipped with two cameras can autonomously improve its depth perception. We propose an approach that tunes the parameters of the robot's binocular vision system and improves depth estimation automatically through interaction with the environment. To set these parameters, the robot uses sensory invariant driven action (SIDA): actions that, although different from one another, produce identical sensory stimuli. The robot generates these actions autonomously, without external control, and gathers from them the training data needed to tune the binocular vision parameters. Object size invariance (OSI) is used to examine whether the current depth estimate is correct; when the estimate is reliable, the robot tunes the binocular vision parameters based on OSI again. By interacting with the environment, the robot learns the relation between an object's size and its distance from the robot. Our approach shows that action plays an important role in perception. Experimental results show that the proposed approach successfully and automatically improves the humanoid robot's depth estimation.
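The OSI check described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole-camera model in which an object's apparent size in pixels is (focal length × physical size / depth), so depth estimates are consistent with OSI when the inferred physical size stays constant across observations. All function names, the focal length, and the tolerance are illustrative assumptions.

```python
# Hedged sketch of an object-size-invariance (OSI) consistency check.
# Assumes a pinhole camera: apparent_size_px = focal_px * size_m / depth_m.

def apparent_size(physical_size_m, depth_m, focal_px):
    """Apparent size in pixels under the pinhole-camera model."""
    return focal_px * physical_size_m / depth_m

def osi_consistent(apparent_sizes_px, estimated_depths_m, focal_px, tol=0.05):
    """Depth estimates are OSI-consistent when the inferred physical size
    (apparent size * depth / focal length) is constant across observations."""
    inferred = [s * d / focal_px
                for s, d in zip(apparent_sizes_px, estimated_depths_m)]
    mean = sum(inferred) / len(inferred)
    return all(abs(x - mean) <= tol * mean for x in inferred)

# A 0.2 m object viewed from 1 m, 2 m, and 4 m with a 600 px focal length:
sizes = [apparent_size(0.2, d, 600.0) for d in (1.0, 2.0, 4.0)]
print(osi_consistent(sizes, [1.0, 2.0, 4.0], 600.0))  # correct depths -> True
print(osi_consistent(sizes, [1.0, 2.0, 8.0], 600.0))  # bad last depth -> False
```

In this sketch, a failed check would signal the robot that its current depth estimates are unreliable and that the binocular vision parameters need further tuning.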
