Modeling dynamic scenes with a one-shot 3D acquisition system for a moving humanoid robot

For mobile robots, 3D acquisition is required to model the environment. For humanoid robots in particular, a model of the environment is necessary to plan walking control. The environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system that acquires a sufficiently accurate shape of the environment for walking on a ground surface with obstacles, together with a method for detecting dynamic objects in the modeled environment, which the robot needs in order to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot head, which uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system by simultaneous localization and mapping (SLAM). Dynamic objects are detected as shapes that are inconsistent with previous frames. Experiments were performed to evaluate the accuracy of the 3D acquisition and the robustness of dynamic-object detection when the system serves as the vision system of a humanoid robot.
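
As a rough illustration of the consistency check described above (a minimal sketch, not the authors' exact formulation), the snippet below flags pixels as dynamic when the measured depth disagrees with the depth predicted by rendering the previously merged model at the current camera pose. The function name, inputs, and the 5 cm threshold are illustrative assumptions.

```python
import numpy as np

def detect_dynamic_pixels(measured_depth, predicted_depth, threshold=0.05):
    """Flag pixels whose measured depth is inconsistent with the merged model.

    measured_depth  : depth map (metres) reconstructed from the current frame
    predicted_depth : depth map (metres) rendered from the merged model at the
                      estimated camera pose
    threshold       : allowed depth discrepancy in metres (assumed value)
    """
    # Only compare pixels where both the measurement and the prediction exist.
    valid = (measured_depth > 0) & (predicted_depth > 0)

    # Pixels deviating from the model by more than the threshold are treated
    # as belonging to a dynamic object rather than the static environment.
    dynamic = valid & (np.abs(measured_depth - predicted_depth) > threshold)
    return dynamic
```

In practice such a per-pixel test would be combined with the pose estimate from SLAM and with spatial filtering (e.g., discarding small isolated regions) before the robot reacts to a detected dynamic object.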
