Active manipulator motion planning for planetary landform awareness

This paper presents an active motion planning approach for a robotic manipulator operating in planetary surface exploration missions. A monocular camera is used to detect visually salient regions in an image, from which landforms of potential interest are extracted, and two key metrics are established to evaluate the information richness of each landform. A next-best-view motion planning method is then proposed, in which the manipulator's motions are actively planned to reach a viewpoint offering a better view of the target landform; the safety of this operation is ensured by estimating the relative distance in real time with Oriented FAST and Rotated BRIEF (ORB)-based simultaneous localization and mapping (ORB-SLAM). The proposed method is validated in an experimental trial, whose results demonstrate that it can safely and fully autonomously acquire a better view of a newly detected planetary landform.
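To make the front end of this pipeline concrete, the sketch below implements a standard saliency detector of the kind the approach builds on (the spectral residual method of Hou and Zhang, CVPR 2007), together with two illustrative helpers. This is a minimal sketch under stated assumptions, not the paper's implementation: the function names, the 0.5 saliency threshold, and the 0.25 m safety margin are ours for illustration, and the paper's two information-richness metrics are not specified in the abstract, so they are not reproduced here.

```python
import numpy as np
from scipy import ndimage


def spectral_residual_saliency(gray):
    """Saliency map via the spectral residual method.

    `gray` is a 2-D grayscale image; in practice the method is usually
    run on a small (e.g. 64-pixel-wide) resized copy of the frame.
    """
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-9)       # log amplitude spectrum
    phase = np.angle(f)                      # phase spectrum, kept as-is
    # Spectral residual = log amplitude minus its local (3x3) average.
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    # Back-project with the original phase; squared magnitude = saliency.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma=2.5)        # suppress speckle
    return (sal - sal.min()) / (np.ptp(sal) + 1e-9)     # normalize to [0, 1]


def extract_candidate_landforms(gray, thresh=0.5):
    """Label connected regions whose saliency exceeds `thresh`.

    The threshold is an illustrative value, not one from the paper.
    """
    sal = spectral_residual_saliency(gray)
    labels, count = ndimage.label(sal > thresh)
    return sal, labels, count


def viewpoint_is_safe(camera_pos, landform_points, margin=0.25):
    """Hypothetical safety gate for next-best-view candidates.

    Rejects candidate camera positions closer than `margin` metres to any
    mapped landform point. `landform_points` (an N x 3 array) stands in
    for the map points a SLAM system such as ORB-SLAM would provide.
    """
    dists = np.linalg.norm(landform_points - camera_pos, axis=1)
    return dists.min() > margin
```

A next-best-view planner would then score candidate manipulator poses against the extracted regions and discard any that fail the safety gate; the paper's actual metrics and planner are evaluated in hardware trials, which this sketch does not attempt to reproduce.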
