Semantic and Geometric Modeling with Neural Message Passing in 3D Scene Graphs for Hierarchical Mechanical Search

Searching for objects in organized indoor environments such as homes or offices is part of our everyday activities. When looking for a desired object, we reason about the rooms and containers the object is likely to be in; the same type of container will have a different probability of containing the target depending on which room it is in. We also combine geometric and semantic information to infer which container is best to search, or which other objects are best to move, if the target object is hidden from view. We use a 3D scene graph representation to capture the hierarchical, semantic, and geometric aspects of this problem. To exploit this representation in a search process, we introduce Hierarchical Mechanical Search (HMS), a method that guides an agent's actions towards finding a target object specified with a natural language description. HMS is based on a novel neural network architecture that passes vectors encoding visual, geometric, and linguistic information between nodes, allowing HMS to process data across layers of the graph while combining semantic and geometric cues. HMS is trained on 1000 3D scene graphs and evaluated on a novel dataset of 500 3D scene graphs with dense placements of semantically related objects in storage locations, and is shown to be significantly better than several baselines at finding objects. It is also close to the oracle policy in terms of the median number of actions required. Additional qualitative results can be found at https://ai.stanford.edu/mech-search/hms.
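The core idea of combining cues across layers of a scene graph can be illustrated with a minimal sketch. This is not the HMS architecture itself; it is a toy hierarchical message-passing scheme under assumed names (the graph layout, embedding size `D`, and the `W_up`/`W_down` transforms are all illustrative), in which each node's feature vector stands in for the paper's fused visual, geometric, and linguistic inputs, an upward pass aggregates child features into parents, a downward pass injects room-level context into containers, and containers are scored against a target description embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scene graph: house -> rooms -> storage containers.
graph = {
    "house":   ["kitchen", "bedroom"],
    "kitchen": ["fridge", "cabinet"],
    "bedroom": ["dresser"],
    "fridge": [], "cabinet": [], "dresser": [],
}
D = 8  # embedding size (assumed)
# Stand-ins for per-node features fusing visual, geometric, and linguistic cues.
feats = {n: rng.normal(size=D) for n in graph}
W_up = rng.normal(size=(D, D)) / np.sqrt(D)    # child -> parent message transform
W_down = rng.normal(size=(D, D)) / np.sqrt(D)  # parent -> child message transform

def relu(x):
    return np.maximum(x, 0.0)

def message_pass(feats):
    h = dict(feats)
    # Upward pass: parents aggregate transformed child embeddings
    # (visit children before parents).
    for parent in ["kitchen", "bedroom", "house"]:
        kids = graph[parent]
        if kids:
            msg = np.mean([relu(W_up @ h[c]) for c in kids], axis=0)
            h[parent] = h[parent] + msg
    # Downward pass: children receive room/house context from their parent,
    # so the same container type is scored differently in different rooms.
    for parent in ["house", "kitchen", "bedroom"]:
        for c in graph[parent]:
            h[c] = h[c] + relu(W_down @ h[parent])
    return h

h = message_pass(feats)
# Stand-in for an embedding of the target's natural language description.
target = rng.normal(size=D)
scores = {n: float(h[n] @ target) for n in ["fridge", "cabinet", "dresser"]}
best = max(scores, key=scores.get)
print(best)
```

In a trained system, the transforms would be learned so that the container with the highest score is the most promising one to open next; here the weights are random, so only the information flow, not the ranking, is meaningful.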
