Integrated Intelligence for Human-Robot Teams

With recent advances in robotics and autonomous systems, the idea of human-robot teams is gaining increasing attention. In this context, our research focuses on developing an intelligent robot that can autonomously perform non-trivial but specific tasks conveyed through natural language. Toward this goal, a consortium of researchers develops and integrates several types of intelligence into mobile robot platforms: cognitive abilities to reason about high-level missions, perception to classify regions and detect relevant objects in an environment, and linguistic abilities to ground instructions in the robot's world model and to communicate with human teammates in a natural way. This paper describes the resulting integrated system and reports on its latest assessment.
