Perception and mobility research at Defence R&D Canada for UGVs in complex terrain

The Autonomous Intelligent Systems Section at Defence R&D Canada - Suffield envisions autonomous systems contributing to decisive operations in the urban battle space. In this vision, teams of unmanned ground, air, and marine vehicles, together with unattended ground sensors, will gather and coordinate information, formulate plans, and complete tasks. The mobility of ground-based systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in militarily relevant roles and environments. To achieve this objective, the Autonomous Intelligent Systems Section is pursuing research into intelligent mobility algorithms. Intelligent mobility combines sensing and perception, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn from experience. These algorithms seek to exploit available representations of the environment and the inherent dexterity of the robot so that the vehicle can interact with its surroundings and produce locomotion in complex terrain. However, a disconnect exists between the current state of the art in perception systems and the information that novel platforms require to interact with their environment and improve mobility in complex terrain. The primary focus of this paper is to present the research tools, topics, and plans for addressing this gap in perception and control research. This research aims to create effective intelligence that improves the mobility of ground-based systems in urban settings, assisting the Canadian Forces in future urban operations.
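
To make the sense-control-learn loop described above concrete, the following minimal Python sketch pairs a stand-in perception step with a simple velocity controller whose caution gain adapts from experience. Every class, method, gain, and threshold here is an illustrative assumption for exposition only, not the section's actual algorithms or any DRDC interface.

# Hypothetical sketch of an intelligent-mobility loop: perception extracts
# terrain features, control maps them to an actuation command, and a
# learning step adjusts controller parameters from experience.
# All names and values are illustrative assumptions, not DRDC APIs.

from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class TerrainFeatures:
    slope: float         # estimated ground slope (rad)
    roughness: float     # surface roughness score in [0, 1]
    obstacle_gap: float  # clearance to nearest obstacle (m)

def perceive() -> TerrainFeatures:
    """Stand-in for a perception stack (e.g. ladar + vision fusion)."""
    return TerrainFeatures(slope=random.uniform(-0.3, 0.3),
                           roughness=random.random(),
                           obstacle_gap=random.uniform(0.2, 5.0))

@dataclass
class MobilityController:
    # Tunable gains; the caution gain is adjusted by the learning step.
    speed_gain: float = 1.0
    caution_gain: float = 1.0
    history: List[float] = field(default_factory=list)

    def command(self, f: TerrainFeatures) -> float:
        """Map perceived terrain to a forward-velocity command (m/s)."""
        caution = self.caution_gain * (f.roughness + abs(f.slope))
        speed = max(0.0, self.speed_gain * f.obstacle_gap - caution)
        return min(speed, 2.0)  # assumed platform velocity limit

    def learn(self, progress: float) -> None:
        """Crude learning-from-experience update on the caution gain."""
        self.history.append(progress)
        if progress < 0.1:
            self.caution_gain *= 0.95  # stalled: be less conservative
        else:
            self.caution_gain *= 1.01  # moving well: tighten caution slightly

if __name__ == "__main__":
    ctrl = MobilityController()
    for step in range(10):
        features = perceive()
        v = ctrl.command(features)
        ctrl.learn(progress=v)  # commanded speed as a crude progress proxy
        print(f"step {step}: v = {v:.2f} m/s")

A real system would replace the random perception stub with fused sensor data and the gain tweak with a principled learning rule; the sketch only shows how the three components of intelligent mobility connect in one loop.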
