Pavilion: Bridging Photo-Realism and Robotics

Simulation environments play a central role in research on sensor fusion and robot control. This paper presents Pavilion, a novel open-source simulation system for robot perception and kinematic control based on the Unreal Engine and the Robot Operating System (ROS). The novelty of this work is threefold: (1) a shader-based method to generate optical flow ground-truth data with the Unreal Engine, (2) a toolset that removes the binary incompatibility between ROS and the Unreal Engine to enable real-time interaction, and (3) a method to import Simulation Description Format (SDF) robot models directly into the Unreal Engine at runtime. Building on these components, a Gazebo-compatible real-time simulation system is developed to enable training and evaluation of a wide range of sensor fusion, planning, decision-making and control algorithms. The system runs on both Linux and macOS with the latest version of ROS. Various experiments validate the superior performance of the proposed simulation environment over other state-of-the-art simulators in terms of the number of supported sensor modalities, simulation accuracy, latency, and ease of integration.
