The ParallelEye-CS Dataset: Constructing Artificial Scenes for Evaluating the Visual Intelligence of Intelligent Vehicles

Offline training and testing play an essential role in the design and evaluation of vision algorithms for intelligent vehicles. However, a long-standing limitation of traditional image datasets is that data manually collected and annotated from real scenes lack diverse testing tasks and environmental conditions. Virtual datasets can compensate for these shortcomings. In this paper, we propose to construct artificial scenes for evaluating the visual intelligence of intelligent vehicles, and we generate a new virtual dataset called "ParallelEye-CS". First, actual track map data are used to build a 3D scene model of the Chinese Flagship Intelligent Vehicle Proving Center Area in Changshu. Then, computer graphics and virtual reality technologies are used to simulate virtual testing tasks modeled on the tasks of the Chinese Intelligent Vehicles Future Challenge (IVFC). Furthermore, the Unity3D platform is used to generate accurate ground-truth labels and to vary environmental conditions. As a result, we present a viable implementation method for constructing artificial scenes for traffic vision research. The experimental results show that our method is able to generate photorealistic virtual datasets with diverse testing tasks.
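A common way game-engine pipelines such as the one described above produce accurate per-pixel ground truth is to render each frame a second time with objects flat-shaded in unique colors, then map those colors to class IDs. The sketch below illustrates that post-processing step in Python; the color palette and class IDs are illustrative assumptions, not the actual ParallelEye-CS label scheme.

```python
# Hypothetical sketch: converting a color-coded segmentation render
# (e.g., from a replacement-shader pass in a game engine such as Unity3D)
# into a per-pixel class-ID label map. Palette values are assumed.

PALETTE = {
    (128, 64, 128): 0,   # road
    (70, 70, 70): 1,     # building
    (0, 0, 142): 2,      # vehicle
    (220, 20, 60): 3,    # pedestrian
}
UNLABELED = 255  # fallback ID for colors not in the palette


def render_to_label_map(pixels):
    """Map each RGB pixel (a tuple) to a class ID; unknown colors
    become UNLABELED. `pixels` is a list of rows of (r, g, b) tuples."""
    return [[PALETTE.get(px, UNLABELED) for px in row] for row in pixels]


if __name__ == "__main__":
    frame = [
        [(128, 64, 128), (0, 0, 142)],
        [(220, 20, 60), (1, 2, 3)],
    ]
    print(render_to_label_map(frame))  # [[0, 2], [3, 255]]
```

Because every object's flat-shaded color is assigned by the engine rather than estimated, the resulting labels are exact at the pixel level, which is the key advantage of synthetic ground truth over manual annotation.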
