Parallel Vision and Learning for Intelligent Perception in Smart Driving

Modern vision models are built upon large-scale, diversified datasets, which are labor-intensive and costly to collect and annotate. Such models are typically effective under simple, constrained conditions, but their performance degrades in real traffic scenes. To address these problems, we propose a visual analysis framework based on parallel vision and learning for intelligent perception in smart driving. Specifically, our framework first generates and selects data using an artificial image system. Computational experiments with predictive learning are then conducted for model design, training, and evaluation. Through parallel execution, the framework gains virtual-actual interactive capability, integrating information from different scenes: the difficulty that the visual model encounters in actual scenes is used to guide model training in artificial scenes. We also carry out a case study that preliminarily demonstrates the effectiveness of the proposed framework.
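The virtual-actual loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all function names, scene conditions, and the error-to-difficulty update rule are hypothetical, standing in for the artificial image system, the computational experiments, and the difficulty-guided data generation.

```python
import random

def generate_artificial_batch(difficulty, n=100, rng=None):
    """Artificial scene (hypothetical): sample synthetic scene conditions,
    biased toward conditions the model currently finds difficult."""
    rng = rng or random.Random(0)
    conditions = list(difficulty)
    weights = [difficulty[c] for c in conditions]
    return rng.choices(conditions, weights=weights, k=n)

def parallel_learning_step(difficulty, actual_errors, lr=0.5):
    """Parallel execution (hypothetical update rule): per-condition error
    observed in the actual scene updates the difficulty weights that
    guide data generation in the artificial scene."""
    for c, e in actual_errors.items():
        difficulty[c] = (1 - lr) * difficulty.get(c, 1.0) + lr * e
    return generate_artificial_batch(difficulty)

# Usage: after one computational experiment, harder conditions
# (night, rain) receive more weight in the next synthetic batch.
difficulty = {"day": 1.0, "night": 1.0, "rain": 1.0}
actual_errors = {"day": 0.05, "night": 0.40, "rain": 0.30}
batch = parallel_learning_step(difficulty, actual_errors)
```

In this sketch the actual scene only feeds back scalar error rates, mirroring how the framework lets real-scene difficulty steer training in the artificial scene without requiring additional real-world labels.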