Multisensorial Vision For Autonomous Vehicle Driving

A multisensorial vision system for autonomous vehicle driving in outdoor natural environments is presented. The system, currently under development in our laboratories, integrates data provided by different sensors in order to achieve a more reliable description of a scene and to meet safety requirements. We chose to perform high-level symbolic fusion of the data to better accomplish the recognition task. A knowledge-based approach is followed, which provides a more accurate solution; in particular, an appropriate control structure makes it possible to integrate both the physical data supplied by each channel and different fusion strategies. The high complexity of data integration is reduced by acquiring, filtering, segmenting, and extracting features from each sensor channel separately. Production rules, grouped according to specific goals, drive the fusion process, linking all segmented regions characterized by similar properties to a symbolic frame. As a first application, road and obstacle detection is performed. A particular fusion strategy is tested that integrates the results obtained by applying the recognition module separately to each sensor, according to the corresponding model description. Preliminary results are very promising and confirm the validity of the proposed approach.
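To make the rule-driven fusion step concrete, the following is a minimal sketch of how production rules might attach per-channel segmented regions to symbolic frames. All names, features, and thresholds here (`Region`, `mean_intensity`, the `road`/`obstacle` conditions) are illustrative assumptions, not the system's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """A segmented region from one sensor channel, with extracted features."""
    sensor: str            # hypothetical channel name, e.g. "camera" or "range"
    mean_intensity: float  # example feature extracted from the channel
    area: float            # example feature: region size in pixels
    label: str = "unknown"

@dataclass
class Frame:
    """A symbolic frame collecting regions that share similar properties."""
    name: str
    regions: list = field(default_factory=list)

# Production rules: (goal, condition) pairs. Each condition inspects a
# region's features; when a rule fires, the region is linked to that frame.
def road_rule(region):
    # illustrative thresholds only
    return region.sensor == "camera" and region.mean_intensity < 0.4 and region.area > 500

def obstacle_rule(region):
    return region.area < 200

RULES = [("road", road_rule), ("obstacle", obstacle_rule)]

def fuse(regions):
    """Link every segmented region to the first symbolic frame whose rule fires."""
    frames = {name: Frame(name) for name, _ in RULES}
    for region in regions:
        for name, condition in RULES:
            if condition(region):
                region.label = name
                frames[name].regions.append(region)
                break  # first matching rule wins
    return frames
```

In this sketch the rules are grouped by goal (road detection, obstacle detection), mirroring the paper's division of production rules into goal-specific groups; a real control structure would also arbitrate among competing fusion strategies.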