Real-time Multi-modal Image Fusion for Flight Vision System

In intelligent avionics, the Flight Vision System (FVS) is an advanced cockpit human-machine interface that encompasses the Enhanced Vision System, the Enhanced Flight Vision System, the Combined Vision System, and related systems. The display output of an FVS selectively integrates long-/short-wave infrared, visible-band sensor imagery, Lidar/Radar returns, and digital map images. With multimodal sensor images that provide see-through visibility under varied visual conditions, pilots can perform equivalent visual operations. However, because computing resources on airborne platforms are limited, only simple methods such as α-blending are typically used to achieve real-time performance, and the fusion result is often unsatisfactory. In this paper, we improve an image-layer-separation-based fusion method. On one hand, we employ the downsampled version of the fast Guided Filter as the core image filter; on the other hand, we parallelize the main stages of the original fusion algorithm to improve computing efficiency. Experimental results show that the proposed method preserves good fusion appearance while reaching real-time performance above 30 fps, which satisfies the application requirements of the Flight Vision System.
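The abstract only names the building blocks, so the following NumPy sketch is an illustration rather than the authors' implementation: a downsampled fast Guided Filter (in the spirit of He & Sun's Fast Guided Filter) and a minimal two-layer base/detail fusion. All function names, the nearest-neighbour upsampling choice, and the parameter defaults are our assumptions.

```python
import numpy as np

def box_mean(img, r):
    """Mean filter with radius r (window 2r+1), edge-padded, via an integral image."""
    k = 2 * r + 1
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    cs = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    cs = np.pad(cs, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    H, W = img.shape
    sums = (cs[k:k + H, k:k + W] - cs[:H, k:k + W]
            - cs[k:k + H, :W] + cs[:H, :W])
    return sums / (k * k)

def fast_guided_filter(I, p, r=8, eps=1e-2, s=4):
    """Fast Guided Filter: solve the local linear model q = a*I + b on an
    s-times subsampled grid, then upsample the coefficients to full resolution."""
    Il, pl = I[::s, ::s], p[::s, ::s]          # subsample guide and input
    rl = max(r // s, 1)                        # radius shrinks with the grid
    mI, mp = box_mean(Il, rl), box_mean(pl, rl)
    cov = box_mean(Il * pl, rl) - mI * mp
    var = box_mean(Il * Il, rl) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    ma, mb = box_mean(a, rl), box_mean(b, rl)
    # nearest-neighbour upsample of the smoothed coefficients
    up = lambda x: np.repeat(np.repeat(x, s, axis=0),
                             s, axis=1)[:I.shape[0], :I.shape[1]]
    return up(ma) * I + up(mb)

def fuse_two_layer(I1, I2, r=8, eps=1e-2, s=4):
    """Layer-separation fusion sketch: average the base layers,
    keep the stronger detail response at each pixel."""
    b1 = fast_guided_filter(I1, I1, r, eps, s)
    b2 = fast_guided_filter(I2, I2, r, eps, s)
    d1, d2 = I1 - b1, I2 - b2
    base = 0.5 * (b1 + b2)
    detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)
    return base + detail
```

Because the filter runs on a grid shrunk by a factor of s in each dimension, its cost drops roughly by s², which is the source of the speed-up the abstract exploits; the paper additionally parallelizes the fusion stages, which this single-threaded sketch does not show.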
