Feature-level and pixel-level fusion routines when coupled to an infrared night-vision tracking scheme

This manuscript quantitatively evaluates feature-level and pixel-level fusion schemes when applied to fuse infrared (LWIR) and visible (TV) sequences. The input sequences are taken from a commercial night-vision module dedicated to automotive applications. The text presents an in-house feature-level fusion routine that applies three fusing relationships (intersection, disjointness, and inclusion), together with a new object-tracking routine. The processing is carried out for two specific night-driving scenarios: a passing vehicle and an approaching vehicle with glare. The study details the feature-level fusion pipeline, which comprises registration performed at the hardware level, Gaussian-based preprocessing, a feature-extraction subroutine, and finally the fusing logic. The evaluation criteria are based on the morphology of the retrieved objects and the number of extracted features. The presented comparison shows that feature-level fusion is more robust to variations in the intensity of the input channels and provides a higher signal-to-noise ratio: 6.18 compared with 4.72 for the pixel-level case. Additionally, the study indicates that pixel-level fusion extracts more information from the channel with the higher intensity, whereas feature-level fusion highlights the input with the larger number of features.
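The sketch below is a minimal, illustrative Python rendering of the three fusing relationships named above (intersection, disjointness, inclusion), applied to feature bounding boxes extracted from the LWIR and TV channels. The function names, box representation, and merge rules are assumptions for illustration only; they do not reproduce the manuscript's actual fusion routine or its tracking logic.

```python
def relation(a, b):
    """Classify the spatial relationship between two feature boxes.

    Boxes are (x_min, y_min, x_max, y_max) tuples. Returns one of
    'inclusion', 'intersection', or 'disjoint'. Hypothetical helper,
    not the paper's implementation.
    """
    ox0, oy0 = max(a[0], b[0]), max(a[1], b[1])   # overlap rectangle
    ox1, oy1 = min(a[2], b[2]), min(a[3], b[3])
    if ox0 >= ox1 or oy0 >= oy1:
        return "disjoint"
    if (ox0, oy0, ox1, oy1) in (a, b):            # one box lies inside the other
        return "inclusion"
    return "intersection"


def fuse_features(ir_boxes, tv_boxes):
    """Fuse IR (LWIR) and visible (TV) feature boxes by spatial relation.

    Assumed merge rules for this sketch:
      - inclusion:    keep the enclosing box (union equals it)
      - intersection: keep the merged (union) box
      - disjoint:     keep both boxes unchanged
    """
    fused, used_tv = [], set()
    for a in ir_boxes:
        merged = a
        for j, b in enumerate(tv_boxes):
            if relation(merged, b) == "disjoint":
                continue
            used_tv.add(j)
            # Union covers both the inclusion and intersection cases
            merged = (min(merged[0], b[0]), min(merged[1], b[1]),
                      max(merged[2], b[2]), max(merged[3], b[3]))
        fused.append(merged)
    # Visible-channel features with no IR counterpart are carried through
    fused.extend(b for j, b in enumerate(tv_boxes) if j not in used_tv)
    return fused
```

As a usage example, `fuse_features([(10, 10, 40, 40)], [(30, 30, 60, 60), (80, 80, 90, 90)])` would return the merged box `(10, 10, 60, 60)` for the intersecting pair and pass the disjoint visible-only box through unchanged.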