FLIR Image Analysis and Image Fusion Support Pilots in Low Visibility

Enhanced Vision Systems (EVS) aim to alleviate restrictions in airspace and airport capacity under low-visibility conditions. EVS relies on weather-penetrating forward-looking sensors that augment the naturally existing visual cues in the environment and provide a real-time image of prominent topographical objects that the pilot can identify. The basic idea behind this technology is to allow VMC operations under IMC. The recently released final rule of the FAA for Enhanced Flight Vision Systems (EFVS) clearly acknowledges the operational benefits of such a technology by stating the following: “Use of an EFVS with a Head-Up Display (HUD) may improve the level of safety by improving position awareness, providing visual cues to maintain a stabilized approach, and minimizing missed approach situations.”

An obvious method for presenting the image information is to “overlay” it onto the head-up display as a transparent raster image. Owing to its simplicity, this method has been applied in several enhanced vision projects in the past. The US SE-Vision program, which concluded with several demonstration flights on a Boeing 727 in 2005, also investigated such a transparent inset method. Alongside project partners such as the FAA, Rockwell-Collins, and Max-Viz, the German Aerospace Center (DLR) participated. Max-Viz provided a bi-FLIR camera consisting of a long-wave IR sensor (LWIR, 8-13 micron) and a short-wave IR sensor (SWIR, 1-3 micron). The LWIR sensor “sees” the thermal contrast between the concrete of the runway and the surrounding grass, while the SWIR image captures the lamps along the runway border and other visual navigation aids, such as VASI or PAPI systems, to the left or right of the runway. DLR's task was to demonstrate a more intelligent way of overlaying the information from these two IR cameras, in order to reduce clutter on the HUD and to provide a much better “look through”, so that pilots can clearly recognize the outside world shortly before the final touchdown.

The applied method consists of the following steps, each of which is sketched below. A separate analysis process extracts hypotheses on the location of the runway structure from each image source. This extraction is driven by a structure-grouping algorithm based on image features such as contour lines and contour blobs. For each detected runway structure, a hypothesis on the aircraft position relative to the runway is computed. A fusion process, which runs in parallel to the image analysis processes, then combines the hypotheses from the different image sources over time and space, so that finally, after two or three seconds of consistent data, a valid aircraft position relative to the runway is available.
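The structure-extraction step can be pictured with a minimal sketch in Python, assuming OpenCV and NumPy are available. The grouping used here (Canny edges collected into long, near-collinear Hough segments) is only a stand-in for the actual contour-line and blob grouping described above; all function names and thresholds are illustrative assumptions.

    import cv2
    import numpy as np

    def extract_runway_candidates(ir_frame: np.ndarray):
        """Group edge features into candidate runway border lines.

        Illustrative stand-in for the structure-grouping step:
        contour-like features are detected as edges and grouped
        into long, near-collinear line segments.
        """
        # Normalize the raw IR frame to 8-bit for edge detection.
        frame8 = cv2.normalize(ir_frame, None, 0, 255,
                               cv2.NORM_MINMAX).astype(np.uint8)
        # Smooth to suppress sensor noise, then detect contour edges.
        blurred = cv2.GaussianBlur(frame8, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)
        # Group edge pixels into line segments (the "contour lines").
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=60, minLineLength=80,
                                   maxLineGap=10)
        if segments is None:
            return []
        # Keep only long, steep segments: on approach the runway borders
        # converge toward the horizon and appear as long, roughly
        # vertical lines in the image.
        candidates = []
        for x1, y1, x2, y2 in segments[:, 0]:
            length = np.hypot(x2 - x1, y2 - y1)
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if length > 100 and 30 < angle < 150:
                candidates.append(((x1, y1), (x2, y2)))
        return candidates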
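A position hypothesis can then be derived from a detected runway outline. The following sketch assumes the four runway corner points have already been identified in the image and the true runway dimensions are known, for example from an airport database; cv2.solvePnP is used here as one common way to recover the camera pose from such correspondences, not necessarily the method applied in SE-Vision.

    import cv2
    import numpy as np

    def position_hypothesis(corners_px, runway_length_m, runway_width_m,
                            camera_matrix, dist_coeffs=None):
        """Estimate the aircraft position relative to the runway from the
        four detected runway corners (threshold-left, threshold-right,
        far-end-right, far-end-left), given the camera intrinsics.
        """
        # Runway corners in a runway-fixed frame: origin at the threshold
        # center, x along the centerline, y to the right, z up.
        half_w = runway_width_m / 2.0
        object_pts = np.array([
            [0.0, -half_w, 0.0],              # threshold, left edge
            [0.0,  half_w, 0.0],              # threshold, right edge
            [runway_length_m,  half_w, 0.0],  # far end, right edge
            [runway_length_m, -half_w, 0.0],  # far end, left edge
        ], dtype=np.float64)
        image_pts = np.asarray(corners_px, dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                      camera_matrix, dist_coeffs)
        if not ok:
            return None
        # Convert to the camera position expressed in the runway frame:
        # p_cam = -R^T * tvec, where R rotates runway into camera coords.
        R, _ = cv2.Rodrigues(rvec)
        return (-R.T @ tvec).ravel()  # (along-track, cross-track, height), m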
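Finally, the fusion over time and space can be sketched as a consistency check on a sliding window: hypotheses from both sensors are pooled, and a fused position is only released once the pooled estimates have agreed for a minimum duration. This is an illustrative simplification; the window length, agreement test, and thresholds below are assumptions, not the parameters of the actual fusion process.

    from collections import deque
    import numpy as np

    class HypothesisFusion:
        """Pool position hypotheses from several sensors over a sliding
        time window and release a fused position only after the data
        have been consistent for a minimum duration."""

        def __init__(self, window_s=3.0, min_consistent_s=2.0,
                     max_spread_m=15.0):
            self.window_s = window_s                  # sliding window length
            self.min_consistent_s = min_consistent_s  # required agreement time
            self.max_spread_m = max_spread_m          # allowed scatter
            self.samples = deque()                    # (timestamp, position)

        def update(self, t, position):
            """Add one hypothesis (timestamp in s, position as 3-vector)."""
            self.samples.append((t, np.asarray(position, dtype=float)))
            # Drop samples that have aged out of the window.
            while self.samples and t - self.samples[0][0] > self.window_s:
                self.samples.popleft()
            return self.fused(t)

        def fused(self, t):
            """Return the averaged position if the window spans enough
            consistent data, otherwise None."""
            if len(self.samples) < 2:
                return None
            times = np.array([s[0] for s in self.samples])
            positions = np.stack([s[1] for s in self.samples])
            spans_enough = t - times[0] >= self.min_consistent_s
            # Consistency: scatter of all pooled hypotheses stays small.
            spread = np.linalg.norm(positions - positions.mean(axis=0),
                                    axis=1).max()
            if spans_enough and spread <= self.max_spread_m:
                return positions.mean(axis=0)
            return None

In this picture, the LWIR and SWIR analysis processes would both feed the same fusion object, e.g. fusion.update(t, position_hypothesis(...)); only after two to three seconds of mutually consistent data does the fused position become available for display on the HUD.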