Implementation of the Automatic Processing Chain for ARES

DLR (Deutsches Zentrum für Luft- und Raumfahrt) and GFZ (GeoForschungsZentrum Potsdam) are acquiring the airborne hyperspectral scanner ARES, which is to become operational in autumn 2005. To provide the first fully automatic processing environment for hyperspectral airborne scanner data, working without any user interaction, the whole processing chain is embedded in the 'Data Information and Management System' (DIMS), an operational processing and archiving environment installed at DLR. The following objectives were pursued: automation and standardization of the processing environment and of the resulting data products, assurance of a high data quality standard, a professional web interface making the data stock and processing status transparent to the user, and access to a professional archiving environment. DIMS provides an archiving subsystem as well as a generic processing environment and a searchable database with public access via the WWW.

The processing steps embedded in DIMS comprise system correction, orthorectification and atmospheric correction. During system correction, artefacts caused by the sensor characteristics are removed and the data are calibrated to at-sensor radiance using the calibration coefficients obtained during laboratory calibration. During orthorectification, the attitude and position data recorded during the data take are synchronized with the scanner data, which are then geo-referenced using a digital terrain model. The task of the atmospheric correction is to calibrate each pixel to ground reflectance. This is done with the software package ATCOR, which is based on the radiative transfer model MODTRAN and integrates the digital terrain model and meteorological data. Each processing step is followed by a quality control that verifies whether the quality standards are met.
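The system-correction step and the subsequent quality control can be sketched as follows. This is a minimal illustration only: the dark-current/gain calibration model, the function names and the plausibility thresholds are assumptions for the sake of the example, not the actual ARES calibration scheme.

```python
import numpy as np

def system_correction(dn, dark_current, gain):
    """Remove the sensor offset and calibrate raw digital numbers (DN)
    to at-sensor radiance using laboratory calibration coefficients.
    A simple linear model (DN - dark) * gain is assumed here."""
    return (dn - dark_current) * gain

def quality_check(radiance, max_radiance=1000.0):
    """Illustrative quality control: flag the product as passed only if
    no pixel is negative and none exceeds a saturation threshold."""
    return bool(np.all(radiance >= 0) and np.all(radiance <= max_radiance))

# Example: a tiny 2x2 image of raw digital numbers
dn = np.array([[120.0, 340.0],
               [560.0, 780.0]])
rad = system_correction(dn, dark_current=100.0, gain=0.05)
print(rad)                 # calibrated at-sensor radiance
print(quality_check(rad))  # quality control verdict
```

In the operational chain each such check would write its verdict into the product metadata, so that downstream steps and the catalogue can see whether the quality standards were met.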
For each processing level a data model is defined, comprising the components available at that level plus a set of parameters describing the dataset. This parameter description facilitates the traceability of the processing steps and, at the same time, enables the user to view all relevant parameters when inspecting the data catalogue. Based on the data model, the design and implementation of the processing system are explained, showing how the specific processing components interact with each other and with the generic processing environment.
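A per-level data model of this kind could be sketched as below. The class and field names are hypothetical illustrations (the actual DIMS metadata schema is not specified here); the point is that each product records its processing level, the parameters used, and a link to its predecessor, which is what enables traceability across the chain.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProductMetadata:
    """Describes one dataset at a given processing level."""
    product_id: str
    processing_level: str                            # e.g. "system-corrected"
    parameters: dict = field(default_factory=dict)   # level-specific parameters
    parent_id: Optional[str] = None                  # predecessor, for traceability

# Chain the levels: each product records its parameters and its parent.
l1 = ProductMetadata("ARES-0001-L1", "system-corrected",
                     {"calibration_file": "lab_cal_2005.dat"})
l2 = ProductMetadata("ARES-0001-L2", "orthorectified",
                     {"dtm": "SRTM", "datum": "WGS84"},
                     parent_id=l1.product_id)
print(l2.parent_id)  # points back to the system-corrected product
```

Exposing these records through the catalogue is what lets a user inspect all relevant parameters of a dataset and trace each product back through the processing steps that produced it.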