An elevation-guided annotation tool for flood extent mapping on Earth imagery (demo paper)

Accurate and timely mapping of flood extent plays a crucial role in disaster management tasks such as damage assessment and relief activities. In recent years, high-resolution optical imagery has become increasingly available with the wide deployment of satellites and drones. However, analyzing such imagery to extract flood extent poses unique challenges due to noise from obstructions (e.g., tree canopies, clouds). In this paper, we propose an elevation-guided annotation tool for flood extent mapping, which allows annotators to provide flooded/dry labels for only a few pixels in a large area, from which the labels of most other pixels are automatically inferred. The physical rule that guides the automatic label inference is that if a location is flooded (resp. dry), then its adjacent locations with a lower (resp. higher) elevation must also be flooded (resp. dry). In this way, annotators only need to label the pixels they are confident about, and the true labels of many ambiguous pixels, such as those under tree canopies, can be automatically inferred. We demonstrate the usage of our annotation tool on high-resolution aerial imagery from the National Oceanic and Atmospheric Administration (NOAA) National Geodetic Survey (NGS) together with the corresponding Digital Elevation Model (DEM) data. The annotated data can be used to train machine learning models for flood extent mapping; we train U-Net models to infer the flood map for an unseen region and achieve high accuracy. Our annotation tool is open-sourced at https://github.com/SaugatAdhikari/Flood-Annotation-Tool.
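The elevation rule stated above can be sketched as a breadth-first label propagation over the DEM grid. This is a minimal illustrative sketch, not the tool's actual implementation: the function name `propagate_labels`, the label encoding, and the 4-neighbor connectivity are all assumptions made for this example.

```python
from collections import deque
import numpy as np

FLOOD, DRY, UNKNOWN = 1, 0, -1

def propagate_labels(dem, seeds):
    """Infer labels for unlabeled pixels from a few annotated seed pixels.

    Elevation rule (as described in the paper): a flooded pixel's
    lower-or-equal-elevation neighbors must be flooded; a dry pixel's
    higher-or-equal-elevation neighbors must be dry.

    dem   -- 2D numpy array of elevations.
    seeds -- dict mapping (row, col) -> FLOOD or DRY (the few manual labels).
    """
    labels = np.full(dem.shape, UNKNOWN, dtype=int)
    queue = deque()
    for (r, c), lab in seeds.items():
        labels[r, c] = lab
        queue.append((r, c))

    h, w = dem.shape
    while queue:
        r, c = queue.popleft()
        lab = labels[r, c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and labels[nr, nc] == UNKNOWN:
                # Propagate FLOOD downhill and DRY uphill.
                if (lab == FLOOD and dem[nr, nc] <= dem[r, c]) or \
                   (lab == DRY and dem[nr, nc] >= dem[r, c]):
                    labels[nr, nc] = lab
                    queue.append((nr, nc))
    return labels
```

For example, a single flooded seed at the lowest point of a small DEM spreads only to its equally low neighbors, leaving higher, unreachable pixels `UNKNOWN` for the annotator to resolve.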