The snowfall-cloud at Syowa Station identified by convolutional neural network
Here, we attempt to estimate the surface mass balance (SMB) of Antarctica by accounting for snowfall values based on the spatial synoptic patterns among several elements (e.g. geopotential height, relative humidity, and sea ice concentration) over several decades. Since the snowfall amounts provided by several reanalysis datasets and regional climate models disagree considerably with the observed snowfall events around Syowa Station (Agosta et al. 2019), we must verify that the elements from the reanalysis data are sufficient for interpreting snowfall events. To this end, we investigate the relationship between atmospheric synoptic patterns and cloud patterns derived from satellite data. Characteristic spatial patterns linking atmospheric elements and clouds can be defined from the observational data at Syowa Station (snow depth, weather condition, cloud amount, and so on). Using these patterns as a training set, we apply machine learning techniques to find similar patterns automatically. We built an image classifier based on a Convolutional Neural Network (CNN) and applied it to cloud images from blizzard events in 2009. The image data are Ch. 4 of NOAA/AVHRR, and figure 1 shows samples of the snowfall condition: cloud is present in the 'good' image but no significant cloud appears in the 'not good' image. Based on the snow-depth data at Syowa Station, we compared the spatial characteristics of the clouds between heavy and light snow amounts; the cloud area (in pixels) under heavy snow differed by about one order of magnitude from that under light snow. The initial CNN training over-fitted because of the small number of samples. To improve this result, we constructed a new CNN design (fig. 2) and added NOAA/AVHRR images for 10 years with several channels to compute brightness differences.
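The cloud-area comparison between heavy and light snow amounts can be illustrated with a minimal sketch: threshold a brightness image and count cloudy pixels. The threshold value, the synthetic scenes, and the function name are all assumptions for illustration, not taken from the study.

```python
import numpy as np

def cloud_area_pixels(brightness, threshold=0.5):
    """Count pixels whose brightness exceeds a (hypothetical) cloud threshold."""
    return int((brightness > threshold).sum())

# Synthetic heavy- and light-snow scenes, roughly one order of magnitude
# apart in cloudy-pixel fraction, mimicking the reported difference.
rng = np.random.default_rng(1)
heavy = (rng.random((100, 100)) < 0.30).astype(float)  # ~30% cloudy pixels
light = (rng.random((100, 100)) < 0.03).astype(float)  # ~3% cloudy pixels

ratio = cloud_area_pixels(heavy) / cloud_area_pixels(light)
```

On these synthetic scenes the pixel-count ratio comes out near ten, i.e. about one order of magnitude, matching the qualitative contrast described above.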
The 'good' snowfall clouds were tagged when the Syowa Station observations recorded snowy weather and a cloud amount over eight (out of a maximum of ten). The new CNN is based on VGG16 (Simonyan and Zisserman, 2014), with concatenated layers added as an Inception module (three branches; first branch: 1x1 convolution; second branch: 1x1 convolution and 5x5 convolution; third branch: 1x1 convolution, 3x3 convolution, and 3x3 convolution). For visual explanations of the decisions of a large class of CNNs, we adopt Grad-CAM (Selvaraju et al. 2016), which uses the gradients of any target concept to produce a coarse localization map highlighting the image regions important for predicting that concept. Figure 3 shows a sample CNN result for the 'good' case, where the CNN focuses on the cloud area rather than on Antarctica. The CNN training is still being improved, but we expect to soon obtain a network that automatically identifies Antarctic clouds under snowfall conditions.
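The Grad-CAM computation described above can be sketched in a few lines: global-average-pool the gradients of the target class score over the spatial dimensions to get per-channel weights, take the weighted sum of the last convolutional layer's feature maps, and apply a ReLU. The array shapes and the synthetic inputs below are assumptions for illustration; a real application would take the activations and gradients from the trained network.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Coarse localization map a la Grad-CAM (Selvaraju et al. 2016).

    feature_maps: (H, W, K) activations of the last conv layer.
    gradients:    (H, W, K) gradients of the target class score
                  with respect to those activations.
    """
    # alpha_k: global-average-pool the gradients over the spatial axes
    weights = gradients.mean(axis=(0, 1))                  # shape (K,)
    # Weighted combination of feature maps; ReLU keeps only
    # features with a positive influence on the target class.
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)
    # Normalise to [0, 1] for overlay on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: a 7x7 spatial grid with 4 feature channels.
rng = np.random.default_rng(0)
activations = rng.random((7, 7, 4))
grads = rng.random((7, 7, 4))
heatmap = grad_cam(activations, grads)
```

Overlaying such a heatmap on the AVHRR image is what makes it possible to check, as in figure 3, whether the network attends to the cloud area rather than to the continent.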