A Novel Disaster Image Dataset and Characteristics Analysis Using an Attention Model

The advancement of deep learning has enabled systems that outperform earlier classification techniques. However, the success of any empirical system depends on the quality and diversity of the data available to train it. In this research, we have carefully assembled a relatively challenging dataset of images collected from various sources for three types of disasters: fire, water, and land. In addition, we have collected images of infrastructure damaged by natural or man-made calamities and of humans injured in wars or accidents. We have also gathered images for a non-damage class, containing images with no disaster or sign of damage in them. The dataset comprises 13,720 manually annotated images, each annotated by three individuals. For a set of 200 test images, we additionally provide discriminating class information annotated manually with bounding boxes. Images were collected from news portals, social media, and standard datasets made available by other researchers. A three-layer attention model (TLAM) is trained, achieving an average five-fold cross-validation accuracy of 95.88%; on the 200 unseen test images, the accuracy is 96.48%. We also generate and compare attention maps for these test images to determine the characteristics of the trained attention model.
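The abstract names a three-layer attention model (TLAM) but does not spell out its internals; the mechanism resembles the trainable soft attention of the "Learn To Pay Attention" approach, in which local feature vectors are scored against a global descriptor, normalised with a softmax, and used to pool a weighted representation whose weights form the visualised attention map. Below is a minimal, framework-free sketch of that pooling step; all function names, shapes, and the dot-product compatibility score are illustrative assumptions, not the authors' implementation:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(local_feats, global_feat):
    """Dot-product compatibility attention (sketch).

    local_feats: per-location feature vectors (e.g. cells of a conv map)
    global_feat: one global descriptor of the same dimensionality
    Returns (pooled_vector, attention_weights); the weights are what an
    attention map visualisation would show per spatial location.
    """
    # Compatibility score: dot product of each local feature with the
    # global descriptor (other scoring functions are possible).
    scores = [sum(l * g for l, g in zip(feat, global_feat))
              for feat in local_feats]
    weights = softmax(scores)
    # Attention-weighted sum of the local features.
    dim = len(global_feat)
    pooled = [sum(w * feat[d] for w, feat in zip(weights, local_feats))
              for d in range(dim)]
    return pooled, weights

# Toy example: 3 spatial locations with 2-dimensional features.
locals_ = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled, weights = attention_pool(locals_, [1.0, 1.0])
```

In this toy run the third location agrees most with the global descriptor (score 2 versus 1), so it receives the largest attention weight; in the paper's setting those weights, laid out over the image grid, are what would be compared against the manually annotated bounding boxes on the 200 test images.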
