The rapid evolution of the Internet of Things (IoT) and Big Data technologies has generated a large amount and variety of sensing content, including numeric measurements (e.g., timestamps, geolocations, or sensor logs) and multimedia (e.g., images, audio, and video). To better analyze and understand these heterogeneous types of IoT-generated content, data visualization is an essential component of exploratory data analysis, facilitating information perception and knowledge extraction. This study introduces a holistic approach to storing, processing, and visualizing IoT-generated content that supports context-aware spatiotemporal insight by combining deep learning techniques with a geographical map interface. Visualization is provided through an interactive web-based user interface that enables efficient visual exploration over both time and geolocation via a simple spatiotemporal query interface.
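The abstract describes a web interface that lets users explore IoT content by querying over both time and geolocation. The paper does not give implementation details, so the following is only a minimal sketch, assuming hypothetical names (IoTRecord, query_records, and the record fields), of the kind of time-window plus bounding-box filter such a spatiotemporal query interface might run on the server side.

```python
# Minimal sketch of a server-side spatiotemporal filter for IoT records.
# All names (IoTRecord, query_records, field names) are hypothetical and
# not taken from the paper; they only illustrate a time + bounding-box query.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List


@dataclass
class IoTRecord:
    timestamp: datetime   # when the content was sensed
    lat: float            # geolocation of the sensor
    lon: float
    content_uri: str      # pointer to the stored measurement or multimedia item


def query_records(records: Iterable[IoTRecord],
                  start: datetime, end: datetime,
                  min_lat: float, max_lat: float,
                  min_lon: float, max_lon: float) -> List[IoTRecord]:
    """Return records that fall inside the given time window and bounding box."""
    return [r for r in records
            if start <= r.timestamp <= end
            and min_lat <= r.lat <= max_lat
            and min_lon <= r.lon <= max_lon]
```

In a deployed system this kind of filter would more likely be pushed down to a spatiotemporal index or database rather than evaluated in application code; the sketch only illustrates the query semantics.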