Integrating deep learning with 360-degree video to analyze space user flow and behavior patterns

Modern cities are densely populated, with highly developed industrial and commercial areas. Influenced by urban space type, time of day, and industry category, pedestrians develop different patterns of space use. Understanding users' behavior, movement paths, directions, and aggregation patterns in urban space, and collecting the corresponding data, has long been an important topic for the development of future smart cities. This study proposes using a 360° camera to capture videos of human-space interaction in different spaces and integrating data pre-processing, image optimization, and deep-learning-based image recognition to analyze people's positions in space, basic attribute reading, walking paths and directions, and aggregation patterns. Through the collection and analysis of large amounts of 360° panoramic video data, we aim to develop an application system for real-time, semi-automatic recognition of user characteristics and behavior patterns in cognitive space.
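
The abstract outlines a pipeline from 360° video capture to deep-learning-based recognition of pedestrian positions and paths. The sketch below is not the authors' implementation; it is a minimal illustration, assuming equirectangular video frames and a generic pretrained person detector (torchvision's Faster R-CNN), of how per-frame pedestrian positions could be extracted for later flow and aggregation analysis. The video filename, sampling interval, and confidence threshold are illustrative assumptions.

```python
# Minimal sketch: detect pedestrians in 360° (equirectangular) video frames
# with a pretrained detector and log their positions for flow analysis.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("plaza_360.mp4")   # hypothetical equirectangular video
positions = []                            # (frame_idx, x_center, y_center) records
frame_idx = 0

with torch.no_grad():
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % 15 == 0:           # sample ~2 fps from a 30 fps source
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            pred = model([to_tensor(rgb)])[0]
            for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
                if label.item() == 1 and score.item() > 0.7:   # COCO class 1 = person
                    x1, y1, x2, y2 = box.tolist()
                    positions.append((frame_idx, (x1 + x2) / 2, (y1 + y2) / 2))
        frame_idx += 1

cap.release()
# 'positions' can then be linked across frames or clustered to study walking
# paths, movement direction, and aggregation patterns in the observed space.
```

In practice, the detected positions would feed the later analysis steps described above (path, direction, and aggregation-mode interpretation), for example by associating detections across frames into trajectories or accumulating them into occupancy heatmaps.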