Abstract Human action recognition in video analytics has been widely studied in recent years. Yet most methods assign a single action label to a video, either after analyzing the complete video or by running a classifier on every frame. Human vision, by contrast, needs only an instant of visual data to recognize a scene: a small group of frames, or even a single frame, is enough for precise recognition. In this paper, we present an approach that detects, localizes, and recognizes actions of interest in near real-time from frames drawn from a continuous video stream, such as the feed of a surveillance camera. The model samples an input frame at a specified period and assigns an action label from that single frame; combining these per-frame results over a time window yields the action label for the stream. We demonstrate that YOLO is an effective and comparatively fast method for action recognition and localization on the Liris Human Activities dataset.
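The following is a minimal sketch of the frame-sampling and label-fusion scheme the abstract describes. Here `detect_actions` is a hypothetical stand-in for single-frame YOLO inference, and the sampling period and voting-window size are illustrative assumptions, not values taken from the paper; only the OpenCV video-capture calls are real API.

```python
# Sketch: sample frames from a video stream at a fixed period, label each
# sampled frame independently, then fuse labels over a window by majority
# vote to obtain a stream-level action label.
from collections import Counter, deque

import cv2


def detect_actions(frame):
    """Placeholder for single-frame YOLO inference. A real implementation
    would return a list of (action_label, confidence, bbox) detections."""
    return []  # hypothetical: replace with actual model inference


SAMPLE_PERIOD = 5  # run the detector on every 5th frame (assumption)
WINDOW_SIZE = 10   # fuse labels over the last 10 sampled frames (assumption)

cap = cv2.VideoCapture("surveillance_stream.mp4")  # or a camera index / RTSP URL
recent_labels = deque(maxlen=WINDOW_SIZE)
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % SAMPLE_PERIOD == 0:
        detections = detect_actions(frame)
        if detections:
            # Keep the most confident per-frame label; bbox localizes the action.
            label, conf, bbox = max(detections, key=lambda d: d[1])
            recent_labels.append(label)
            # Majority vote over the window gives the stream-level label.
            stream_label, _ = Counter(recent_labels).most_common(1)[0]
            print(f"frame {frame_idx}: {label} -> window label: {stream_label}")
    frame_idx += 1

cap.release()
```

Because each sampled frame is labeled independently, the voting window lets the stream-level prediction stabilize while still reacting to action changes within roughly WINDOW_SIZE * SAMPLE_PERIOD frames.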