Intelligent Video Monitoring System with the Functionality of Online Recognition of People's Behavior and Interactions Between People

The intelligent video monitoring system SAVA has been implemented as a prototype at Technology Readiness Level 9. The data source is a set of video cameras located in public spaces that provide HD video streams. The aim of this study is to present an overview of the SAVA system, which enables real-time identification and classification of behaviors such as walking, running, sitting down, jumping, lying, getting up, bending, squatting, waving, and kicking. It can also identify interactions between persons, such as greeting, passing, hugging, pushing, and fighting. The system has a module-based architecture consisting of the following modules: acquisition, compression, path detection, path analysis, motion description, and action recognition. The output of these modules is a recognized behavior or interaction. The system achieves a classification accuracy of 80% with more than ten classes.
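The modular pipeline described above can be sketched as a chain of processing stages, where each module consumes the output of the previous one. The following is a minimal illustrative sketch only; all module names, data structures, and the toy classification rule are assumptions for exposition and do not reflect the actual SAVA implementation.

```python
# Hypothetical sketch of the SAVA-style modular pipeline:
# acquisition -> path detection -> motion description -> action recognition.
# (Compression and path analysis are omitted for brevity.)

ACTIONS = ["walking", "running", "sitting down", "jumping", "lying",
           "getting up", "bending", "squatting", "waving", "kicking"]

def acquire(camera_id):
    # Acquisition module: read frames from an HD camera stream (stubbed here).
    return [{"camera": camera_id, "frame": i} for i in range(3)]

def detect_paths(frames):
    # Path detection module: track moving objects across frames,
    # producing one motion path per tracked person (stubbed here).
    return [{"track_id": 0, "points": [(f["frame"], 0.0) for f in frames]}]

def describe_motion(path):
    # Motion description module: reduce a path to a feature vector
    # (here, trivially, the path length).
    return {"track_id": path["track_id"], "features": [len(path["points"])]}

def recognize(descriptor):
    # Action recognition module: map a feature vector to a class label
    # (a placeholder rule standing in for a trained classifier).
    return ACTIONS[descriptor["features"][0] % len(ACTIONS)]

def pipeline(camera_id):
    # Run the full chain for one camera and collect one label per track.
    frames = acquire(camera_id)
    return [recognize(describe_motion(p)) for p in detect_paths(frames)]

print(pipeline(1))
```

In a real system each stub would be replaced by the corresponding module (e.g. camera I/O, feature-point tracking, and a trained classifier), but the data flow between stages would follow the same shape.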
