The efficient processing of video streams is a key component of many emerging Internet of Things (IoT) and edge applications, such as Virtual and Augmented Reality (V/AR) and self-driving cars. These applications require real-time, high-throughput video processing, which can be attained via a collaborative processing model between the edge and the cloud, called an Edge-Cloud model. To this end, many approaches have been proposed to optimize the latency and bandwidth consumption of Edge-Cloud video processing, especially for Neural Network (NN)-based methods. In this demonstration, we investigate the efficiency of these NN techniques, how they can be combined, and whether combining them leads to better performance. Our demonstration invites participants to experiment with the various NN techniques, combine them, and observe how the underlying NN changes with different techniques and how these changes affect accuracy, latency, and bandwidth consumption.
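To make the Edge-Cloud model concrete, below is a minimal sketch, assuming PyTorch, of partitioning a neural network between an edge device and the cloud: the early layers run on the edge, and only the intermediate feature tensor is shipped to the cloud for the remaining layers. The toy model, the split point `SPLIT`, and the frame size are illustrative assumptions, not the system demonstrated in the paper.

```python
# Hypothetical Edge-Cloud partitioning sketch (not the authors' implementation).
import torch
import torch.nn as nn

# Toy CNN standing in for a real video-analytics model.
full_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edge
    nn.ReLU(),                                    # edge
    nn.MaxPool2d(2),                              # edge
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # cloud
    nn.ReLU(),                                    # cloud
    nn.AdaptiveAvgPool2d(1),                      # cloud
    nn.Flatten(),                                 # cloud
    nn.Linear(32, 10),                            # cloud
)

SPLIT = 3  # hypothetical partition point between edge and cloud layers
edge_part = full_model[:SPLIT]   # executed on the edge device
cloud_part = full_model[SPLIT:]  # executed in the cloud

frame = torch.randn(1, 3, 224, 224)  # one incoming video frame
with torch.no_grad():
    features = edge_part(frame)                          # edge-side inference
    payload = features.numel() * features.element_size() # bytes crossing the network
    print(f"intermediate tensor sent to cloud: {payload} bytes")
    result = cloud_part(features)                        # cloud-side inference
```

Moving the split point earlier reduces edge-side computation but typically enlarges the intermediate tensor that must cross the network; this is the kind of accuracy/latency/bandwidth trade-off the demonstration lets participants explore.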