A major thrust of mobile and pervasive computing is supporting vision-based perception applications, such as augmented reality, which help users understand the physical world through the camera(s) on their mobile devices. Such applications must provide a seamless experience and therefore require minimal end-to-end latency. However, they cannot be executed entirely on the device: the recognition algorithms they rely on, such as feature extraction and matching, demand intensive computation and access to "big data" (e.g., large labeled image datasets) to be fast and accurate, and neither resource is available locally. These applications therefore offload intensive tasks to the cloud: the device sends captured images to the cloud, which runs the recognition algorithms using its computational resources and data. Even so, the heavy computation and the added communication latency still prevent the seamless interaction these applications require. There is thus a need to accelerate the performance of vision-based mobile applications. One suggested approach is to place more compute resources at the edge. We propose to utilize these edge servers efficiently, complement them with mobile edge-clouds, and vertically integrate mobile, edge, and cloud through dynamic edge-caching to deliver low-latency vision-based perception applications.
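To make the proposed architecture concrete, the sketch below illustrates one plausible form of dynamic edge-caching under stated assumptions: an edge server keeps an LRU cache of recognition results keyed by an image signature and falls back to the cloud on a miss. This is a minimal illustration, not the paper's actual design; the names `EdgeCache`, `fingerprint`, and `cloud_recognize` are hypothetical, and a real system would use feature-based matching rather than an exact hash.

```python
# Minimal sketch of the dynamic edge-caching idea described above.
# All names here (EdgeCache, fingerprint, cloud_recognize) are
# hypothetical illustrations, not the proposal's implementation.
from collections import OrderedDict
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a real feature-based image signature
    (a deployed system would match features, not exact bytes)."""
    return hashlib.sha256(image_bytes).hexdigest()


def cloud_recognize(image_bytes: bytes) -> str:
    """Placeholder for the remote recognizer that has the heavy
    compute and the large labeled dataset."""
    return f"label-for-{fingerprint(image_bytes)[:8]}"


class EdgeCache:
    """LRU cache of recognition results held at an edge server."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self._store: "OrderedDict[str, str]" = OrderedDict()

    def recognize(self, image_bytes: bytes) -> str:
        key = fingerprint(image_bytes)
        if key in self._store:
            self._store.move_to_end(key)      # cache hit: answered at the edge
            return self._store[key]
        label = cloud_recognize(image_bytes)  # miss: fall back to the cloud
        self._store[key] = label
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict least recently used entry
        return label


# Usage: the first request pays the cloud round trip; repeats are served locally.
edge = EdgeCache(capacity=2)
print(edge.recognize(b"frame-1"))  # miss -> cloud
print(edge.recognize(b"frame-1"))  # hit  -> edge
```

The design point this sketch captures is that the edge tier only needs to absorb repeated or popular queries to cut end-to-end latency; anything it cannot answer is still resolved by the cloud, so correctness never depends on the cache.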