Light-weight adaptive task offloading from smartphones to nearby computational resources

Applications on smartphones are extremely popular because users can download and install them very easily from a service provider's application repository. Most applications are thoroughly tested and verified on a target smartphone platform; however, some applications can be very computationally intensive and overload the smartphone's resources. In this paper, we describe a method to predict the total processing time when offloading part of an application from smartphones to nearby servers. In our method, if an application developer can (1) define a basic model of the problem (e.g., f(x) = ax + b) and (2) implement an algorithm to update the model (e.g., the least squares method), the application quickly adjusts the parameters of the model and adaptively minimizes the difference between predicted and measured performance. This accurate prediction helps dynamically determine whether offloading a task is worthwhile and how much performance improvement to expect. Since the model's simplicity greatly reduces the time required to profile the application's performance at run-time, users can start using an application without pre-computing a performance profile. Our experiments show that our parameter-update protocol for the performance prediction functions works sufficiently well for a face detection problem. The protocol requires on average 7.8 trials to update the prediction parameters, and the prediction error stays below 10% for the remaining trials. By offloading the face detection task for a 1.2-Mbyte image to a nearby server, the total processing time improved from 19 seconds to 4 seconds. This research opens up the possibility of new applications in real-time smartphone data processing by harnessing nearby computational resources.
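As an illustration, the adaptive scheme described above — fit a linear model f(x) = ax + b to measured processing times and refit it by least squares as new measurements arrive — can be sketched as follows. The class name, the decision to refit over all collected samples, and the offloading decision rule are illustrative assumptions, not the authors' implementation.

```python
class LinearPredictor:
    """Illustrative sketch: predict processing time f(x) = a*x + b,
    where x is the input size, refitting a and b by ordinary least
    squares over all measurements collected so far."""

    def __init__(self):
        self.samples = []  # list of (input_size, measured_time) pairs

    def update(self, x, y):
        """Record one (input size, measured time) observation."""
        self.samples.append((x, y))

    def params(self):
        """Closed-form least-squares fit of a and b."""
        n = len(self.samples)
        if n < 2:
            return 0.0, 0.0  # not enough data to fit a line
        sx = sum(x for x, _ in self.samples)
        sy = sum(y for _, y in self.samples)
        sxx = sum(x * x for x, _ in self.samples)
        sxy = sum(x * y for x, y in self.samples)
        denom = n * sxx - sx * sx
        if denom == 0:
            return 0.0, sy / n  # all x identical: fall back to the mean
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
        return a, b

    def predict(self, x):
        a, b = self.params()
        return a * x + b


def should_offload(local, remote, input_size):
    """Hypothetical decision rule: offload when the predicted remote
    time (including transfer, folded into the remote model) is lower
    than the predicted local time."""
    return remote.predict(input_size) < local.predict(input_size)
```

In this sketch each device maintains its own predictor, so the offloading decision reduces to comparing two predicted times; a per-application model could instead fold network transfer time into a separate term.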