An efficient computation offloading architecture for Internet of Things (IoT) devices

The proliferation of connected Internet of Things (IoT) devices and of applications such as augmented reality has caused a paradigm shift in the computation requirements and power management of these devices. Furthermore, processing the enormous amounts of data generated by ubiquitous IoT devices while meeting the real-time deadline requirements of novel IoT applications exacerbates the challenges in IoT design. To address these challenges, in this paper, we propose a computation offloading architecture that processes the large volume of data generated by IoT devices while simultaneously meeting the real-time deadlines of IoT applications. In our proposed architecture, a resource-constrained IoT device offloads computation to a relatively resourceful computing device (e.g., a personal computer) on the same local network. Additionally, both the client and the server tune parameters such as operating frequency and the number of active cores to meet the application's real-time deadline requirements. We compare our proposed computation offloading architecture with contemporary computation offloading models that rely on cloud computing. Experimental results show that our proposed architecture provides an average performance improvement of 21.4% over cloud-based computation offloading schemes.
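To make the offloading exchange described above concrete, the following is a minimal sketch of a client and server communicating over the local network. It is an illustrative assumption, not the paper's implementation: the port, function names (`offload_task`, `serve_offload_requests`), and the JSON message format are hypothetical, and the frequency/core tuning step is reduced to a placeholder comment.

```python
# Sketch of local-network computation offloading (illustrative only).
import json
import socket

OFFLOAD_PORT = 50007  # hypothetical port for the offloading service


def offload_task(server_ip, payload, deadline_ms):
    """Client side: ship a task description to a resourceful peer on the LAN."""
    request = json.dumps({"deadline_ms": deadline_ms, "payload": payload}).encode()
    with socket.create_connection((server_ip, OFFLOAD_PORT),
                                  timeout=deadline_ms / 1000) as sock:
        sock.sendall(request + b"\n")
        response = sock.makefile().readline()
    return json.loads(response)


def serve_offload_requests(compute_fn):
    """Server side: accept tasks, (conceptually) tune operating point, compute, reply."""
    with socket.create_server(("", OFFLOAD_PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                request = json.loads(conn.makefile().readline())
                # Placeholder: a real server would select an operating frequency and
                # number of active cores sufficient to finish within
                # request["deadline_ms"], as the architecture describes.
                result = compute_fn(request["payload"])
                conn.sendall((json.dumps({"result": result}) + "\n").encode())
```

In this sketch, the deadline is carried in the request so both sides can reason about it; the paper's actual tuning policy for frequency and active cores is not reproduced here.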