Q-learning Supplemented Crowdsensing Framework for Resource Constrained Devices
In mobile crowdsensing, the central challenge is motivating smart devices to perform sensing tasks for diverse goal-oriented applications. This is typically realized through interaction between a task owner and smart devices via a platform (application interface) that influences their willingness to complete tasks, employing the incentive schemes and techniques described in the existing literature. However, handling the distinct energy constraints of participating devices, and assigning task loads in proportion to their capabilities, remains a critical issue that has largely been overlooked, all the more so in an unknown interaction environment. In this paper we address this issue by first computing, at a resourceful node (broker), an optimal task-load assignment that maximizes the utility of each participating resource-constrained node, and then modeling a distributed Q-learning framework for crowdsensing that improves the cumulative reward of participating nodes. Simulation results show that the proposed algorithm converges quickly and is efficient to deploy.