Achieving Coverage through Distributed Reinforcement Learning in Wireless Sensor Networks

With the extensive deployment of wireless sensor networks in many areas, better management of the coverage and energy consumption of such networks has become imperative. These networks consist of a large number of sensor nodes, so a multi-agent system approach is needed to model them more accurately. This paper evaluates three coordination algorithms: (i) fully distributed Q-learning, which we refer to as independent learner (IL), (ii) distributed value function (DVF), and (iii) an algorithm we developed as a variation of IL, the coordinated algorithm (COORD). The results show that the IL and DVF algorithms performed better at higher sensor node densities, whereas at low sensor node densities the three algorithms have similar performance.
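As a rough illustration of the independent learner (IL) variant, the sketch below shows a per-node Q-learning agent that acts and updates using only local observations. The action set, state representation, reward, and parameter values are placeholder assumptions for illustration, not the formulation used in this paper.

```python
import random
from collections import defaultdict

# Minimal independent-learner (IL) Q-learning sketch. Each sensor node keeps
# its own Q-table and updates it from purely local observations; the action
# set, reward, and parameters below are assumed for illustration only.

ACTIONS = ["sleep", "sense"]            # assumed per-node action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

class ILNode:
    def __init__(self):
        self.q = defaultdict(float)     # Q[(state, action)] -> estimated value

    def choose_action(self, state):
        # Epsilon-greedy selection over the node's local Q-table
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update using only local information
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + GAMMA * best_next
        self.q[(state, action)] += ALPHA * (td_target - self.q[(state, action)])
```

In this independent-learner setting, each node runs such an update in isolation; approaches like DVF additionally share value estimates with neighboring nodes to coordinate behavior.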