RL-MAC: A QoS-Aware Reinforcement Learning based MAC Protocol for Wireless Sensor Networks

This paper introduces RL-MAC, a novel adaptive medium access control (MAC) protocol for wireless sensor networks (WSNs) that employs a reinforcement learning framework. Existing schemes center on scheduling the nodes' sleep and active periods as a means of minimizing energy consumption. Recent protocols employ adaptive duty cycles to further optimize energy utilization [5][4]. In most cases, however, each node determines its duty cycle as a function of its own traffic load alone. In this paper, nodes actively infer the state of other nodes using a reinforcement learning based control mechanism, thereby achieving high throughput and low power consumption over a wide range of traffic conditions. Moreover, the computational complexity of the proposed scheme is moderate, rendering it suitable for practical deployments. Quality-of-service support can also be incorporated naturally within the proposed framework.
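The abstract does not give the exact state, action, or reward design of RL-MAC. As a rough illustration of the kind of per-node controller it describes, the following is a minimal sketch assuming a tabular Q-learning agent: the state is the node's quantized queue length, the action is the number of active slots to reserve in the next frame, and the reward trades packets served against an assumed per-slot energy penalty. All names (`DutyCycleAgent`, `reward`) and parameter values are hypothetical, not from the paper.

```python
import random

class DutyCycleAgent:
    """Hypothetical tabular Q-learning controller for a node's duty cycle."""

    def __init__(self, n_states=4, n_actions=4, alpha=0.1, gamma=0.9, eps=0.1):
        # Q-table: one row per quantized queue-length state,
        # one column per candidate active-slot count.
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning update.
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])


def reward(packets_served, active_slots, energy_cost=0.2):
    # Throughput benefit minus an assumed energy penalty per active slot.
    return packets_served - energy_cost * active_slots


# One frame of operation: a node with 3 queued packets picks a number of
# active slots, serves what it can, and updates its value estimate.
agent = DutyCycleAgent()
queue = 3
state = min(queue, 3)
action = agent.act(state)            # action index k => k+1 active slots
slots = action + 1
served = min(queue, slots)
queue -= served
agent.update(state, action, reward(served, slots), min(queue, 3))
```

In this sketch the agent learns, per traffic level, how many active slots pay for their energy cost; the paper's actual mechanism additionally infers the state of neighboring nodes, which a single-node table like this does not capture.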

[1] D. P. Bertsekas, Dynamic Programming and Optimal Control, Two Volume Set, 1995.

[2] P. J. M. Havinga et al., Energy-efficient TDMA medium access control protocol scheduling, 2000.

[3] W. Ye, J. Heidemann, and D. Estrin, An energy-efficient MAC protocol for wireless sensor networks, Proc. 21st Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM), 2002.

[4] T. van Dam and K. Langendoen, An adaptive energy-efficient MAC protocol for wireless sensor networks, Proc. ACM SenSys '03, 2003.

[5] W. Ye, J. Heidemann, and D. Estrin, Medium access control with coordinated adaptive sleeping for wireless sensor networks, IEEE/ACM Transactions on Networking, 2004.

[6] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 1998.

[7] S. Radhakrishnan et al., PMAC: an adaptive energy-efficient MAC protocol for wireless sensor networks, Proc. 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2005.