Near-optimal reinforcement learning framework for energy-aware sensor communications

We consider the problem of maximizing average throughput per unit of total consumed energy in packetized sensor communications. Our study yields a near-optimal transmission strategy that selects the modulation level and transmit power while adapting to the incoming traffic rate, the buffer condition, and the channel condition. We investigate both point-to-point and multinode communication scenarios. Many previously proposed solutions require knowledge of the state transition probabilities, which may be hard to obtain in practice. We are therefore motivated to propose and apply a class of learning algorithms, known as reinforcement learning (RL), to obtain a near-optimal policy in point-to-point communication and a good transmission strategy in the multinode scenario. For comparison purposes, we develop stochastic models to obtain the optimal strategy in point-to-point communication and show that the learned policy is close to the optimal one. We further extend the algorithm to the multinode scenario through independent learning at each node. We compare the learned policy against a simple baseline policy in which the agent chooses the highest feasible modulation level and selects the transmit power that achieves a predefined signal-to-interference ratio (SIR) for that modulation. The proposed learning algorithm achieves more than twice the throughput per unit energy of the simple policy, particularly in the high packet-arrival regime. Besides its good performance, the RL algorithm provides a simple, systematic, self-organizing, and distributed way to determine the transmission strategy.
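To make the setup concrete, the following is a minimal Q-learning sketch of the kind of learner the abstract describes, not the authors' exact formulation. The agent's state is a (buffer occupancy, channel quality) pair, its action is a (modulation level, transmit power) pair, and the reward approximates throughput per unit energy. All discretizations, the reward shaping, and the hyperparameters below are illustrative assumptions.

```python
import random
from collections import defaultdict

# --- Hypothetical discretizations (assumptions, not from the paper) ---
BUFFER_LEVELS = range(0, 11)          # packets queued: 0..10
CHANNEL_STATES = range(4)             # finite-state Markov channel quality levels
MODULATIONS = [1, 2, 4, 6]            # bits/symbol (e.g., BPSK up to 64-QAM)
POWERS = [1.0, 2.0, 5.0, 10.0]        # transmit power levels (mW)
ACTIONS = [(m, p) for m in MODULATIONS for p in POWERS]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed learning hyperparameters

Q = defaultdict(float)  # Q[(state, action)] -> value estimate, default 0.0

def choose_action(state):
    """Epsilon-greedy selection over (modulation, power) pairs."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward_fn(bits_delivered, power, slot_time=1e-3):
    """Throughput-per-energy style reward (illustrative metric)."""
    energy = power * slot_time
    return bits_delivered / energy if energy > 0 else 0.0

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update; needs no transition probabilities."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example of one learning step (with made-up transition values):
s = (3, 1)                                            # 3 packets buffered, channel state 1
a = choose_action(s)
r = reward_fn(bits_delivered=a[0] * 100, power=a[1])  # assume 100 symbols/slot
update(s, a, r, (2, 2))                               # next buffer/channel state observed
```

Because the update rule uses only observed transitions, no state transition probabilities are required, which is the practical advantage the abstract emphasizes. In the multinode scenario, each node would run such a learner independently on its local buffer and channel observations.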
