Energy efficient multiple target tracking in sensor networks

Classical tracking methods are not concerned with energy efficiency and require precise localisation. Our previous work addressed these limitations with HMTT (hierarchical Markov decision process for target tracking), which tracks a single target at location granularity. HMTT conserves energy by reducing the rate of sensing while preserving acceptable tracking accuracy through trajectory prediction. In this paper, HMTT is extended to the multiple-target case, where the state of a cluster can be affected by several incoming targets and multiple updates are required at the lower level. The theoretical performance of HMTT with multiple targets is derived, and simulations demonstrate its effectiveness against two other predictive tracking algorithms, with up to 200% improvement.
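
To make the prediction-driven, reduced-rate sensing idea concrete, the sketch below (Python, not taken from the paper; the linear-extrapolation predictor, the error tolerance, and all identifiers are illustrative assumptions) shows how a cluster might lengthen its sensing interval while the predicted trajectory remains accurate and shorten it again when the prediction error grows.

```python
# Minimal sketch, assuming a linear-motion predictor and a fixed error
# tolerance; this is not the authors' HMTT implementation.
from dataclasses import dataclass


@dataclass
class Observation:
    t: float  # timestamp (s)
    x: float  # position (m)
    y: float  # position (m)


def predict(prev: Observation, last: Observation, t: float) -> tuple[float, float]:
    """Linearly extrapolate the target position to time t from two observations."""
    dt = last.t - prev.t
    vx = (last.x - prev.x) / dt
    vy = (last.y - prev.y) / dt
    return last.x + vx * (t - last.t), last.y + vy * (t - last.t)


def next_sensing_interval(error: float, tol: float, interval: float,
                          min_i: float = 0.5, max_i: float = 8.0) -> float:
    """Sense less often while the prediction error is small (to save energy),
    and more often once the error approaches the tolerance."""
    if error < 0.5 * tol:
        return min(interval * 2.0, max_i)   # prediction is good: back off
    if error > tol:
        return max(interval * 0.5, min_i)   # prediction is poor: track closely
    return interval


if __name__ == "__main__":
    # Demo: target moving at 1 m/s along x; two initial observations.
    prev, last = Observation(0.0, 0.0, 0.0), Observation(1.0, 1.0, 0.0)
    interval, t = 1.0, 1.0
    for _ in range(6):
        t += interval
        px, py = predict(prev, last, t)
        true_x, true_y = t * 1.0, 0.0        # ground truth for the demo only
        err = ((px - true_x) ** 2 + (py - true_y) ** 2) ** 0.5
        prev, last = last, Observation(t, true_x, true_y)
        interval = next_sensing_interval(err, tol=0.5, interval=interval)
        print(f"t={t:4.1f}s  predicted=({px:.1f},{py:.1f})  "
              f"err={err:.2f}m  next interval={interval:.1f}s")
```

In this toy run the target follows the predicted straight line, so the controller doubles the sensing interval up to its cap; any manoeuvre that raised the prediction error would drive the interval back down, which is the energy-accuracy trade-off the abstract describes.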
