COORDINATED SENSING OF NETWORKED BODY SENSORS USING MARKOV DECISION PROCESSES

This article describes a Markov decision process (MDP) framework for coordinated sensing among correlated sensors in a body-area network. The technique is designed to extend the operating lifetime of mobile continuous health-monitoring systems built from energy-constrained wearable sensors: the distributed sensors adapt their sampling rates in response to the changing criticality of the detected data and the limited energy reserve at each sensor node. The relationship between energy consumption, sampling rates, and the utility of coordinated measurements is formulated as an MDP, which is solved to generate a globally optimal policy specifying the sampling rate of each sensor in every possible state of the system. This policy is computed offline, before deployment, and only the resulting policy is stored within each sensor node. We also present a method of executing the global policy without requiring continuous communication between the sensors: each sensor node maintains a local estimate of the global state, and communication occurs only when an information-theoretic measure of the uncertainty in these local estimates exceeds a predefined threshold. We present results on simulated data that demonstrate the efficacy of this distributed-control framework and compare the performance of the proposed controller with that of alternative policies.
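
The offline policy computation is standard dynamic programming. Below is a minimal sketch of that stage in Python, assuming a toy joint state (data-criticality level x quantized battery reserve), a small set of candidate sampling rates, and hypothetical reward and transition models; the names and numbers here (CRITICALITY, BATTERY, RATES, the reward() trade-off, the battery-drain rule in transitions()) are illustrative stand-ins, not the formulation used in the paper.

```python
# A minimal sketch of the offline policy computation: value iteration over a toy
# coordinated-sensing MDP. All models below are illustrative assumptions.
import itertools
import numpy as np

CRITICALITY = [0, 1, 2]      # detected-data criticality: low / medium / high
BATTERY     = [0, 1, 2, 3]   # quantized energy reserve at the node
RATES       = [1, 5, 10]     # candidate sampling rates (Hz); hypothetical values
GAMMA       = 0.95           # discount factor

STATES = list(itertools.product(CRITICALITY, BATTERY))

def reward(state, rate):
    """Trade off measurement utility against energy cost (assumed functional forms)."""
    crit, batt = state
    if batt == 0:
        return -1.0                      # a dead node produces no useful data
    utility = crit * np.log1p(rate)      # high rates help most when data is critical
    return utility - 0.1 * rate          # per-step energy cost of sampling

def transitions(state, rate):
    """Hypothetical dynamics: fast sampling drains the battery; criticality drifts."""
    crit, batt = state
    next_batt = max(batt - (1 if rate >= 10 else 0), 0)
    return [((max(crit - 1, 0), next_batt), 0.3),
            ((crit, next_batt), 0.4),
            ((min(crit + 1, 2), next_batt), 0.3)]

def value_iteration(tol=1e-6):
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(reward(s, a) + GAMMA * sum(p * V[s2] for s2, p in transitions(s, a))
                       for a in RATES)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Extract the greedy policy: a lookup table small enough to store on each node.
    return {s: max(RATES, key=lambda a: reward(s, a)
                   + GAMMA * sum(p * V[s2] for s2, p in transitions(s, a)))
            for s in STATES}

policy = value_iteration()
print(policy[(2, 3)])   # chosen rate when criticality is high and the battery is full
```

Because this loop runs entirely before deployment, only the final policy lookup table (12 entries in this toy example) must fit in a node's memory, which is what makes the approach viable on constrained wearable hardware.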
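
The runtime coordination rule can be sketched in the same spirit. The fragment below, again a hedged illustration rather than the paper's algorithm, maintains a belief distribution over a peer sensor's state, advances it with the shared dynamics each step, and triggers a radio exchange only when the belief's Shannon entropy crosses a threshold; ENTROPY_THRESHOLD, the transition matrix T, and the observe_peer callback are all hypothetical.

```python
# A minimal sketch of the entropy-triggered communication rule. The threshold,
# transition matrix, and observe_peer callback are illustrative assumptions.
import numpy as np

ENTROPY_THRESHOLD = 1.2   # bits; a hypothetical tuning parameter

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief distribution."""
    p = belief[belief > 0]
    return float(-(p * np.log2(p)).sum())

def propagate(belief, T):
    """One-step prediction of the peer's state via shared dynamics T[i, j] = P(j | i)."""
    return T.T @ belief

def step(belief, T, observe_peer):
    """Advance the local estimate; communicate only when it has become too uncertain."""
    belief = propagate(belief, T)
    if entropy(belief) > ENTROPY_THRESHOLD:
        true_state = observe_peer()       # radio exchange: costly, hence rare
        belief = np.zeros_like(belief)
        belief[true_state] = 1.0          # uncertainty collapses after a sync
    return belief

# Example: a three-state peer model whose state drifts each step.
T = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
belief = np.array([1.0, 0.0, 0.0])        # peer state known exactly after the last sync
for _ in range(5):
    belief = step(belief, T, observe_peer=lambda: 1)
```

With this drifting three-state model, the belief's entropy is about 0.88 bits after one silent step and about 1.25 bits after two, so the node synchronizes on the second step and its estimate collapses back to certainty; raising the threshold lengthens the silent intervals at the cost of staler state estimates.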
