Design and Implementation of a Simulation System Based on Deep Q-Network for Mobile Actor Node Control in Wireless Sensor and Actor Networks

A Wireless Sensor and Actor Network (WSAN) is a group of wireless devices that can sense physical events (sensors) and/or perform relatively complex actions (actors) based on the sensed data shared by the sensors. This paper presents the design and implementation of a simulation system based on Deep Q-Network (DQN) for mobile actor node control in WSANs. DQN is a deep neural network architecture used to estimate the Q-values of the Q-learning method. In this work, we implement the proposed simulation system in the Rust programming language. We describe the design and implementation of the simulation system and present simulation results to evaluate its performance.
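
To illustrate the underlying idea, the following Rust sketch trains a single-hidden-layer Q-network online with the Q-learning TD target for a hypothetical mobile actor that moves on a small grid toward a sensed event. It is a minimal sketch under assumed settings, not the paper's implementation: the QNet structure, grid size, reward values, and learning parameters are illustrative choices, and the experience replay and target network of a full DQN are omitted for brevity.

```rust
// Minimal illustrative sketch (not the paper's implementation): a single-hidden-layer
// Q-network trained online with the Q-learning TD target. A hypothetical mobile actor
// moves on a small grid toward a sensed event; full DQN additionally uses experience
// replay and a target network, which are omitted here for brevity.

const IN: usize = 4;   // input: actor (x, y) and event (x, y), normalized
const HID: usize = 16; // hidden ReLU units
const ACT: usize = 4;  // actions: up, down, left, right
const GRID: i32 = 8;   // side length of the grid field

// Tiny xorshift RNG so the example needs no external crates.
struct Rng(u64);
impl Rng {
    fn uniform(&mut self) -> f64 {
        self.0 ^= self.0 << 13; self.0 ^= self.0 >> 7; self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
    fn below(&mut self, n: usize) -> usize { (self.uniform() * n as f64) as usize % n }
}

struct QNet { w1: [[f64; IN]; HID], b1: [f64; HID], w2: [[f64; HID]; ACT], b2: [f64; ACT] }

impl QNet {
    fn new(rng: &mut Rng) -> Self {
        let mut n = QNet { w1: [[0.0; IN]; HID], b1: [0.0; HID],
                           w2: [[0.0; HID]; ACT], b2: [0.0; ACT] };
        for row in n.w1.iter_mut() { for w in row.iter_mut() { *w = rng.uniform() - 0.5; } }
        for row in n.w2.iter_mut() { for w in row.iter_mut() { *w = rng.uniform() - 0.5; } }
        n
    }
    // Forward pass: ReLU hidden layer, linear Q-value outputs.
    fn forward(&self, s: &[f64; IN]) -> ([f64; HID], [f64; ACT]) {
        let mut h = [0.0; HID];
        for j in 0..HID {
            h[j] = (self.b1[j] + (0..IN).map(|i| self.w1[j][i] * s[i]).sum::<f64>()).max(0.0);
        }
        let mut q = [0.0; ACT];
        for a in 0..ACT {
            q[a] = self.b2[a] + (0..HID).map(|j| self.w2[a][j] * h[j]).sum::<f64>();
        }
        (h, q)
    }
    // One SGD step on the squared TD error of the chosen action.
    fn update(&mut self, s: &[f64; IN], a: usize, target: f64, lr: f64) {
        let (h, q) = self.forward(s);
        let dq = q[a] - target; // gradient of 0.5 * (Q(s,a) - target)^2 w.r.t. Q(s,a)
        for j in 0..HID {
            let dh = dq * self.w2[a][j];
            self.w2[a][j] -= lr * dq * h[j];
            if h[j] > 0.0 { // ReLU passes gradient only for active units
                for i in 0..IN { self.w1[j][i] -= lr * dh * s[i]; }
                self.b1[j] -= lr * dh;
            }
        }
        self.b2[a] -= lr * dq;
    }
}

fn state(ax: i32, ay: i32, ex: i32, ey: i32) -> [f64; IN] {
    [ax as f64, ay as f64, ex as f64, ey as f64].map(|v| v / GRID as f64)
}

fn main() {
    let mut rng = Rng(0x2545_f491_4f6c_dd1d);
    let mut net = QNet::new(&mut rng);
    let (gamma, lr) = (0.95, 0.01);
    for episode in 0..2000 {
        let (mut ax, mut ay) = (rng.below(GRID as usize) as i32, rng.below(GRID as usize) as i32);
        let (ex, ey) = (rng.below(GRID as usize) as i32, rng.below(GRID as usize) as i32);
        let eps = 1.0 / (1.0 + episode as f64 / 200.0); // decaying exploration rate
        for _ in 0..4 * GRID {
            let s = state(ax, ay, ex, ey);
            let (_, q) = net.forward(&s);
            // Epsilon-greedy action selection over the estimated Q-values.
            let a = if rng.uniform() < eps { rng.below(ACT) }
                    else { (0..ACT).fold(0, |b, i| if q[i] > q[b] { i } else { b }) };
            let (dx, dy) = [(0, 1), (0, -1), (-1, 0), (1, 0)][a];
            ax = (ax + dx).clamp(0, GRID - 1);
            ay = (ay + dy).clamp(0, GRID - 1);
            let done = ax == ex && ay == ey;
            let reward = if done { 1.0 } else { -0.01 };
            // TD target: r + gamma * max_a' Q(s', a') for non-terminal transitions.
            let target = if done { reward } else {
                let (_, qn) = net.forward(&state(ax, ay, ex, ey));
                reward + gamma * qn.iter().cloned().fold(f64::MIN, f64::max)
            };
            net.update(&s, a, target, lr);
            if done { break; }
        }
    }
    println!("training finished");
}
```

A full DQN would replace this online update with minibatch updates sampled from a replay memory and compute the TD target with a separate, periodically synchronized target network.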
