Performance Evaluation of a Deep Q-Network Based Simulation System for Actor Node Mobility Control in Wireless Sensor and Actor Networks Considering Three-Dimensional Environment

A Wireless Sensor and Actor Network (WSAN) is a group of wireless devices with the ability to sense physical events (sensors) and/or to perform relatively complicated actions (actors) based on the sensed data shared by the sensors. This paper presents the design and implementation of a simulation system based on Deep Q-Network (DQN) for actor node mobility control in WSANs. DQN is a deep neural network architecture used to estimate the Q-values of the Q-learning method. We implemented the proposed simulation system in the Rust programming language and evaluated its performance for a normal distribution of events in a three-dimensional environment. For this scenario, the simulation results show that in the best episode all actor nodes are connected, but one event remains uncovered.
