This paper presents a methodology for generating task flows to conduct a surveillance mission using multiple UAVs, where the goal is to persistently keep the uncertainty level of the surveillance regions as low as possible. The mission planning problem is formulated as a Markov decision process (MDP), an infinite-horizon discrete stochastic optimal control formulation that often yields periodic task flows suitable for persistent implementation. The method specifically focuses on reducing the size of the decision space, without losing key features of the problem, in order to mitigate the curse of dimensionality of the MDP; integrating a task allocator to identify admissible actions is demonstrated to effectively reduce the decision space. Numerical simulations verify the applicability of the proposed decision scheme.
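The core idea of pruning the MDP decision space through a task allocator can be illustrated with a minimal sketch. All names, dynamics, and parameters below are assumptions for illustration only, not the paper's actual model: uncertainty levels are discretized per region, visiting a region resets its uncertainty, the dynamics are made deterministic for brevity, and a simple greedy allocator restricts each state's action set to assignments covering the most uncertain regions.

```python
import itertools

# Hypothetical toy model (assumed, not from the paper): a state is a tuple of
# per-region uncertainty levels; each UAV visits one region per step, resetting
# its uncertainty to 0, while unvisited regions' uncertainty grows by 1 (capped).
N_REGIONS = 3
N_UAVS = 2
MAX_U = 3       # uncertainty cap per region
GAMMA = 0.9     # discount factor

def all_states():
    return list(itertools.product(range(MAX_U + 1), repeat=N_REGIONS))

def full_action_space():
    # Unrestricted decision space: every assignment of each UAV to a region.
    return list(itertools.product(range(N_REGIONS), repeat=N_UAVS))

def admissible_actions(state):
    # Task allocator: keep only assignments that cover the N_UAVS most
    # uncertain regions, pruning redundant/wasteful assignments.
    ranked = sorted(range(N_REGIONS), key=lambda r: -state[r])
    targets = ranked[:N_UAVS]
    return [tuple(p) for p in itertools.permutations(targets, N_UAVS)]

def step(state, action):
    # Deterministic surrogate for the (stochastic) uncertainty dynamics.
    visited = set(action)
    return tuple(0 if r in visited else min(state[r] + 1, MAX_U)
                 for r in range(N_REGIONS))

def cost(state):
    # Stage cost: total uncertainty across regions, to be kept low.
    return sum(state)

def value_iteration(actions_fn, iters=100):
    V = {s: 0.0 for s in all_states()}
    for _ in range(iters):
        V = {s: cost(s) + GAMMA * min(V[step(s, a)] for a in actions_fn(s))
             for s in V}
    return V

V_full = value_iteration(lambda s: full_action_space())   # 9 actions per state
V_pruned = value_iteration(admissible_actions)            # 2 actions per state
```

In this toy instance the allocator shrinks the per-state decision space from 9 actions to 2 while the resulting value function matches the unrestricted one, mirroring the paper's claim that admissible-action identification reduces dimensionality without losing the key features of the problem.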