UAV Search: Maximizing Target Acquisition

In situations where a human operator cannot exercise tactical control of an unmanned aerial vehicle (UAV), the UAV itself may need to make or suggest tactical decisions. For the UAV's computers to interact effectively with human decision makers, the choices presented must be easy to interpret and intuitive to implement or approve. This paper provides closed-form solutions that maximize the probability that a UAV detects a slow-moving ground target. These solutions yield the altitude at which the UAV should fly to maximize the probability of detection, which in turn informs the operator whether a single vehicle is sufficient. We assume the UAV can travel faster than the ground target, whose speed is bounded but whose direction is unknown. The target is detected when it lies inside a field of view that is a function of the UAV's state, so the motion controller directly affects whether detection occurs. We also outline avenues for future work on multi-UAV search and alternative sensor-accuracy models.
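The core tradeoff behind an altitude recommendation can be illustrated with a small sketch: flying higher enlarges the sensor footprint on the ground but degrades sensor accuracy, so detection probability is maximized at an intermediate altitude. The footprint model (a downward-looking conical sensor), the saturating coverage term, the exponential accuracy falloff, and the constant `glimpse_quality` are all illustrative assumptions, not the paper's closed-form solution.

```python
import math

def footprint_radius(h, half_angle_rad):
    """Ground radius of a downward-looking conical sensor at altitude h."""
    return h * math.tan(half_angle_rad)

def detection_probability(h, half_angle_rad, glimpse_quality=200.0):
    """Toy detection model: coverage grows with footprint area, while
    per-glimpse sensor accuracy decays with altitude. Both functional
    forms and the glimpse_quality constant are assumptions made for
    illustration only."""
    r = footprint_radius(h, half_angle_rad)
    coverage = r * r / (r * r + 1.0)          # saturates toward 1 as the footprint grows
    accuracy = math.exp(-h / glimpse_quality)  # resolution loss at altitude
    return coverage * accuracy

def best_altitude(half_angle_rad, h_min=10.0, h_max=500.0, step=1.0):
    """Grid search for the altitude maximizing the toy detection probability
    (a closed-form solution, as in the paper, would replace this search)."""
    n = int((h_max - h_min) / step) + 1
    candidates = [h_min + i * step for i in range(n)]
    return max(candidates, key=lambda h: detection_probability(h, half_angle_rad))
```

Because coverage rises and accuracy falls monotonically in `h`, their product peaks at an interior altitude, which is the single number an operator would be asked to approve.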
