Emission management for low probability intercept sensors in network centric warfare

Sensor platforms equipped with active sensing equipment, such as radars, may betray their existence by emitting energy that can be intercepted by enemy surveillance sensors, thereby increasing the vulnerability of the whole combat system. Meeting the important tactical requirement of low probability of intercept (LPI) requires dynamically controlling each platform's emissions. In this paper we propose computationally efficient dynamic emission control and management algorithms for multiple networked heterogeneous platforms. By formulating the problem as a partially observed Markov decision process (POMDP) with a multi-armed bandit structure, near-optimal sensor management algorithms are developed that control active sensor emissions so as to minimize the threat posed to all the platforms. Numerical examples illustrate these control/management algorithms.
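
The following is a minimal, self-contained sketch of the kind of index-based emission scheduling the abstract describes: each platform maintains a belief (information state) over a hidden threat state via a standard HMM filter, and at each decision epoch the platform whose emission is "cheapest" according to a scalar index is allowed to use its active sensor. The transition matrix `P`, observation likelihoods `B`, the `threat_cost` vector, and the `emission_index` proxy are all illustrative assumptions; the paper's near-optimal policies are derived from the POMDP/bandit formulation itself, not from this crude discounted-cost index.

```python
import numpy as np

# Illustrative sketch only: a greedy index-style scheduler in the spirit of the
# multi-armed bandit formulation described in the abstract. The belief update is
# a standard HMM filter; the index below is a simple discounted-cost proxy, not
# the exact index policy developed in the paper.

def hmm_belief_update(belief, P, B, obs):
    """One-step Bayesian belief update for a hidden threat-state model.

    belief : current probability vector over hidden threat states
    P      : state transition matrix (rows sum to 1)
    B      : observation likelihood matrix, B[state, obs]
    obs    : index of the received observation
    """
    predicted = belief @ P                    # Chapman-Kolmogorov prediction
    unnormalized = predicted * B[:, obs]      # Bayes correction
    return unnormalized / unnormalized.sum()

def emission_index(belief, threat_cost, horizon=5, discount=0.9):
    """Crude index: discounted expected threat cost of emitting under this belief."""
    expected_cost = belief @ threat_cost
    return sum(discount ** k * expected_cost for k in range(horizon))

def schedule_emissions(beliefs, threat_cost, n_active=1):
    """Allow the n_active platforms with the smallest indices to emit this epoch."""
    indices = [emission_index(b, threat_cost) for b in beliefs]
    order = np.argsort(indices)
    return set(int(i) for i in order[:n_active])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = np.array([[0.9, 0.1], [0.2, 0.8]])    # assumed threat-state dynamics
    B = np.array([[0.8, 0.2], [0.3, 0.7]])    # assumed observation likelihoods
    threat_cost = np.array([0.1, 1.0])        # cost of emitting in each hidden state
    beliefs = [np.array([0.5, 0.5]) for _ in range(3)]   # three networked platforms

    for t in range(4):
        active = schedule_emissions(beliefs, threat_cost, n_active=1)
        obs = rng.integers(0, 2, size=len(beliefs))       # simulated intercept reports
        beliefs = [hmm_belief_update(b, P, B, o) for b, o in zip(beliefs, obs)]
        print(f"epoch {t}: emitting platforms {sorted(active)}")
```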
