Value-based control of the observation-decision process

Sensing in support of combat teams may be performed both by human elements and by UAVs. With the increasing pace of modern combat operations, and the growing networked management of those operations, the problem of how to task sensor assets is entering a realm where automated decision-support tools should be applied. The objective by which possible future sensor tasks should be compared is the value they bring to the combat operation, adjusted for the costs of the sensing operations themselves. A mathematical theory has recently been obtained that allows determination of the minimax value as a function of the information state, where this information state is described by a probability distribution. Possible future sensing tasks are mapped into (stochastic) observation outcomes, and these are further mapped into potential a posteriori probability distributions. The value to the combat operation is the expectation of the minimax value over these potential future probability distributions. This expectation is used as an objective function, according to which optimal sensing-platform tasking is computed. Both open-loop and observation-feedback sensing-platform tasking controllers are developed. A simple example is studied.
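
The following is a minimal sketch of the tasking objective described above, assuming a discrete information state, illustrative observation-likelihood matrices, and a placeholder minimax value function (the true value function comes from the game-theoretic theory referenced in the abstract; here a concentration measure stands in for it). Each candidate task is scored by the expectation of the value over its possible posterior distributions, net of sensing cost. All names (minimax_value, expected_task_value, best_task) and numerical values are hypothetical.

import numpy as np

# Placeholder for the minimax value V(p) of an information state p.
# Assumption: stands in for the paper's value function; here it simply
# rewards concentrated (low-entropy) distributions.
def minimax_value(p):
    p = np.clip(p, 1e-12, 1.0)
    return 1.0 + np.sum(p * np.log(p)) / np.log(len(p))  # 1 - normalized entropy

def posterior(p, likelihood_col):
    # Bayes update of p given the likelihood column P(y | x) for outcome y.
    unnorm = likelihood_col * p
    return unnorm / unnorm.sum()

def expected_task_value(p, likelihood, cost):
    # One-step value of a sensing task: expectation of the minimax value
    # over the stochastic observation outcomes, minus the sensing cost.
    # likelihood[x, y] = P(observe y | true state x, this task).
    value = 0.0
    for y in range(likelihood.shape[1]):
        p_y = float(likelihood[:, y] @ p)   # predictive probability of outcome y
        if p_y > 0.0:
            value += p_y * minimax_value(posterior(p, likelihood[:, y]))
    return value - cost

def best_task(p, tasks):
    # Open-loop (one-step) tasking: pick the task with the largest
    # expected posterior minimax value, net of cost.
    scores = {name: expected_task_value(p, L, c) for name, (L, c) in tasks.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    # Two-hypothesis information state: adversary at location A or B.
    p0 = np.array([0.5, 0.5])
    # Candidate sensing tasks (likelihoods and costs are illustrative only).
    tasks = {
        "uav_over_A": (np.array([[0.9, 0.1],
                                 [0.4, 0.6]]), 0.05),
        "uav_over_B": (np.array([[0.6, 0.4],
                                 [0.1, 0.9]]), 0.05),
    }
    choice, scores = best_task(p0, tasks)
    print("scores:", scores, "-> task:", choice)

An observation-feedback controller of the kind mentioned above would, in this sketch, re-run best_task on the actual posterior obtained after each realized observation rather than committing to a task sequence in advance.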