Automated scheduling of radar-cued camera system for optimizing visual inspection and detection of radar targets

A wide field of view and rapid response to threats are critical components of any surveillance system. Wide coverage is normally achieved by articulating a camera so it can pan and tilt, and by actively zooming in on “interesting” locations. Since a single camera suffers from the “soda straw” problem, in which only a small portion of the scene can be examined at any given time (leaving the rest unwatched), surveillance systems often employ a radar unit to direct the operator to likely targets. Radar cueing gives direction to the search, but it still poses a security risk: potentially hazardous activities may occur in an unwatched portion of the field of view while the operator is investigating another incident (which can be either coincidentally or intentionally distracting). Today's systems all rely on a human operator to control the slewing of the camera to inspect the potential targets found by the radar. Automated schedulers have thus far been avoided, since it has always been assumed that the human would outperform the algorithm. This paper describes a method for automatic scheduling and control of a single- or multi-camera radar-cued surveillance system that optimizes visual coverage and inspection of radar-detected targets. The scheduling algorithm combines track life, track spatial density, and camera slew angle and speed into a single metric that determines the camera's next slew and zoom, maximizing the visual detection of all radar hits over a given period of time; it can run in real time on a laptop or embedded hardware. The goal of this work is to enable the operator to visually inspect as many radar hits as possible over the course of a shift.
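The abstract does not specify the exact form of the scheduling metric, but the idea of combining track life, track spatial density, and slew cost into one score can be sketched as a weighted sum feeding a greedy "pick the best next target" loop. Everything below (the weighted-sum form, the weights, the field names, the 30 deg/s slew rate) is an illustrative assumption, not the paper's actual formula:

```python
def slew_time(cam_az, tgt_az, rate_deg_s=30.0):
    # Time to rotate the camera to the target azimuth along the
    # shortest arc (rate_deg_s is an assumed slew speed).
    delta = abs((tgt_az - cam_az + 180.0) % 360.0 - 180.0)
    return delta / rate_deg_s

def density(track, tracks, radius_deg=10.0):
    # Count of other tracks within an angular neighborhood:
    # slewing there lets one look inspect several radar hits.
    return sum(
        1 for t in tracks
        if t is not track
        and abs((t["az"] - track["az"] + 180.0) % 360.0 - 180.0) <= radius_deg
    )

def score(track, cam_az, tracks, w_life=1.0, w_dens=0.5, w_slew=0.2):
    # Higher score = inspect sooner. A short remaining track life
    # raises urgency; nearby tracks raise the payoff of the slew;
    # a long slew lowers it. Weights here are arbitrary placeholders.
    urgency = w_life / max(track["life_s"], 0.1)
    return (urgency
            + w_dens * density(track, tracks)
            - w_slew * slew_time(cam_az, track["az"]))

def next_target(cam_az, tracks):
    # Greedy choice of the next radar track to slew the camera to.
    return max(tracks, key=lambda t: score(t, cam_az, tracks))

# Hypothetical radar tracks: azimuth (deg) and expected remaining life (s).
tracks = [
    {"az": 10.0, "life_s": 30.0},
    {"az": 12.0, "life_s": 25.0},
    {"az": 170.0, "life_s": 2.0},
]
best = next_target(0.0, tracks)
```

With these placeholder weights, the dense pair of long-lived tracks near 10° outscores the isolated short-lived track at 170°, illustrating how density and slew cost can trade off against urgency; a real system would tune the weights to the operator's shift-long detection objective.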