An architecture for sensor modular autonomy for counter-UAS

This paper discusses a modular system architecture for the detection, classification and localisation of Unmanned Aerial System (UAS) targets, consisting of intelligent Autonomous Sensor Modules (ASMs), a High-Level Decision Making Module (HLDMM), a middleware integration layer and an end-user GUI, under the previously reported SAPIENT framework. This enables plug-and-play sensor integration and autonomous multi-modal sensor fusion, including prediction of the target trajectory for sensor cueing and target hand-off. The SAPIENT Counter-UAS (C-UAS) system was successfully demonstrated in a live trial against a range of UAS targets flown in a variety of attack trajectories, using radar and Electro-Optic (EO) C-UAS ASMs. The trial also demonstrated the use of synthetic sensors, both on their own and in combination with real sensors. The outputs of all available sensors were tracked and fused by the Cubica SAPIENT HLDMM, which then steered narrow field-of-view cameras onto the predicted 3D position of the UAS. The operator was provided with a map-based view showing alerts and tracks for situational awareness, together with snapshots from the EO sensor and video feeds from the steerable narrow field-of-view cameras. This demonstrates an effective C-UAS system operating entirely autonomously, with detection, localisation, classification, tracking, fusion and sensor management leading to “eyes on” the aerial threat, all in real time and with zero operator intervention.
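
As a rough illustration of the plug-and-play concept summarised above, the following Python sketch shows how an ASM detection report might be represented and how a fusion node could extrapolate a fused track to cue a narrow field-of-view camera. The class names (DetectionReport, Track), message fields and the constant-velocity prediction are illustrative assumptions for this sketch, not the SAPIENT interface definition.

```python
from dataclasses import dataclass
import math
import time

@dataclass
class DetectionReport:
    """Hypothetical ASM detection report: one sensor's estimate of a UAS target."""
    sensor_id: str
    timestamp: float        # seconds since epoch
    position_enu: tuple     # (east, north, up) in metres, local frame
    confidence: float       # 0..1 confidence that the target is a UAS

@dataclass
class Track:
    """Fused track maintained by the fusion node (constant-velocity assumption)."""
    track_id: int
    position_enu: tuple
    velocity_enu: tuple = (0.0, 0.0, 0.0)
    last_update: float = 0.0

def predict_position(track: Track, lookahead_s: float) -> tuple:
    """Extrapolate the track forward in time for sensor cueing (linear prediction)."""
    return tuple(p + v * lookahead_s
                 for p, v in zip(track.position_enu, track.velocity_enu))

def cue_camera(track: Track, camera_pos_enu: tuple, lookahead_s: float = 1.0):
    """Convert a predicted 3D target position into pan/tilt angles for a camera."""
    e, n, u = (t - c for t, c in
               zip(predict_position(track, lookahead_s), camera_pos_enu))
    pan = math.degrees(math.atan2(e, n))                  # bearing from north
    tilt = math.degrees(math.atan2(u, math.hypot(e, n)))  # elevation above horizon
    return pan, tilt

# Example: steer a camera at the origin towards a track heading north-east.
trk = Track(track_id=1, position_enu=(100.0, 100.0, 50.0),
            velocity_enu=(5.0, 5.0, 0.0), last_update=time.time())
print(cue_camera(trk, camera_pos_enu=(0.0, 0.0, 0.0)))
```

In a modular architecture of this kind, each ASM would publish reports of this general shape over the middleware, and the HLDMM would own the fused tracks and issue cueing commands such as the pan/tilt pair computed here.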