Learning Surveillance Tracking Models for the Self-Calibrated Ground Plane

Tracking strategies usually employ motion and appearance models to locate observations of the tracked object in successive frames. The subsequent model update procedure renders the approach highly sensitive to the inevitable observation and occlusion noise processes. In this work, two robust mechanisms are proposed which rely on knowledge about the ground plane. First, a highly constrained bounding-box appearance model is proposed which is determined solely from the predicted image location and visual motion. Second, tracking is performed on the ground plane, enabling global real-world observation and dynamic noise models to be defined. Finally, a novel auto-calibration procedure is developed to recover the image-to-ground-plane homography by simply accumulating event observations.
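
The sketch below illustrates the role of the image-to-ground-plane homography referred to above: once the 3x3 matrix H is known, image observations (e.g. bounding-box footprints) can be projected onto ground-plane coordinates where the real-world observation and dynamic noise models operate. It is only an illustrative example, not the accumulation-based auto-calibration procedure of the paper; the correspondences and the direct linear transform estimator used here are assumptions for demonstration.

    import numpy as np

    def estimate_homography(img_pts, gnd_pts):
        # Direct Linear Transform: find H (up to scale) such that
        # ground-plane points are the projective image of the image points.
        A = []
        for (x, y), (X, Y) in zip(img_pts, gnd_pts):
            A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
            A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
        # The homography is the right null vector of A (smallest singular value).
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 3)

    def to_ground_plane(H, img_pt):
        # Map an image point (e.g. the foot of a bounding box) to ground-plane coordinates.
        p = H @ np.array([img_pt[0], img_pt[1], 1.0])
        return p[:2] / p[2]

    # Hypothetical correspondences: in the paper these would come from accumulated
    # event observations rather than being supplied by hand.
    img_pts = [(100, 400), (500, 400), (450, 200), (150, 200)]
    gnd_pts = [(0.0, 0.0), (4.0, 0.0), (4.0, 10.0), (0.0, 10.0)]
    H = estimate_homography(img_pts, gnd_pts)
    print(to_ground_plane(H, (300, 300)))   # observation expressed in ground-plane units
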