Performance evaluating the evaluator

When evaluating the performance of a computer-based visual tracking system, one often wishes to compare the results with those of a standard human observer. It is a natural assumption that humans fully understand the relatively simple scenes we subject our computers to and that, because of this, two human observers would draw the same conclusions about object positions, tracks, sizes and even simple behaviour patterns. But is that actually the case? This paper provides a baseline for how computer-based tracking results can be compared to a standard human observer.
