Grouped-People Splitting Based on Face Detection and Body Proportion Constraints

This paper presents a method based on skin color model face detection and human body proportion constraints to estimate the number and positions of people entering the monitored scene as a compact group. This allows the group to be split into individual persons and addresses a common limitation of traditional detection and tracking methods based on background subtraction: unlike conventional systems, the proposed algorithm can recognize the number and positions of the people contained in a single change detection blob. The working hypotheses are: standing subjects, face visibility from the camera's point of view, and a calibrated map used to estimate both the objects' distance from the sensor and the expected height of people on the image plane. These estimates enable dynamic thresholding of several shape parameters and lead to promising results.
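As a rough illustration of the idea described above (not the authors' implementation), the following Python/OpenCV sketch shows how skin-color face candidates inside a grouped-people blob could be validated against a body-proportion constraint and used to split the blob into per-person regions. The YCrCb skin thresholds, the head-to-body ratio of about 1/7.5, the shoulder-width factor, and the `expected_person_height_px()` calibration helper are all illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch: split a change-detection blob into individual persons using
# skin-color face candidates and a body-proportion check. Thresholds, ratios, and the
# calibration helper below are assumptions for illustration only.

import cv2
import numpy as np

HEAD_TO_BODY_RATIO = 1.0 / 7.5  # assumed anthropometric proportion (head vs. body height)


def expected_person_height_px(blob_bottom_y, calib_map):
    """Hypothetical calibrated look-up: expected person height in pixels, given the
    image row where the blob touches the ground plane."""
    return calib_map[blob_bottom_y]


def skin_mask(bgr_roi):
    """Rough skin segmentation in YCrCb space (threshold values are illustrative)."""
    ycrcb = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))


def split_group_blob(frame_bgr, blob_bbox, calib_map):
    """Return one bounding box per detected person inside a grouped-people blob."""
    x, y, w, h = blob_bbox
    roi = frame_bgr[y:y + h, x:x + w]

    # 1. Face candidates: connected skin-colored regions inside the blob.
    mask = skin_mask(roi)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)

    # 2. Dynamic thresholds from calibration: expected body and head size at this depth.
    body_h = expected_person_height_px(y + h, calib_map)
    head_h = body_h * HEAD_TO_BODY_RATIO

    persons = []
    for i in range(1, n):  # label 0 is the background
        cw, ch = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        # 3. Keep candidates whose size is consistent with the expected head size.
        if 0.5 * head_h <= ch <= 1.5 * head_h and 0.4 * head_h <= cw <= 1.5 * head_h:
            cx = int(centroids[i][0])
            # 4. Assign each validated face a body-wide vertical slice of the blob
            #    (shoulder width assumed ~0.35 of body height).
            half_w = int(0.5 * 0.35 * body_h)
            x0, x1 = max(0, cx - half_w), min(w, cx + half_w)
            persons.append((x + x0, y, x1 - x0, h))
    return persons
```

In this sketch the calibrated map plays the role described in the abstract: it converts the blob's ground-contact row into an expected person height on the image plane, which in turn sets the dynamic size thresholds used to accept or reject face candidates.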