Semantic Human 3D Shapes Annotation for Animation

The problem of identifying particular points or areas on 3D meshes is closely related to several tasks in computer graphics: when animating a virtual character, the animator must first identify which part of the 3D envelope corresponds to which part of the animated skeleton in order to obtain a visually coherent animated shape. Every shape therefore has to be segmented before it can be used. For instance, the CAESAR body database was built from 3D scans together with a set of landmarks used to derive body measurements. Unfortunately, such landmarks are generally not available when acquiring scanned data, and in particular for scanned human bodies. We demonstrate that it is possible to cope with noisy and complex data and to extract an animation skeleton from any closed human body mesh. Assuming that the joints are located where the shape varies the most, and based on a multi-scale analysis, we are able to deduce the main joint positions. We also build a control skeleton and automatically label all detected joints by relying on a priori knowledge of human anatomy, independently of body posture. We demonstrate our approach on several examples.
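The core idea, locating joints where the shape varies the most across several analysis scales, can be illustrated with a toy sketch. The following is not the paper's algorithm: it assumes a hypothetical 1D profile of cross-section girths sampled along a limb, smooths it at multiple Gaussian scales, and votes for samples whose curvature (second difference) peaks consistently at every scale.

```python
import numpy as np

def detect_joints(girth, scales=(2, 4, 8), threshold=0.1):
    """Toy multi-scale joint detection on a 1D girth profile.

    Illustrative sketch only: `girth` is a hypothetical array of
    cross-section perimeters sampled along a limb.  Joint candidates
    are samples whose local shape variation (second difference of the
    smoothed profile) is near-maximal at every analysis scale.
    """
    n = len(girth)
    votes = np.zeros(n)
    for s in scales:
        # Truncated Gaussian kernel at scale s.
        x = np.arange(-3 * s, 3 * s + 1)
        kernel = np.exp(-x**2 / (2.0 * s * s))
        kernel /= kernel.sum()
        # Edge-pad before convolving so boundaries add no spurious kinks.
        padded = np.pad(girth, 3 * s, mode="edge")
        smooth = np.convolve(padded, kernel, mode="valid")  # length n
        # Magnitude of the second difference = local shape variation.
        curv = np.abs(np.diff(smooth, 2))
        curv = np.pad(curv, 1)  # restore length n
        # Each scale votes with its normalized variation profile.
        votes += curv / (curv.max() + 1e-12)
    # Keep samples that score highly across all scales.
    return np.flatnonzero(votes / len(scales) > 1.0 - threshold)
```

On a synthetic upper-arm profile that is flat, tapers linearly, then is flat again, the two slope breaks (shoulder and elbow analogues) are the only samples voted for at all scales; in the real setting this analysis runs on the mesh geometry itself rather than a precomputed 1D profile.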