In this paper, we address the analysis of human actions by comparing different performances of the same action, namely walking. To this end, we define a human body model that maximizes the differences between human postures while reflecting the anatomical structure of the human body. We then build a human action space, called aSpace, in which a performance, i.e., a predefined sequence of postures, is represented as a parametric manifold. The final human action representation, called p-action, is based on the most characteristic body postures observed across several walking performances. These postures, called key-frames, are found automatically by means of a predefined distance function. Using the key-frames, we synchronize any performance with respect to the action model. Furthermore, an arc-length parameterization makes the representation independent of the speed at which performances are played. Consequently, the style of human walking can be successfully analysed by establishing differences between male and female walkers.
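To make the pipeline concrete, the sketch below illustrates two of the mechanisms described above, under stated assumptions: postures are taken to be plain joint-angle vectors, the aSpace is approximated by a PCA projection, and the paper's predefined key-frame distance function is replaced by an illustrative stand-in (distance from the mean posture in aSpace). Function names such as build_aspace and arc_length_resample are hypothetical, not from the paper.

```python
import numpy as np

def build_aspace(postures, n_components=3):
    """Project posture vectors into a low-dimensional action space (PCA stand-in).

    postures: (n_frames, n_dof) array of joint-angle vectors.
    Returns the mean posture, the principal axes, and the projected curve.
    """
    mean = postures.mean(axis=0)
    centered = postures - mean
    # SVD of the centered data gives the principal directions of posture variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt[:n_components]
    curve = centered @ axes.T          # (n_frames, n_components) manifold curve
    return mean, axes, curve

def arc_length_resample(curve, n_samples):
    """Re-sample the manifold curve uniformly by arc length, so the
    representation no longer depends on playback speed."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_samples)
    resampled = np.empty((n_samples, curve.shape[1]))
    for d in range(curve.shape[1]):
        resampled[:, d] = np.interp(targets, s, curve[:, d])
    return resampled

def pick_key_frames(curve):
    """Illustrative key-frame selection: local maxima of the distance from
    the mean posture (the origin of the aSpace projection). The paper's own
    distance function is not reproduced here."""
    d = np.linalg.norm(curve, axis=1)
    return [i for i in range(1, len(d) - 1) if d[i] > d[i - 1] and d[i] > d[i + 1]]

# Toy usage: a synthetic walking cycle played at an uneven speed.
t = np.cumsum(np.random.uniform(0.5, 1.5, 200))   # irregular frame timing
postures = np.stack([np.sin(0.1 * t + p) for p in np.linspace(0, 2, 20)], axis=1)
mean, axes, curve = build_aspace(postures)
uniform = arc_length_resample(curve, 100)
keys = pick_key_frames(curve)
```

Re-sampling the curve at equal arc-length increments is what decouples the representation from playback speed: two performances of the same walk traced at different rates yield approximately the same uniformly sampled curve, which is what allows performances to be synchronized against the action model.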