The recognition of continuous natural gestures is a complex and challenging problem due to the multi-modal nature of the visual cues involved (e.g., finger and lip movements, subtle facial expressions, body pose), as well as technical limitations such as limited spatial and temporal resolution and unreliable depth cues. To promote research advances in this field, we organized a challenge on multi-modal gesture recognition. We made available a large video database of 13,858 gestures from a lexicon of 20 Italian gesture categories recorded with a Kinect™ camera, providing audio, a skeletal model, user masks, and RGB and depth images. The focus of the challenge was user-independent multiple gesture learning. There are no resting positions, and the gestures are performed in continuous sequences lasting 1-2 minutes, each containing between 8 and 20 gesture instances. As a result, the dataset contains around 1,720,800 frames. In addition to the 20 main gesture categories, "distracter" gestures are included, i.e., additional audio and gestures outside the vocabulary. The final evaluation of the challenge was defined in terms of the Levenshtein edit distance between the predicted sequence of gesture labels and the true ordered sequence of gestures in each recording. 54 international teams participated in the challenge, and outstanding results were obtained by the first-ranked participants.
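The evaluation metric above can be illustrated with a minimal sketch of the Levenshtein edit distance applied to gesture-label sequences: the score for a recording is the minimum number of insertions, deletions, and substitutions needed to turn the predicted sequence into the ground-truth sequence. The function name and example gesture IDs below are illustrative, not part of the challenge toolkit.

```python
def levenshtein(pred, truth):
    """Edit distance between two gesture-label sequences: the minimum
    number of insertions, deletions, and substitutions needed to turn
    `pred` into `truth`. Standard dynamic programming, two rows only."""
    m, n = len(pred), len(truth)
    prev = list(range(n + 1))  # distances from pred[:0] to truth[:j]
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

# Hypothetical predicted vs. true gesture IDs for one sequence:
# one extra "7" and one missed "5" yield a distance of 2.
print(levenshtein([3, 7, 7, 12], [3, 7, 12, 5]))  # 2
```

A per-participant score would then typically aggregate this distance over all test sequences, normalized by the total number of true gestures.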