Wearable vision-based handled object classification in human-robot interaction

Human-robot interaction strongly benefits from natural, human-oriented communication so that humans and robots can work together effectively; robot vision should therefore understand human and self activities from an egocentric point of view. In this paper, we present a method for visual classification of objects held in the human hand, captured by a wearable camera. The method tackles several challenges of object classification in the egocentric perspective: knowledge of the user's hand location and shape context assists an object classifier trained on a previously labeled set of contexts. We present experiments on a dataset collected in a cluttered environment; the implementation validates the effectiveness of the method.
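A minimal sketch of one way such a pipeline could be assembled, assuming a skin-colour hand prior, SIFT bag-of-visual-words features, and a linear SVM; the paper does not specify these choices, and all function names, thresholds, and parameter values below are illustrative rather than the authors' implementation:

```python
# Hypothetical egocentric handled-object classification sketch:
# 1) segment the hand as a spatial prior, 2) crop a region around it,
# 3) describe that region with SIFT bag-of-visual-words, 4) classify with a linear SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC


def hand_mask(bgr):
    """Rough skin-colour segmentation in YCrCb as a stand-in hand-location prior."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))


def object_roi(bgr, pad=60):
    """Padded box around the detected hand, taken as the handled-object region."""
    ys, xs = np.nonzero(hand_mask(bgr))
    if len(xs) == 0:  # no hand found: fall back to the full frame
        return bgr
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, bgr.shape[1])
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, bgr.shape[0])
    return bgr[y0:y1, x0:x1]


sift = cv2.SIFT_create()  # requires an OpenCV build with SIFT available


def descriptors(bgr):
    """SIFT descriptors restricted to the hand-centred region of interest."""
    gray = cv2.cvtColor(object_roi(bgr), cv2.COLOR_BGR2GRAY)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.zeros((0, 128), np.float32)


def bow_histogram(desc, vocab):
    """Quantise descriptors against a learned visual vocabulary."""
    k = vocab.n_clusters
    if len(desc) == 0:
        return np.zeros(k)
    hist = np.bincount(vocab.predict(desc), minlength=k).astype(float)
    return hist / hist.sum()


def train(frames, labels, k=100):
    """Learn the visual vocabulary and the object classifier from labeled frames."""
    all_desc = [descriptors(f) for f in frames]
    vocab = KMeans(n_clusters=k, n_init=5).fit(np.vstack(all_desc))
    X = np.array([bow_histogram(d, vocab) for d in all_desc])
    return vocab, LinearSVC().fit(X, labels)


def classify(frame, vocab, clf):
    """Predict the handled-object label for a single egocentric frame."""
    return clf.predict([bow_histogram(descriptors(frame), vocab)])[0]
```

The point of the sketch is only the structure the abstract describes: the hand prior constrains where the classifier looks, so clutter outside the manipulation region is discarded before features are extracted.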
