Cooking Behavior Recognition Using Egocentric Vision for Cooking Navigation
