Learning from Demonstration Based on a Mechanism to Utilize an Object’s Invisibility
[1] Akihiko Nagakubo et al., "A Cognitive Architecture for Flexible Imitative Interaction Using Tools and Objects," IEEE-RAS International Conference on Humanoid Robots, 2006.
[2] Philippe C. Cattin et al., "Tracking the invisible: Learning where the object might be," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[3] Mirko Wächter et al., "Hierarchical segmentation of manipulation actions based on object relations and motion characteristics," International Conference on Advanced Robotics (ICAR), 2015.
[4] Ryo Kurazume et al., "Detecting repeated patterns using Partly Locality Sensitive Hashing," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010.
[5] Masayuki Inaba et al., "Learning by watching: extracting reusable task knowledge from visual observation of human performance," IEEE Transactions on Robotics and Automation, 1994.
[6] Fulvio Mastrogiovanni et al., "Learning symbolic representations of actions from human demonstrations," IEEE International Conference on Robotics and Automation (ICRA), 2015.
[7] G. Kanizsa, "Margini quasi-percettivi in campi con stimolazione omogenea" [Quasi-perceptual margins in fields with homogeneous stimulation], 1955.
[8] Michael Isard et al., "CONDENSATION—Conditional Density Propagation for Visual Tracking," International Journal of Computer Vision, 1998.
[9] Eren Erdal Aksoy et al., "Semantic parsing of human manipulation activities using on-line learned models for robot imitation," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
[10] Rainer Palm et al., "Programming by Demonstration of Pick-and-Place Tasks for Industrial Manipulators using Task Primitives," International Symposium on Computational Intelligence in Robotics and Automation, 2007.
[11] Gordon Cheng et al., "Transferring skills to humanoid robots by extracting semantic representations from observations of human activities," Artificial Intelligence, 2017.
[12] J. Piaget, The Construction of Reality in the Child, 1954.
[13] Ken Ito et al., "Robust view-based visual tracking with detection of occlusions," IEEE International Conference on Robotics and Automation (ICRA), 2001.
[14] Brett Browning et al., "A survey of robot learning from demonstration," Robotics and Autonomous Systems, 2009.
[15] Kimitoshi Yamazaki et al., "Home-Assistant Robot for an Aging Society," Proceedings of the IEEE, 2012.
[16] Vincent Lepetit et al., "Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes," International Conference on Computer Vision (ICCV), 2011.
[17] Richard Fikes et al., "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," International Joint Conference on Artificial Intelligence (IJCAI), 1971.
[18] David G. Lowe, "Object recognition from local scale-invariant features," IEEE International Conference on Computer Vision (ICCV), 1999.
[19] Craig A. Knoblock et al., "PDDL: The Planning Domain Definition Language," 1998.
[20] Kimitoshi Yamazaki et al., "Hierarchical estimation of multiple objects from proximity relationships arising from tool manipulation," IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2012.
[21] Brian Scassellati et al., "Discovering task constraints through observation and active learning," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014.
[22] Ryo Kurazume et al., "Segmentation method of human manipulation task based on measurement of force imposed by a human hand on a grasped object," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009.
[23] Brian J. Scholl et al., "Dynamic Object Individuation in Rhesus Macaques," Psychological Science, 2004.
[24] Masahide Kaneko et al., "Visual Tracking in Occlusion Environments by Autonomous Switching of Targets," IEICE Transactions on Information and Systems, 2008.
[25] Masayuki Inaba et al., "The Seednoid Robot Platform: Designing a Multipurpose Compact Robot From Continuous Evaluation and Lessons From Competitions," IEEE Robotics and Automation Letters, 2018.