ActionSense: A Multimodal Dataset and Recording Framework for Human Activities Using Wearable Sensors in a Kitchen Environment
Joseph DelPreto | Chao Liu | Yiyue Luo | Michael Foshey | Yunzhu Li | Antonio Torralba | Wojciech Matusik | Daniela Rus
[1] James M. Rehg et al. Ego4D: Around the World in 3,000 Hours of Egocentric Video, 2021, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Silvio Savarese et al. What Matters in Learning from Offline Human Demonstrations for Robot Manipulation, 2021, CoRL.
[3] Silvio Savarese et al. BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments, 2021, CoRL.
[4] Antonio Torralba et al. Intelligent Carpet: Inferring 3D Human Pose from Tactile Signals, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Eric Horvitz et al. Platform for Situated Intelligence, 2021, arXiv.
[6] Oliver Kroemer et al. Playing with Food: Learning Food Item Representations through Interactive Exploration, 2021, ISER.
[7] Dima Damen et al. Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS-100, 2020, International Journal of Computer Vision.
[8] Sergey Levine et al. Learning Agile Robotic Locomotion Skills by Imitating Animals, 2020, Robotics: Science and Systems.
[9] Juan Carlos Niebles et al. Action Genome: Actions As Compositions of Spatio-Temporal Scene Graphs, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Sergey Levine et al. Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning, 2019, CoRL.
[11] Bin Tong et al. MMAct: A Large-Scale Dataset for Cross Modal Human Action Understanding, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[12] Gang Wang et al. NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding, 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] Abhinav Gupta et al. Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation, 2018, CoRL.
[14] Sanja Fidler et al. VirtualHome: Simulating Household Activities Via Programs, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Min Sun et al. Anticipating Daily Intention Using On-wrist Motion Triggered Sensing, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[16] Li Fei-Fei et al. Jointly Learning Energy Expenditures and Activities Using Egocentric Multimodal Signals, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Nanning Zheng et al. Modeling 4D Human-Object Interactions for Joint Event Segmentation, Recognition, and Object Localization, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[18] Sergey Levine et al. Time-Contrastive Networks: Self-Supervised Learning from Video, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[19] Jitendra Malik et al. Learning to Poke by Poking: Experiential Learning of Intuitive Physics, 2016, NIPS.
[20] Sergey Levine et al. Unsupervised Learning for Physical Interaction through Video Prediction, 2016, NIPS.
[21] Ali Farhadi et al. Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding, 2016, ECCV.
[22] Sergey Levine et al. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection, 2016, International Journal of Robotics Research.
[23] Michael S. Bernstein et al. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations, 2016, International Journal of Computer Vision.
[24] Jie Lin et al. Egocentric activity recognition with multimodal Fisher vector, 2016, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[25] Nasser Kehtarnavaz et al. UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor, 2015, 2015 IEEE International Conference on Image Processing (ICIP).
[26] Abhinav Gupta et al. Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours, 2015, 2016 IEEE International Conference on Robotics and Automation (ICRA).
[27] Bernard Ghanem et al. ActivityNet: A large-scale video benchmark for human activity understanding, 2015, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Meng Wang et al. 3D Human Activity Recognition with Reconfigurable Convolutional Neural Networks, 2014, ACM Multimedia.
[29] Jake K. Aggarwal et al. View invariant human action recognition using histograms of 3D joints, 2012, 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.
[30] Wanqing Li et al. Action recognition based on a bag of 3D points, 2010, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.
[31] Matei T. Ciocarlie et al. The Columbia grasp database, 2009, 2009 IEEE International Conference on Robotics and Automation (ICRA).
[32] Atsushi Nakazawa et al. Learning from Observation Paradigm: Leg Task Models for Enabling a Biped Humanoid Robot to Imitate Human Dances, 2007, International Journal of Robotics Research.
[33] Ying Li et al. A shape matching algorithm for synthesizing humanlike enveloping grasps, 2005, 5th IEEE-RAS International Conference on Humanoid Robots.
[34] Jun Nakanishi et al. Movement imitation with nonlinear dynamical systems in humanoid robots, 2002, Proceedings of the 2002 IEEE International Conference on Robotics and Automation (ICRA).
[35] Luca Iocchi et al. RoboCup@Home: Scientific Competition and Benchmarking for Domestic Service Robots, 2009.
[36] Jessica K. Hodgins et al. Detailed Human Data Acquisition of Kitchen Activities: the CMU-Multimodal Activity Database (CMU-MMAC), 2008.