Language-Conditioned Imitation Learning with Base Skill Priors under Unstructured Data
Kai Huang | Zhenshan Bing | Alois Knoll | Xiangtong Yao | Chenguang Yang | Xiaojie Su | Hongkuan Zhou
[1] A. Koch, et al. Meta-Reinforcement Learning via Language Instructions, 2022, 2023 IEEE International Conference on Robotics and Automation (ICRA).
[2] Joseph J. Lim, et al. Skill-based Model-based Reinforcement Learning, 2022, CoRL.
[3] Oier Mees, et al. What Matters in Language Conditioned Robotic Imitation Learning Over Unstructured Data, 2022, IEEE Robotics and Automation Letters.
[4] Sergey Levine, et al. BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning, 2022, CoRL.
[5] W. Burgard, et al. CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks, 2021, IEEE Robotics and Automation Letters.
[6] Dieter Fox, et al. StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects, 2021, 2022 International Conference on Robotics and Automation (ICRA).
[7] Dieter Fox, et al. CLIPort: What and Where Pathways for Robotic Manipulation, 2021, CoRL.
[8] S. Savarese, et al. Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation, 2021, CoRL.
[9] Joseph J. Lim, et al. Demonstration-Guided Reinforcement Learning with Learned Skills, 2021, CoRL.
[10] Chitta Baral, et al. Language-Conditioned Imitation Learning for Robot Manipulation Tasks, 2020, NeurIPS.
[11] Joseph J. Lim, et al. Accelerating Reinforcement Learning with Learned Skill Priors, 2020, CoRL.
[12] R. Mooney, et al. PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards, 2020, CoRL.
[13] Cho-Jui Hsieh, et al. What Does BERT with Vision Look At?, 2020, ACL.
[14] Corey Lynch, et al. Language Conditioned Imitation Learning Over Unstructured Data, 2020, Robotics: Science and Systems.
[15] Joseph J. Lim, et al. Learning to Coordinate Manipulation Skills via Skill Behavior Diversification, 2020, ICLR.
[16] Mohit Shridhar, et al. INGRESS: Interactive Visual Grounding of Referring Expressions, 2020, Int. J. Robotics Res.
[17] Jordi Pont-Tuset, et al. Connecting Vision and Language with Localized Narratives, 2019, ECCV.
[18] Sergey Levine, et al. Deep Dynamics Models for Learning Dexterous Manipulation, 2019, CoRL.
[19] Stefan Lee, et al. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks, 2019, NeurIPS.
[20] Hisashi Kawai, et al. Understanding Natural Language Instructions for Fetching Daily Objects Using GAN-Based Multimodal Target–Source Classification, 2019, IEEE Robotics and Automation Letters.
[21] S. Levine, et al. Learning Latent Plans from Play, 2019, CoRL.
[22] Pushmeet Kohli, et al. CompILE: Compositional Imitation Learning and Execution, 2018, ICML.
[23] John DeNero, et al. Guiding Policies with Language via Meta-Learning, 2018, ICLR.
[24] Yee Whye Teh, et al. Neural Probabilistic Motor Primitives for Humanoid Control, 2018, ICLR.
[25] Pushmeet Kohli, et al. Learning to Follow Language Instructions with Adversarial Reward Induction, 2018, arXiv.
[26] Karol Hausman, et al. Learning an Embedding Space for Transferable Robot Skills, 2018, ICLR.
[27] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[28] Honglak Lee, et al. Learning Structured Output Representation Using Deep Conditional Generative Models, 2015, NIPS.