CARL
[1] Sergey Levine, et al. Continuous character control with low-dimensional embeddings, 2012, ACM Trans. Graph.
[2] Hyun Joon Shin, et al. Motion synthesis and editing in low-dimensional spaces, 2006, Comput. Animat. Virtual Worlds.
[3] M. van de Panne, et al. Generalized biped walking control, 2010, ACM Trans. Graph.
[4] Zoran Popovic, et al. Optimal gait and form for animal locomotion, 2009, ACM Trans. Graph.
[5] C. Karen Liu, et al. Online control of simulated humanoids using particle belief propagation, 2015, ACM Trans. Graph.
[6] A. Karpathy, et al. Locomotion skills for simulated quadrupeds, 2011, SIGGRAPH 2011.
[7] Sergey Levine, et al. DeepMimic, 2018, ACM Trans. Graph.
[8] Jitendra Malik, et al. Recurrent Network Models for Human Dynamics, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[9] Jessica K. Hodgins, et al. Interactive control of avatars animated with human motion data, 2002, SIGGRAPH.
[10] Philippe Beaudoin, et al. Robust task-based control policies for physics-based characters, 2009, ACM Trans. Graph.
[11] KangKang Yin, et al. SIMBICON: simple biped locomotion control, 2007, ACM Trans. Graph.
[12] Libin Liu, et al. Learning to schedule control fragments for physics-based characters using deep Q-learning, 2017, ACM Trans. Graph.
[13] J. Forbes, et al. DReCon: data-driven responsive control of physics-based characters, 2019, ACM Trans. Graph.
[14] Jungdam Won, et al. Aerobatics control of flying creatures via self-regulated learning, 2018, ACM Trans. Graph.
[15] Bharadwaj S. Amrutur, et al. Design, Development and Experimental Realization of a Quadrupedal Research Platform: Stoch, 2019, 2019 5th International Conference on Control, Automation and Robotics (ICCAR).
[16] Jung-Woo Ha, et al. StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[17] Yoonsang Lee, et al. Data-driven biped control, 2010, ACM Trans. Graph.
[18] Philippe Beaudoin, et al. Robust task-based control policies for physics-based characters, 2009, SIGGRAPH 2009.
[19] Zoran Popovic, et al. Generalizing locomotion style to new animals with inverse optimal regression, 2014, ACM Trans. Graph.
[20] Stefan Jeschke, et al. Physics-based motion capture imitation with deep reinforcement learning, 2018, MIG.
[21] Stefano Ermon, et al. Generative Adversarial Imitation Learning, 2016, NIPS.
[22] Sungkil Lee, et al. Iterative Depth Warping, 2018, ACM Trans. Graph.
[23] Jungdam Won, et al. How to train your dragon, 2017, ACM Trans. Graph.
[24] C. Karen Liu, et al. Optimal feedback control for character animation using an abstract model, 2010, SIGGRAPH 2010.
[25] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[26] Sergey Levine, et al. Learning Robust Rewards with Adversarial Inverse Reinforcement Learning, 2017, ICLR.
[27] Taku Komura, et al. Phase-functioned neural networks for character control, 2017, ACM Trans. Graph.
[28] Jessica K. Hodgins, et al. Performance animation from low-dimensional control signals, 2005, SIGGRAPH 2005.
[29] Sergey Levine, et al. Trust Region Policy Optimization, 2015, ICML.
[30] Sebastian Starke, et al. Neural state machine for character-scene interactions, 2019, ACM Trans. Graph.
[31] Yuval Tassa, et al. Control-limited differential dynamic programming, 2014, 2014 IEEE International Conference on Robotics and Automation (ICRA).
[32] Lucas Kovar, et al. Automated extraction and parameterization of motions in large data sets, 2004, ACM Trans. Graph.
[33] Yuval Tassa, et al. Synthesis and stabilization of complex behaviors through online trajectory optimization, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[34] Kumar Krishna Agrawal, et al. GANSynth: Adversarial Neural Audio Synthesis, 2019, ICLR.
[35] Jungdam Won, et al. Learning body shape variation in physics-based characters, 2019, ACM Trans. Graph.
[36] Sergey Levine, et al. Physically plausible simulation for character animation, 2012, SCA '12.
[37] M. van de Panne, et al. Sampling-based contact-rich motion control, 2010, ACM Trans. Graph.
[38] Sergey Levine, et al. MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies, 2019, NeurIPS.
[39] Jovan Popovic, et al. Simulation of Human Motion Data using Short-Horizon Model-Predictive Control, 2008, Comput. Graph. Forum.
[40] Glen Berseth, et al. Dynamic terrain traversal skills using reinforcement learning, 2015, ACM Trans. Graph.
[41] Wen-Chieh Lin, et al. Real-time horse gait synthesis, 2013, Comput. Animat. Virtual Worlds.
[42] Alexei A. Efros, et al. Image-to-Image Translation with Conditional Adversarial Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[43] Eduardo F. Morales, et al. An Introduction to Reinforcement Learning, 2011.
[44] Yee Whye Teh, et al. Neural probabilistic motor primitives for humanoid control, 2018, ICLR.
[45] Lucas Kovar, et al. Motion Graphs, 2002, ACM Trans. Graph.
[46] Yuval Tassa, et al. Emergence of Locomotion Behaviours in Rich Environments, 2017, arXiv.
[47] Nicolas Heess, et al. Hierarchical visuomotor control of humanoids, 2018, ICLR.
[48] Baining Guo, et al. Improving Sampling-based Motion Control, 2015, Comput. Graph. Forum.
[49] Nikolaos G. Tsagarakis, et al. On the Kinematic Motion Primitives (kMPs) – Theory and Application, 2012, Front. Neurorobot.
[50] Yuval Tassa, et al. MuJoCo: A physics engine for model-based control, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[51] Joonho Lee, et al. Learning agile and dynamic motor skills for legged robots, 2019, Science Robotics.
[52] Glen Berseth, et al. Terrain-adaptive locomotion skills using deep reinforcement learning, 2016, ACM Trans. Graph.
[53] Taku Komura, et al. Mode-adaptive neural networks for quadruped motion control, 2018, ACM Trans. Graph.
[54] Jakub W. Pachocki, et al. Emergent Complexity via Multi-Agent Competition, 2017, ICLR.
[55] C. Karen Liu, et al. Optimal feedback control for character animation using an abstract model, 2010, ACM Trans. Graph.
[56] Sergey Levine, et al. High-Dimensional Continuous Control Using Generalized Advantage Estimation, 2015, ICLR.
[57] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, arXiv.
[58] J. Hodgins, et al. Learning to Schedule Control Fragments for Physics-Based Characters Using Deep Q-Learning, 2017, ACM Trans. Graph.
[59] Jun-Yan Zhu, et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[60] Jessica K. Hodgins, et al. Construction and optimal search of interpolated motion graphs, 2007, ACM Trans. Graph.
[61] Taku Komura, et al. A Deep Learning Framework for Character Motion Synthesis and Editing, 2016, ACM Trans. Graph.
[62] Sunmin Lee, et al. Learning predict-and-simulate policies from unorganized human motion data, 2019, ACM Trans. Graph.
[63] Jehee Lee, et al. Interactive character animation by learning multi-objective control, 2018, ACM Trans. Graph.
[64] Yi Zhou, et al. Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis, 2017, ICLR.
[65] Wojciech Zaremba, et al. OpenAI Gym, 2016, arXiv.