Learning to Generate Levels From Nothing
[1] Wojciech Zaremba, et al. OpenAI Gym, 2016, ArXiv.
[2] Julian Togelius, et al. Search-Based Procedural Content Generation: A Taxonomy and Survey, 2011, IEEE Transactions on Computational Intelligence and AI in Games.
[3] Julian Togelius, et al. PCGRL: Procedural Content Generation via Reinforcement Learning, 2020, AAAI.
[4] Julian Togelius, et al. Procedural Content Generation via Machine Learning (PCGML), 2017, IEEE Transactions on Games.
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[8] Jason Yosinski, et al. Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Julian Togelius, et al. General Video Game AI: A Multitrack Framework for Evaluating Agents, Games, and Content Generation Algorithms, 2018, IEEE Transactions on Games.
[10] Julian Togelius, et al. DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution, 2017, 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS).
[11] Julian Togelius, et al. Deep Reinforcement Learning for General Video Game AI, 2018, 2018 IEEE Conference on Computational Intelligence and Games (CIG).
[12] Julian Togelius, et al. Procedural Content Generation in Games, 2016, Computational Synthesis and Creative Systems.
[13] Simon M. Lucas, et al. Evolving Mario Levels in the Latent Space of a Deep Convolutional Generative Adversarial Network, 2018, GECCO.
[14] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[15] Taehoon Kim, et al. Quantifying Generalization in Reinforcement Learning, 2018, ICML.
[16] Julian Togelius, et al. Deep Learning for Video Game Playing, 2017, IEEE Transactions on Games.
[17] Sergey Levine, et al. Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design, 2020, NeurIPS.
[18] Alex Graves, et al. Asynchronous Methods for Deep Reinforcement Learning, 2016, ICML.
[19] Julian Togelius, et al. Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation, 2018, ArXiv.
[20] Julian Togelius, et al. An Experiment in Automatic Game Design, 2008, 2008 IEEE Symposium on Computational Intelligence and Games.
[21] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[22] Samy Bengio, et al. A Study on Overfitting in Deep Reinforcement Learning, 2018, ArXiv.
[23] Thomas Brox, et al. Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks, 2016, NIPS.
[25] Konrad Tollmar, et al. Adversarial Reinforcement Learning for Procedural Content Generation, 2021, 2021 IEEE Conference on Games (CoG).
[27] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Transactions on Neural Networks.
[28] Kenneth O. Stanley, et al. POET: Open-Ended Coevolution of Environments and Their Optimized Solutions, 2019, GECCO.