Heuristic Search Techniques for Real-Time Strategy Games
[1] Julian Togelius, et al. Script- and cluster-based UCT for StarCraft, 2014, 2014 IEEE Conference on Computational Intelligence and Games.
[2] Jonathan Schaeffer, et al. Monte Carlo Planning in RTS Games, 2005, CIG.
[3] Nathan R. Sturtevant, et al. Benchmarks for Grid-Based Pathfinding, 2012, IEEE Transactions on Computational Intelligence and AI in Games.
[4] Froduald Kabanza, et al. Opponent Behaviour Recognition for Real-Time Strategy Games, 2010, Plan, Activity, and Intent Recognition.
[5] Johan Hagelbäck, et al. Potential-field based navigation in StarCraft, 2012, 2012 IEEE Conference on Computational Intelligence and Games (CIG).
[6] Santiago Ontañón, et al. Walling in Strategy Games via Constraint Optimization, 2014, AIIDE.
[7] Michael Buro, et al. Efficient Triangulation-Based Pathfinding, 2006, AAAI.
[8] Pierre Bessière, et al. Special tactics: A Bayesian approach to tactical decision-making, 2012, 2012 IEEE Conference on Computational Intelligence and Games (CIG).
[9] Glenn A. Iba, et al. A heuristic approach to the discovery of macro-operators, 2004, Machine Learning.
[10] Héctor Muñoz-Avila, et al. CLASSQ-L: A Q-Learning Algorithm for Adversarial Real-Time Strategy Games, 2012, Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.
[11] Michael Buro, et al. StarCraft Unit Motion: Analysis and Search Enhancements, 2015.
[12] Santiago Ontañón, et al. Learning from Demonstration and Case-Based Planning for Real-Time Strategy Games, 2008, Soft Computing Applications in Industry.
[13] Martin Certický, et al. Case-Based Reasoning for Army Compositions in Real-Time Strategy Games, 2022.
[14] Marc J. V. Ponsen, et al. Improving Adaptive Game AI with Evolutionary Learning, 2004.
[15] Michael Buro, et al. Building Placement Optimization in Real-Time Strategy Games, 2014, Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.
[16] Hector Muñoz-Avila, et al. Hierarchical Plan Representations for Encoding Strategic Game AI, 2005, AIIDE.
[17] Santiago Ontañón, et al. Situation Assessment for Plan Retrieval in Real-Time Strategy Games, 2008, ECCBR.
[18] Gabriel Synnaeve, et al. A Bayesian model for opening prediction in RTS games with application to StarCraft, 2011, 2011 IEEE Conference on Computational Intelligence and Games (CIG'11).
[19] Kenneth D. Forbus, et al. How qualitative spatial reasoning can improve strategy game AIs, 2002, IEEE Intelligent Systems.
[20] Adrien Treuille, et al. Continuum crowds, 2006, SIGGRAPH 2006.
[21] Michael Buro, et al. Heuristic Search Applied to Abstract Combat Games, 2005, Canadian Conference on AI.
[22] Jeff Orkin, et al. Three States and a Plan: The A.I. of F.E.A.R., 2006.
[23] Michael Buro, et al. Global State Evaluation in StarCraft, 2014, AIIDE.
[24] Vadim Bulitko, et al. An evaluation of models for predicting opponent positions in first-person shooter video games, 2008, 2008 IEEE Symposium on Computational Intelligence and Games.
[25] Arnav Jhala, et al. Applying Goal-Driven Autonomy to StarCraft, 2010, AIIDE.
[26] Stefan J. Johansson, et al. A Multiagent Potential Field-Based Bot for Real-Time Strategy Games, 2009, Int. J. Comput. Games Technol.
[27] Michael Buro, et al. Hierarchical Adversarial Search Applied to Real-Time Strategy Games, 2014, AIIDE.
[28] C. Miles. Co-evolving Real-Time Strategy Game Playing Influence Map Trees With Genetic Algorithms, 2022.
[29] Wentong Cai, et al. Simulation-based optimization of StarCraft tactical AI through evolutionary computation, 2012, 2012 IEEE Conference on Computational Intelligence and Games (CIG).
[30] Doina Precup, et al. Learning Options in Reinforcement Learning, 2002, SARA.
[31] Santiago Ontañón, et al. Kiting in RTS Games Using Influence Maps, 2012, Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.
[32] Nick Hawes, et al. Evolutionary Learning of Goal Priorities in a Real-Time Strategy Game, 2012, AIIDE.
[33] Michael Buro, et al. Adversarial Planning Through Strategy Simulation, 2007, 2007 IEEE Symposium on Computational Intelligence and Games.
[34] Pieter Spronck, et al. Opponent Modeling in Real-Time Strategy Games, 2007, GAMEON.
[35] Jonathan Schaeffer, et al. The History Heuristic and Alpha-Beta Search Enhancements in Practice, 1989, IEEE Trans. Pattern Anal. Mach. Intell.
[36] Michael Buro, et al. A First Look at Build-Order Optimization in Real-Time Strategy Games, 2007.
[37] Michael Buro, et al. Puppet Search: Enhancing Scripted Behavior by Look-Ahead Search with Applications to Real-Time Strategy Games, 2021, AIIDE.
[38] David W. Aha, et al. Learning to Win: Case-Based Plan Selection in a Real-Time Strategy Game, 2005, Künstliche Intell.
[39] Alan Fern, et al. Extending Online Planning for Resource Production in Real-Time Strategy Games with Search, 2007.
[40] Michael Buro, et al. Hierarchical Portfolio Search: Prismata's Robust AI Architecture for Games with Large Search Spaces, 2015, AIIDE.
[41] Luke Perkins, et al. Terrain Analysis in Real-Time Strategy Games: An Integrated Approach to Choke Point Detection and Region Decomposition, 2010, AIIDE.
[42] Michael Buro, et al. Alpha-Beta Pruning for Games with Simultaneous Moves, 2012, AAAI.
[43] Thomas G. Dietterich, et al. Learning Probabilistic Behavior Models in Real-Time Strategy Games, 2011, AIIDE.
[44] Michael Buro, et al. On the Complexity of Two-Player Attrition Games Played on Graphs, 2010, AIIDE.
[45] Santiago Ontañón, et al. Adversarial Hierarchical-Task Network Planning for Complex Real-Time Games, 2015, IJCAI.
[46] Michael Buro, et al. Concurrent Action Execution with Shared Fluents, 2007, AAAI.
[47] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[48] Alan Fern, et al. Online Planning for Resource Production in Real-Time Strategy Games, 2007, ICAPS.
[49] Michael Buro, et al. Fast Heuristic Search for RTS Game Combat Scenarios, 2012, AIIDE.
[50] Pierre Bessière, et al. A Bayesian Model for Plan Recognition in RTS Games Applied to StarCraft, 2011, AIIDE.
[51] Michael Buro, et al. Predicting Army Combat Outcomes in StarCraft, 2013, AIIDE.
[52] Michael Buro, et al. On the Development of a Free RTS Game Engine, 2005.
[53] Carlos Roberto Lopes, et al. Planning for resource production in real-time strategy games based on partial order planning, search and learning, 2010, 2010 IEEE International Conference on Systems, Man and Cybernetics.
[54] Ian D. Watson, et al. Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft:Broodwar, 2012, 2012 IEEE Conference on Computational Intelligence and Games (CIG).
[55] Sushil J. Louis, et al. Evolving coordinated spatial tactics for autonomous entities using influence maps, 2009, 2009 IEEE Symposium on Computational Intelligence and Games.
[56] Sushil J. Louis, et al. Using co-evolved RTS opponents to teach spatial tactics, 2010, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games.
[57] Michael Buro, et al. Real-Time Strategy Games: A New AI Research Challenge, 2003, IJCAI.
[58] Arnav Jhala, et al. Reactive planning idioms for multi-scale game AI, 2010, Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games.
[59] Michael Buro, et al. Build Order Optimization in StarCraft, 2011, AIIDE.
[60] Michael Buro, et al. Using Lanchester Attrition Laws for Combat Prediction in StarCraft, 2021, AIIDE.
[61] Nicola Beume, et al. Intelligent moving of groups in real-time strategy games, 2008, 2008 IEEE Symposium on Computational Intelligence and Games.
[62] Alex M. Andrew, et al. Robot Learning, edited by Jonathan H. Connell and Sridhar Mahadevan, Kluwer, Boston, 1993/1997, ISBN 0-7923-9365-1, 1999, Robotica.
[63] Vincent Corruble, et al. Designing a Reinforcement Learning-based Adaptive AI for Large-Scale Strategy Games, 2006, AIIDE.
[64] Michael Buro, et al. Portfolio greedy search and simulation for large-scale combat in StarCraft, 2013, 2013 IEEE Conference on Computational Intelligence in Games (CIG).
[65] Rémi Coulom, et al. Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search, 2006, Computers and Games.
[66] Alan Fern, et al. UCT for Tactical Assault Planning in Real-Time Strategy Games, 2009, IJCAI.
[67] J. Nash. Equilibrium Points in N-Person Games, 1950, Proceedings of the National Academy of Sciences of the United States of America.
[68] Craig W. Reynolds. Steering Behaviors For Autonomous Characters, 1999.
[69] Arnav Jhala, et al. A Particle Model for State Estimation in Real-Time Strategy Games, 2011, AIIDE.
[70] Santiago Ontañón, et al. Automatic Learning of Combat Models for RTS Games, 2015, AIIDE.
[71] Santiago Ontañón, et al. A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft, 2013, IEEE Transactions on Computational Intelligence and AI in Games.
[72] Stefan J. Johansson, et al. Dealing with fog of war in a Real Time Strategy game environment, 2008, 2008 IEEE Symposium on Computational Intelligence and Games.
[73] Csaba Szepesvári, et al. Bandit Based Monte-Carlo Planning, 2006, ECML.