Computational Intelligence in Mind Games
[1] Claude E. Shannon. Programming a computer for playing chess, 1950.
[2] Allen Newell, et al. Chess-Playing Programs and the Problem of Complexity, 1958, IBM J. Res. Dev.
[3] Marvin Minsky. Steps toward Artificial Intelligence, 1961, Proceedings of the IRE.
[4] David Elkind, et al. Learning: An Introduction, 1968.
[5] Albert Lindsey Zobrist. Feature extraction and representation for pattern recognition and the game of go, 1970.
[6] A. D. de Groot. Thought and Choice in Chess, 1978.
[7] George C. Stockman, et al. A Minimax Algorithm Better than Alpha-Beta?, 1979, Artif. Intell.
[8] Hans J. Berliner, et al. The B* Tree Search Algorithm: A Best-First Proof Procedure, 1979, Artif. Intell.
[9] Ivan Bratko, et al. The Bratko-Kopec Experiment: A Comparison of Human and Computer Performance in Chess, 1982.
[10] Alexander Reinefeld, et al. An Improvement to the Scout Tree Search Algorithm, 1983, J. Int. Comput. Games Assoc.
[11] R. M. Hyatt, et al. Cray Blitz, 1986.
[12] David A. McAllester. Conspiracy Numbers for Min-Max Search, 1988, Artif. Intell.
[13] Gerald Tesauro, et al. Neurogammon Wins Computer Olympiad, 1989, Neural Computation.
[14] Jonathan Schaeffer, et al. The History Heuristic and Alpha-Beta Search Enhancements in Practice, 1989, IEEE Trans. Pattern Anal. Mach. Intell.
[15] Murray Campbell, et al. Singular Extensions: Adding Selectivity to Brute-Force Searching, 1990, Artif. Intell.
[16] Jonathan Schaeffer, et al. Computers, Chess, and Cognition, 1990, Springer New York.
[17] Donald F. Beal, et al. A Generalised Quiescence Search Algorithm, 1990, Artif. Intell.
[18] Robert Levinson, et al. Adaptive Pattern-Oriented Chess, 1991, AAAI Conference on Artificial Intelligence.
[19] William Tunstall-Pedoe, et al. Genetic Algorithms Optimizing Evaluation Functions, 1991, J. Int. Comput. Games Assoc.
[20] Paul E. Utgoff, et al. Automatic Feature Generation for Problem Solving Systems, 1992, ML.
[21] Gerald Tesauro, et al. Practical Issues in Temporal Difference Learning, 1992, Mach. Learn.
[22] Jonathan Schaeffer, et al. A World Championship Caliber Checkers Program, 1992, Artif. Intell.
[23] Michael Gherrity, et al. A game-learning machine, 1993.
[24] Sebastian Thrun, et al. Explanation Based Learning: A Comparison of Symbolic and Neural Network Approaches, 1993, ICML.
[25] Terrence J. Sejnowski, et al. Temporal Difference Learning of Position Evaluation in the Game of Go, 1993, NIPS.
[26] Gerald Tesauro, et al. TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play, 1994, Neural Computation.
[27] Sebastian Thrun, et al. Learning to Play the Game of Chess, 1994, NIPS.
[28] Susan L. Epstein. Identifying the Right Reasons: Learning to Filter Decision Makers, 1994.
[29] Xin Yao, et al. On Evolving Robust Strategies for Iterated Prisoner's Dilemma, 1993, Evo Workshops.
[30] Michael Buro, et al. ProbCut: An Effective Selective Extension of the α-β Algorithm, 1995, J. Int. Comput. Games Assoc.
[31] Sebastian Thrun, et al. Learning One More Thing, 1995, IJCAI.
[32] Gerald Tesauro, et al. Temporal difference learning and TD-Gammon, 1995, CACM.
[33] Martin Müller, et al. Computer go as a sum of local games: an application of combinatorial game theory, 1995.
[34] Risto Miikkulainen, et al. Discovering Complex Othello Strategies Through Evolutionary Neural Networks, 1995.
[35] Sebastian Thrun, et al. Explanation-based neural network learning: a lifelong learning approach, 1995.
[36] Jonathan Schaeffer, et al. CHINOOK: The World Man-Machine Checkers Champion, 1996, AI Mag.
[37] Susan L. Epstein, et al. Pattern-Based Learning and Spatially Oriented Concept Formation in a Multi-Agent, Decision-Making Expert, 1996, Comput. Intell.
[38] Andrew W. Moore, et al. Reinforcement Learning: A Survey, 1996, J. Artif. Intell. Res.
[39] Johannes Fürnkranz, et al. Machine Learning in Computer Chess: The Next Generation, 1996, J. Int. Comput. Games Assoc.
[40] Jonathan Schaeffer, et al. Best-First Fixed-Depth Minimax Algorithms, 1996, J. Int. Comput. Games Assoc.
[41] Jonathan Schaeffer, et al. Exploiting Graph Properties of Game Trees, 1996, AAAI/IAAI, Vol. 1.
[42] Jordan B. Pollack, et al. Coevolution of a Backgammon Player, 1996.
[43] Donald F. Beal, et al. Learning Piece Values Using Temporal Differences, 1997, J. Int. Comput. Games Assoc.
[44] Jonathan Schaeffer, et al. One Jump Ahead: Challenging Human Supremacy in Checkers, 1997, J. Int. Comput. Games Assoc.
[45] Andrew G. Barto, et al. Reinforcement learning, 1998.
[46] Shin Ishii, et al. Strategy Acquisition for the Game "Othello" Based on Reinforcement Learning, 1999, ICONIP.
[47] Michael Buro, et al. From Simple Features to Sophisticated Evaluation Functions, 1998, Computers and Games.
[48] Andrew Tridgell, et al. Experiments in Parameter Learning Using Temporal Differences, 1998, J. Int. Comput. Games Assoc.
[49] Andrew Tridgell, et al. KnightCap: A Chess Program That Learns by Combining TD(lambda) with Game-Tree Search, 1998, ICML.
[50] David B. Fogel, et al. Evolving neural networks to play checkers without relying on expert knowledge, 1999, IEEE Trans. Neural Networks.
[51] Luigi Barone, et al. An adaptive learning model for simplified poker using evolutionary algorithms, 1999, Proceedings of the 1999 Congress on Evolutionary Computation (CEC99).
[52] Igor Aleksander. Neural networks: Evolutionary checkers, 1999, Nature.
[53] Ernst A. Heinz. Adaptive Null-Move Pruning, 1999, J. Int. Comput. Games Assoc.
[54] Michael Buro. Toward Opening Book Learning, 1999, J. Int. Comput. Games Assoc.
[55] David B. Fogel, et al. Evolution, neural networks, games, and intelligence, 1999, Proc. IEEE.
[56] Matthew L. Ginsberg, et al. GIB: Steps Toward an Expert-Level Bridge-Playing Program, 1999, IJCAI.
[57] Darse Billings, et al. Thoughts on RoShamBo, 2000, J. Int. Comput. Games Assoc.
[58] R. Lyndon While, et al. Adaptive Learning for Poker, 2000, GECCO.
[59] David B. Fogel, et al. Anaconda defeats Hoyle 6-0: a case study competing an evolved checkers program against commercially available software, 2000, Proceedings of the 2000 Congress on Evolutionary Computation (CEC00).
[60] Arthur L. Samuel. Some studies in machine learning using the game of checkers, 1959, IBM J. Res. Dev.
[61] Boris Stilman. Linguistic Geometry: From Search to Construction, 2000.
[62] Boris Stilman. From Search to Construction, 2000.
[63] Kieran Greer. Computer chess move-ordering schemes using move influence, 2000, Artif. Intell.
[64] Sung-Bae Cho, et al. Exploiting coalition in co-evolutionary learning, 2000, Proceedings of the 2000 Congress on Evolutionary Computation (CEC00).
[65] Graham Kendall, et al. An evolutionary approach for the tuning of a chess evaluation function using population dynamics, 2001, Proceedings of the 2001 Congress on Evolutionary Computation.
[66] D. B. Fogel, et al. Evolving a neural network to play checkers without human expertise, 2001.
[67] Terrence J. Sejnowski, et al. Learning to evaluate Go positions via temporal difference methods, 2001.
[68] Bruno Bouzy, et al. Computer Go: An AI oriented survey, 2001, Artif. Intell.
[69] Johannes Fürnkranz, et al. Machines that learn to play games, 2001.
[70] Jonathan Schaeffer, et al. Temporal Difference Learning Applied to a High-Performance Game-Playing Program, 2001, IJCAI.
[71] Matthew L. Ginsberg, et al. GIB: Imperfect Information in a Computationally Challenging Game, 2001, J. Artif. Intell. Res.
[72] Paul E. Utgoff, et al. Feature construction for game playing, 2001.
[73] Jonathan Schaeffer, et al. The challenge of poker, 2002, Artif. Intell.
[74] Murray Campbell, et al. Deep Blue, 2002, Artif. Intell.
[75] Martin Müller, et al. Computer Go, 2002, Artif. Intell.
[76] Brian Sheppard, et al. World-championship-caliber Scrabble, 2002, Artif. Intell.
[77] Lokendra Shastri, et al. Incremental class learning approach and its application to handwritten digit recognition, 2002, Inf. Sci.
[78] Michael Buro, et al. Improving heuristic mini-max search by supervised learning, 2002, Artif. Intell.
[79] Markus Enzenberger, et al. Evaluation in Go by a Neural Network using Soft Segmentation, 2003, ACG.
[80] Jugal K. Kalita, et al. The Significance of Temporal-Difference Learning in Self-Play Training: TD-Rummy versus EVO-rummy, 2003, ICML.
[81] Dap Hartmann. Behind Deep Blue, 2003.
[82] Michael Buro, et al. Evaluation Function Tuning via Ordinal Correlation, 2003, ACG.
[83] Daniel Osman, et al. Comparison of TDLeaf(lambda) and TD(lambda) Learning in Game Playing Domain, 2004, ICONIP.
[84] Rich Caruana. Multitask Learning, 1997, Machine Learning.
[85] Jacek Mandziuk, et al. Artificial Neural Networks for Solving Double Dummy Bridge Problems, 2004, ICAISC.
[86] D. B. Fogel, et al. A self-learning evolutionary chess program, 2004, Proceedings of the IEEE.
[87] Daniel Osman, et al. Temporal Difference Approach to Playing Give-Away Checkers, 2004, ICAISC.
[88] Andrew Tridgell, et al. Learning to Play Chess Using Temporal Differences, 2000, Machine Learning.
[89] Ariel Arbiser. Towards the unification of intuitive and formal game concepts with applications to computer chess, 2005, DiGRA Conference.
[90] Lakhmi C. Jain, et al. Computational Intelligence in Games, 2005, IEEE Transactions on Neural Networks.
[91] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[92] Richard S. Sutton, et al. Learning to predict by the methods of temporal differences, 1988, Machine Learning.
[93] Simon M. Lucas, et al. Coevolution versus self-play temporal difference learning for acquiring position evaluation in small-board go, 2005, IEEE Transactions on Evolutionary Computation.
[94] Jacek Mandziuk, et al. Evolution of Heuristics for Give-Away Checkers, 2005, ICANN.
[95] Jacek Mandziuk, et al. Neural Networks and the Estimation of Hands' Strength in Contract Bridge, 2006, ICAISC.
[96] Jacek Mandziuk, et al. Evolutionary-based heuristic generators for checkers and give-away checkers, 2007, Expert Syst. J. Knowl. Eng.