Comparison of Loss Functions for Training of Deep Neural Networks in Shogi
[1] Michael Buro, et al. From Simple Features to Sophisticated Evaluation Functions, 1998, Computers and Games.
[2] Murray Campbell, et al. Deep Blue, 2002, Artif. Intell.
[3] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[4] Tomoyuki Kaneko, et al. Large-Scale Optimization for Evaluation Functions with Minimax Search, 2014, J. Artif. Intell. Res.
[5] Demis Hassabis, et al. Mastering the Game of Go with Deep Neural Networks and Tree Search, 2016, Nature.
[6] Shane Legg, et al. Human-Level Control through Deep Reinforcement Learning, 2015, Nature.
[7] Demis Hassabis, et al. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, 2017, ArXiv.
[8] Tomoyuki Kaneko, et al. Building Evaluation Functions for Chess and Shogi with Uniformity Regularization Networks, 2018, IEEE Conference on Computational Intelligence and Games (CIG).
[9] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] A. J. Lawrance, et al. An Exponential Moving-Average Sequence and Point Process (EMA1), 1977, Journal of Applied Probability.
[11] Tomoyuki Kaneko, et al. Heterogeneous Multi-task Learning of Evaluation Functions for Chess and Shogi, 2018, ICONIP.
[12] William H. Press, et al. Numerical Recipes, 1990.
[13] W. Press, et al. Savitzky-Golay Smoothing Filters, 2022.