Deep Q networks for visual fighting game AI
Recently, the introduction of vision-based deep Q-learning demonstrated successful results on Atari games and the Visual Doom AI platform. Unlike those settings, a fighting game involves two players and a relatively large action set. In this study, we propose using deep Q-networks (DQN) for visual fighting game AI competitions. The number of actions was reduced to 11, and the sensitivity of several control parameters was tested on the visual fighting platform. The experimental results show the potential of the DQN approach for two-player real-time fighting games.
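The core ideas above (a Q-network over visual observations, a reduced set of 11 actions, epsilon-greedy control, one-step Q-learning updates) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the paper uses a convolutional DQN on game frames, whereas here a linear Q-network on a flattened observation is assumed for brevity, and all layer sizes and hyperparameters are assumptions.

```python
import numpy as np

N_ACTIONS = 11  # reduced action set, as described in the abstract

class TinyQNet:
    """Minimal linear Q-network sketch (the paper's agent is convolutional;
    this stand-in only illustrates the Q-value interface)."""
    def __init__(self, obs_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(obs_dim, n_actions))
        self.b = np.zeros(n_actions)

    def q_values(self, obs):
        # Q(s, a) for all 11 actions given a flattened observation
        return obs @ self.W + self.b

def epsilon_greedy(q, epsilon, rng):
    # Explore with probability epsilon, otherwise exploit argmax Q
    if rng.random() < epsilon:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

def td_update(net, obs, action, reward, next_obs, gamma=0.99, lr=1e-2):
    # One-step Q-learning target: r + gamma * max_a' Q(s', a')
    target = reward + gamma * np.max(net.q_values(next_obs))
    td_error = target - net.q_values(obs)[action]
    # Gradient step for the linear parameterization
    net.W[:, action] += lr * td_error * obs
    net.b[action] += lr * td_error
    return td_error
```

A full DQN would add experience replay and a target network on top of this update rule; the sketch shows only the action-selection and temporal-difference core.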