Reinforcement learning for penalty avoiding policy making
Reinforcement learning is a kind of machine learning that adapts an agent to a given environment using reward as its only clue. In general, the goal of a reinforcement learning system is to acquire an optimal policy that maximizes the expected reward per action. However, this objective is not appropriate for every environment; in engineering applications in particular, we expect the agent to avoid all penalties. In Markov decision processes, we call a state-action rule a penalty rule if and only if it incurs a penalty directly, or it can transit to a penalty state without contributing to any reward. After suppressing all penalty rules, we aim to construct a rational policy whose expected reward per action is greater than zero. We propose the penalty avoiding rational policy making algorithm, which suppresses every penalty as stably as possible while obtaining reward constantly. Its effectiveness is shown by applying it to tic-tac-toe.
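The penalty-rule definition above is recursive: a rule is a penalty rule if it incurs a penalty directly, or if it can transit to a penalty state, i.e. a state in which every available rule is itself a penalty rule. A minimal sketch of this suppression step, assuming a small tabular model of the environment (the function name, data layout, and example MDP are illustrative, not from the paper):

```python
def find_penalty_rules(actions, transition, direct_penalty):
    """Iteratively mark penalty rules in a tabular MDP model.

    actions: dict state -> list of available actions
    transition: dict (state, action) -> list of possible next states
    direct_penalty: set of (state, action) rules that incur a penalty
    """
    penalty = set(direct_penalty)
    changed = True
    while changed:
        changed = False
        # A penalty state is one where every available rule is a penalty rule.
        penalty_states = {s for s in actions
                          if all((s, a) in penalty for a in actions[s])}
        for s in actions:
            for a in actions[s]:
                rule = (s, a)
                # A rule that can transit to a penalty state is also penalty.
                if rule not in penalty and any(
                        s2 in penalty_states for s2 in transition[rule]):
                    penalty.add(rule)
                    changed = True
    return penalty


# Toy example: state 2 is a trap whose only rule is penalized directly;
# the rule (1, "go") leads into it and is therefore also suppressed.
actions = {0: ["go", "stay"], 1: ["go", "safe"], 2: ["x"]}
transition = {(0, "go"): [1], (0, "stay"): [0],
              (1, "go"): [2], (1, "safe"): [0],
              (2, "x"): [2]}
suppressed = find_penalty_rules(actions, transition, {(2, "x")})
```

A rational policy would then be formed only from the remaining rules, choosing among those with positive expected reward per action.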