Reinforcement learning for penalty avoiding policy making

Reinforcement learning is a kind of machine learning that aims to adapt an agent to a given environment using a reward as its only clue. In general, the purpose of a reinforcement learning system is to acquire an optimal policy that maximizes the expected reward per action. However, this objective is not always appropriate for every environment. In particular, when reinforcement learning is applied to engineering problems, we expect the agent to avoid all penalties. In Markov decision processes, we call a rule a penalty rule if and only if it incurs a penalty, or if it can transition to a penalty state without contributing to obtaining any reward. After suppressing all penalty rules, we aim to construct a rational policy whose expected reward per action is greater than zero. We propose the Penalty Avoiding Rational Policy Making algorithm, which suppresses every penalty as stably as possible while obtaining a reward constantly. Its effectiveness is shown by applying the algorithm to tic-tac-toe.
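
The notion of a penalty rule described above can be illustrated with a minimal sketch. It assumes a small deterministic MDP whose learned model is given by hypothetical functions transition, reward, and penalty over state-action pairs; these names, and the fixed-point formulation, are illustrative assumptions rather than the paper's own implementation.

```python
def find_penalty_rules(states, actions, transition, reward, penalty):
    """Collect penalty rules (state-action pairs) to suppress.

    Sketch of the definition in the abstract: a rule is a penalty rule if it
    directly incurs a penalty, or if it yields no reward and leads to a
    penalty state; a penalty state is one in which every rule is a penalty
    rule.
    """
    # Rules that directly incur a penalty are penalty rules.
    penalty_rules = {(s, a) for s in states for a in actions if penalty(s, a)}
    while True:
        # A penalty state has only penalty rules left.
        penalty_states = {s for s in states
                          if all((s, a) in penalty_rules for a in actions)}
        # A rule that yields no reward and transitions to a penalty state
        # is also a penalty rule.
        new_rules = {(s, a) for s in states for a in actions
                     if (s, a) not in penalty_rules
                     and reward(s, a) <= 0
                     and transition(s, a) in penalty_states}
        if not new_rules:
            return penalty_rules
        penalty_rules |= new_rules


# Toy three-state chain: every rule in state 2 is penalized, so the rule
# (1, "right"), which leads only to state 2 without reward, is also suppressed.
states = [0, 1, 2]
actions = ["left", "right"]
transition = lambda s, a: max(s - 1, 0) if a == "left" else min(s + 1, 2)
reward = lambda s, a: 1 if (s, a) == (0, "left") else 0
penalty = lambda s, a: s == 2
print(find_penalty_rules(states, actions, transition, reward, penalty))
```

Once the penalty rules are suppressed, a rational policy is selected only among the remaining rules, so the agent keeps its expected reward per action above zero while avoiding penalties.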