Learning the Persistence of Actions in Reactive Control Rules

Abstract

This paper explores the effect of explicitly searching for the persistence of each decision in a time-dependent sequential decision task. In prior studies, Grefenstette et al. show the effectiveness of SAMUEL, a genetic algorithm-based system, on a simulated pursuit problem in which an agent learns to evade a pursuing predator. In their work, the agent applies a control action at each time step. This paper examines a reformulation of the problem: the agent learns not only the level of response of a control action, but also how long to apply that action. The results show that when solving a time-dependent sequential decision problem, it is appropriate to choose a representation of the state space that compresses time information. By compressing time information, the critical events in the decision sequence become apparent.
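To make the reformulation concrete, the sketch below illustrates one plausible encoding of reactive control rules with persistence. All names, sensor conditions, and dynamics here are illustrative assumptions, not the paper's actual SAMUEL rule language: each rule maps a sensed condition to both a control action and a persistence, the number of time steps for which that action is held before the agent consults its rules again.

```python
class Rule:
    """A hypothetical reactive rule: condition -> (action, persistence)."""

    def __init__(self, bearing_range, turn, persistence):
        self.bearing_range = bearing_range  # (low, high) relative bearing to predator, degrees
        self.turn = turn                    # turn rate applied at each held step, degrees
        self.persistence = persistence      # number of time steps to hold the action

    def matches(self, bearing):
        lo, hi = self.bearing_range
        return lo <= bearing < hi


def select_action(rules, bearing):
    """Return (turn, persistence) from the first matching rule, else a default."""
    for rule in rules:
        if rule.matches(bearing):
            return rule.turn, rule.persistence
    return 0.0, 1  # default: go straight, re-decide on the next step


def run_episode(rules, bearings):
    """Apply persistent actions over a fixed sequence of sensed bearings.

    Returns the turn actually applied at each time step and the number of
    decision points. With persistence > 1, decisions are made only when the
    previous action expires, so the decision sequence is compressed relative
    to the per-step formulation.
    """
    applied, decisions, t = [], 0, 0
    while t < len(bearings):
        turn, persistence = select_action(rules, bearings[t])
        decisions += 1
        for _ in range(persistence):
            if t >= len(bearings):
                break
            applied.append(turn)
            t += 1
    return applied, decisions
```

Under this encoding, an episode of many time steps collapses into a handful of decision points, which is the sense in which time information is compressed: the learner searches over the few moments where the action changes rather than over every step.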