Hutter's optimal, universal, but incomputable AIXI agent models the environment as an initially unknown probability distribution-computing program. Once the latter is found through (incomputable) exhaustive search, classical planning yields an optimal policy. Here we reverse the roles of agent and environment by assuming a computable optimal policy realizable as a program mapping histories to actions. This assumption is powerful for two reasons: (1) the environment need not be probabilistically computable, which allows for dealing with truly stochastic environments, and (2) all candidate policies are computable. In stochastic settings, our novel method Optimal Direct Policy Search (ODPS) identifies the best policy by direct universal search in the space of all possible computable policies. Unlike AIXI, it is computable, model-free, and does not require planning. We show that ODPS is optimal in the sense that its reward converges to the reward of the optimal policy in a very broad class of partially observable stochastic environments.
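To make the search idea concrete, here is a minimal Python sketch of direct policy search in the spirit the abstract describes, not the paper's actual construction: a hand-written candidate set stands in for an enumeration of all computable policies, and each candidate's expected reward is estimated by Monte Carlo rollouts with a growing per-epoch budget. All names (odps_sketch, toy_env_step, the schedule constants) are illustrative assumptions.

```python
import random

# Hypothetical toy setup: "policies" are plain Python callables standing in
# for enumerable programs mapping histories to actions; a real universal
# search would enumerate program strings for a universal machine instead.

def policy_always_0(history):
    return 0

def policy_always_1(history):
    return 1

def policy_alternate(history):
    return len(history) % 2

CANDIDATE_POLICIES = [policy_always_0, policy_always_1, policy_alternate]

def toy_env_step(action, rng):
    """A stochastic toy environment: action 1 is better in expectation."""
    reward = rng.random() + (0.5 if action == 1 else 0.0)
    return 0, reward  # constant observation, noisy reward

def rollout(policy, env_step, horizon, rng):
    """Run one episode of fixed horizon, return its total reward."""
    history, total = [], 0.0
    for _ in range(horizon):
        action = policy(tuple(history))
        obs, reward = env_step(action, rng)
        history.append((action, obs))
        total += reward
    return total

def odps_sketch(num_epochs=10, rng=None):
    """Illustrative schedule: widen the candidate set and grow the
    evaluation budget each epoch, keeping the empirically best policy."""
    rng = rng or random.Random(0)
    best = None
    for n in range(1, num_epochs + 1):
        num_episodes, horizon = 10 * n, 5 * n  # assumed schedule, for illustration
        scores = []
        for policy in CANDIDATE_POLICIES[: min(n, len(CANDIDATE_POLICIES))]:
            avg = sum(rollout(policy, toy_env_step, horizon, rng)
                      for _ in range(num_episodes)) / num_episodes
            scores.append((avg, policy))
        best = max(scores, key=lambda s: s[0])
        # An exploitation phase running the current best policy would go
        # here; it is omitted in this sketch.
    return best

if __name__ == "__main__":
    avg, policy = odps_sketch()
    print(policy.__name__, avg)
```

The growing per-epoch budget mirrors the convergence claim in the abstract: as evaluation effort increases, each candidate's empirical average reward concentrates around its true expected reward, so the retained policy's reward approaches that of the best candidate in the search space.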