Bob and Alice Go to a Bar: Reasoning About Future With Probabilistic Programs

It is well known that reinforcement learning can be cast as inference in an appropriate probabilistic model. However, this commonly involves introducing a distribution over agent trajectories with probabilities proportional to exponentiated rewards. In this work, we formulate reinforcement learning as Bayesian inference without resorting to rewards, and show that rewards are derived from the agent's preferences, rather than the other way around. We argue that agent preferences should be specified stochastically rather than deterministically. Reinforcement learning via inference with stochastic preferences naturally describes agent behaviors, does not require introducing rewards or exponential weighting of trajectories, and allows reasoning about agents on the solid foundation of Bayesian statistics. Stochastic conditioning, a probabilistic programming paradigm for conditioning models on distributions rather than values, is the formalism behind agents with probabilistic preferences. We demonstrate our approach on case studies using both a two-agent coordination game and a single agent acting in a noisy environment, showing that despite superficial differences, both cases can be modelled and reasoned about based on the same principles.
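
To make the idea concrete, the following is a minimal sketch, not code from the paper: the bar-meeting environment, the 0.9/0.1 preference, the 0.8 success rate, and the importance sampler are all assumptions chosen for illustration. It shows a policy inferred by stochastic conditioning, where a policy is weighted by the expected log-probability, under the preference distribution, of the outcomes that the policy induces.

    # Minimal sketch of reinforcement learning as inference with stochastic
    # conditioning (illustrative only; not the paper's implementation).
    # A single agent decides whether to go to the bar; its preference is given
    # as a distribution over outcomes, and the policy parameter is inferred by
    # conditioning the induced outcome distribution on that preference.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stochastic preference: the agent wants to "meet" 90% of the time.
    preference = {"meet": 0.9, "miss": 0.1}

    def outcome_probs(p_go):
        # Toy noisy environment (assumed): going to the bar leads to a meeting
        # 80% of the time; staying home never does.
        p_meet = 0.8 * p_go
        return {"meet": p_meet, "miss": 1.0 - p_meet}

    def log_weight(p_go):
        # Stochastic conditioning: instead of scoring a single observed outcome,
        # score the policy by the expected log-likelihood of an outcome drawn
        # from the preference distribution.
        probs = outcome_probs(p_go)
        return sum(preference[o] * np.log(probs[o] + 1e-12) for o in preference)

    # Importance sampling over a uniform prior on the policy parameter p_go.
    samples = rng.uniform(0.0, 1.0, size=10_000)
    log_w = np.array([log_weight(p) for p in samples])
    w = np.exp(log_w - log_w.max())
    posterior_mean = float(np.sum(w * samples) / np.sum(w))
    print(f"posterior mean of p_go: {posterior_mean:.3f}")

With these assumed numbers, the posterior concentrates on going to the bar, since the induced outcome distribution then best matches the stated preference; no reward function or exponentiated-reward weighting of trajectories is introduced.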
