Identifying Reward Functions using Anchor Actions

We propose a reward function estimation framework for inverse reinforcement learning with deep energy-based policies. We name our method PQR, as it sequentially estimates the Policy, the $Q$-function, and the Reward function. PQR does not assume that the reward depends solely on the state; instead, it allows the reward to depend on the chosen action as well, and it accommodates stochastic state transitions. To achieve identification, we assume the existence of one anchor action whose reward is known, typically the action of doing nothing, which yields no reward. We present estimators and algorithms for each step of the PQR method. When the environment transition is known, we prove that the PQR reward estimator uniquely recovers the true reward. When transitions are unknown, we bound the estimation error of PQR. Finally, we demonstrate the performance of PQR on synthetic and real-world datasets.
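The identification logic behind PQR can be illustrated in a simplified setting. The sketch below is a minimal tabular version, assuming a finite MDP with known transition matrices and an energy-based (soft-optimal) demonstration policy; the function name `pqr_tabular` and the choice of action index 0 as the anchor are illustrative assumptions, not the paper's deep, sample-based estimators.

```python
import numpy as np

def pqr_tabular(pi, P, gamma, anchor=0):
    """Recover the Q-function and reward from a soft-optimal policy.

    A tabular sketch of the PQR identification idea, assuming the
    anchor action's reward is identically zero and transitions are known.

    pi     : (S, A) array, pi[s, a] = probability of action a in state s
    P      : (A, S, S) array, P[a, s, s'] = transition probability
    gamma  : discount factor in (0, 1)
    anchor : index of the action whose reward is known to be zero
    """
    S, A = pi.shape
    log_pi = np.log(pi)  # energy-based policies have full support

    # Under a soft-optimal policy, log pi(a|s) = Q(s,a) - V(s) with
    # V(s) = log sum_a exp Q(s,a).  The anchor action pins V down:
    # Q(s, anchor) = gamma * E[V(s') | s, anchor], so V solves the
    # linear system (I - gamma * P_anchor) V = -log pi(anchor | .).
    V = np.linalg.solve(np.eye(S) - gamma * P[anchor], -log_pi[:, anchor])

    # Q-function recovered from the policy and the soft value.
    Q = log_pi + V[:, None]

    # Reward from the soft Bellman equation r(s,a) = Q(s,a) - gamma * E[V(s')].
    r = Q - gamma * np.stack([P[a] @ V for a in range(A)], axis=1)
    return Q, r
```

On a synthetic MDP whose demonstration policy is generated by soft value iteration, this recovery reproduces the true reward (up to numerical error) under the zero-reward anchor convention, which is the tabular analogue of the uniqueness result stated above.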
