Acquiring and Exploiting Rich Causal Models for Robust Decision Making

Abstract: Our project has made fundamental contributions to the understanding of robust decision making in human beings and machines through an intensive examination of how agents can learn rich causal models of the world and use those models to make decisions. We report progress in eight key areas: 1) Significant advances in building rich models using probabilistic programming. 2) New Bayesian nonparametric models for learning dynamical systems. 3) A new Bayesian model that learns optimal policies by combining expert demonstrations of optimal behavior with data gathered by non-optimal exploration. 4) New policy learning methods based on probabilistic search. 5) A new, provably efficient policy learning algorithm for Bayesian reinforcement learning. 6) New algorithms for hierarchical planning. 7) New transfer learning models that use hierarchical knowledge to transfer abstract properties of domains, such as general notions of consistency, determinism, generalizability, or clusterability. 8) A new foundation for compositional transfer of policy fragments.
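
The areas above are only summarized here. As a minimal, purely illustrative sketch (not the project's own code; the toy model and all names are hypothetical), the Python snippet below shows the probabilistic-programming idea behind item 1: a causal model is written as an ordinary generative program, and a generic inference algorithm (here, simple likelihood weighting) is used to reason backward from observed evidence to its latent causes.

import random

def causal_model():
    """Toy causal model: hidden 'rain' influences 'sprinkler' use, and both
    influence whether the lawn is wet. Returns the sampled latent causes and
    the likelihood of the observed evidence (a wet lawn) under this run."""
    rain = random.random() < 0.2
    sprinkler = random.random() < (0.01 if rain else 0.4)
    p_wet = 0.99 if (rain and sprinkler) else 0.9 if rain else 0.8 if sprinkler else 0.05
    return {"rain": rain, "sprinkler": sprinkler}, p_wet

def likelihood_weighting(model, num_samples=20000):
    """Generic inference over the program: run it many times and weight each
    run by how well it explains the evidence; return P(rain | evidence)."""
    weighted, total = 0.0, 0.0
    for _ in range(num_samples):
        latents, weight = model()
        total += weight
        if latents["rain"]:
            weighted += weight
    return weighted / total

if __name__ == "__main__":
    # Posterior probability of rain given that the lawn is observed to be wet
    # (analytically about 0.39 for the numbers above).
    print("P(rain | wet) ~=", round(likelihood_weighting(causal_model), 3))

The same pattern scales to the richer models discussed in the report: the generative program encodes the causal structure, and inference is delegated to a general-purpose algorithm rather than derived by hand for each model.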