The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget

In many applications, it is desirable to extract only the relevant information from complex input data, which requires deciding which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem: maintain an optimal tradeoff between compression (throwing away irrelevant input information) and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, the compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which, for each example, estimates the value of the privileged information before seeing it, i.e., based only on the standard input, and then stochastically decides whether or not to access the privileged input. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.
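The gating mechanism described above can be sketched in a few lines. The following is a minimal illustrative implementation, not the paper's exact method: the gate weights, the linear encoder, and the unit-variance posterior are all hypothetical placeholders. The key structural point it shows is that the access decision depends only on the standard input (the state), and that when the privileged input is not accessed, the bottleneck variable is drawn from a fixed prior instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bandwidth_bottleneck(state, privileged, w_gate, encode, dim):
    """Decide from the state alone whether to access the privileged
    input; if access is denied, sample the bottleneck variable z
    from the prior N(0, I) instead of the posterior q(z | privileged)."""
    p_access = sigmoid(w_gate @ state)      # estimated value of access, from state only
    if rng.random() < p_access:             # stochastic access decision
        mu, log_sigma = encode(privileged)  # parameters of q(z | privileged input)
        z = mu + np.exp(log_sigma) * rng.standard_normal(dim)
        accessed = True
    else:
        z = rng.standard_normal(dim)        # prior sample: no privileged access
        accessed = False
    return z, accessed

# Toy usage with a hypothetical linear encoder and random gate weights.
dim = 4
state = rng.standard_normal(8)
priv = rng.standard_normal(8)
W_mu = rng.standard_normal((dim, 8)) * 0.1
w_gate = rng.standard_normal(8) * 0.1

def encode(x):
    return W_mu @ x, np.zeros(dim)  # unit variance for simplicity

z, accessed = bandwidth_bottleneck(state, priv, w_gate, encode, dim)
print(z.shape, accessed)
```

In the full method the gate and encoder would be trained networks, and the hard access decision would need a gradient estimator (e.g., a straight-through or score-function estimator) to be learned end to end; this sketch only illustrates the forward pass.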
