We define the relevant information in a signal $x \in X$ as the information that this signal provides about another signal $y \in Y$. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal $x$ requires more than just predicting $y$; it also requires specifying which features of $X$ play a role in the prediction. We formalize this problem as that of finding a short code for $X$ that preserves the maximum information about $Y$. That is, we squeeze the information that $X$ provides about $Y$ through a `bottleneck' formed by a limited set of codewords $\tilde{X}$. This constrained optimization problem can be seen as a generalization of rate distortion theory, in which the distortion measure $d(x, \tilde{x})$ emerges from the joint statistics of $X$ and $Y$. This approach yields an exact set of self-consistent equations for the coding rules $X \to \tilde{X}$ and $\tilde{X} \to Y$. Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut–Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.
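Written out in the form standard in the information bottleneck literature, the self-consistent equations referred to above are (with $\beta$ the Lagrange multiplier that controls the tightness of the bottleneck and $Z(x,\beta)$ a normalization):

$$
p(\tilde{x} \mid x) = \frac{p(\tilde{x})}{Z(x,\beta)} \exp\!\Big[ -\beta\, D_{\mathrm{KL}}\big( p(y \mid x) \,\big\|\, p(y \mid \tilde{x}) \big) \Big],
$$

$$
p(\tilde{x}) = \sum_{x} p(x)\, p(\tilde{x} \mid x),
\qquad
p(y \mid \tilde{x}) = \frac{1}{p(\tilde{x})} \sum_{x} p(y \mid x)\, p(\tilde{x} \mid x)\, p(x).
$$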
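The convergent re-estimation method alternates these three updates until the encoder stabilizes. Below is a minimal NumPy sketch of that iteration for a finite joint distribution; the function name `information_bottleneck`, its argument names, and the small smoothing constant inside the logarithms are illustrative choices, not from the paper.

```python
import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iters=200, seed=0, tol=1e-8):
    """Blahut-Arimoto-style iteration of the IB self-consistent equations.

    p_xy : (|X|, |Y|) array holding the joint distribution p(x, y).
    Returns the encoder p(xt | x), the marginal p(xt), and the decoder p(y | xt).
    """
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                 # p(x)
    p_y_given_x = p_xy / p_x[:, None]      # p(y | x), assuming p(x) > 0

    # Random soft initialization of the encoder p(xt | x).
    q = rng.random((n_x, n_clusters))
    q /= q.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # Marginal: p(xt) = sum_x p(x) p(xt | x).
        p_xt = p_x @ q
        # Decoder: p(y | xt) = (1 / p(xt)) sum_x p(y | x) p(xt | x) p(x).
        p_y_given_xt = (q * p_x[:, None]).T @ p_y_given_x / p_xt[:, None]
        # KL divergence D[p(y|x) || p(y|xt)] for every pair (x, xt);
        # the 1e-12 offsets guard the logarithms against exact zeros.
        log_ratio = (np.log(p_y_given_x[:, None, :] + 1e-12)
                     - np.log(p_y_given_xt[None, :, :] + 1e-12))
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # Encoder: p(xt | x) proportional to p(xt) exp(-beta * KL),
        # with Z(x, beta) realized by normalizing over xt.
        q_new = p_xt[None, :] * np.exp(-beta * kl)
        q_new /= q_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new

    p_xt = p_x @ q
    p_y_given_xt = (q * p_x[:, None]).T @ p_y_given_x / p_xt[:, None]
    return q, p_xt, p_y_given_xt
```

Small $\beta$ drives the solution toward maximal compression (codewords become uninformative about $Y$), while large $\beta$ preserves more of the relevant information at the cost of a longer code.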
[1] Richard E. Blahut. Computation of channel capacity and rate-distortion functions. IEEE Trans. Inf. Theory, 1972.
[2] Thomas M. Cover, et al. Elements of Information Theory, 2005.
[3] Naftali Tishby, et al. Distributional Clustering of English Words. ACL, 1993.
[4] Naftali Tishby, et al. Agglomerative Information Bottleneck. NIPS, 1999.
[5] Naftali Tishby, et al. Document clustering using word clusters via the information bottleneck method. SIGIR '00, 2000.