The Deterministic Information Bottleneck

Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek (1999) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwing away as many bits as possible and selectively keeping those that are most important. In the IB, compression is measured by mutual information. Here, we introduce an alternative formulation that replaces mutual information with entropy, which we call the deterministic information bottleneck (DIB) and which, we argue, better captures this notion of compression. As suggested by its name, the solution to the DIB problem turns out to be a deterministic encoder, or hard clustering, as opposed to the stochastic encoder, or soft clustering, that is optimal under the IB. We compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB significantly outperforms the IB in terms of the DIB cost function. We also find empirically that, across a range of convergence parameters, the DIB offers a considerable gain in computational efficiency over the IB. Our derivation of the DIB also suggests a method for continuously interpolating between the soft clustering of the IB and the hard clustering of the DIB.
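As a reading aid, here is a minimal sketch of the two cost functions in the notation standard in the IB literature (X the signal to be compressed, Y the relevance variable, T the compressed representation, q(t|x) the encoder, and β ≥ 0 the trade-off parameter); the exact symbols are an assumption based on the usual presentation rather than a quotation of the paper:

\[
\mathcal{L}_{\mathrm{IB}}\big[q(t \mid x)\big] \;=\; I(X;T) \;-\; \beta\, I(T;Y),
\qquad
\mathcal{L}_{\mathrm{DIB}}\big[q(t \mid x)\big] \;=\; H(T) \;-\; \beta\, I(T;Y),
\]

both minimized over the encoder q(t|x). Writing I(X;T) = H(T) - H(T|X) makes the relationship explicit and suggests one natural interpolating family,

\[
\mathcal{L}_{\alpha}\big[q(t \mid x)\big] \;=\; H(T) \;-\; \alpha\, H(T \mid X) \;-\; \beta\, I(T;Y),
\]

which recovers the IB at α = 1 and the DIB at α = 0; this is the sense in which one can interpolate continuously between the soft clustering of the IB and the hard clustering of the DIB.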

[1] David J. C. MacKay, et al. Information Theory, Inference, and Learning Algorithms, 2004, IEEE Transactions on Information Theory.

[2] David J. Field, et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images, 1996, Nature.

[3] Naftali Tishby, et al. Deep learning and the information bottleneck principle, 2015, 2015 IEEE Information Theory Workshop (ITW).

[4] Yuhong Yang, et al. Information Theory, Inference, and Learning Algorithms, 2005.

[5] Michael I. Jordan, et al. A Probabilistic Interpretation of Canonical Correlation Analysis, 2005.

[6] Naftali Tishby, et al. Document clustering using word clusters via the information bottleneck method, 2000, SIGIR '00.

[7] J. Kinney, et al. Equitability, mutual information, and the maximal information coefficient, 2013, Proceedings of the National Academy of Sciences.

[8] Naftali Tishby, et al. The Power of Word Clusters for Text Classification, 2006.

[9] M. Alexander, et al. Principles of Neural Science, 1981.

[10] S. Schultz. Principles of Neural Science, 4th ed., 2001.

[11] Ohad Shamir, et al. Learning and generalization with the information bottleneck, 2008, Theoretical Computer Science.

[12] William Bialek, et al. How Many Clusters? An Information-Theoretic Perspective, 2003, Neural Computation.

[13] Joseph J. Atick, et al. What Does the Retina Know about Natural Scenes?, 1992, Neural Computation.

[14] Naftali Tishby, et al. The information bottleneck method, 2000, arXiv.

[15] Naftali Tishby, et al. Extraction of relevant speech features using the information bottleneck method, 2005, INTERSPEECH.

[16] H. Barlow. Redundancy reduction revisited, 2001, Network.

[17] Gregory K. Wallace, et al. The JPEG still picture compression standard, 1991, Communications of the ACM.

[18] H. Barlow. The exploitation of regularities in the environment by the brain, 2001, The Behavioral and Brain Sciences.

[19] David J. Field, et al. Sparse coding with an overcomplete basis set: A strategy employed by V1?, 1997, Vision Research.

[20] Ran El-Yaniv, et al. Distributional Word Clusters vs. Words for Text Categorization, 2003, Journal of Machine Learning Research.

[21] Eero P. Simoncelli, et al. Natural image statistics and neural representation, 2001, Annual Review of Neuroscience.

[22] Gal Chechik, et al. Information Bottleneck for Gaussian Variables, 2003, Journal of Machine Learning Research.

[23] H. B. Barlow, et al. The Ferrier lecture, 1980, 1981, Proceedings of the Royal Society of London, Series B, Biological Sciences.

[24] Susanne Still, et al. Optimal causal inference: estimating stored information and approximating causal architecture, 2007, Chaos.

[25] H. Barlow. Critical limiting factors in the design of the eye and visual cortex, 1981.

[26] Thomas L. Griffiths, et al. Hierarchical Topic Models and the Nested Chinese Restaurant Process, 2003, NIPS.

[27] Naftali Tishby, et al. Agglomerative Information Bottleneck, 1999, NIPS.

[28] Noam Slonim, et al. Maximum Likelihood and the Information Bottleneck, 2002, NIPS.

[29] Ran El-Yaniv, et al. On feature distributional clustering for text categorization, 2001, SIGIR '01.

[30] Michael J. Berry, et al. Predictive information in a sensory population, 2013, Proceedings of the National Academy of Sciences.

[31] Thomas M. Cover, et al. Elements of Information Theory, 2005.

[32] Naftali Tishby, et al. Past-future information bottleneck in dynamical systems, 2009, Physical Review E: Statistical, Nonlinear, and Soft Matter Physics.

[33] Bruno A. Olshausen. Sparse coding of sensory inputs, 2004, Current Opinion in Neurobiology.

[34] Richard E. Turner, et al. A Maximum-Likelihood Interpretation for Slow Feature Analysis, 2007, Neural Computation.

[35] Naftali Tishby, et al. Speaker recognition by Gaussian information bottleneck, 2009, INTERSPEECH.