Compression in a Distributed Setting

Motivated by an attempt to understand the formation and development of (human) language, we introduce a "distributed compression" problem. In our problem, pairs of players from a set of K players are chosen in sequence and tasked with communicating messages drawn from an unknown distribution Q. Arguably, languages are created and evolve to compress frequently occurring messages, and we focus on this aspect. The only knowledge players have about the distribution Q comes from previously drawn samples, and these samples differ from player to player. The common knowledge between the players is restricted to a shared prior distribution P and a constant number of bits of information (such as a learning algorithm). Letting T_epsilon denote the number of iterations it would take a typical player to obtain an epsilon-approximation to Q in total variation distance, we ask whether T_epsilon iterations suffice to compress the messages down to roughly their entropy, and we give a partial positive answer. We show that a natural uniform algorithm can compress the communication down to an average cost per message of O(H(Q) + log D(P || Q)) in tilde{O}(T_epsilon) iterations while allowing for O(epsilon) error, where D(. || .) denotes the KL-divergence between distributions. For large divergences this compares favorably with the static algorithm that ignores all samples and compresses down to H(Q) + D(P || Q) bits, while not requiring the T_epsilon * K iterations it would take for players to develop optimal but separate compression schemes for each pair of players. Along the way we introduce a "data-structural" view of the task of communicating with a natural language and show that our natural algorithm can also be implemented by an efficient data structure whose storage is comparable to the storage requirements of Q and whose query complexity is comparable to the lengths of the messages to be compressed. Our results give a plausible mathematical analogy to the mechanisms by which human languages are created and evolve, and in particular highlight the possibility of coordinating towards a joint task (agreeing on a language) while engaging in distributed learning.
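
To make the headline comparison concrete, the following is a small numerical sketch in Python of the two per-message costs quoted above: the static baseline H(Q) + D(P || Q) versus the H(Q) + log D(P || Q) shape of the adaptive bound, with the constants hidden by the O(.) suppressed. The particular prior P and target distribution Q are illustrative choices made here and do not appear in the paper.

    import math

    def entropy(q):
        """Shannon entropy H(Q) in bits."""
        return -sum(x * math.log2(x) for x in q if x > 0)

    def kl_divergence(p, q):
        """KL divergence D(P || Q) in bits; assumes q_i > 0 wherever p_i > 0."""
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    # Illustrative choice (not from the paper): a uniform prior P over 1024
    # messages, and a true distribution Q that concentrates 99% of its mass
    # on a single message.
    N = 1024
    P = [1.0 / N] * N
    Q = [0.99] + [0.01 / (N - 1)] * (N - 1)

    H = entropy(Q)
    D = kl_divergence(P, Q)

    static_cost = H + D                # ignore samples, code against the prior P
    adaptive_cost = H + math.log2(D)   # shape of the paper's bound, O(.) constants dropped

    print(f"H(Q)               = {H:.2f} bits")
    print(f"D(P || Q)          = {D:.2f} bits")
    print(f"static  H + D      = {static_cost:.2f} bits per message")
    print(f"adaptive H + log D = {adaptive_cost:.2f} bits per message")

For such a choice of P and Q the divergence term dominates the entropy, so replacing D by log D is the source of the savings, which is exactly the regime in which the adaptive algorithm is claimed to compare favorably with the static one.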
