Parameter-free Network Sparsification and Data Reduction by Minimal Algorithmic Information Loss

The study of large and complex datasets, or big data, organized as networks has emerged as one of the central challenges in most areas of science and technology; cellular and molecular networks in biology are among the prime examples. Consequently, a number of techniques for data dimensionality reduction, especially in the context of networks, have been developed. Yet current techniques require a predefined metric against which to minimize the data size. Here we introduce a family of parameter-free algorithms based on (algorithmic) information theory that are designed to minimize the loss of any (computably enumerable) property contributing to the object's algorithmic content, and thus important to preserve in a process of data dimension reduction, by forcing the algorithm to delete the least informative features first. Being independent of any particular criterion, these algorithms are universal in a fundamental mathematical sense. Using suboptimal approximations based on efficient (polynomial-time) estimations, we demonstrate how to preserve network properties, outperforming other leading algorithms for network dimension reduction. Our method preserves all graph-theoretic indices measured, including degree distribution, clustering coefficient, edge betweenness, and degree and eigenvector centralities. We conclude, and demonstrate numerically, that our parameter-free Minimal Information Loss Sparsification (MILS) method is robust, has the potential to maximize the preservation of all recursively enumerable features of data and networks, and achieves results equal to or significantly better than those of other data reduction and network sparsification methods.
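As an illustration of the greedy deletion scheme described in the abstract, the following minimal Python sketch removes, at each step, the edge whose deletion least perturbs a complexity estimate of the graph. The complexity estimate here is the zlib-compressed length of the adjacency-matrix bitstring, used only as a crude stand-in for the algorithmic-probability-based estimators (CTM/BDM) on which MILS relies; the function names, the absolute-difference loss, and the networkx test graph are illustrative assumptions, not the authors' implementation.

```python
import itertools
import zlib
import networkx as nx


def complexity_estimate(G, nodes):
    """Proxy for algorithmic complexity: compressed length of the
    adjacency-matrix bitstring. The paper uses algorithmic-probability
    estimators (CTM/BDM); zlib is only a lossless-compression stand-in."""
    bits = "".join(
        "1" if G.has_edge(u, v) else "0"
        for u, v in itertools.combinations(nodes, 2)
    )
    return len(zlib.compress(bits.encode()))


def mils_sparsify(G, target_edges):
    """Greedy MILS-style sparsification sketch: repeatedly delete the
    edge whose removal changes the complexity estimate the least,
    i.e. the edge contributing the least algorithmic information."""
    H = G.copy()
    nodes = sorted(H.nodes())  # fixed node order for a stable encoding
    while H.number_of_edges() > target_edges:
        base = complexity_estimate(H, nodes)
        best_edge, best_loss = None, None
        for e in list(H.edges()):
            H.remove_edge(*e)
            loss = abs(base - complexity_estimate(H, nodes))  # one possible loss definition
            H.add_edge(*e)
            if best_loss is None or loss < best_loss:
                best_edge, best_loss = e, loss
        H.remove_edge(*best_edge)
    return H


if __name__ == "__main__":
    # Hypothetical usage: sparsify a scale-free test graph down to 100 edges.
    G = nx.barabasi_albert_graph(50, 3, seed=1)
    H = mils_sparsify(G, target_edges=100)
    print(G.number_of_edges(), "->", H.number_of_edges())
```

The greedy loop is quadratic in the number of edges per deletion step, which is acceptable for a sketch; any polynomial-time complexity estimator can be swapped in for the zlib proxy without changing the overall scheme.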
