Methods for Efficient Network Coding

I. INTRODUCTION

Random linear network coding is a multicast communication scheme in which every participating node sends out coded packets formed from random linear combinations of the packets it has received so far. This scheme is capacity-achieving for single sessions over lossy wireline or wireless packet networks [1], [2], [3], [4]. Thus, from the point of view of efficiently utilizing transmissions, random linear network coding is an attractive strategy. It is not, however, presently attractive from the point of view of efficiently utilizing computational resources. To decode a k-packet message, a decoder must invert a dense k×k matrix, which, using Gaussian elimination, requires O(k^3) operations (or O(k^2) operations per input symbol). Once the matrix inverse is computed, applying it to the received coded packets to recover the message requires O(k^2) operations (or O(k) operations per input symbol). Although the former computation is more costly than the latter as a function of k, it is usually the latter cost that dominates, since it scales with the length of the packets (typically on the order of kilobytes), while the former does not. This dominant cost can make the computational resources required for random linear network coding prohibitive [5].

But a random linear code is a somewhat naïve code. By picking the code at random, we ensure that it is efficient at communicating information; at the same time, by employing little design, we obtain a code that is computationally inefficient. A natural code-design question therefore arises: can we design a network code that preserves the communication efficiency of a random linear code while achieving better computational efficiency?

In this paper, we give an affirmative answer to this question. The code we present achieves significantly better computational efficiency and is based primarily on techniques that are now standard in the literature on erasure codes. That said, applying these techniques to network coding requires some novel ideas. First, we partition
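To make the encoding and decoding costs discussed above concrete, the following is a minimal sketch of random linear network coding over GF(2): coded packets are random XOR combinations of the source packets, and the receiver recovers the message by Gaussian elimination on the k×k coefficient matrix together with the payloads. This is an illustrative toy, not the code constructed in this paper; the choice of GF(2) (practical systems often use GF(2^8)), the packet count K, the payload length, and all helper names are assumptions made for the example.

```python
import os
import random

K = 8            # number of source packets (k); illustrative choice
PACKET_LEN = 32  # payload length in bytes; illustrative choice


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (payload addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))


def encode(source, num_coded):
    """Emit coded packets as (coefficient vector over GF(2), XOR of the chosen packets)."""
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randint(0, 1) for _ in range(K)]
        if not any(coeffs):
            coeffs[random.randrange(K)] = 1  # avoid the useless all-zero combination
        payload = bytes(PACKET_LEN)
        for i, c in enumerate(coeffs):
            if c:
                payload = xor_bytes(payload, source[i])
        coded.append((coeffs, payload))
    return coded


def decode(coded):
    """Recover the K source packets by Gaussian elimination on [coefficients | payloads].

    Reducing the K x K coefficient matrix costs O(k^3) bit operations, while the
    accompanying payload XORs cost O(k^2) packet operations -- the dominant term
    once packets are kilobytes long.
    """
    pivot_rows = {}  # pivot column -> (reduced coefficient row, reduced payload)
    for coeffs, payload in coded:
        coeffs = list(coeffs)
        # Forward-eliminate the incoming row against the pivots seen so far.
        for j in range(K):
            if coeffs[j] and j in pivot_rows:
                pc, pp = pivot_rows[j]
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload = xor_bytes(payload, pp)
        lead = next((j for j in range(K) if coeffs[j]), None)
        if lead is not None:          # row is innovative: it becomes a new pivot
            pivot_rows[lead] = (coeffs, payload)
        if len(pivot_rows) == K:
            break
    if len(pivot_rows) < K:
        raise ValueError("received packets do not span the message space")
    # Back-substitution: clear every pivot column from the rows above it.
    for j in reversed(range(K)):
        cj, pj = pivot_rows[j]
        for i in range(j):
            ci, pi = pivot_rows[i]
            if ci[j]:
                pivot_rows[i] = ([a ^ b for a, b in zip(ci, cj)], xor_bytes(pi, pj))
    return [pivot_rows[j][1] for j in range(K)]


if __name__ == "__main__":
    source = [os.urandom(PACKET_LEN) for _ in range(K)]
    while True:
        # A few extra coded packets make a full-rank system very likely over GF(2);
        # on the rare rank-deficient draw, simply request fresh coded packets.
        coded = encode(source, K + 10)
        try:
            recovered = decode(coded)
            break
        except ValueError:
            continue
    assert recovered == source
    print("recovered all", K, "source packets")
```

The sketch requests a few more than K coded packets because, over GF(2), a square random coefficient matrix is singular with non-negligible probability; a small amount of redundancy makes the system full rank with high probability.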

REFERENCES

[1] N. Alon et al., The Probabilistic Method, 2015.
[2] D. A. Spielman et al., "Practical loss-resilient codes," in Proc. STOC '97, 1997.
[3] P. Oswald et al., "Capacity-achieving sequences for the erasure channel," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2001.
[4] M. Luby, "LT codes," in Proc. 43rd Annual IEEE Symp. Foundations of Computer Science (FOCS), 2002.
[5] P. Maymounkov, "Online codes," 2002.
[6] A. Shokrollahi et al., "Capacity-achieving sequences for the erasure channel," IEEE Trans. Inf. Theory, 2002.
[7] K. Jain et al., "Practical network coding," 2003.
[8] A. Shokrollahi, "Raptor codes," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2004.
[9] R. Koetter et al., "On coding for reliable communication over packet networks," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2005.
[10] C. Fragouli et al., "Coding schemes for line networks," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2005.
[11] M. Médard et al., "On coding for reliable communication over packet networks," Phys. Commun., 2005.
[12] C. Gkantsidis et al., "Network coding for large scale content distribution," in Proc. IEEE INFOCOM, 2005.
[13] Y. Wu, "A trellis connectivity analysis of random linear network coding with buffering," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2006.
[14] F. Zhao et al., "Minimum-cost multicast over coded packet networks," IEEE Trans. Inf. Theory, 2005.
[15] R. W. Yeung, "Avalanche: A network coding analysis," Commun. Inf. Syst., 2007.