Estimating the Entropy of DNA Sequences

The Shannon entropy is a standard measure of the degree of order in symbol sequences such as DNA sequences. To incorporate correlations between symbols, the entropy of n-mers (blocks of n consecutive symbols) has to be determined. Here, a method is presented to estimate such higher-order entropies (block entropies) for DNA sequences when the actual number of observations is small compared with the number of possible outcomes. The n-mer probability distribution underlying the dynamical process is reconstructed using two elementary statistical principles: the theorem of asymptotic equipartition and the Maximum Entropy Principle. Constraints force the reconstructed distributions to adopt features that are characteristic of the real probability distribution. Of the many distributions compatible with these constraints, the one with the highest entropy is, according to the Maximum Entropy Principle, the most plausible. An algorithm implementing this procedure is described and tested on various DNA model sequences whose exact entropies are known. Finally, results for a real DNA sequence, the complete genome of the Epstein-Barr virus, are presented and compared with those of other information carriers (texts, computer source code, music). DNA sequences appear to allow considerably more freedom in combining the symbols of their alphabet than written language or computer source code. © 1997 Academic Press Limited
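
To make the estimated quantity concrete: for an alphabet of λ symbols (λ = 4 for DNA), the block entropy of order n is H_n = -Σ_i p_i log2 p_i, where the sum runs over all λ^n possible n-mers. The following minimal Python sketch shows the naive plug-in estimator obtained by simply counting overlapping n-mers; the function name block_entropy and the toy sequence are hypothetical illustrations, not code from the paper. This naive estimate is exactly what becomes unreliable when the sequence length is small compared with 4^n, the undersampled regime that the maximum-entropy reconstruction described above is designed to handle.

    from collections import Counter
    from math import log2

    def block_entropy(seq, n):
        # Count all overlapping n-mers in the sequence.
        counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
        total = sum(counts.values())  # number of observed n-mers
        # Plug-in estimate H_n = -sum_i p_i * log2(p_i) over observed n-mers.
        # Systematically biased downward for small samples (see text).
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical toy sequence; real analyses use genome-scale data.
    seq = "ACGTACGTACGGTTACGATCGATCGTACGTA"
    for n in range(1, 5):
        print(n, round(block_entropy(seq, n), 3))

The differences h_n = H_{n+1} - H_n are the conditional entropies per symbol. For an uncorrelated, equidistributed four-letter sequence, H_n grows as 2n bits, and any shortfall from that line reflects correlations in the sequence.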
