Weighted information and entropy rates

The weighted entropy $H^{\rm w}_\phi (X)=H^{\rm w}_\phi (f)$ of a random variable $X$ with values $x$ and a probability-mass/density function $f$ is defined as the mean value ${\mathbb E}\, I^{\rm w}_\phi(X)$ of the weighted information $I^{\rm w}_\phi (x)=-\phi (x)\log f(x)$. Here $x\mapsto\phi (x)\in{\mathbb R}$ is a given weight function (WF) indicating the 'value' of an outcome $x$. For an $n$-component random vector ${\mathbf{X}}_0^{n-1}=(X_0,\ldots ,X_{n-1})$ produced by a random process ${\mathbf{X}}=(X_i,\,i\in{\mathbb Z})$, the weighted information $I^{\rm w}_{\phi_n}({\mathbf x}_0^{n-1})$ and the weighted entropy $H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})$ are defined similarly, with a WF $\phi_n({\mathbf x}_0^{n-1})$. Two types of WF $\phi_n$ are considered, of additive and multiplicative form ($\phi_n({\mathbf x}_0^{n-1})=\sum\limits_{i=0}^{n-1}{\varphi} (x_i)$ and $\phi_n({\mathbf x}_0^{n-1})=\prod\limits_{i=0}^{n-1}{\varphi} (x_i)$, respectively). The focus is on the ${\it rates}$ of the weighted entropy and information, regarded as parameters of ${\mathbf{X}}$. We show that, in the context of ergodicity, the natural scales for an asymptotically additive and an asymptotically multiplicative WF are $\frac{1}{n^2}H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})$ and $\frac{1}{n}\log H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})$, respectively. This gives rise to ${\it primary\ rates}$. The next-order terms can also be identified, leading to ${\it secondary\ rates}$. We also consider emerging generalisations of the Shannon-McMillan-Breiman theorem.
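
To see why these two scales arise, here is a minimal sketch in the i.i.d. case (an assumption made purely for illustration; the paper treats general ergodic processes). Write $h=-{\mathbb E}\log f(X)$ for the ordinary entropy and $\bar\varphi={\mathbb E}\,\varphi (X)$. For the additive WF, expanding $\log f_n({\mathbf x}_0^{n-1})=\sum_j\log f(x_j)$ gives
$$H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})=-\sum_{i,j=0}^{n-1}{\mathbb E}\big[\varphi (X_i)\log f(X_j)\big]=n(n-1)\,\bar\varphi\, h+n\, H^{\rm w}_\varphi (X),$$
so $\frac{1}{n^2}H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})\to\bar\varphi\, h$ (a primary rate), while the next-order term $n\big(H^{\rm w}_\varphi (X)-\bar\varphi\, h\big)$ supplies a secondary rate. For the multiplicative WF, independence factorises the product:
$$H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})=-\sum_{j=0}^{n-1}{\mathbb E}\Big[\prod_{i=0}^{n-1}\varphi (X_i)\,\log f(X_j)\Big]=n\,\bar\varphi^{\,n-1}\, H^{\rm w}_\varphi (X),$$
so, assuming $\bar\varphi>0$ and $H^{\rm w}_\varphi (X)>0$, $\frac{1}{n}\log H^{\rm w}_{\phi_n}({\mathbf{X}}_0^{n-1})\to\log\bar\varphi$: the weighted entropy grows exponentially, which motivates the logarithmic scale.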
