[1] W. Szpankowski et al., "Minimax redundancy for large alphabets," 2010 IEEE International Symposium on Information Theory, 2010.
[2] P. L. Bartlett et al., "Horizon-independent optimal prediction with log-loss in exponential families," COLT, 2013.
[3] D. Bontemps, "Universal coding on infinite alphabets: Exponentially decreasing envelopes," IEEE Transactions on Information Theory, 2011.
[4] I. Csiszár, "I-divergence geometry of probability distributions and minimization problems," 1975.
[5] A. R. Barron et al., "Minimax redundancy for the class of memoryless sources," IEEE Transactions on Information Theory, 1997.
[6] E. A. Haroutunian et al., "Information theory and statistics," International Encyclopedia of Statistical Science, 2011.
[7] A. Orlitsky et al., "Always Good Turing: Asymptotically optimal probability estimation," Science, 2003.
[8] J. Rissanen et al., "On sequentially normalized maximum likelihood models," 2008.
[9] J. M. Van Campenhout et al., "Maximum entropy and conditional probability," IEEE Transactions on Information Theory, 1981.
[10] L. A. Adamic, "Zipf, power-laws, and Pareto: a ranking tutorial," 2000.
[11] A. Garivier et al., "Coding on countably infinite alphabets," IEEE Transactions on Information Theory, 2008.
[12] A. R. Barron et al., "Asymptotic minimax regret for data compression, gambling, and prediction," IEEE Transactions on Information Theory, 1997.
[13] Y. Shtarkov et al., "Multialphabet universal coding of memoryless sources," 1995.
[14] I. Csiszár, "Sanov property, generalized I-projection and a conditional limit theorem," 1984.