Gaussian Assumption: The Least Favorable but the Most Useful [Lecture Notes]