Exponential Strong Converse for Content Identification With Lossy Recovery

We revisit the high-dimensional content identification with lossy recovery problem (Tuncel and Gündüz, 2014) and establish an exponential strong converse theorem. As a corollary, we derive an upper bound on the joint identification-error and excess-distortion exponent for the problem. Since the biometrical identification problem (Willems, 2003) and the content identification problem (Tuncel, 2009) are both special cases of content identification with lossy recovery, our main results specialize to these two problems as well. Our proof leverages the information spectrum method introduced by Oohama, adapting the strong converse techniques therein to the problem at hand.

[1] P. Moulin et al., "Fingerprint information maximization for content identification," in Proc. IEEE ICASSP, 2014.

[2] M. Motani et al., "Discrete Lossy Gray–Wyner Revisited: Second-Order Asymptotics, Large and Moderate Deviations," IEEE Trans. Inf. Theory, 2015.

[3] A. B. Wagner et al., "Moderate Deviations in Channel Coding," IEEE Trans. Inf. Theory, 2012.

[4] A. Gohari et al., "A technique for deriving one-shot achievability results in network information theory," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2013.

[5] D. Gündüz et al., "Identification and Lossy Reconstruction in Noisy Databases," IEEE Trans. Inf. Theory, 2014.

[6] I. Kontoyiannis et al., "Lossless compression with moderate error probability," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2013.

[7] T. S. Han et al., "Universal coding for the Slepian–Wolf data compression system and the strong converse theorem," IEEE Trans. Inf. Theory, 1994.

[8] M. Motani et al., "Refined Asymptotics for Rate-Distortion Using Gaussian Codebooks for Arbitrary Sources," IEEE Trans. Inf. Theory, 2019.

[9] S. Verdú et al., "Channel dispersion and moderate deviations limits for memoryless channels," in Proc. 48th Annu. Allerton Conf. Commun., Control, Comput., 2010.

[10] M. Hayashi et al., "Information Spectrum Approach to Second-Order Coding Rate in Channel Coding," IEEE Trans. Inf. Theory, 2008.

[11] V. Y. F. Tan et al., "A proof of the strong converse theorem for Gaussian broadcast channels via the Gaussian Poincaré inequality," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2016.

[12] N. Merhav, "Reliability of Universal Decoding Based on Vector-Quantized Codewords," IEEE Trans. Inf. Theory, 2017.

[13] T. Kalker et al., "On the capacity of a biometrical identification system," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2003.

[14] S. Voloshynovskiy et al., "Information-theoretic analysis of content based identification for correlated data," in Proc. IEEE Inf. Theory Workshop (ITW), 2011.

[15] S. Verdú et al., "Beyond the blowing-up lemma: Sharp converses via reverse hypercontractivity," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2017.

[16] S. Verdú et al., "Smoothing Brascamp–Lieb inequalities and strong converses for common randomness generation," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2016.

[17] V. Y. F. Tan et al., "Moderate Deviation Analysis for Classical Communication over Quantum Channels," Commun. Math. Phys., 2017.

[18] V. Y. F. Tan et al., "Moderate-deviations of lossy source coding for discrete and Gaussian sources," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2012.

[19] A. D. Wyner et al., "On source coding with side information at the decoder," IEEE Trans. Inf. Theory, 1975.

[20] C. Nair et al., "The Capacity Region of the Two-Receiver Gaussian Vector Broadcast Channel With Private and Common Messages," IEEE Trans. Inf. Theory, 2014.

[21] A. Dembo et al., Large Deviations Techniques and Applications, 1998.

[22] M. Motani et al., "Achievable Moderate Deviations Asymptotics for Streaming Compression of Correlated Sources," IEEE Trans. Inf. Theory, 2016.

[23] V. Y. F. Tan et al., "A Proof of the Strong Converse Theorem for Gaussian Multiple Access Channels," IEEE Trans. Inf. Theory, 2015.

[24] H. V. Poor et al., "Channel Coding Rate in the Finite Blocklength Regime," IEEE Trans. Inf. Theory, 2010.

[25] V. Y. F. Tan et al., "Wyner's Common Information Under Rényi Divergence Measures," IEEE Trans. Inf. Theory, 2017.

[26] V. Y. F. Tan et al., "Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities," Found. Trends Commun. Inf. Theory, 2014.

[27] A. J. Goldsmith et al., "Identification over multiple databases," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2009.

[28] V. Y. F. Tan et al., "Nonasymptotic and Second-Order Achievability Bounds for Coding With Side-Information," IEEE Trans. Inf. Theory, 2013.

[29] K. Sun et al., "Efficient two stage decoding scheme to achieve content identification capacity," in Proc. IEEE ICASSP, 2014.

[30] K. Marton, "Error exponent for source coding with a fidelity criterion," IEEE Trans. Inf. Theory, 1974.

[31] A. D. Wyner et al., "The common information of two dependent random variables," IEEE Trans. Inf. Theory, 1975.

[32] J. Körner et al., "General broadcast channels with degraded message sets," IEEE Trans. Inf. Theory, 1977.

[33] P. Moulin et al., "RGB-D video content identification," in Proc. IEEE ICASSP, 2013.

[34] V. Y. F. Tan et al., "Moderate deviations for joint source-channel coding of systems with Markovian memory," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2014.

[35] P. Moulin, "Statistical modeling and analysis of content identification," in Proc. Inf. Theory Appl. Workshop (ITA), 2010.

[36] Y. Oohama, "Exponent Function for Source Coding with Side Information at the Decoder at Rates below the Rate Distortion Function," arXiv, 2016.

[37] S. Arimoto, "On the converse to the coding theorem for discrete memoryless channels," IEEE Trans. Inf. Theory, 1973.

[38] Y. Oohama, "Exponent function for one helper source coding problem at rates outside the rate region," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2015.

[39] M. Motani et al., "The Dispersion of Mismatched Joint Source-Channel Coding for Arbitrary Sources and Additive Channels," IEEE Trans. Inf. Theory, 2017.

[40] Y. Oohama, "New Strong Converse for Asymmetric Broadcast Channels," arXiv, 2016.

[41] E. Tuncel, "Capacity/Storage Tradeoff in High-Dimensional Identification Systems," IEEE Trans. Inf. Theory, 2006.

[42] M. Motani et al., "Non-Asymptotic Converse Bounds and Refined Asymptotics for Two Lossy Source Coding Problems," arXiv, 2017.

[43] J. Scarlett et al., "On the Dispersions of the Gel'fand–Pinsker Channel and Dirty Paper Coding," IEEE Trans. Inf. Theory, 2013.

[44] E. Tuncel, "Recognition capacity versus search speed in noisy databases," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2012.

[45] A. D. Wyner et al., "The rate-distortion function for source coding with side information at the decoder," IEEE Trans. Inf. Theory, 1976.

[46] S. Watanabe et al., "Second-Order Region for Gray–Wyner Network," IEEE Trans. Inf. Theory, 2015.

[47] H. Yagi et al., "Reliability function and strong converse of biometrical identification systems," in Proc. Int. Symp. Inf. Theory Appl. (ISITA), 2016.

[48] N. Liu et al., "Deception With Side Information in Biometric Authentication Systems," IEEE Trans. Inf. Theory, 2015.

[49] P. Moulin et al., "Regularized Adaboost for content identification," in Proc. IEEE ICASSP, 2013.

[50] M. Motani et al., "The Dispersion of Universal Joint Source-Channel Coding for Arbitrary Sources and Additive Channels," arXiv, 2017.

[51] V. Y. F. Tan et al., "Wyner's Common Information under Rényi Divergence Measures," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2018.

[52] M. Motani et al., "Second-Order and Moderate Deviations Asymptotics for Successive Refinement," IEEE Trans. Inf. Theory, 2016.

[53] I. Csiszár et al., Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed., 2011.

[54] P. Moulin et al., "Model-based decoding metrics for content identification," in Proc. IEEE ICASSP, 2012.

[55] R. Ahlswede et al., "Source coding with side information and a converse for degraded broadcast channels," IEEE Trans. Inf. Theory, 1975.

[56] S. C. Draper et al., "Upper and Lower Bounds on the Reliability of Content Identification," 2014.

[57] M. Effros et al., "A strong converse for a collection of network source coding problems," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2009.