Error Exponents in the Bee Identification Problem

We derive various error exponents in the bee identification problem under two different decoding rules. Under naive decoding, which decodes each bee independently of the others, we analyze a general discrete memoryless channel and a relatively wide family of stochastic decoders. Upper and lower bounds on the random coding error exponent are derived and proved to coincide at relatively high coding rates. We then propose a lower bound on the error exponent of the typical random code, which improves upon the random coding exponent at low coding rates. We also derive a third bound, related to expurgated codes, which turns out to be strictly higher than the other two, again at relatively low rates. We show that the universal maximum mutual information (MMI) decoder is optimal with respect to the typical random code and the expurgated code. Moving on to optimal decoding, we derive error exponents for the relatively wide family of symmetric channels under the maximum likelihood decoder. We first propose a random coding lower bound, and then an improved bound that stems from an expurgation process. We show numerically that the latter strictly improves upon the random coding bound over an intermediate range of coding rates, where a bound derived in a previous work no longer holds.
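
To make the setting concrete, the following minimal Python sketch simulates naive decoding in the bee identification problem. It is only an illustration under assumed parameters: the number of bees m, the barcode length n, and the binary symmetric channel with crossover probability p are our choices for the sketch, not values or a channel taken from the paper.

    # Illustrative sketch (assumed parameters, not from the paper):
    # m bees carry random binary barcodes of length n and are observed
    # in an unknown order through a binary symmetric channel (BSC).
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, p = 64, 128, 0.05      # number of bees, barcode length, BSC crossover

    codebook = rng.integers(0, 2, size=(m, n))   # random barcodes (codewords)
    perm = rng.permutation(m)                    # unknown observation order
    noise = rng.random((m, n)) < p               # BSC bit flips
    observations = codebook[perm] ^ noise

    # Naive decoding: identify each observed barcode independently by
    # minimum Hamming distance, the maximum likelihood rule for the BSC.
    def naive_decode(y):
        return (codebook != y).sum(axis=1).argmin()

    decoded = np.array([naive_decode(y) for y in observations])
    print("fraction of misidentified bees:", (decoded != perm).mean())

Note that naive decoding may assign two observations to the same bee, since each observation is decoded in isolation; an optimal decoder, as treated in the second part of the abstract, recovers the whole assignment jointly, which is one reason the two decoding rules admit different error exponents.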
