The error exponent for finite-hypothesis channel identification

We consider the problem of signal selection in hypothesis testing, where each hypothesis is modeled as a discrete memoryless channel. We first derive the Bayesian error exponent as a function of the limiting empirical distribution of the input sequence. We then show that, when discriminating between two hypotheses, an asymptotically optimal input sequence repeats a single, well-chosen input symbol. Finally, we give an efficient method for evaluating the error exponent as a function of the limiting distribution.
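To illustrate the two-hypothesis case, the following sketch (an assumption on our part, not taken from the paper) computes the Bayesian error exponent of a repeated input as the Chernoff information between the two conditional output distributions, and picks the input symbol that maximizes it. The function names `chernoff_information` and `best_repeated_input` are hypothetical, and the maximization over the tilt parameter s is done by a simple grid search.

```python
import math

def chernoff_information(p, q, grid=1000):
    """Chernoff information C(p, q) = max_{0<s<1} -log sum_y p(y)^s q(y)^(1-s):
    the Bayesian error exponent for discriminating p from q from i.i.d. samples.
    Here it is approximated by a grid search over s."""
    best = 0.0
    for i in range(1, grid):
        s = i / grid
        total = sum(pi ** s * qi ** (1 - s)
                    for pi, qi in zip(p, q) if pi > 0 and qi > 0)
        best = max(best, -math.log(total))
    return best

def best_repeated_input(channel_p, channel_q):
    """Given two DMCs as lists of rows (row x = output distribution given input x),
    return the input symbol whose repetition maximizes the error exponent,
    together with that exponent."""
    exps = [chernoff_information(pr, qr)
            for pr, qr in zip(channel_p, channel_q)]
    x = max(range(len(exps)), key=lambda i: exps[i])
    return x, exps[x]

# Hypothetical example: input 0 separates the channels, input 1 does not,
# so the optimal constant input sequence repeats symbol 0.
P = [[0.9, 0.1], [0.5, 0.5]]
Q = [[0.1, 0.9], [0.5, 0.5]]
x, exponent = best_repeated_input(P, Q)
```

Repeating the maximizing symbol achieves the best single-letter exponent; the paper's result is that for two hypotheses no mixing over input symbols can do better asymptotically.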