Sublinear Bounds on the Distinguishing Advantage for Multiple Samples

The maximal advantage achievable by a (computationally unbounded) distinguisher in determining whether a source \(Z\) is distributed according to \(P_0\) or \(P_1\), when given access to a single sample of \(Z\), is characterized by the statistical distance \(d(P_0,P_1)\). Here, we study the distinguishing advantage when given access to several i.i.d. samples of \(Z\). For \(n\) samples, the advantage is naturally given by \(d(P_0^{\otimes n},P_1^{\otimes n})\), which can be bounded as \(d(P_0^{\otimes n},P_1^{\otimes n}) \le n \cdot d(P_0,P_1)\). This bound is tight for some choices of \(P_0\) and \(P_1\); hence, in general, a linear growth of the distinguishing advantage in \(n\) is unavoidable.
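The linear bound above can be checked numerically. The following sketch (with arbitrarily chosen example distributions, not taken from the text) computes the statistical distance \(d(P,Q) = \frac{1}{2}\sum_x |P(x)-Q(x)|\) and the distance between the \(n\)-fold product distributions by direct enumeration, and verifies \(d(P_0^{\otimes n},P_1^{\otimes n}) \le n \cdot d(P_0,P_1)\) for small \(n\):

```python
from itertools import product

def stat_dist(p, q):
    """Statistical (total variation) distance: 1/2 * sum_x |P(x) - Q(x)|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def product_dist(p, n):
    """Distribution of n i.i.d. samples: P^{(x)n}(x_1..x_n) = prod_i P(x_i)."""
    probs = []
    for xs in product(range(len(p)), repeat=n):
        pr = 1.0
        for x in xs:
            pr *= p[x]
        probs.append(pr)
    return probs

# Example distributions on a binary alphabet (chosen for illustration only).
p0 = [0.5, 0.5]
p1 = [0.6, 0.4]
d1 = stat_dist(p0, p1)  # = 0.1

for n in range(1, 6):
    dn = stat_dist(product_dist(p0, n), product_dist(p1, n))
    # The multi-sample advantage never exceeds n times the single-sample one.
    assert dn <= n * d1 + 1e-12
    print(n, dn, n * d1)
```

For this pair the bound is loose; near-tightness shows up, for instance, with \(P_0\) deterministic and \(P_1\) a slight perturbation of it, where the product-distance grows almost linearly in \(n\) until saturation.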