Capacity of the Product of Channels

In this paper we shall consider the product or parallel combination of channels, and show that (1) the capacity of the product channel is the sum of the capacities of the component channels, and (2) the "strong converse" holds for the product channel if it holds for each of the component channels. The result is valid for any class of channels (with or without memory, continuous or discrete) provided that the capacities exist. "Capacity" is defined here as the supremum of those rates for which arbitrarily high reliability is achievable with block coding for sufficiently long delay.

Let us remark here that there are two ways in which "channel capacity" is commonly defined. The first definition takes the channel capacity to be the supremum of the "information" processed by the channel, where "information" is the difference of the input "uncertainty" and the "equivocation" at the output. The second definition, which is the one we use here, takes the channel capacity to be the maximum "error free rate." For certain classes of channels (e.g., memoryless channels, and finite state indecomposable channels) it has been established that these two definitions are equivalent. In fact, this equivalence is the essence of the Fundamental Theorem of Information Theory. For such channels, (1) above follows directly. The second definition, however, is applicable to a broader class of channels than the first. One very important such class is that of time-continuous channels.
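For discrete memoryless channels, where the two definitions of capacity coincide, result (1) can be checked numerically. The sketch below (an illustration not taken from the paper; the channel parameters 0.1 and 0.2 are arbitrary) computes the capacities of two binary symmetric channels by maximizing mutual information over input distributions via the Blahut-Arimoto iteration, then does the same for their product channel, whose transition matrix is the Kronecker product of the component matrices:

```python
import numpy as np

def mutual_information_bits(p, W):
    """I(X;Y) in bits for input distribution p and channel matrix W[x, y] = P(y | x)."""
    q = p @ W  # output distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        kl = np.where(W > 0, W * np.log2(W / q), 0.0)
    return float((p * kl.sum(axis=1)).sum())

def blahut_arimoto(W, tol=1e-12, max_iter=5000):
    """Numerically maximize I(X;Y) over input distributions: the channel capacity."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)  # start from the uniform input distribution
    for _ in range(max_iter):
        q = p @ W
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.where(W > 0, W * np.log(W / q), 0.0)
        # Reweight each input letter by exp of its divergence from the output law.
        p_new = p * np.exp(kl.sum(axis=1))
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return mutual_information_bits(p, W)

def bsc(eps):
    """Transition matrix of a binary symmetric channel with crossover probability eps."""
    return np.array([[1.0 - eps, eps], [eps, 1.0 - eps]])

# Two component channels used in parallel; the product channel's inputs and
# outputs are pairs, so its transition matrix is the Kronecker product.
W1, W2 = bsc(0.1), bsc(0.2)
C1 = blahut_arimoto(W1)
C2 = blahut_arimoto(W2)
C_prod = blahut_arimoto(np.kron(W1, W2))
```

In this symmetric example `C1` and `C2` agree with the closed form 1 - H(eps), and `C_prod` agrees with `C1 + C2` to within the iteration tolerance, as claim (1) asserts.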