Sequential decoding — the computation problem

Sequential decoding is a technique for encoding and decoding at moderate cost, with a decoding reliability that approximates that of the optimum, but expensive, maximum-likelihood decoder. The several known sequential decoding algorithms enjoy a cost advantage over the maximum-likelihood decoder because they allow the level of the channel noise to regulate the level of the decoding computation. Since the average decoding computation required by a sequential decoder is small for source rates below a rate R_comp, such a decoder can be realized at these rates with a relatively small logic unit and a buffer. The logic unit is normally designed to handle computation rates no more than two or three times the average computation rate; the buffer serves to store data during those noisy periods when the required computation rate exceeds the computation rate of the logic unit.

If the noise-induced periods of high computation are too frequent or too long, the buffer, which is necessarily finite in capacity, will fill and overflow. Since data are lost during an overflow, continuity in the decoding process cannot be maintained, and the decoder cannot continue to decode without error. Buffer overflow is therefore an important event. Moreover, since errors in the absence of overflow are much less frequent than overflows themselves, the overflow event is of primary concern in the design of a sequential decoder.

This paper presents some recent analytical results concerning the probability of a buffer overflow. In particular, it is shown that this probability is relatively insensitive to both the buffer capacity and the maximum speed of the logic unit, for moderate capacities and speeds. By contrast, it is shown that the overflow probability decreases rapidly with a decrease in the source rate.
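The interaction between the logic unit, the finite buffer, and noisy stretches of high computation can be illustrated with a toy simulation. The sketch below is not the paper's analytical model: it simply draws a heavy-tailed (Pareto) computation demand for each received branch, lets a fixed-speed logic unit work off the backlog, and counts an overflow whenever the backlog exceeds the buffer capacity. The function name, the Pareto demand model, and all parameter values are illustrative assumptions.

```python
import random


def simulate_overflow(n_branches, buffer_capacity, logic_speed, alpha, seed=0):
    """Toy model of a sequential decoder's buffer.

    Each received branch demands a Pareto(alpha)-distributed amount of
    computation (heavy tails stand in for noisy periods).  The logic unit
    performs `logic_speed` units of work per branch interarrival time.
    The backlog of unfinished work plays the role of buffer occupancy;
    an overflow is counted when the backlog exceeds `buffer_capacity`,
    after which the decoder is assumed to restart with an empty buffer.
    Returns the fraction of branches at which an overflow occurred.
    """
    rng = random.Random(seed)
    backlog = 0.0
    overflows = 0
    for _ in range(n_branches):
        demand = rng.paretovariate(alpha)  # computation this branch requires
        backlog = max(0.0, backlog + demand - logic_speed)
        if backlog > buffer_capacity:
            overflows += 1
            backlog = 0.0  # continuity is lost; model a restart
    return overflows / n_branches
```

In this toy model, a smaller Pareto exponent alpha (a heavier tail, loosely analogous to operating nearer R_comp) produces dramatically more overflows, while enlarging the buffer or speeding up the logic unit gives comparatively modest relief, in the spirit of the insensitivity claim above.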