Error propagation and definite decoding of convolutional codes

The error-propagation effect in decoding convolutional codes is a result of the internal feedback in the usual decoding method, feedback decoding (FD). As a measure of this effect, the propagation length L of a system is defined as the maximum span of decoding errors following a decoding error when all succeeding parity checks are satisfied. A relationship between L, the parity-check matrix, and the decoding algorithm is developed. A decoding method having no internal feedback, definite decoding (DD), is formalized. It is shown that a code using FD with limited L exists if and only if that same code can be decoded using DD. When using DD, a smaller class of errors is corrected. The self-orthogonal codes are shown to be decodable using FD with small L. The minimum possible value of L when using bounded-distance decoding is given for some of these codes. Codes are given which minimize the spacing between single correctable errors using DD. These values of spacing are compared with those for similar known codes which use FD, and with the theoretical minimum spacing.
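To make the FD/DD distinction concrete, the following is a minimal sketch (not taken from the paper) using a toy rate-1/2 systematic self-orthogonal code with parity p_t = u_t XOR u_{t-1}. The syndromes s_{t-1} and s_t are orthogonal on the information error at time t-1, so a majority (threshold) decision flips that bit when both are 1. The only difference between FD and DD here is whether the estimated error is fed back to cancel its effect on later syndromes; all function names and the specific code are illustrative assumptions.

```python
def encode(u):
    """Rate-1/2 systematic convolutional encoder (illustrative code):
    transmit (u_t, p_t) with p_t = u_t ^ u_{t-1}, u_{-1} = 0."""
    out, prev = [], 0
    for bit in u:
        out.append((bit, bit ^ prev))
        prev = bit
    return out

def decode(received, feedback=True):
    """Threshold (majority-logic) decoder for the toy code above.
    Syndromes s_{t-1} and s_t are orthogonal on the info error at t-1;
    both equal to 1 => flip the info bit at t-1.
    feedback=True  (FD): remove the estimated error from later syndromes.
    feedback=False (DD): leave the syndromes untouched."""
    r_u = [ru for ru, rp in received]
    r_p = [rp for ru, rp in received]
    n = len(received)
    # recompute parity from received info bits; s_t depends only on errors
    s = [r_p[t] ^ r_u[t] ^ (r_u[t - 1] if t > 0 else 0) for t in range(n)]
    decoded = list(r_u)
    for t in range(1, n):
        if s[t - 1] == 1 and s[t] == 1:   # majority over the J = 2 checks
            decoded[t - 1] ^= 1
            if feedback:
                s[t] ^= 1                 # internal feedback (FD only)
    return decoded
```

With a single information-bit error both decoders recover the message, but two errors spaced two positions apart are corrected only with feedback: under DD the overlapping syndrome pattern triggers a spurious flip, illustrating that DD corrects a smaller class of errors and needs wider spacing between correctable single errors.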