A Generalized Independence Condition and Error Correction Codes

The notion of independence of a set of vectors is fundamental to the theory of vector spaces. In this note we define a generalization of this concept for vector spaces over finite fields and present an application to the theory of error correction codes. Before proceeding to this generalization we shall digress to present a few facts concerning error correction codes.

Let M be a finite metric space and C a subset of M such that elements of C are separated from one another by at least d_0 units. Such a subset will be called a code. To motivate this terminology, consider a communication system in which the transmitting station selects its messages from C. If the transmission channel linking the transmitting and receiving stations is noiseless, there is no difficulty in recovering the transmitted message. If noise is present, let us assume that the received message is an element of M which is "close" to the transmitted message. If, in fact, the distance a transmitted message moves is bounded by d_0/2 - e (for some e > 0), then there is an element of C, namely the transmitted message, which is closest to the received message. Decoding, or the recovery of the transmitted message, is accomplished by finding that (unique) element of C closest to the received message. We say that information has been encoded by restricting the class of transmittable messages to C, thus removing or lessening the chance of ambiguity arising from noise.

Now suppose that M consists of all 2^n sequences of n symbols, each symbol assuming one of two states, which we denote by zero and one. If the elements of M are identified with the vertices of a unit-edge cube in E^n, we define the distance between two vertices to be the minimum number of edges that must be traversed in going from one vertex to the other. An error is said to have occurred in the ith symbol if the transmitted and received ith symbols do not agree.
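The edge-path distance on the unit n-cube described above is simply the number of coordinates in which two binary sequences differ. The following sketch (not part of the original note; a minimal illustration in Python) computes this distance for sequences represented as lists of zeros and ones:

```python
def hamming_distance(u, v):
    """Distance between two vertices of the unit n-cube: the number of
    coordinates in which the binary n-tuples u and v disagree, which
    equals the minimum number of cube edges traversed between them."""
    assert len(u) == len(v), "both sequences must have length n"
    return sum(a != b for a, b in zip(u, v))

# Two vertices of the 5-cube differing in the 1st and 4th symbols:
print(hamming_distance([0, 1, 0, 1, 1], [1, 1, 0, 0, 1]))  # 2
```

Each coordinate where the sequences disagree corresponds to one edge that must be traversed, so summing the disagreements gives the metric defined in the text.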
Errors are introduced by the channel performing a chance experiment on each of the n symbols; the ith symbol is received as transmitted with probability p and altered with probability 1 - p. It is assumed that these chance experiments are independent and that the same probability p is associated with each. If d_0 >= 2e + 1 and no more than e alterations occur in the transmission of a message, then there is a unique element of C closest to the received message, and (provided p > 1/2, so that fewer alterations are more likely) this is the most probable transmitted message. In this case C is an e-order error correction code. A central problem is the determination of the largest code C for fixed n and e. Such error correction codes have been studied by Hamming [1], Slepian [2] and Reed [3].
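The decoding rule described above can be sketched as follows (not from the original note; a minimal Python illustration using the length-5 repetition code, a hypothetical choice of C with d_0 = 5, which corrects e = 2 errors since 5 >= 2*2 + 1):

```python
def hamming_distance(u, v):
    """Number of coordinates in which two binary n-tuples differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(received, code):
    """Nearest-neighbour decoding: return the element of the code
    closest to the received message.  When d_0 >= 2e + 1 and at most
    e alterations occurred, this closest element is unique and equals
    the transmitted message."""
    return min(code, key=lambda c: hamming_distance(c, received))

# Hypothetical example: the repetition code of length 5 (d_0 = 5).
code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]

# The all-zero word transmitted, altered in two symbols by the channel:
received = (0, 1, 1, 0, 0)
print(decode(received, code))  # (0, 0, 0, 0, 0)
```

The received word lies at distance 2 from the transmitted codeword and at distance 3 from the other, so nearest-neighbour decoding recovers the transmitted message, as guaranteed whenever the number of alterations does not exceed e.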