Error Detection and Correction in Numerical Computations by Algebraic Methods

A simple, unified analytical approach is described for error detection and correction in numerical computations. The computation may be carried out by a computer program, a memory, or a specialized digital or analog device. The approach does not depend on the form of data representation or on the specific features of the program or device computing the given function; it is based on algebraic concepts such as the transcendence degree of field extensions. The described approach results in a substantial reduction of the hardware overhead required for multiple-error detection and correction, compared with the checksum approach and other previously known methods.
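
To convey the general flavor of error detection through algebraic relations (a generic illustration only, not the paper's specific construction based on transcendence degrees of field extensions), the following Python sketch verifies a computed value against a known algebraic identity; the function name `checked_sin` and the tolerance are hypothetical choices for the example.

```python
import math

def checked_sin(x, tol=1e-9):
    """Compute sin(x) and check it against the algebraic identity
    sin(x)^2 + cos(x)^2 = 1. A residual above `tol` flags a fault
    in the computation, independent of how sin/cos are implemented."""
    s = math.sin(x)
    c = math.cos(x)
    residual = abs(s * s + c * c - 1.0)
    if residual > tol:
        raise ArithmeticError(f"identity check failed (residual={residual:.3e})")
    return s

if __name__ == "__main__":
    # Passes the identity check under normal, fault-free operation.
    print(checked_sin(1.2345))
```

The check depends only on an algebraic relation satisfied by the outputs, not on the internal implementation of the computation, which mirrors the implementation-independence claimed for the approach described above.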