Input scaling and output scaling for a binary calculator

Suppose 1) the input for a problem to be solved on a binary calculator is given in decimal form; 2) the programmer desires to specify in his program the scale factors to be applied to intermediate results, to keep these results within the capacity of the registers of the calculator; 3) the scale factors to be applied are powers of two (this allows advantage to be taken of the shifting operations which may be built into the calculator). A method will now be described for using the binary calculator itself to scale the input, in such a way as to obtain the best possible accuracy together with exact reconversion. This method is sufficiently general to form the basis of a standard input program. Once such a program has been developed, the programmer is relieved of much of the labor of preparing decimal input in a form suitable for a binary calculator.
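A minimal present-day sketch of the idea is given below (the paper predates such languages; the function names and the 35-bit register width are illustrative assumptions, not taken from the original). The decimal input is taken exactly, multiplied by the largest power of two whose result still fits the register, and rounded to the nearest integer; reconversion rescales by the inverse power and rounds back to the original number of decimal digits.

```python
from fractions import Fraction

REGISTER_BITS = 35   # assumed register capacity; purely illustrative


def scale_input(decimal_text: str) -> tuple[int, int]:
    """Scale a decimal input by a power of two for a binary register.

    Returns (m, s) such that the input value is approximated by
    m * 2**-s with |m| < 2**REGISTER_BITS.
    """
    x = Fraction(decimal_text)            # exact value of the decimal input
    if x == 0:
        return 0, 0
    s = 0
    while abs(x) * Fraction(2) ** (s + 1) < 2 ** REGISTER_BITS:
        s += 1                            # shift left while the result still fits
    while abs(x) * Fraction(2) ** s >= 2 ** REGISTER_BITS:
        s -= 1                            # shift right if the value is too large
    m = round(x * Fraction(2) ** s)       # nearest binary fixed-point value
    if abs(m) == 2 ** REGISTER_BITS:      # rounding overflowed the register
        m //= 2
        s -= 1
    return m, s


def reconvert(m: int, s: int, digits: int) -> str:
    """Rescale by 2**-s and round to the original number of decimal digits."""
    n = round(Fraction(m) * Fraction(2) ** -s * 10 ** digits)  # exact arithmetic
    sign, n = ("-", -n) if n < 0 else ("", n)
    return f"{sign}{n // 10 ** digits}.{n % 10 ** digits:0{digits}d}"


m, s = scale_input("3.14159")
print(reconvert(m, s, 5))                 # -> 3.14159: exact round trip
```

Choosing the largest admissible shift s keeps the rounding error at or below 2**-(s+1), which is the sense in which the scaling yields the best possible accuracy. Provided the register carries enough bits beyond the decimal precision of the input, that error is far smaller than half a unit in the last decimal place, so rounding the rescaled value back to the original number of digits reproduces the input exactly; this is the exact-reconversion property.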