Why Linear Interpolation

Linear interpolation is the computationally simplest of all possible interpolation techniques. Interestingly, it works reasonably well in many practical situations, even when the corresponding computational models are rather complex. In this paper, we explain this empirical fact by showing that linear interpolation is the only interpolation procedure that satisfies several reasonable properties such as consistency and scale-invariance.

1 Formulation of the Problem

Need for interpolation. In many practical situations, we know that the value of a quantity y is uniquely determined by the value of some other quantity x, but we do not know the exact form of the corresponding dependence y = f(x). To find this dependence, we measure the values of x and y in different situations. As a result, we get the values y_i = f(x_i) of the unknown function f(x) for several values x_1, ..., x_n. Based on this information, we would like to predict the value f(x) for all other values x. When x lies between the smallest and the largest of the values x_i, this prediction is known as interpolation; for values x smaller than the smallest x_i or larger than the largest x_i, it is known as extrapolation; see, e.g., [1].

Simplest possible case of interpolation. The simplest possible case of interpolation is when we only know the values y_1 = f(x_1) and y_2 = f(x_2) of the function f(x) at two points x_1 < x_2, and we would like to predict the value f(x) at points x ∈ (x_1, x_2).

In many cases, linear interpolation works well: why? One of the most well-known interpolation techniques is based on the assumption that the function f(x) is linear on the interval [x_1, x_2]. Under this assumption, we get the following formula for f(x):

f(x) = \frac{x - x_1}{x_2 - x_1} \cdot f(x_2) + \frac{x_2 - x}{x_2 - x_1} \cdot f(x_1).

One can check that at x = x_1 this expression returns f(x_1) and at x = x_2 it returns f(x_2), so the interpolant indeed agrees with the known values at both endpoints.
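To illustrate the formula, here is a minimal Python sketch; the function name linear_interpolation and the sample values are illustrative choices, not part of the original paper.

    def linear_interpolation(x1, y1, x2, y2, x):
        """Estimate f(x) for x in (x1, x2), given f(x1) = y1 and f(x2) = y2,
        under the assumption that f is linear on [x1, x2]."""
        if x1 == x2:
            raise ValueError("x1 and x2 must be distinct")
        # Each known value is weighted by the (normalized) distance of x
        # from the opposite endpoint, exactly as in the formula above.
        return (x - x1) / (x2 - x1) * y2 + (x2 - x) / (x2 - x1) * y1

    # Example: if f(1) = 3 and f(5) = 11, the linear estimate at x = 2 is 5.
    print(linear_interpolation(1.0, 3.0, 5.0, 11.0, 2.0))  # prints 5.0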

[1] J. Miller, Numerical Analysis, Nature, 1966.