Maximum Entropy Approximation

In this paper, the construction of scattered data approximants is studied using the principle of maximum entropy. For under-determined and ill-posed problems, Jaynes's principle of maximum information-theoretic entropy provides a means for least-biased statistical inference when insufficient information is available. Consider a set of distinct nodes $\{x_i\}_{i=1}^{n}$ in $\mathbb{R}^d$, and a point $p$ with coordinate $x$ that lies within the convex hull of $\{x_i\}$. The convex approximation of a function $u(x)$ is written as

\[
u^h(x) = \sum_{i=1}^{n} \phi_i(x)\, u_i ,
\]

where the $\phi_i(x) \ge 0$ are known as shape functions, and $u^h$ must reproduce affine functions ($d = 2$):

\[
\sum_{i=1}^{n} \phi_i = 1, \qquad \sum_{i=1}^{n} \phi_i x_i = x, \qquad \sum_{i=1}^{n} \phi_i y_i = y .
\]

We view the shape functions as a discrete probability distribution, and the linear constraints as expectations of a linear function. For $n > 3$, the problem is under-determined. To obtain a unique solution, we compute the $\phi_i$ by maximizing the uncertainty (Shannon entropy)

\[
H(\phi) = -\sum_{i=1}^{n} \phi_i \log \phi_i ,
\]

subject to the above three constraints. In this approach, only the...
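As a numerical illustration of the above constrained maximization: the method of Lagrange multipliers gives the solution the standard exponential form $\phi_i(x) = Z_i/Z$, with $Z_i = \exp\bigl(-\lambda \cdot (x_i - x)\bigr)$ and $Z = \sum_j Z_j$, where the multipliers $\lambda \in \mathbb{R}^d$ are found by minimizing the convex dual function $\log Z(\lambda)$. The sketch below, in Python with NumPy, solves this dual by Newton's method; the function name `maxent_shape_functions`, the square example, and the solver tolerances are illustrative assumptions, not part of the paper.

```python
import numpy as np

def maxent_shape_functions(nodes, x, tol=1e-12, max_iter=50):
    """Max-ent shape functions phi_i(x): a sketch of the dual (Newton) solve.

    Minimizes log Z(lam), Z = sum_i exp(-lam . (x_i - x)); the stationarity
    condition recovers the linear reproducing constraint sum_i phi_i x_i = x.
    (Hypothetical helper; names and tolerances are illustrative.)
    """
    nodes = np.asarray(nodes, dtype=float)
    x = np.asarray(x, dtype=float)
    dx = nodes - x                     # shifted coordinates x_i - x, shape (n, d)
    lam = np.zeros_like(x)             # Lagrange multipliers, one per dimension

    for _ in range(max_iter):
        w = np.exp(-dx @ lam)          # unnormalized weights Z_i
        phi = w / w.sum()              # candidate shape functions (sum to 1)
        g = -phi @ dx                  # gradient of log Z: -E_phi[x_i - x]
        if np.linalg.norm(g) < tol:    # constraints satisfied to tolerance
            break
        # Hessian of log Z = covariance of (x_i - x) under phi (positive definite
        # for distinct nodes spanning R^d), so the Newton step is well posed.
        H = (dx * phi[:, None]).T @ dx - np.outer(phi @ dx, phi @ dx)
        lam -= np.linalg.solve(H, g)   # Newton update on the multipliers
    return phi

# Example: four nodes of a unit square; evaluation point inside the convex hull.
nodes = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
phi = maxent_shape_functions(nodes, (0.3, 0.4))
print(phi, phi.sum())              # nonnegative, partition of unity
print(phi @ np.asarray(nodes))     # reproduces the point (0.3, 0.4)
```

Because $\log Z$ is convex in $\lambda$, Newton's method converges rapidly for any evaluation point strictly inside the convex hull of the nodes; on the hull boundary the weights degenerate and the Hessian becomes singular, which a production implementation would need to guard against.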