In this paper, the construction of scattered data approximants is studied using the principle of maximum entropy. For under-determined and ill-posed problems, Jaynes's principle of maximum information-theoretic entropy provides a means for least-biased statistical inference when insufficient information is available. Consider a set of distinct nodes $\{x_i\}_{i=1}^n$ in $\mathbb{R}^d$, and a point $p$ with coordinate $x$ that is located within the convex hull of the set $\{x_i\}$. The convex approximation of a function $u(x)$ is written as $u^h(x) = \sum_{i=1}^n \varphi_i(x)\, u_i$, where the $\{\varphi_i\}_{i=1}^n \ge 0$ are known as shape functions, and $u^h$ must reproduce affine functions (for $d = 2$): $\sum_{i=1}^n \varphi_i = 1$, $\sum_{i=1}^n \varphi_i x_i = x$, $\sum_{i=1}^n \varphi_i y_i = y$. We view the shape functions as a discrete probability distribution, and the linear constraints as expectations of a linear function. For $n > 3$, the problem is under-determined. To obtain a unique solution, we compute the $\varphi_i$ by maximizing the uncertainty $H(\varphi) = -\sum_{i=1}^n \varphi_i \log \varphi_i$, subject to the three constraints above. In this approach, only the...
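For linear constraints of this kind, the maximizer of $H$ takes the standard exponential (Gibbs) form $\varphi_i(x) = Z_i(\lambda) / \sum_j Z_j(\lambda)$ with $Z_i(\lambda) = \exp(-\lambda \cdot (x_i - x))$, where the Lagrange multipliers $\lambda \in \mathbb{R}^d$ minimize the convex dual $F(\lambda) = \log \sum_i \exp(-\lambda \cdot (x_i - x))$. The sketch below, in Python with NumPy, is one minimal way to evaluate such shape functions by Newton iteration on this dual; the function name, tolerance, and iteration cap are illustrative choices, not taken from the paper.

```python
import numpy as np

def maxent_shape_functions(nodes, x, tol=1e-10, max_iter=50):
    """Max-ent shape functions at a point x inside the convex hull of nodes.

    Maximizes H(phi) = -sum_i phi_i log phi_i subject to
    sum_i phi_i = 1 and sum_i phi_i x_i = x, via Newton's method
    on the convex dual F(lam) = log sum_i exp(-lam . (x_i - x)).
    """
    d = np.asarray(nodes, dtype=float) - np.asarray(x, dtype=float)  # (n, dim)
    lam = np.zeros(d.shape[1])            # dual variables (Lagrange multipliers)
    phi = np.full(len(d), 1.0 / len(d))   # initial guess: uniform distribution
    for _ in range(max_iter):
        a = -d @ lam
        a -= a.max()                      # shift exponents for numerical stability
        w = np.exp(a)
        phi = w / w.sum()                 # current shape functions (sum to 1)
        m = phi @ d                       # constraint residual: zero at the optimum
        g = -m                            # gradient of the dual F(lam)
        if np.linalg.norm(g) < tol:
            break
        # Hessian of F: covariance of the shifted nodes under phi (SPD in the interior)
        hess = d.T @ (phi[:, None] * d) - np.outer(m, m)
        lam -= np.linalg.solve(hess, g)   # Newton step on the dual
    return phi

# Example: shape functions at a point inside the unit square's four corners.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x = np.array([0.4, 0.3])
phi = maxent_shape_functions(nodes, x)
print(phi, phi.sum(), phi @ nodes)        # partition of unity; phi @ nodes == x
```

The two printed checks are exactly the reproducing conditions from the abstract: the shape functions form a partition of unity and reproduce the evaluation point's coordinates.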
[1] Solomon Kullback et al., Information Theory and Statistics, 1960.
[2] Aleksandr Yakovlevich Khinchin et al., Mathematical Foundations of Information Theory, 1959.
[3] Huafeng Liu et al., Meshfree Particle Methods, 2004.
[4] Maya Rani Gupta et al., An Information Theory Approach to Supervised Learning, 2003.
[5] G. Strang et al., An Analysis of the Finite Element Method, 1974.
[6] Guirong Liu, Mesh Free Methods: Moving Beyond the Finite Element Method, 2002.
[7] E. Jaynes, Probability Theory: The Logic of Science, 2003.
[8] Kai Hormann et al., Surface Parameterization: A Tutorial and Survey, in Advances in Multiresolution for Geometric Modelling, 2005.