Kernels K
arise in many contexts, including
approximation,
surface reconstruction, the numerical
analysis of fluid-structure interactions, and
geostatistics.
In particular, the span of the translates K(.,xj)
for scattered centers xj
forms a very useful trial space for these applications. However, the basis
of these translates is usually very ill-conditioned, while,
by results of S. De Marchi and R. Schaback, the interpolants to data
on X, viewed as functions, are rather stable. This leads us
to consider different data-dependent bases.
We introduce a new technique for constructing
a number of useful bases with special properties
in kernel spaces for multivariate interpolation on scattered data.
This work provides a variety of bases derived from factorizations of the kernel matrix.
These bases differ in their stability, orthogonality, adaptivity, duality, and computational efficiency properties.
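As one illustration of a factorization-based basis, a symmetric eigendecomposition of the kernel matrix yields a basis that is orthonormal in the discrete sense (via its values on X) and orthogonal in the native space. This is a minimal sketch with an assumed Gaussian kernel and assumed centers, not data from the text:

```python
import numpy as np

def kernel(x, y):
    # Gaussian kernel; an illustrative choice, not prescribed by the text
    return np.exp(-np.subtract.outer(x, y) ** 2)

X = np.linspace(0.0, 2.0, 5)     # assumed scattered centers
K = kernel(X, X)                 # kernel (Gram) matrix on X

# Eigendecomposition K = Q diag(lam) Q^T
lam, Q = np.linalg.eigh(K)
C = Q / lam                      # coefficient matrix of the basis

V = K @ C                        # basis values on X; equals Q
assert np.allclose(V, Q)
assert np.allclose(V.T @ V, np.eye(len(X)))        # discretely orthonormal
assert np.allclose(C.T @ K @ C, np.diag(1 / lam))  # orthogonal in the native space
```

The assertions check the two orthogonality properties that distinguish this factorization from others.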
Special emphasis is given to the "Newton" basis
arising from a pivoted Cholesky factorization. It turns out to be stable and computationally cheap
while being orthonormal in the "native" Hilbert space
of the kernel. There are efficient adaptive algorithms
for calculating the Newton basis along the lines
of Orthogonal Matching Pursuit.
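The connection between the Cholesky factor and the Newton basis can be sketched as follows. This is an illustrative example with an assumed Gaussian kernel and assumed centers; pivoting and the adaptive selection of centers are omitted for brevity:

```python
import numpy as np

def kernel(x, y):
    # Gaussian kernel; an illustrative choice, not prescribed by the text
    return np.exp(-np.subtract.outer(x, y) ** 2)

X = np.array([0.0, 0.5, 1.2, 2.0])   # assumed scattered centers
K = kernel(X, X)                     # kernel (Gram) matrix on X

# Newton basis from the Cholesky factorization K = L L^T
L = np.linalg.cholesky(K)
C = np.linalg.inv(L).T               # coefficients: v_j(x) = sum_i C[i, j] K(x, x_i)

def newton_basis(x):
    # Row of Newton basis values at evaluation point(s) x
    return kernel(x, X) @ C

# The Newton basis values on X form the triangular factor L itself ...
assert np.allclose(newton_basis(X), L)
# ... and the basis is orthonormal in the native space: C^T K C = I
assert np.allclose(C.T @ K @ C, np.eye(len(X)))
```

The triangularity of the values on X is what gives the basis its Newton-like, recursively extendable character.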
This work also extends these constructions to the important case of
conditionally positive definite kernels, such as
thin-plate splines or polyharmonic kernels, which generalize
the well-known positive definite case and likewise arise in
approximation, surface reconstruction, the numerical analysis of
fluid-structure interactions, computer experiments, and geostatistics.
The goal is to construct well-behaved bases for
interpolation on a finite set X in R^d by the translates
K(.,x), for x in X, of a fixed kernel
K which is conditionally positive definite of
order m>0.
Particularly interesting cases are bases of Lagrange or
Newton type, and bases which are orthogonal or orthonormal
either discretely (i.e. via their function values on X) or as
elements of the underlying "native" space for the given kernel,
which is a direct sum of a Hilbert space and the space Pmd of
d-variate polynomials of order up to m. Finally, we verify that using the reduced kernel,
which is positive definite, leads to the same Newton basis as the one provided by the adaptive method;
on this foundation, the conditionally positive definite case is recovered by adding the polynomial part,
as proposed via extended kernels.
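In the conditionally positive definite case, the classical interpolation system augments the kernel matrix with a polynomial block so that the kernel coefficients are orthogonal to Pmd. The following minimal sketch uses the cubic kernel |x-y|^3 (conditionally positive definite of order 2 in one dimension) with assumed centers and data; it shows only the standard augmented system, not the reduced or extended kernel constructions themselves:

```python
import numpy as np

# Assumed centers and sample data, for illustration only
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
f = np.sin(2 * np.pi * X)

K = np.abs(np.subtract.outer(X, X)) ** 3   # cubic kernel, CPD of order m = 2
P = np.vander(X, 2, increasing=True)       # basis of P_2^1: columns [1, x]

# Saddle-point system: kernel block plus polynomial block, with the
# side condition P^T a = 0 on the kernel coefficients
n, q = len(X), P.shape[1]
A = np.block([[K, P], [P.T, np.zeros((q, q))]])
sol = np.linalg.solve(A, np.concatenate([f, np.zeros(q)]))
a, b = sol[:n], sol[n:]

def s(x):
    # Interpolant: kernel part plus polynomial part
    return np.abs(np.subtract.outer(x, X)) ** 3 @ a + np.vander(x, 2, increasing=True) @ b

assert np.allclose(s(X), f)   # interpolation conditions hold on X
```

The polynomial block is exactly the "polynomial part" that turns the positive definite machinery into an interpolant for a kernel that is only conditionally positive definite.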