Summary of interpolation

We have seen now that polynomial interpolation, splines, and RBFs can all be expressed in terms of a linear function basis, with the coordinates given by a set of weights \(\vec{\omega}\) that is determined by the data:

\[ y^{interp}(x) = A(x) \cdot \vec{\omega} \]

This is excellent since we can use linear system tools to solve for \(\vec{\omega}\) and therefore build our function. Note that the functional dependence on \(x\) is contained within \(A\).
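To make this concrete, here is a minimal sketch, using a monomial basis as one possible choice of \(A\) and made-up sample data, of solving the linear system for \(\vec{\omega}\) and evaluating the interpolant:

```python
import numpy as np

# Sample data (hypothetical): 6 points from a smooth function
x_data = np.linspace(0.0, 2.0, 6)
y_data = np.sin(2.0 * x_data)

# Monomial basis as one concrete choice of A(x): A[i, j] = x_i**j
A = np.vander(x_data, N=len(x_data), increasing=True)

# Solve the square linear system A @ w = y for the weights
w = np.linalg.solve(A, y_data)

# Evaluate the interpolant at new points: y_interp(x) = A(x) @ w
x_new = np.linspace(0.0, 2.0, 50)
A_new = np.vander(x_new, N=len(x_data), increasing=True)
y_interp = A_new @ w

# The interpolant reproduces the data exactly (up to round-off)
assert np.allclose(A @ w, y_data)
```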

Let’s recap and generalize:

  • For any \(n\) points there is a polynomial of degree \(n-1\) that passes through them, but because of Runge’s phenomenon you generally don’t want to use it (see the sketch after this list)!

  • Piecewise polynomials are stiffer and avoid Runge’s phenomenon, but enforcing smoothness causes issues in N-D. So what do we do? Standard packages offer simplistic but pragmatic interpolators (optimized for either rectangular or irregular grids).

  • RBFs are great and flexible, but prone to numerical error because they require solving a poorly conditioned linear system.
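As a quick illustration of the first two bullets, here is a minimal sketch (using Runge's function as a made-up test case) comparing a single high-degree polynomial with a piecewise cubic spline on the same equally spaced nodes:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Runge's function on equally spaced nodes
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
x_nodes = np.linspace(-1.0, 1.0, 11)
y_nodes = f(x_nodes)

x_fine = np.linspace(-1.0, 1.0, 401)

# Degree-10 polynomial through all 11 points: oscillates badly near the edges
poly_coeffs = np.polyfit(x_nodes, y_nodes, deg=len(x_nodes) - 1)
y_poly = np.polyval(poly_coeffs, x_fine)

# Piecewise cubic spline through the same points: stays close to f
spline = CubicSpline(x_nodes, y_nodes)
y_spline = spline(x_fine)

print("max |error|, polynomial:", np.max(np.abs(y_poly - f(x_fine))))
print("max |error|, spline:    ", np.max(np.abs(y_spline - f(x_fine))))
```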

One pragmatic possibility is to use local interpolation methods, but these have the drawback of needing to find (triangulate) the nearest neighbours before they can be used (a short example follows this list):

  • Nearest ND interpolator: Find the nearest data point and use that.

  • Linear ND interpolator: For each input, a triangulation finds the enclosing data points and linear barycentric interpolation is performed.
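Here is a minimal sketch (with a made-up 2-D test function) of how these local interpolators can be used via SciPy's NearestNDInterpolator and LinearNDInterpolator:

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator, LinearNDInterpolator

rng = np.random.default_rng(0)

# Scattered 2-D sample data (hypothetical test function)
points = rng.uniform(-1.0, 1.0, size=(200, 2))
values = np.sin(np.pi * points[:, 0]) * np.cos(np.pi * points[:, 1])

# Both interpolators build their spatial data structure (KD-tree /
# Delaunay triangulation) once, at construction time
nearest = NearestNDInterpolator(points, values)
linear = LinearNDInterpolator(points, values)

# Query at new locations (inside the convex hull of the data)
query = rng.uniform(-0.9, 0.9, size=(5, 2))
print("nearest:", nearest(query))
print("linear: ", linear(query))
```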

Let’s now consider moving to curve fitting, in which the function doesn’t have to pass through every data point. Through a nifty tool called the pseudoinverse, we’ll see that the methods of interpolation can be applied almost unchanged!
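As a teaser, here is a minimal sketch (with made-up noisy data) of the idea: when \(A\) has more rows than columns, the pseudoinverse gives the least-squares weights rather than an exact interpolant, but the structure \(y(x) = A(x) \cdot \vec{\omega}\) is unchanged:

```python
import numpy as np

# Noisy data: more points than basis functions, so no exact interpolant exists
rng = np.random.default_rng(1)
x_data = np.linspace(0.0, 1.0, 30)
y_data = 2.0 * x_data + 0.5 + 0.05 * rng.standard_normal(x_data.size)

# Same structure as before: y_fit(x) = A(x) @ w, here with a small monomial basis
A = np.vander(x_data, N=3, increasing=True)   # columns: 1, x, x^2

# A is 30 x 3 and not invertible, but the pseudoinverse gives the
# least-squares solution: w = pinv(A) @ y
w = np.linalg.pinv(A) @ y_data

# Equivalent (and usually preferred numerically): np.linalg.lstsq
w_lstsq, *_ = np.linalg.lstsq(A, y_data, rcond=None)
assert np.allclose(w, w_lstsq)

print("fitted weights:", w)
```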