Truncation error#

Truncation error occurs when we approximate a mathematical function or process by keeping only a finite number of terms or steps.

Taylor series#

Recall the EVER SO USEFUL Taylor series:

\(f(x+\Delta x) = f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + f'''(x) \frac{\Delta x ^3}{6} + ...\)

\(= \sum_{n=0}^{\infty} \frac{f^{(n)}(x)}{n!} \Delta x ^n\)

but this is not useful unless we have an infinite amount of time and resources.

If \(\Delta x\) is small, \(\Delta x ^2\) is smaller, and \(\Delta x ^3\) smaller still. In fact, as long as \(f(x)\) is well behaved (loosely defined as continuous, smooth, differentiable, finite, etc.), the derivatives don’t explode, and with the \(n!\) in the denominator the rightmost terms get very small.
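For example, for \(f(x) = e^x\) at \(x=0\), where every derivative equals \(1\), taking \(\Delta x = 0.1\) gives terms \(1,\ 0.1,\ 0.005,\ 0.00017,\ \ldots\): each term is well over an order of magnitude smaller than the one before.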

So let’s truncate the series and keep only the first \(k\) terms (shown here for \(k=3\)):

\(f(x+\Delta x) = f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + E_k \)

where

\(E_k = \sum_{n=k}^{\infty} \frac{f^{(n)}(x)}{n!} \Delta x ^n\)

is the truncation error.

This quantity is akin to a True Error in that if we knew what \(E_k\) was exactly, we would have the true function \(f\)!
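To make this concrete, here is a quick numerical check; a minimal sketch using \(f(x) = e^x\) as an illustrative choice (all of its derivatives are \(e^x\), so the terms are easy to write down):

import math

# Truncate the Taylor series of exp about x = 0 after the first 3 terms (k = 3)
x, dx = 0.0, 0.1
truncated = math.exp(x) * (1 + dx + dx**2 / 2)

# The truncation error E_3 is whatever is left over from the true value
E_3 = math.exp(x + dx) - truncated
print(E_3)  # ~1.71e-4, close to the leading neglected term dx**3 / 6 ~ 1.67e-4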

It is more useful to define the order of the error. Noting that the leading term of the error is \(\propto \Delta x ^k\), we would say:

\(f(x_0+\Delta x) \approx f(x_0) + f'(x_0) \Delta x + f''(x_0) \frac{\Delta x ^2}{2} + \mathcal{O}(\Delta x ^3)\)

or that this is a third order approximation.

This is a useful statement, since it indicates the payoff for tuning the numerical parameters. In this case, the error shrinks like \(\Delta x ^3\): halving the step size reduces the error by a factor of eight.
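We can verify this scaling numerically; a minimal sketch, again using the quadratic (3-term) Taylor approximation of \(e^x\) as the test case:

import math

def quad_taylor_error(dx):
    # Error of the 3-term Taylor approximation of exp about x = 0
    return math.exp(dx) - (1 + dx + dx**2 / 2)

# The leading error term is proportional to dx**3, so halving dx
# should shrink the error by a factor of about 8
print(quad_taylor_error(0.1) / quad_taylor_error(0.05))  # ~8.1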

Example: Find the order of the approximate derivative we calculated previously (known as the forward difference)#

\(f'(x) \approx \frac{f(x+\Delta x) - f(x)}{\Delta x}\)

We can substitute the Taylor series for \(f(x+\Delta x)\):

\(f'(x) \approx \frac{f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + \cdots - f(x)}{\Delta x}\)

\(\approx \frac{f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + \cdots}{\Delta x}\)

\(\approx f'(x) + f''(x) \frac{\Delta x}{2} + \cdots\)

\(\approx f'(x) + \mathcal{O}(\Delta x)\)

Therefore this approximation, called the forward difference, is a first order algorithm.
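We can check the first order behaviour numerically; a minimal sketch using \(f = \sin\) at \(x = 1\), whose true derivative is \(\cos(1)\):

import math

def forward_difference(f, x, dx):
    return (f(x + dx) - f(x)) / dx

# First order: halving dx should roughly halve the error
for dx in [0.1, 0.05, 0.025]:
    error = abs(forward_difference(math.sin, 1.0, dx) - math.cos(1.0))
    print(f"dx = {dx:.3f}, error = {error:.2e}")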

Example 2: Find the order of approximation of the central difference formula#

\(f'(x) \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2 \Delta x}\)

Substituting the (3rd order) Taylor series for \(f(x+\Delta x)\) and \(f(x+[-\Delta x])\):

\(f'(x) \approx \frac{f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + f'''(x) \frac{\Delta x ^3}{6} + \cdots - \left[ f(x) + f'(x) [-\Delta x] + f''(x) \frac{[-\Delta x] ^2}{2} + f'''(x) \frac{[-\Delta x] ^3}{6} + \cdots \right]}{2 \Delta x}\)

\(\approx \frac{f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x ^2}{2} + f'''(x) \frac{\Delta x ^3}{6} + \cdots - f(x) + f'(x) \Delta x - f''(x) \frac{\Delta x ^2}{2} + f'''(x) \frac{\Delta x ^3}{6} + \cdots}{2 \Delta x}\)

\(\approx \frac{2 f'(x) \Delta x + 2 f'''(x) \frac{\Delta x ^3}{6} + \cdots}{2 \Delta x}\)

\(\approx f'(x) + f'''(x) \frac{\Delta x ^2}{6} + \cdots\)

\(\approx f'(x) + \mathcal{O}(\Delta x ^2)\)

Therefore, the central difference approximation is a second order algorithm (for the same number of function calls!). By halving the step size, the error is quartered!
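Again, a minimal numerical check, comparing the two schemes on \(f = \sin\) at \(x = 1\):

import math

def forward(f, x, dx):
    return (f(x + dx) - f(x)) / dx

def central(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# Halving dx: the forward error should roughly halve,
# while the central error should roughly quarter
for dx in [0.1, 0.05]:
    e_fwd = abs(forward(math.sin, 1.0, dx) - math.cos(1.0))
    e_cen = abs(central(math.sin, 1.0, dx) - math.cos(1.0))
    print(f"dx = {dx:.2f}: forward error = {e_fwd:.2e}, central error = {e_cen:.2e}")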

Common mathematical functions#

Computers are very good at addition / subtraction, multiplication / division, and exponentiation. How should we calculate other functions?

Let’s examine some Taylor expansions:

| Function | Taylor Expansion |
|---|---|
| \(\sin(x)\) | \( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \) |
| \(\cos(x)\) | \( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \) |
| \(\exp(x)\) | \( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots \) |
| \(\ln(1+x)\) | \( x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots \) |

On the surface these look good; with infinite precision the first three converge for every \(x\) (the \(\ln(1+x)\) series only converges for \(-1 < x \le 1\)).

But we are not in infinite precision.
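As a taste of what a direct implementation might look like, here is a minimal sketch that sums the \(\exp(x)\) series term by term until new terms stop contributing (the stopping rule is an illustrative choice, not how real libraries work):

import math

def exp_series(x, tol=1e-15):
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x / n      # builds x**n / n! incrementally
        total += term
    return total

print(exp_series(1.0), math.exp(1.0))  # agree closely for modest x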

Example: Examine the terms of \(\sin(x)\) for small \(x\)#

import numpy as np
import math

def taylor_series_sin(x, n_terms):
    """
    Calculate the Taylor series expansion of sin(x) up to n_terms.

    Parameters:
    x (float): The point at which to evaluate the Taylor series.
    n_terms (int): The number of terms to include in the expansion.

    Returns:
    None. Prints each term and the resulting approximation.
    """
    terms = []
    for n in range(n_terms):
        term = ((-1)**n * x**(2*n + 1)) / math.factorial(2*n + 1)
        terms.append(term)
    print("The terms are as follows:")
    for i, term in enumerate(terms):
        print(f"Term {i+1}: {term:.10f}")
    print(f"Approximate sin({x}): {sum(terms):.10f}, and it should be {math.sin(x):.10f}")

For a small x:

taylor_series_sin(1, 10)

NB: The terms are flipping signs (potential for roundoff error), but more importantly they are decreasing.

-> No problem

Example: Examine the terms of \(\sin(x)\) for large \(x\)#

taylor_series_sin(10, 10)

Getting a bit funny…

taylor_series_sin(100, 10)

Completely wrong. For \(x=100\) the terms grow enormously before the factorial in the denominator takes over, so the partial sum is dominated by huge terms of alternating sign, and ten terms is nowhere near convergence.

Why you should use a package#

In this case, the remedy is fairly simple, but if you are not careful, these functions can behave very strangely. In practice, the means of calculation are very sophisticated for performance and stability, including other expansion techniques and sometimes even look-up tables.
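One simple remedy for \(\sin\) is range reduction; a minimal sketch (production libraries do this far more carefully), exploiting the fact that \(\sin\) is \(2\pi\)-periodic:

import math

# sin(100) == sin(100 mod 2*pi), and the reduced argument is small
# enough for the 10-term series to behave
x_reduced = math.fmod(100, 2 * math.pi)  # ~5.75
taylor_series_sin(x_reduced, 10)         # now agrees well with math.sin(100)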

This is why we use packages! :-)

The Taylor expansion is still useful to consider limiting behaviour.

For small \(x\), \(\exp(x) \approx 1+x\), so computing \(\exp(x) - 1\) directly is subject to roundoff error: the leading 1 swamps the small remainder, and the subtraction cancels most of the significant digits. Therefore packages like numpy provide special functions such as expm1, with \(\mathrm{expm1}(x) = \exp(x)-1\) computed accurately for small \(x\).

# Poorer approximation: subtracting 1 cancels most of the significant digits
print(np.exp(1e-10) - 1)

# Better approximation: expm1 avoids the cancellation
print(np.expm1(1e-10))
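For the same reason, numpy also provides log1p, with \(\mathrm{log1p}(x) = \ln(1+x)\) computed accurately for small \(x\):

# Poorer: forming 1 + x first rounds away the low-order digits of x
print(np.log(1 + 1e-10))

# Better: log1p avoids the roundoff in 1 + x
print(np.log1p(1e-10))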