Truncation Error#
import numpy as np
import math
Truncation error occurs when we approximate a mathematical function by truncating an infinite series.
Taylor Series#
The Taylor series is a fundamental tool in numerical methods. It allows us to represent a function as an infinite sum of its derivatives at a single point.
\(f(x+\Delta x) = f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x^2}{2!} + f'''(x) \frac{\Delta x^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{f^{(n)}(x)}{n!} \Delta x^n\)
However, we cannot compute an infinite number of terms.
If \(\Delta x\) is small, the higher-order terms become progressively smaller. This allows us to truncate the series after a certain number of terms, creating an approximation.
\(f(x+\Delta x) = f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x^2}{2!} + E_3\)
The error term introduced by this truncation is called the truncation error. If we keep the first \(k\) terms of the series (here \(k = 3\)), it is equal to the sum of all the neglected terms:
\(E_k = \sum_{n=k}^{\infty} \frac{f^{(n)}(x)}{n!} \Delta x^n\)
The truncation error is often expressed using Big-O notation, which describes the order of the error. If the first neglected term is proportional to \(\Delta x^k\), we say the approximation is of order \(k\), or \(\mathcal{O}(\Delta x^k)\).
\(f(x_0+\Delta x) \approx f(x_0) + f'(x_0) \Delta x + f''(x_0) \frac{\Delta x^2}{2} + \mathcal{O}(\Delta x^3)\)
This notation is useful because it tells us how the error changes as we adjust the step size \(\Delta x\).
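We can see this behaviour numerically. The short check below (a sketch, using \(f(x) = e^x\) purely as an example) keeps the Taylor terms up to \(\Delta x^2\); halving \(\Delta x\) should shrink the error by roughly a factor of \(2^3 = 8\), consistent with \(\mathcal{O}(\Delta x^3)\).
# Second-order Taylor approximation of exp(x0 + dx); every derivative of exp
# equals exp(x0), so the neglected terms are O(dx^3).
x0 = 1.0
for dx in [0.1, 0.05, 0.025]:
    approx = np.exp(x0) * (1 + dx + dx**2 / 2)
    error = abs(np.exp(x0 + dx) - approx)
    print(f"dx = {dx:<6} error = {error:.3e}")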
Example: Order of the Forward Difference Approximation#
Let’s find the order of the forward difference approximation for the first derivative:
\(f'(x) \approx \frac{f(x+\Delta x) - f(x)}{\Delta x}\)
We can substitute the Taylor series for \(f(x+\Delta x)\):
\(\frac{f(x+\Delta x) - f(x)}{\Delta x} = \frac{(f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x^2}{2} + \cdots) - f(x)}{\Delta x} = f'(x) + f''(x) \frac{\Delta x}{2} + \cdots\)
The leading error term is proportional to \(\Delta x\), so the forward difference is a first-order approximation: \(\frac{f(x+\Delta x) - f(x)}{\Delta x} = f'(x) + \mathcal{O}(\Delta x)\).
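We can verify the order numerically. The sketch below (using \(f(x) = \sin(x)\) as an arbitrary test function) shows the forward-difference error roughly halving each time \(\Delta x\) is halved, as expected for a first-order method.
# Forward difference error for f(x) = sin(x) at x = 1; the exact derivative is cos(1).
x = 1.0
for dx in [0.1, 0.05, 0.025]:
    forward = (np.sin(x + dx) - np.sin(x)) / dx
    print(f"dx = {dx:<6} error = {abs(forward - np.cos(x)):.3e}")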
Example 2: Order of the Central Difference Approximation#
Now let’s find the order of the central difference approximation:
\(f'(x) \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2 \Delta x}\)
Substituting the Taylor series for \(f(x+\Delta x)\) and \(f(x-\Delta x)\):
\(\frac{f(x+\Delta x) - f(x-\Delta x)}{2 \Delta x} = \frac{(f(x) + f'(x) \Delta x + f''(x) \frac{\Delta x^2}{2} + \cdots) - (f(x) - f'(x) \Delta x + f''(x) \frac{\Delta x^2}{2} - \cdots)}{2 \Delta x} = f'(x) + f'''(x) \frac{\Delta x^2}{6} + \cdots\)
The leading error term is proportional to \(\Delta x^2\), so the central difference is a second-order approximation: \(\frac{f(x+\Delta x) - f(x-\Delta x)}{2 \Delta x} = f'(x) + \mathcal{O}(\Delta x^2)\). This means that halving the step size quarters the error, making the central difference much more accurate than the forward difference for the same number of function evaluations.
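The same numerical check (again with \(f(x) = \sin(x)\) as the test function) shows the central-difference error dropping by roughly a factor of four each time \(\Delta x\) is halved.
# Central difference error for f(x) = sin(x) at x = 1; the exact derivative is cos(1).
x = 1.0
for dx in [0.1, 0.05, 0.025]:
    central = (np.sin(x + dx) - np.sin(x - dx)) / (2 * dx)
    print(f"dx = {dx:<6} error = {abs(central - np.cos(x)):.3e}")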
Common Mathematical Functions#
Computers are excellent at basic arithmetic, but how do they calculate more complex functions? Often, they use Taylor series expansions.
| Function | Taylor Expansion |
|---|---|
| \(\sin(x)\) | \(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\) |
| \(\cos(x)\) | \(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\) |
| \(\exp(x)\) | \(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots\) |
| \(\ln(1+x)\) | \(x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots\) |
In exact arithmetic, the series for \(\sin\), \(\cos\), and \(\exp\) converge for every \(x\) (the \(\ln(1+x)\) series converges only for \(-1 < x \le 1\)). In finite precision, however, even a convergent series can lead to issues.
Example: Examining the terms of \(\sin(x)\)#
def taylor_series_sin(x, n_terms):
    """
    Print the first n_terms terms of the Taylor series of sin(x),
    their sum, and math.sin(x) for comparison.
    """
    terms = []
    for n in range(n_terms):
        # n-th term of the series: (-1)^n * x^(2n+1) / (2n+1)!
        term = ((-1)**n * x**(2*n + 1)) / math.factorial(2*n + 1)
        terms.append(term)
    print(f"The terms for sin({x}) are as follows:")
    for i, term in enumerate(terms):
        print(f"Term {i+1}: {term:.10f}")
    print(f"Approximate sin({x}): {sum(terms):.10f}")
    print(f"Actual sin({x}): {math.sin(x):.10f}")
For a small value of x, the terms decrease rapidly:
taylor_series_sin(1, 10)
The terms for sin(1) are as follows:
Term 1: 1.0000000000
Term 2: -0.1666666667
Term 3: 0.0083333333
Term 4: -0.0001984127
Term 5: 0.0000027557
Term 6: -0.0000000251
Term 7: 0.0000000002
Term 8: -0.0000000000
Term 9: 0.0000000000
Term 10: -0.0000000000
Approximate sin(1): 0.8414709848
Actual sin(1): 0.8414709848
For a large value of x, the terms initially grow very large before converging, which can lead to a loss of precision due to round-off error.
taylor_series_sin(10, 10)
The terms for sin(10) are as follows:
Term 1: 10.0000000000
Term 2: -166.6666666667
Term 3: 833.3333333333
Term 4: -1984.1269841270
Term 5: 2755.7319223986
Term 6: -2505.2108385442
Term 7: 1605.9043836822
Term 8: -764.7163731820
Term 9: 281.1457254346
Term 10: -82.2063524662
Approximate sin(10): -16.8118501374
Actual sin(10): -0.5440211109
For a very large x, the approximation can be completely wrong:
taylor_series_sin(100, 10)
The terms for sin(100) are as follows:
Term 1: 100.0000000000
Term 2: -166666.6666666667
Term 3: 83333333.3333333284
Term 4: -19841269841.2698402405
Term 5: 2755731922398.5888671875
Term 6: -250521083854417.1875000000
Term 7: 16059043836821614.0000000000
Term 8: -764716373181981696.0000000000
Term 9: 28114572543455207424.0000000000
Term 10: -822063524662433021952.0000000000
Approximate sin(100): -794697857233433001984.0000000000
Actual sin(100): -0.5063656411
Why You Should Use a Package#
While the remedy for this specific issue is relatively simple (e.g., using range reduction), it highlights the complexities of implementing numerical functions. In practice, the methods used in packages like NumPy are highly sophisticated, often employing a combination of techniques, including different expansion methods and look-up tables, to ensure both performance and stability.
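As a rough illustration of what range reduction means (a sketch only, not how NumPy or the C math library actually implement \(\sin\)), we can fold the argument into \([-\pi, \pi]\) using the periodicity of \(\sin\) before summing the series, so the terms stay small:
def sin_range_reduced(x, n_terms=10):
    """Sketch: Taylor series of sin evaluated after reducing x into [-pi, pi]."""
    # sin is 2*pi-periodic, so the reduced argument gives the same value
    x_reduced = math.remainder(x, 2 * math.pi)
    return sum((-1)**n * x_reduced**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(n_terms))

print(f"Range-reduced series for sin(100): {sin_range_reduced(100):.10f}")
print(f"math.sin(100):                     {math.sin(100):.10f}")
Even this sketch only fixes the large-argument problem; production implementations handle many more edge cases.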
This is why it is almost always better to use a well-tested package rather than implementing these functions yourself.
The Taylor expansion is still a valuable tool for understanding the limiting behavior of functions.
For small \(x\), \(\exp(x) \approx 1+x\), so computing \(\exp(x) - 1\) directly suffers from catastrophic cancellation (round-off error) when \(x\) is very close to zero. To address this, packages like NumPy provide specialized functions such as expm1, which calculates \(\exp(x) - 1\) accurately for small \(x\).
# Direct computation loses precision for small x (cancellation in exp(x) - 1)
print(f"np.exp(1e-10) - 1 = {np.exp(1e-10) - 1}")
# expm1 avoids the cancellation
print(f"np.expm1(1e-10) = {np.expm1(1e-10)}")
np.exp(1e-10) - 1 = 1.000000082740371e-10
np.expm1(1e-10) = 1.00000000005e-10