First order Runge-Kutta methods#

Forward Euler method#

The Forward Euler method (also known as the Euler-Cauchy, point-slope, or explicit Euler method) is the simplest Runge-Kutta method. One simply takes the slope at the current point, which is given by the differential equation, and assumes it is constant over the step:

Given,

\[ \frac{dy}{dt} = f(t, y) \]

the next time step is:

\[ y_{i+1} = y_i + f(t_i, y_i) h\]

which is simply a left Riemann sum; accordingly, the error is \(O(h^2)\) over a single step and \(O(h)\) over the full solution.
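As a minimal sketch, Forward Euler takes only a few lines. Here it is applied to the same test problem solved in the Heun example later in this section, \(y' = 4 e^{0.8 x} - 0.5 y\) with \(y(0)=2\), integrated to \(x=4\) with \(h=1\):

```python
import numpy as np

def forward_euler(f, x0, y0, h, x_end):
  """Integrate y' = f(x, y) from x0 to x_end with a fixed step h."""
  x, y = x0, y0
  while x < x_end:
    y = y + h * f(x, y)  # slope at the current point, held constant over the step
    x += h
  return y

approx = forward_euler(lambda x, y: 4 * np.exp(0.8 * x) - 0.5 * y, 0, 2, 1, 4)
exact = 4 / 1.3 * (np.exp(0.8 * 4) - np.exp(-0.5 * 4)) + 2 * np.exp(-0.5 * 4)
print(approx, exact)
```

With such a coarse step the estimate (about 56.8) falls well short of the exact value (about 75.3), consistent with the \(O(h)\) global error.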

Heun’s method#

Now that we have a prediction for \(y_{i+1}\), is there a way we can use it as an initial guess and correct ourselves? Heun’s method does exactly this and is therefore called a predictor-corrector algorithm.

Consider:

\[ \frac{dy}{dt} = f(t, y) \]

make a prediction,

\[ y^0_{i+1} = y_i + f(t_i, y_i) h\]

Now take the average of the slopes:

\[\begin{split}\begin{align} \bar{y}' &= \frac{y'_i+y'_{i+1}}{2} \\ &= \frac{f(t_i, y_i)+f(t_{i+1}, y^0_{i+1})}{2} \end{align} \end{split}\]

which is used in the corrector:

\[ y_{i+1} = y_i + \frac{f(t_i,y_i)+f(t_{i+1}, y^0_{i+1})}{2} h\]

This is interesting because one could repeat the corrector step, hoping to converge towards the correct answer, but as we will see this is a bad idea:

Example: Heun’s method with multiple predictor-corrector cycles#

Integrate

\[y' = 4 e^{0.8 x}-0.5 y\]

with

\[ y(0)=2\]

using Heun’s method with 1 and 15 predictor-corrector cycles.

import numpy as np

def f(x, y):
  return 4 * np.exp(0.8 * x) - 0.5 * y

def heuns_method(x0, y0, h, t, iterations):
  """
  Applies Heun's method to approximate the solution of a differential equation.

  Args:
    x0: The initial x value.
    y0: The initial y value.
    h: The step size.
    t: The target time for the approximation.
    iterations: The number of iterations for the predictor-corrector cycle.

  Returns:
    The approximate y value at time t.
  """
  x = x0
  y = y0
  while x < t:
    y_pred = y + h * f(x, y)  # predictor: a Forward Euler step
    y_next = y + h * (f(x, y) + f(x + h, y_pred)) / 2  # corrector: average the two slopes
    for _ in range(iterations - 1):
      y_pred = y_next
      print(y_pred)  # show the successive corrector iterates
      y_next = y + h * (f(x, y) + f(x + h, y_pred)) / 2

    y = y_next
    x += h
    print("Time ", x, ", approximation, ", y_next, ', True ,', 4/1.3*(np.exp(.8*x)-np.exp(-.5*x))+2*np.exp(-.5*x))
  return y


# Initial conditions
x0 = 0
y0 = 2

# Step size and target time
h = 1
t = 4

# Estimate integral with Heun's method, iterating once
print("Heun's method with 1 iteration")
y_approx_once = heuns_method(x0, y0, h, t, 1)

# Estimate integral with Heun's method, iterating 15 times
print("\nHeun's method with 15 iterations")
y_approx_15 = heuns_method(x0, y0, h, t, 15)
Heun's method with 1 iteration
Time  1 , approximation,  6.701079461759733 , True , 6.194631377209372
Time  2 , approximation,  16.319768581929353 , True , 14.84392190764649
Time  3 , approximation,  37.199199627848756 , True , 33.67717176796817
Time  4 , approximation,  83.33761313495894 , True , 75.33896260915857

Heun's method with 15 iterations
6.701079461759733
6.2758095963197995
6.382127062679783
6.355547696089787
6.362192537737286
6.360531327325411
6.36094662992838
6.360842804277638
6.360868760690323
6.360862271587152
6.360863893862945
6.3608634882939965
6.360863589686233
6.360863564338175
Time  1 , approximation,  6.360863570675189 , True , 6.194631377209372
16.107133650001515
15.1009979147154
15.35253184853693
15.289648365081547
15.305369235945392
15.30143901822943
15.302421572658421
15.302175934051174
15.302237343702986
15.302221991290033
15.302225829393272
15.302224869867462
15.302225109748914
15.302225049778551
Time  2 , approximation,  15.302225064771143 , True , 14.84392190764649
36.563234929624876
34.28823142403988
34.85698230043613
34.71479458133707
34.75034151111183
34.74145477866814
34.74367646177906
34.74312104100133
34.74325989619577
34.743225182397154
34.74323386084681
34.7432316912344
34.7432322336375
34.74323209803672
Time  3 , approximation,  34.74323213193692 , True , 33.67717176796817
81.80263345001404
76.7180376892117
77.98918662941227
77.67139939436214
77.75084620312467
77.73098450093404
77.7359499264817
77.73470857009478
77.73501890919151
77.73494132441732
77.73496072061087
77.73495587156249
77.73495708382458
77.73495678075906
Time  4 , approximation,  77.73495685652544 , True , 75.33896260915857

Unfortunately, Heun’s method does converge, but not necessarily to the correct answer!

The Trapezoid rule#

In the case that the slope doesn’t depend on the function value, \(y'(t) = f(t)\), the slope at \(t_{i+1}\) can be calculated directly, eliminating the predictor-corrector cycle:

\[ y_{i+1} = y_i + \frac{f(t_i)+f(t_{i+1})}{2} h\]

which is merely the trapezoid rule, carrying \(O(h^3)\) accuracy locally and \(O(h^2)\) globally.
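A quick sketch of this special case, on an illustrative problem not from the text: \(y' = \cos(t)\), \(y(0)=0\), whose exact solution is \(\sin(t)\). Since \(f\) is independent of \(y\), each step is just a trapezoid-rule panel:

```python
import numpy as np

def trapezoid_step(f, t, y, h):
  """One step of y' = f(t): no predictor is needed since f is independent of y."""
  return y + h * (f(t) + f(t + h)) / 2

# Integrate y' = cos(t), y(0) = 0 up to t = 1; exact answer is sin(1)
t, y, h = 0.0, 0.0, 0.1
while t < 1.0 - 1e-12:
  y = trapezoid_step(np.cos, t, y, h)
  t += h
print(y, np.sin(1.0))
```

The result agrees with \(\sin(1)\) to within the expected \(O(h^2)\) global error.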

The Midpoint method#

Recall from the discussion on differentiation / integration that information from the midpoint was often superior to that of either endpoint (in isolation), as over- and under-estimates tend to cancel.

As an alternative to Heun’s predictor-corrector method, let’s subdivide the interval and find the slope at the midpoint. To do this, take a half step:

\[ y_{i+1/2} = y_i + f(t_i,y_i) \frac{h}{2} \]

then use the slope at the midpoint, \(y'_{i+1/2} = f(t_{i+1/2},y_{i+1/2})\) to estimate the full step:

\[ y_{i+1} = y_i + f(t_{i+1/2},y_{i+1/2}) h \]

which has \(O(h^3)\) local / \(O(h^2)\) global error.
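As a sketch, here is the midpoint method applied to the same problem as the Heun example above, \(y' = 4 e^{0.8 x} - 0.5 y\), \(y(0)=2\), with \(h=1\) out to \(x=4\):

```python
import numpy as np

def f(x, y):
  return 4 * np.exp(0.8 * x) - 0.5 * y

def midpoint_method(x0, y0, h, x_end):
  """Take a half step to the midpoint, then use the midpoint slope for the full step."""
  x, y = x0, y0
  while x < x_end:
    y_half = y + f(x, y) * h / 2       # half step to estimate y at the midpoint
    y = y + f(x + h / 2, y_half) * h   # full step using the midpoint slope
    x += h
  return y

approx = midpoint_method(0, 2, 1, 4)
exact = 4 / 1.3 * (np.exp(0.8 * 4) - np.exp(-0.5 * 4)) + 2 * np.exp(-0.5 * 4)
print(approx, exact)
```

With the same step size, the midpoint estimate (about 75.97) lands much closer to the exact value (about 75.34) than Forward Euler does, as expected from its \(O(h^2)\) global error.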

The midpoint method is interesting because it implies we can calculate and combine substeps to effectively achieve better predictions. Let’s generalize this concept…