General Runge-Kutta methods#

We are now in a position to generalize the RK methods. Recalling:

\[ y_{i+1} \approx y_i + h \sum_{n=1}^s a_n k_n\]

For the explicit set of RK methods,

\[\begin{split}\begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + p_2 h, y_i + [q_{21}k_1] h) \\ k_3 &= f(t_i + p_3 h, y_i + [q_{31}k_1 + q_{32}k_2] h) \\ \vdots\\ k_s &= f(t_i + p_s h, y_i + [q_{s1}k_1 + q_{s2}k_2 + \cdots + q_{s,s-1} k_{s-1}] h) \\ \end{align} \end{split}\]

The \(p_n\) and \(q_{nm}\), along with the \(a_n\), are constants that determine the particular RK method.

Note that RK-1 is simply the Forward Euler method.
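In code, a single Forward Euler (RK-1) step is just one slope evaluation. A minimal Python sketch (the function name is illustrative):

```python
def forward_euler_step(f, t, y, h):
    # RK-1: s = 1, a_1 = 1, so y_{i+1} = y_i + h * k_1 with k_1 = f(t_i, y_i)
    return y + h * f(t, y)
```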

Butcher Tableaus#

Noting that \(a_n\) and \(p_n\) are two vectors of dimension \(s\) and \(q_{nm}\) is an \(s \times s\) matrix, RK schemes can be written compactly in a Butcher Tableau. For explicit methods, this looks like:

\[\begin{split}\begin{array}{c|ccccc} p_1 & & & & &\\ p_2 & q_{21} & & & &\\ p_3 & q_{31} & q_{32} & & &\\ \vdots & \vdots & & \ddots & &\\ p_s & q_{s1} & q_{s2} & \cdots & q_{s,s-1} &\\ \hline & a_1 & a_2 & \cdots & a_{s-1} & a_s \\ \end{array}\end{split}\]

Note that the \(q_{nm}\) matrix is strictly lower triangular, so each stage depends only on the stages computed before it. We will soon introduce the implicit family of RK methods, whose \(q_{nm}\) matrices have nonzero entries on or above the diagonal!
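To make the tableau concrete, here is a minimal sketch of a single explicit RK step driven by the arrays \(p\), \(q\), and \(a\); the function `rk_step` and its signature are illustrative, not from any particular library:

```python
import numpy as np

def rk_step(f, t, y, h, p, q, a):
    """One explicit RK step defined by a Butcher tableau.

    p : stage nodes, shape (s,)
    q : strictly lower-triangular stage coefficients, shape (s, s)
    a : weights, shape (s,)
    """
    s = len(a)
    k = np.zeros(s)
    for n in range(s):
        # Explicit method: stage n uses only the slopes k_1 ... k_{n-1}
        k[n] = f(t + p[n] * h, y + h * np.dot(q[n, :n], k[:n]))
    return y + h * np.dot(a, k)
```

Because \(q\) is strictly lower triangular, the stages can be computed one after another with no equation solving; this is exactly what makes the method explicit.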

RK-2 methods#

For a given order, the values of \(a_n\), \(p_n\), and \(q_{nm}\) are derived such that

\[\sum_{n=1}^s a_n k_n \approx f(t_i,y_i) + \frac{f(t_i,y_i)^\prime}{2}h + \frac{f(t_i,y_i)^{\prime\prime}}{6}h^2 + \cdots \]

The second order RK method is:

\[ y_{i+1} = y_i + [a_1 k_1 + a_2 k_2] h \]

with

\[\begin{split} \begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + p_2 h, y_i+q_{21} k_1 h) \end{align} \end{split}\]

Taylor expanding \(k_2\) to first order about \((t_i, y_i)\),

\[ f(t_i + p_2 h, y_i+q_{21} k_1 h) = f(t_i,y_i) + p_2 h \frac{\partial f}{\partial t} + q_{21} k_1 h \frac{\partial f}{\partial y} + O(h^2)\]

which, substituted back into the expression for \(y_{i+1}\), gives:

\[y_{i+1} = y_i + [a_1+a_2] f(t_i, y_i) h + \bigg[a_2 p_2 \frac{\partial f}{\partial t} + a_2 q_{21} f(t_i, y_i) \frac{\partial f}{\partial y} \bigg] h^2 + O(h^3) \]

which we can compare to a second order Taylor expansion:

\[\begin{split} \begin{align} y_{i+1} &= y_i + f(t_i, y_i) h + \frac{f'(t_i, y_i)}{2} h^2 \\ &= y_i + f(t_i, y_i) h + \bigg[\frac{\partial f}{\partial t} + \frac{\partial f}{\partial y} \frac{dy}{dt} \bigg] \frac{h^2}{2} \end{align}\end{split}\]

where \(\frac{dy}{dt} = f(t_i, y_i)\) by the ODE itself.

Comparing terms we see:

\[\begin{split}\begin{align} a_1+a_2 &= 1 \\ a_2 p_2 &= \frac{1}{2} \\ a_2 q_{21}&= \frac{1}{2}\\ \end{align} \end{split}\]

Here we see that we have a single degree of freedom in the set of constants: three equations constrain four unknowns! Any choice satisfying these conditions is second-order accurate, and is therefore exact when the solution is a constant, linear, or quadratic polynomial. Certain choices have better properties in general.
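Concretely, choosing \(a_2\) fixes everything else: \(a_1 = 1 - a_2\) and \(p_2 = q_{21} = \frac{1}{2 a_2}\). A small sketch (the helper name `rk2_tableau` is hypothetical) that builds the corresponding tableau and checks the order conditions:

```python
import numpy as np

def rk2_tableau(a2):
    """The one-parameter RK-2 family, given the free choice of a2."""
    a = np.array([1.0 - a2, a2])            # a1 + a2 = 1
    p = np.array([0.0, 1.0 / (2.0 * a2)])   # a2 * p2 = 1/2
    q = np.zeros((2, 2))
    q[1, 0] = p[1]                          # a2 * q21 = 1/2
    return p, q, a

# Any a2 != 0 satisfies all three order conditions, e.g.:
p, q, a = rk2_tableau(0.75)
assert np.isclose(a.sum(), 1.0)
assert np.isclose(a[1] * p[1], 0.5)
assert np.isclose(a[1] * q[1, 0], 0.5)
```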

Heun’s method#

With \(a_2 = \frac{1}{2}\) (so \(a_1 = \frac{1}{2}\) and \(p_2 = q_{21} = 1\)),

\[y_{i+1} = y_i + \frac{k_1 + k_2}{2} h \]

with

\[\begin{split}\begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + h, y_i + k_1 h) \end{align} \end{split}\]

which is simply Heun’s method, and is concisely represented by the tableau,

\[\begin{split} \begin{array}{c|cc} 0 & & \\ 1 & 1 & \\ \hline & \frac{1}{2} & \frac{1}{2} \\ \end{array} \end{split}\]
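Using the `rk_step` sketch from earlier, Heun's tableau drops straight in (assuming those definitions are in scope):

```python
import numpy as np

# Heun's tableau: nodes (0, 1), q21 = 1, weights (1/2, 1/2)
p = np.array([0.0, 1.0])
q = np.array([[0.0, 0.0],
              [1.0, 0.0]])
a = np.array([0.5, 0.5])

# One Heun step of y' = -y from y(0) = 1 with h = 0.1
f = lambda t, y: -y
y1 = rk_step(f, 0.0, 1.0, 0.1, p, q, a)
```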

Midpoint method#

With \(a_2 = 1\) (so \(a_1 = 0\) and \(p_2 = q_{21} = \frac{1}{2}\)),

\[y_{i+1} = y_i + k_2 h \]

with

\[\begin{split}\begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + \frac{1}{2}h, y_i + \frac{1}{2} k_1 h) \end{align} \end{split}\]

which is the midpoint method, with tableau:

\[\begin{split} \begin{array}{c|cc} 0 & & \\ \frac{1}{2} & \frac{1}{2} & \\ \hline & 0 & 1 \\ \end{array} \end{split}\]

Ralston’s method#

The choice \(a_2 = \frac{3}{4}\) (so \(a_1 = \frac{1}{4}\) and \(p_2 = q_{21} = \frac{2}{3}\)) can be shown to minimize a bound on the truncation error,

\[y_{i+1} = y_i + [k_1 + 3 k_2 ] \frac{h}{4} \]

with

\[\begin{split}\begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + \frac{2}{3}h, y_i + \frac{2}{3} k_1 h) \end{align} \end{split}\]

\[\begin{split} \begin{array}{c|cc} 0 & & \\ \frac{2}{3} & \frac{2}{3} & \\ \hline & \frac{1}{4} & \frac{3}{4} \\ \end{array} \end{split}\]
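All three choices are second order, so their differences show up only in the size of the error constant. A quick comparison one could run, reusing the hypothetical `rk_step` and `rk2_tableau` sketches from above:

```python
import numpy as np

def integrate(f, t0, y0, h, n_steps, tableau):
    """March n_steps of size h with a given (p, q, a) tableau."""
    p, q, a = tableau
    t, y = t0, y0
    for _ in range(n_steps):
        y = rk_step(f, t, y, h, p, q, a)
        t += h
    return y

f = lambda t, y: -2.0 * t * y    # y' = -2ty, exact solution y = exp(-t^2)
exact = np.exp(-1.0)             # y(1)

for name, a2 in [("Heun", 0.5), ("Midpoint", 1.0), ("Ralston", 0.75)]:
    y_end = integrate(f, 0.0, 1.0, 0.01, 100, rk2_tableau(a2))
    print(f"{name:8s} |error| = {abs(y_end - exact):.2e}")
```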

RK-3 methods#

A similar derivation follows for RK-3, but now there are more order conditions and more free parameters left over. A common choice is:

\[y_{i+1} = y_i + \frac{1}{6}[k_1 + 4 k_2 + k_3] h \]

with

\[\begin{split}\begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + \frac{1}{2}h, y_i + \frac{1}{2} k_1 h) \\ k_3 &= f(t_i + h, y_i - k_1 h + 2 k_2 h) \\ \end{align} \end{split}\]

which reduces to Simpson’s 1/3 rule if \(f\) is only a function of \(t\). As with Simpson’s rule, it has \(O(h^4)\) local / \(O(h^3)\) global error.

The Butcher Tableau is:

\[\begin{split} \begin{array}{c|ccc} 0 & & & \\ \frac{1}{2} & \frac{1}{2} & & \\ 1 & -1 & 2 & \\ \hline & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} \\ \end{array} \end{split}\]
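The Simpson connection is easy to verify: when \(f = f(t)\), the three stages sample \(f\) at \(t_i\), \(t_i + \frac{h}{2}\), and \(t_i + h\), and the weighted sum \(\frac{h}{6}[k_1 + 4k_2 + k_3]\) is exactly Simpson's 1/3 rule. A sketch of that check (the test function is arbitrary):

```python
import numpy as np

f = lambda t, y: np.cos(t)    # f depends on t only

t, y, h = 0.0, 0.0, 0.5
k1 = f(t, y)
k2 = f(t + h / 2, y + h / 2 * k1)
k3 = f(t + h, y - h * k1 + 2 * h * k2)
rk3 = y + h / 6 * (k1 + 4 * k2 + k3)

# Simpson's 1/3 rule applied to the integral of f over [t, t + h]
simpson = y + h / 6 * (f(t, y) + 4 * f(t + h / 2, y) + f(t + h, y))
assert np.isclose(rk3, simpson)
```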

RK4#

RK4 is the most widely used RK scheme. The ‘Classical Runge-Kutta method’ is:

\[y_{i+1} = y_i + \frac{1}{6}[k_1 + 2 k_2 + 2 k_3 + k_4] h \]

with

\[\begin{split}\begin{align} k_1 &= f(t_i, y_i) \\ k_2 &= f(t_i + \frac{1}{2}h, y_i + \frac{1}{2} k_1 h) \\ k_3 &= f(t_i + \frac{1}{2}h, y_i + \frac{1}{2} k_2 h ) \\ k_4 &= f(t_i + h, y_i + k_3 h) \end{align} \end{split}\]

with \(O(h^5)\) local / \(O(h^4)\) global error.

The Butcher Tableau is:

\[\begin{split} \begin{array}{c|cccc} 0 & & & & \\ \frac{1}{2} & \frac{1}{2} & & & \\ \frac{1}{2} & 0 & \frac{1}{2} & & \\ 1 & 0 & 0 & 1 & \\ \hline & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6} \\ \end{array} \end{split}\]
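A direct implementation of one classical RK4 step, with a small illustrative driver (the test problem and step size are arbitrary):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = -y, y(0) = 1 to t = 1 and compare with exp(-1)
f = lambda t, y: -y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h
print(abs(y - np.exp(-1.0)))   # global error scales as O(h^4)
```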

Example - RK4 steps#

An example of the RK4 algorithm is shown below, illustrating how the partial steps inform subsequent steps, culminating in a very good estimate!

Figure (Runge-Kutta_slopes.svg): the four slope evaluations within a single RK4 step.

Notice how the RK4 method walks its way forward through the time step to achieve good accuracy. This means that a single RK4 step can have better accuracy than a series of smaller RK2 steps. This may seem surprising, but the method is effectively taking small steps internally and using them to advance the solution while cancelling higher-order error terms in the Taylor expansion.
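One way to see this concretely is to measure the global error of RK-2 (Heun) and RK4 as \(h\) is refined; a sketch reusing the `rk4_step` definition above (the Heun helper here is illustrative):

```python
import numpy as np

def heun_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h / 2 * (k1 + k2)

f = lambda t, y: -y                      # exact solution: exp(-t)
for n in (10, 20, 40, 80):
    h = 1.0 / n
    errors = []
    for step in (heun_step, rk4_step):   # rk4_step from the sketch above
        y = 1.0
        for i in range(n):
            y = step(f, i * h, y, h)
        errors.append(abs(y - np.exp(-1.0)))
    print(f"h = {h:.4f}  RK2: {errors[0]:.2e}  RK4: {errors[1]:.2e}")
```

Halving \(h\) should cut the RK2 error by roughly a factor of 4 and the RK4 error by roughly a factor of 16.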

There is, however, an implicit assumption that the time behaviour of the solution is smooth, which is not always the case. For such problems, we need adaptive timestepping.