Computational complexity#
The computational complexity of an algorithm characterizes the number of operations it requires, so it compares algorithms rather than hardware. For linear algebra algorithms, complexity is expressed in terms of the dimensions of the arguments (vector length, matrix size, etc.). It is written in big O notation (similar to the accuracy of function approximations): only the leading order is kept, and constants are disregarded.
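For example, an algorithm that performs \(2n^3 + 3n^2\) operations is simply \(O(n^3)\): the constant factor 2 and the lower-order term \(3n^2\) are dropped.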
Example#
Traditional matrix multiplication of two \(n \times n\) matrices involves on the order of \(n^3\) operations and is thus \(O(n^3)\).
NB: This is algorithm-dependent: the Strassen algorithm, for example, achieves \(O(n^{2.81})\). A sketch of the standard triple-loop algorithm is shown below.
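To see where the \(n^3\) comes from, here is a minimal sketch of the standard triple-loop algorithm (pure Python loops, for illustration only; the timed NumPy call further down dispatches to optimized BLAS routines and is far faster):

```python
import numpy as np

def naive_matmult(A, B):
    """Standard matrix multiplication of two square matrices via three nested loops."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):          # n iterations
        for j in range(n):      # n iterations
            for k in range(n):  # n iterations: n * n * n multiply-adds in total -> O(n^3)
                C[i, j] += A[i, k] * B[k, j]
    return C
```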
```python
import numpy as np

def time_matmult(n):
    """Time the product of two random n x n matrices with %timeit."""
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    %timeit C = A @ B

time_matmult(100)
time_matmult(1000)
time_matmult(2000)
time_matmult(4000)
time_matmult(8000)
time_matmult(16000)
```
46.7 μs ± 65.8 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
24.3 ms ± 73.1 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
188 ms ± 2.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
(Execution was interrupted with a KeyboardInterrupt during the \(n = 4000\) run; the larger matrix sizes take too long to time.)
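The measured timings are consistent with cubic scaling: doubling \(n\) from 1000 to 2000 increases the runtime from about 24 ms to about 188 ms, a factor of roughly 7.7, close to the expected \(2^3 = 8\).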
Some complexities of common linear algebra operations:
| Operation | Description | Complexity |
|---|---|---|
| Matrix addition | Adding two matrices of the same size | O(n²) |
| Matrix multiplication | Multiplying two matrices | O(n³) (standard algorithm) |
| Matrix-vector multiplication | Multiplying a matrix by a vector | O(n²) |
| Matrix inversion | Finding the inverse of a matrix | O(n³) |
| Determinant calculation | Computing the determinant of a matrix | O(n³) |
| Transpose | Swapping rows and columns of a matrix | O(n²) |
| Matrix norm | Square root of the sum of squares | O(n²) |
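A quick way to see the gap between the O(n²) and O(n³) entries is to time a matrix-vector product against a matrix-matrix product at the same \(n\). The sketch below uses `time.perf_counter` for a single rough measurement (exact numbers depend on the BLAS backend and hardware):

```python
import time
import numpy as np

n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)
x = np.random.rand(n)

# O(n^2): matrix-vector product
t0 = time.perf_counter()
y = A @ x
t_mv = time.perf_counter() - t0

# O(n^3): matrix-matrix product
t0 = time.perf_counter()
C = A @ B
t_mm = time.perf_counter() - t0

print(f"matrix-vector: {t_mv:.4f} s, matrix-matrix: {t_mm:.4f} s")
```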
High-performance computing introduces a further element to complexity: how well an algorithm parallelizes. This is called scaling and is measured in big O notation as a function of the number of nodes. The goal is usually linear scaling, i.e. a speed-up of \(O(n_{\text{nodes}})\), but this is hard to achieve!
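As a rough illustration, parallel scaling is often reported as speed-up and efficiency relative to a single-node run. The sketch below uses made-up wall-clock times (the values in `times` are hypothetical, not measurements):

```python
# Hypothetical wall-clock times (seconds) measured on 1, 2, 4, and 8 nodes.
times = {1: 100.0, 2: 52.0, 4: 28.0, 8: 16.0}

for nodes, t in times.items():
    speedup = times[1] / t          # ideal (linear) scaling: speedup == nodes
    efficiency = speedup / nodes    # ideal: efficiency == 1.0
    print(f"{nodes} nodes: speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```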