Weighted linear least squares
As with most approximate methods, the moment we start to move away from an exact solution, subtle effects start to show up!
Notice that we are minimizing the residuals, but there is a subtle issue with the way the problem was posed above:
The coefficients of the second equation are about an order of magnitude lower than the others. Of course, this system is equivalent to one in which the second equation is multiplied through by, say, 7,
\[7c + 7t = 140,\]
or even by 35, so that its right-hand side matches the other two,
\[35c + 35t = 700.\]
What does this remind you of?
Jacobi (diagonal) preconditioning!
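As a quick reminder (standard definition, not from this lesson): for a square system \(Ax = b\), Jacobi preconditioning multiplies both sides by \(D^{-1}\), where \(D\) is the diagonal of \(A\),
\[D^{-1} A x = D^{-1} b, \qquad D = \mathrm{diag}(A),\]
so each equation is rescaled by its own diagonal entry. Here we do the analogous thing and rescale the rows of an overdetermined system.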
If we define the residuals \(r_1\), \(r_2\), \(r_3\),
\[r_1 = 20c + 50t - 700, \qquad r_2 = c + t - 20, \qquad r_3 = 60c + 20t - 700,\]
then we say the residuals can be weighted, i.e. the least squares problem becomes
\[\min_{c,\,t} \; w_1 r_1^2 + w_2 r_2^2 + w_3 r_3^2.\]
Let’s code it!
import numpy as np

A = np.array([[20, 50], [1, 1], [60, 20]])
b = np.array([700, 20, 700])

#~~ Question: What's the preconditioner? How do we apply it?
Pi = np.diag([1, 7, 1])   # diagonal (Jacobi-style) scaling that boosts the second equation
print(Pi)
A = Pi @ A                # scale the rows of A ...
b = Pi @ b                # ... and the corresponding entries of b
###
x_lsq, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
print(x_lsq)
# Plot the lines; re-create them here (t as a function of c) in case the
# earlier, unweighted example was not run in this session (plot range is arbitrary)
import matplotlib.pyplot as plt

x = np.linspace(0, 20, 200)
y1 = (700 - 20 * x) / 50   # 20c + 50t = 700
y2 = 20 - x                # c + t = 20
y3 = (700 - 60 * x) / 20   # 60c + 20t = 700

plt.plot(x, y1, label='20c + 50t = 700')
plt.plot(x, y2, label='c + t = 20')
plt.plot(x, y3, label='60c + 20t = 700')
plt.plot(x_lsq[0], x_lsq[1], 'ro', label='Least Squares Solution')
plt.xlabel('c')
plt.ylabel('t')
plt.title('Least Squares Solution for Inconsistent System')
plt.legend()
plt.grid(True)
plt.show()
[[1 0 0]
[0 7 0]
[0 0 1]]
[ 8.22887365 11.15996082]
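Note how the scaling relates to the weights \(w_i\) defined above (this follows directly from expanding the norm): applying \(\Pi\) to both \(A\) and \(b\) and then solving the ordinary least squares problem minimizes
\[\left\lVert \Pi (A x - b) \right\rVert_2^2 = \sum_i \Pi_{ii}^2\, r_i^2,\]
so row scaling by \(\Pi = \mathrm{diag}(1, 7, 1)\) corresponds to the weights \(w_i = \Pi_{ii}^2\), i.e. a weight of \(49\) on the second squared residual.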
Weights are an excellent way to introduce measurement uncertainty into your fit!
If you have multiple measurements, \(y_i^j\), \(j = 1, \dots, N_i\), at a single point, \(x_i\), then you can find the mean,
\[\bar{y}_i = \frac{1}{N_i}\sum_{j=1}^{N_i} y_i^j,\]
and its variance,
\[\sigma_i^2 = \frac{1}{N_i - 1}\sum_{j=1}^{N_i}\left(y_i^j - \bar{y}_i\right)^2,\]
and weight by the inverse of the variance, \(W_i = \frac{1}{\sigma_i^2}\).
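As a minimal sketch of this (with made-up repeat measurements and a simple straight-line model, not data from this lesson), the inverse-variance weights can be applied by scaling each row of the design matrix and the corresponding mean by \(\sqrt{W_i}\) before calling np.linalg.lstsq:

import numpy as np

# Hypothetical repeated measurements y_i^j at each x_i (made-up numbers for illustration)
x_pts = np.array([1.0, 2.0, 3.0, 4.0])
y_reps = [np.array([2.1, 1.9, 2.0]),    # three repeats at x = 1
          np.array([4.2, 3.8]),         # two repeats at x = 2
          np.array([5.9, 6.1, 6.0]),
          np.array([8.3, 7.7])]

y_bar = np.array([reps.mean() for reps in y_reps])         # mean at each point
sigma2 = np.array([reps.var(ddof=1) for reps in y_reps])   # sample variance at each point
W = 1.0 / sigma2                                           # inverse-variance weights

# Weighted straight-line fit y ~ m*x + c: scale each row of the design matrix
# and the data by sqrt(W_i), then solve the ordinary least squares problem
A = np.column_stack([x_pts, np.ones_like(x_pts)])
sw = np.sqrt(W)
(m, c), *_ = np.linalg.lstsq(sw[:, None] * A, sw * y_bar, rcond=None)
print(m, c)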
Most package implementations of weighted least-squares regression have an option for supplying the weights, but the exact usage varies; consult the documentation to see how the weights are defined in practice.
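For example (continuing the sketch above), numpy.polyfit applies its w weights to the unsquared residuals, so inverse-variance weighting there means passing \(1/\sigma_i\) rather than \(1/\sigma_i^2\):

# np.polyfit weights the unsquared residuals, so pass 1/sigma, not 1/sigma**2
m2, c2 = np.polyfit(x_pts, y_bar, 1, w=1.0 / np.sqrt(sigma2))
print(m2, c2)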