import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1955)
#plt.rcParams['figure.figsize'] = (10, 6)
def KF_ct(n_iter=50, sig_w=0.001, sig_v=0.1):
    # initial parameters
    sz = (n_iter,) # size of array
    x = -0.37727 # truth value
    z = np.random.normal(x, 0.1, size=sz) # observations (normal about x, sigma=0.1)

    Q = sig_w**2 # 1e-6 # process variance

    # allocate space for arrays
    xhat = np.zeros(sz)      # a posteriori estimate of x
    P = np.zeros(sz)         # a posteriori error estimate
    xhatminus = np.zeros(sz) # a priori estimate of x
    Pminus = np.zeros(sz)    # a priori error estimate
    K = np.zeros(sz)         # gain or blending factor

    R = sig_v**2 # 0.1**2 # estimate of measurement variance, change to see effect

    # initial guesses
    xhat[0] = 0.0
    P[0] = 1.0

    for k in range(1, n_iter):
        # time update
        xhatminus[k] = xhat[k-1]
        Pminus[k] = P[k-1] + Q

        # measurement update
        K[k] = Pminus[k]/(Pminus[k] + R)
        xhat[k] = xhatminus[k] + K[k]*(z[k] - xhatminus[k])
        P[k] = (1 - K[k])*Pminus[k]

    fig, (ax1, ax2) = plt.subplots(2, 1)
    fig.suptitle(f'KF with sig_w={sig_w}, sig_v={sig_v}') # title reflects the parameters actually used
    ax1.plot(z, 'ro', label='noisy measurements')
    ax1.plot(xhat, 'b-', label='KF posterior estimate')
    ax1.axhline(x, color='g', label='true value')
    ax1.legend()

    valid_iter = range(1, n_iter) # Pminus not valid at step 0
    ax2.semilogy(valid_iter, Pminus[valid_iter], label='a priori error estimate')
    ax2.grid()
    ax2.legend()
    ax2.set(xlabel='Iteration', ylabel='Voltage$^2$') #, ylim=[0,.01])
4 Example 1 - estimating a constant
In this simple numerical example let us attempt to estimate a scalar random constant, a voltage for example. Let us assume that we can obtain measurements of the constant, but that the measurements are corrupted by a 0.1 volt RMS white measurement noise (e.g. our analog-to-digital converter is not very accurate).
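To make the setup concrete, here is a minimal sketch of how such corrupted measurements can be simulated; the variable names x_true, v and y are illustrative (the function above calls the measurements z).
import numpy as np

x_true = -0.37727                     # the constant voltage to be estimated
v = np.random.normal(0, 0.1, size=5)  # 0.1 volt RMS white measurement noise
y = x_true + v                        # corrupted measurements y_k = x + v_k
print(y)                              # five noisy samples scattered about x_true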
Here, we will use data assimilation notation, where
- \(\mathrm{f}\) denotes forecast (or prediction),
- \(\mathrm{a}\) denotes analysis (or correction),
- \(\mathrm{t}\) denotes the true value.
In this scalar, 1D example, our process is governed by the state equation,
\[\begin{align*} x_{k} & = F x_{k-1}+w_{k}=x_{k-1}+w_{k} \end{align*}\]
and the measurement equation, \[\begin{align*} y_{k} & = H x_{k}+v_{k} = x^{\mathrm{t}} + v_{k}. \end{align*}\] The state, being constant, does not change from step to step, so \(F=I.\) Our noisy measurement is of the state directly so \(H=1.\)
The time-update (forecast) equations are, \[\begin{align} x_{k+1}^{\mathrm{f}} & = x_{k}^{\mathrm{a}}\,,\\ P_{k+1}^{\mathrm{f}} & = P_{k}^{\mathrm{a}}+Q \end{align}\] and the measurement update (analysis) equations are \[\begin{align} K_{k+1} & = P_{k+1}^{\mathrm{f}}(P_{k+1}^{\mathrm{f}}+R)^{-1},\\ x_{k+1}^{\mathrm{a}} & = x_{k+1}^{\mathrm{f}} + K_{k+1}(y_{k+1}-x_{k+1}^{\mathrm{f}}),\\ P_{k+1}^{\mathrm{a}} & = (1-K_{k+1})P_{k+1}^{\mathrm{f}}. \end{align}\]
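To connect these equations to the code above, here is a minimal sketch of one forecast/analysis cycle for this scalar case; the function kf_step and its argument names are illustrative, not part of the listing above.
def kf_step(x_a, P_a, y, Q, R):
    # time update (forecast): the state is constant, so only the variance grows
    x_f = x_a
    P_f = P_a + Q
    # measurement update (analysis): blend forecast and measurement
    K = P_f/(P_f + R)          # Kalman gain, between 0 and 1
    x_a = x_f + K*(y - x_f)    # correct the forecast toward the measurement
    P_a = (1 - K)*P_f          # the analysis variance shrinks after the update
    return x_a, P_a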
# use all default values
KF_ct()
In this first simulation we fixed the measurement variance at \(R=(0.1)^{2}=0.01.\) Because this is the “true” measurement error variance, we would expect the “best” performance in terms of balancing responsiveness and estimate variance. This will become more evident in the second and third simulations.
The above figure depicts the results of this first simulation. The true value of the random constant \(x=-0.37727\) is given by the solid green line, the noisy measurements by the red dots, and the filter estimate by the blue curve.
Now, we will see what happens when the measurement error variance \(R\) is increased or decreased by a factor of \(100\). The gain formula below makes the resulting behaviour explicit.
- first, the filter was told that the measurement variance was \(100\) times greater (i.e. \(R=1\)), so it was “slower” to believe the measurements;
- then, the filter was told that the measurement variance was \(100\) times smaller (i.e. \(R=0.0001\)), so it was very “quick” to believe the noisy measurements.
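This follows directly from \(K_{k+1}=P_{k+1}^{\mathrm{f}}(P_{k+1}^{\mathrm{f}}+R)^{-1}\): a large \(R\) drives \(K\) toward \(0\) (the estimate barely moves), while a small \(R\) drives \(K\) toward \(1\) (the estimate nearly jumps to each measurement). A quick numerical check, for an illustrative forecast variance of \(0.01\):
P_f = 0.01                  # illustrative forecast variance
for R in (1.0, 0.01, 0.0001):
    K = P_f/(P_f + R)
    print(R, K)             # R=1 -> K~0.01 (sluggish); R=1e-4 -> K~0.99 (jumpy)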
# increase R = 1
KF_ct(sig_v=1)
# decrease R = 0.0001
KF_ct(sig_v=0.01)
4.1 Conclusion
While the estimation of a constant is relatively straightforward, this example clearly demonstrates the workings of the Kalman filter. In the second figure (\(R=1\)) in particular, the filter's smoothing action is evident: the estimate is considerably smoother than the noisy measurements. The speed of convergence of the error variance can be observed in the bottom subplot of each figure.
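The speed of that convergence can be made precise, assuming the process noise is negligible (\(Q\approx 0\); here \(Q=10^{-6}\), small enough for the approximation to hold over \(50\) steps). Combining the update equations gives \[\begin{align*} \frac{1}{P_{k}^{\mathrm{a}}} & = \frac{1}{P_{k}^{\mathrm{f}}}+\frac{1}{R} = \frac{1}{P_{k-1}^{\mathrm{a}}}+\frac{1}{R} \quad\Longrightarrow\quad \frac{1}{P_{k}^{\mathrm{a}}} = \frac{1}{P_{0}}+\frac{k}{R}, \end{align*}\] so \(P_{k}^{\mathrm{a}}\approx R/k\) for large \(k\); with \(R=0.01\), \(P_{0}=1\) and \(k=50\) this gives roughly \(2\times10^{-4}\), consistent with the bottom subplot of the first figure.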