Commit 39e29bb

Add comprehensive documentation for Ensemble Kalman Filter localization (#1278)
* Initial plan
* Add comprehensive documentation for Ensemble Kalman Filter localization

Co-authored-by: AtsushiSakai <[email protected]>
Co-authored-by: copilot-swe-agent[bot] <[email protected]>
1 parent b0c81ae commit 39e29bb

Ensemble Kalman Filter Localization
-----------------------------------

.. figure:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/Localization/ensamble_kalman_filter/animation.gif

This is a sensor fusion localization with the Ensemble Kalman Filter (EnKF).

The blue line is the true trajectory, the black line is the dead reckoning
trajectory, and the red line is the estimated trajectory with EnKF.

The red ellipse is the estimated covariance ellipse with EnKF.

It is assumed that the robot can measure distance and bearing angle to
landmarks (RFID). These measurements are used for EnKF localization.
Code Link
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autofunction:: Localization.ensemble_kalman_filter.ensemble_kalman_filter.enkf_localization


Ensemble Kalman Filter Algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Ensemble Kalman Filter (EnKF) is a Monte Carlo approach to the Kalman
filter that uses an ensemble of state estimates to represent the probability
distribution. Unlike the traditional Kalman filter, which propagates the mean
and covariance analytically, the EnKF uses a collection of samples (ensemble
members) to estimate the state and its uncertainty.

The EnKF algorithm consists of two main steps:

=== Predict ===

For each ensemble member :math:`i = 1, ..., N`:

1. Add random noise to the control input:

   :math:`\mathbf{u}^i = \mathbf{u} + \epsilon_u`, where :math:`\epsilon_u \sim \mathcal{N}(0, \mathbf{R})`

2. Predict the state:

   :math:`\mathbf{x}^i_{pred} = f(\mathbf{x}^i_t, \mathbf{u}^i)`

3. Predict the observation:

   :math:`\mathbf{z}^i_{pred} = h(\mathbf{x}^i_{pred}) + \epsilon_z`, where :math:`\epsilon_z \sim \mathcal{N}(0, \mathbf{Q})`
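The predict step above can be sketched in Python as follows. This is a minimal sketch, not the repository's implementation: the time step ``DT``, the noise covariance ``R``, and the ``motion_model`` helper are illustrative assumptions.

```python
import numpy as np

DT = 0.1  # time step [s] (assumed value)
# Input noise covariance R = diag(sigma_v^2, sigma_omega^2) (assumed values)
R = np.diag([1.0, np.deg2rad(30.0)]) ** 2


def motion_model(x, u):
    """x = [x, y, yaw, v]^T (4x1), u = [v, yaw_rate]^T (2x1)."""
    return np.array([
        [x[0, 0] + u[0, 0] * np.cos(x[2, 0]) * DT],
        [x[1, 0] + u[0, 0] * np.sin(x[2, 0]) * DT],
        [x[2, 0] + u[1, 0] * DT],
        [u[0, 0]],
    ])


def predict_ensemble(ensemble, u):
    """Propagate each ensemble member with a perturbed control input."""
    predicted = []
    for x in ensemble:
        # 1. add noise to the control input: u^i = u + eps_u, eps_u ~ N(0, R)
        u_noisy = u + np.sqrt(R) @ np.random.randn(2, 1)
        # 2. predict the state: x^i_pred = f(x^i, u^i)
        predicted.append(motion_model(x, u_noisy))
    return predicted
```

Because each member receives an independent noise draw, the spread of the predicted ensemble reflects the input uncertainty.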
=== Update ===

1. Calculate the ensemble mean of the states:

   :math:`\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^N \mathbf{x}^i_{pred}`

2. Calculate the ensemble mean of the observations:

   :math:`\bar{\mathbf{z}} = \frac{1}{N}\sum_{i=1}^N \mathbf{z}^i_{pred}`

3. Calculate the state deviations:

   :math:`\mathbf{X}'^i = \mathbf{x}^i_{pred} - \bar{\mathbf{x}}`

4. Calculate the observation deviations:

   :math:`\mathbf{Z}'^i = \mathbf{z}^i_{pred} - \bar{\mathbf{z}}`

5. Calculate the covariance matrices, stacking the deviations of all members as the columns of :math:`\mathbf{X}'` and :math:`\mathbf{Z}'`:

   :math:`\mathbf{U} = \frac{1}{N-1}\mathbf{X}'\mathbf{Z}'^T`

   :math:`\mathbf{V} = \frac{1}{N-1}\mathbf{Z}'\mathbf{Z}'^T`

6. Calculate the Kalman gain:

   :math:`\mathbf{K} = \mathbf{U}\mathbf{V}^{-1}`

7. Update each ensemble member:

   :math:`\mathbf{x}^i_{t+1} = \mathbf{x}^i_{pred} + \mathbf{K}(\mathbf{z}_{obs} - \mathbf{z}^i_{pred})`

8. Calculate the final state estimate:

   :math:`\mathbf{x}_{est} = \frac{1}{N}\sum_{i=1}^N \mathbf{x}^i_{t+1}`

9. Calculate the covariance estimate:

   :math:`\mathbf{P}_{est} = \frac{1}{N}\sum_{i=1}^N (\mathbf{x}^i_{t+1} - \mathbf{x}_{est})(\mathbf{x}^i_{t+1} - \mathbf{x}_{est})^T`
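The update equations above map directly onto NumPy array operations. The sketch below assumes the ensemble is stored as columns of a matrix; the function name ``enkf_update`` and the array layout are illustrative, not the repository's exact code.

```python
import numpy as np


def enkf_update(x_pred, z_pred, z_obs):
    """EnKF update step.

    x_pred: (n_state, N) predicted ensemble states, one member per column.
    z_pred: (n_obs, N) predicted observations for each member.
    z_obs:  (n_obs, 1) actual observation.
    Returns the updated ensemble, state estimate, and covariance estimate.
    """
    N = x_pred.shape[1]
    # 1-2. ensemble means
    x_mean = x_pred.mean(axis=1, keepdims=True)
    z_mean = z_pred.mean(axis=1, keepdims=True)
    # 3-4. deviations from the means
    X = x_pred - x_mean
    Z = z_pred - z_mean
    # 5. cross-covariance U and observation covariance V
    U = X @ Z.T / (N - 1)
    V = Z @ Z.T / (N - 1)
    # 6. Kalman gain K = U V^-1
    K = U @ np.linalg.inv(V)
    # 7. update each member (broadcasts z_obs across the ensemble)
    x_upd = x_pred + K @ (z_obs - z_pred)
    # 8-9. final state and covariance estimates
    x_est = x_upd.mean(axis=1, keepdims=True)
    dx = x_upd - x_est
    P_est = dx @ dx.T / N
    return x_upd, x_est, P_est
```

Note that no Jacobian of :math:`h` appears anywhere: the gain is built entirely from sample statistics of the ensemble.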

Filter Design
~~~~~~~~~~~~~

In this simulation, the robot has a state vector with four states at time :math:`t`:

.. math:: \mathbf{x}_t = [x_t, y_t, \phi_t, v_t]^T

where:

- :math:`x, y` are the 2D position coordinates
- :math:`\phi` is the orientation (yaw angle)
- :math:`v` is the velocity

The filter uses an ensemble of :math:`N = 20` members to represent the state distribution.
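An ensemble of this kind can be initialized by drawing :math:`N` perturbed copies of an initial state. A minimal sketch (the ``spread`` parameter and the zero initial state are assumptions for illustration, not the repository's exact values):

```python
import numpy as np

NP = 20  # number of ensemble members


def initialize_ensemble(x0, spread=0.1):
    """Create NP members as Gaussian perturbations of the 4x1 state x0."""
    return [x0 + spread * np.random.randn(4, 1) for _ in range(NP)]


ensemble = initialize_ensemble(np.zeros((4, 1)))
```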

Input Vector
^^^^^^^^^^^^

The robot has a velocity sensor and a gyro sensor, so the control input vector at each time step is:

.. math:: \mathbf{u}_t = [v_t, \omega_t]^T

where:

- :math:`v_t` is the linear velocity
- :math:`\omega_t` is the angular velocity (yaw rate)

The input noise is modeled by the covariance matrix :math:`\mathbf{R}`:

.. math:: \mathbf{R} = \begin{bmatrix} \sigma_v^2 & 0 \\ 0 & \sigma_\omega^2 \end{bmatrix}

Observation Vector
^^^^^^^^^^^^^^^^^^

The robot can observe landmarks (RFID tags) with distance and bearing measurements:

.. math:: \mathbf{z}_t = [d_t, \theta_t, x_{lm}, y_{lm}]

where:

- :math:`d_t` is the distance to the landmark
- :math:`\theta_t` is the bearing angle to the landmark
- :math:`x_{lm}, y_{lm}` are the known landmark positions

The observation noise is modeled by the covariance matrix :math:`\mathbf{Q}`:

.. math:: \mathbf{Q} = \begin{bmatrix} \sigma_d^2 & 0 \\ 0 & \sigma_\theta^2 \end{bmatrix}

Motion Model
~~~~~~~~~~~~

The robot motion model is:

.. math:: \dot{x} = v \cos(\phi)

.. math:: \dot{y} = v \sin(\phi)

.. math:: \dot{\phi} = \omega

.. math:: \dot{v} = 0

Discretized with time step :math:`\Delta t`, the motion model becomes:

.. math:: \mathbf{x}_{t+1} = f(\mathbf{x}_t, \mathbf{u}_t) = \mathbf{F}\mathbf{x}_t + \mathbf{B}\mathbf{u}_t

where:

.. math:: \mathbf{F} = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}

.. math:: \mathbf{B} = \begin{bmatrix} \cos(\phi) \Delta t & 0\\ \sin(\phi) \Delta t & 0\\ 0 & \Delta t\\ 1 & 0\\ \end{bmatrix}

The motion function expands to:

.. math:: \begin{bmatrix} x_{t+1} \\ y_{t+1} \\ \phi_{t+1} \\ v_{t+1} \end{bmatrix} = \begin{bmatrix} x_t + v_t\cos(\phi_t)\Delta t \\ y_t + v_t\sin(\phi_t)\Delta t \\ \phi_t + \omega_t \Delta t \\ v_t \end{bmatrix}
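The discretized motion model above can be written as a small Python function. A sketch, assuming a time step ``DT`` of 0.1 s (an illustrative value):

```python
import numpy as np

DT = 0.1  # time step [s] (assumed value)


def motion_model(x, u):
    """Apply x_{t+1} = F x_t + B u_t.

    x = [x, y, yaw, v]^T (4x1), u = [v, yaw_rate]^T (2x1).
    """
    F = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])  # last row drops the old velocity
    B = np.array([[DT * np.cos(x[2, 0]), 0.0],
                  [DT * np.sin(x[2, 0]), 0.0],
                  [0.0, DT],
                  [1.0, 0.0]])  # velocity is taken directly from the input
    return F @ x + B @ u
```

The zero in the last diagonal entry of :math:`\mathbf{F}` together with the 1 in :math:`\mathbf{B}` means the velocity state is replaced by the commanded velocity at every step, matching the expanded form above.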

Observation Model
~~~~~~~~~~~~~~~~~

For each landmark at position :math:`(x_{lm}, y_{lm})`, the observation model calculates
the expected landmark position in the global frame:

.. math::
   \begin{bmatrix}
   x_{lm,obs} \\
   y_{lm,obs}
   \end{bmatrix} =
   \begin{bmatrix}
   x + d \cos(\phi + \theta) \\
   y + d \sin(\phi + \theta)
   \end{bmatrix}

where :math:`d` and :math:`\theta` are the observed distance and bearing to the landmark.

The observation function projects the robot state to expected landmark positions,
which are then compared with the actual landmark positions in the update step.
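The projection above can be sketched as a short function (the name ``observe_landmark`` is illustrative, not the repository's):

```python
import numpy as np


def observe_landmark(x, d, theta):
    """Project a (distance, bearing) measurement into the global frame.

    x: robot state [x, y, yaw, v]^T (4x1); d, theta: observed range and
    bearing to one landmark. Returns the implied landmark position (2x1).
    """
    lm_x = x[0, 0] + d * np.cos(x[2, 0] + theta)
    lm_y = x[1, 0] + d * np.sin(x[2, 0] + theta)
    return np.array([[lm_x], [lm_y]])
```

Comparing this projection against the known landmark coordinates yields the innovation :math:`\mathbf{z}_{obs} - \mathbf{z}^i_{pred}` used in the update step.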

Advantages of EnKF
~~~~~~~~~~~~~~~~~~

- **No Jacobian required**: Unlike the EKF, the EnKF does not need to compute Jacobian matrices, which makes it easier to implement for nonlinear systems
- **Handles non-Gaussian distributions**: The ensemble representation can capture non-Gaussian features of the state distribution
- **Computationally efficient**: For high-dimensional systems, the EnKF can be more efficient than maintaining full covariance matrices
- **Easy to parallelize**: Each ensemble member can be propagated independently

Reference
~~~~~~~~~

- `Ensemble Kalman filtering <https://rmets.onlinelibrary.wiley.com/doi/10.1256/qj.05.135>`_
- `PROBABILISTIC ROBOTICS <http://www.probabilistic-robotics.org/>`_
