import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks")

from matplotlib.patches import Ellipse

def plot_ellipse(
    mu,
    cov,
    ax,
    std=2.0,
    edgecolor='red'
):
    """Plot an ellipse.
    
    Arguments:
    mu  -- The center of the ellipse.
    cov -- A covariance matrix. We find its principal
           axes to draw the ellipse.
    ax  -- An axes object to draw on.
    
    Keyword Arguments
    edgecolor -- The color we use.
    """
    a = cov[0, 0]
    b = cov[0, 1]
    c = cov[1, 1]
    lam1 = (
        0.5 * (a + c)
        + np.sqrt((0.5 * (a - c)) ** 2 + b ** 2)
    )
    lam2 = (
        0.5 * (a + c)
        - np.sqrt((0.5 * (a - c)) ** 2 + b ** 2)
    )
    if b == 0.0 and a >= c:
        theta = 0.0
    elif b == 0 and a < c:
        theta = 0.5 * np.pi
    else:
        theta = np.arctan2(lam1 - a, b)
    # matplotlib wants the angle in degrees
    angle = np.degrees(theta)
    ell_radius_x = np.sqrt(lam1)
    ell_radius_y = np.sqrt(lam2)
    obj = Ellipse(
        mu,
        width=ell_radius_x * std,
        height=ell_radius_y * std,
        angle=angle,
        facecolor='none',
        edgecolor=edgecolor
    )
    ax.add_patch(obj)
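As a quick sanity check, the closed-form eigenvalues used above can be compared against NumPy's symmetric eigensolver. The covariance matrix in this sketch is made up for illustration:

```python
import numpy as np

# A made-up 2x2 covariance matrix (symmetric, positive definite)
cov = np.array([[2.0, 0.7],
                [0.7, 1.0]])
a, b, c = cov[0, 0], cov[0, 1], cov[1, 1]

# Closed-form eigenvalues of a symmetric 2x2 matrix, as in plot_ellipse
disc = np.sqrt((0.5 * (a - c)) ** 2 + b ** 2)
lam1 = 0.5 * (a + c) + disc  # larger eigenvalue
lam2 = 0.5 * (a + c) - disc  # smaller eigenvalue

# np.linalg.eigvalsh returns eigenvalues in ascending order
print(np.allclose([lam2, lam1], np.linalg.eigvalsh(cov)))  # True
```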

Kalman Filter for the Object Tracking Example#

Let’s bring back the code from the Object Tracking Example. We do not repeat the theoretical details.

# The timestep
Dt = 0.5
# The mass
m = 1.0
# The variance for the process noise for position
epsilon = 1e-6
# The standard deviation for the process noise for velocity
sigma_q = 1e-2
# The standard deviation for the measurement noise for position
sigma_r = 0.05

# INITIAL CONDITIONS
# initial mean
mu0 = np.zeros((4,))
# initial covariance
V0 = np.array([0.1**2, 0.1**2, 0.1**2, 0.1**2]) * np.eye(4)

# TRANSITION MATRIX
A = np.array(
    [
        [1.0, 0, Dt, 0],
        [0.0, 1.0, 0.0, Dt],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]
    ]
)
# CONTROL MATRIX
B = np.array(
    [
        [0.0, 0.0],
        [0.0, 0.0],
        [Dt / m, 0.0],
        [0.0, Dt / m]
    ]
)
# PROCESS COVARIANCE
Q = (
    np.array(
        [epsilon, epsilon, sigma_q ** 2, sigma_q ** 2]
    )
    * np.eye(4)
)
# EMISSION MATRIX
C = np.array(
    [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0]
    ]
)
# MEASUREMENT COVARIANCE
R = (
    np.array(
        [sigma_r ** 2, sigma_r ** 2]
    )
    * np.eye(2)
)
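Before handing these matrices to a filtering library, it helps to see the single predict/update cycle they enter. The sketch below hand-rolls one standard Kalman step with the same shapes; the control `u` and measurement `y` are made-up numbers for illustration:

```python
import numpy as np

Dt, m = 0.5, 1.0
A = np.array([[1.0, 0.0, Dt, 0.0],
              [0.0, 1.0, 0.0, Dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
B = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [Dt / m, 0.0],
              [0.0, Dt / m]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
Q = np.diag([1e-6, 1e-6, 1e-2 ** 2, 1e-2 ** 2])
R = np.diag([0.05 ** 2, 0.05 ** 2])

mu, V = np.zeros(4), 0.1 ** 2 * np.eye(4)
u = np.array([0.01, 0.0])    # made-up control
y = np.array([0.02, -0.01])  # made-up measurement

# Predict: push the mean and covariance through the dynamics
mu_pred = A @ mu + B @ u
V_pred = A @ V @ A.T + Q

# Update: the Kalman gain weighs the prediction against the measurement
S = C @ V_pred @ C.T + R
K = V_pred @ C.T @ np.linalg.inv(S)
mu_post = mu_pred + K @ (y - C @ mu_pred)
V_post = (np.eye(4) - K @ C) @ V_pred
```

Incorporating the measurement can only reduce the total uncertainty: the trace of `V_post` is never larger than that of `V_pred`.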

Generate a trajectory and observations:

np.random.seed(12345)

# The number of steps in the trajectory
num_steps = 50
# Space to store the trajectory (each state is 4-dimensional)
true_trajectory = np.ndarray((num_steps + 1, 4))
# Space to store the observations (each observation is 2-dimensional)
observations = np.ndarray((num_steps, 2))

# Sample the initial conditions
x0 = mu0 + np.sqrt(np.diag(V0)) * np.random.randn(4)
true_trajectory[0] = x0

# Pick a set of pre-determined forces to be applied to the object
# so that it does something interesting
force = .01
omega = 2.0 * np.pi / 5
times = Dt * np.arange(num_steps + 1)
us = np.zeros((num_steps, 2))
us[:, 0] = force * np.cos(omega * times[1:])
us[:, 1] = force * np.sin(omega * times[1:])

# Sample the trajectory
for n in range(num_steps):
    x = (
        A @ true_trajectory[n]
        + B @ us[n] 
        + np.sqrt(np.diag(Q)) * np.random.randn(4)
    )
    true_trajectory[n+1] = x
    y = (
        C @ x
        + np.sqrt(np.diag(R)) * np.random.randn(2)
    )
    observations[n] = y

We are not going to implement the filter from scratch. We will use the Python module FilterPy. This is not included in the default version of Google Colab. You need to install it manually. Here is how:

!pip install filterpy

Now you should be able to load the library. Try the code below:

from filterpy.kalman import KalmanFilter

To define the filter in FilterPy we need to give the dimensionality of the state space (dim_x) and the observations (dim_z). Here is how:

kf = KalmanFilter(dim_x=4, dim_z=2)

Now we need to make the filter aware of the various vectors and matrices specifying initial conditions, transitions, emissions, covariances, etc. Note that FilterPy uses different notation than we do. The correspondence is as follows:

| Name | This class | FilterPy |
|---|---|---|
| initial mean vector | \(\mu_t\) | \(x\) |
| initial covariance matrix | \(V_t\) | \(P\) |
| state transition matrix | \(A\) | \(F\) |
| control matrix | \(B\) | \(B\) |
| process covariance matrix | \(Q\) | \(Q\) |
| emission matrix | \(C\) | \(H\) |
| measurement covariance matrix | \(R\) | \(R\) |

This is how you can make the kf object aware of the various matrices:

kf.x = mu0
kf.P = V0
kf.Q = Q
kf.R = R
kf.H = C
kf.F = A
kf.B = B

Here is a bit of code for plotting (skip it now and return to it later if you want to understand how it works).

def plot_mean_and_ellipse(
    mu,
    cov,
    ax,
    style=".",
    color="green",
    label="",
    **kwargs
):
    """Plot mean and ellipse.
    
    Arguments
    mu  -- The mean.
    cov -- The covariance.
    ax  -- The axes object to plot on.
    """
    ax.plot(
        mu[0],
        mu[1],
        style,
        color=color,
        label=label
    )
    plot_ellipse(
        mu,
        cov,
        ax,
        edgecolor=color,
        **kwargs
    )
    
def plot_after_prediction_step(
    kf,
    true_trajectory=None,
    observations=None
):
    """Plot summary right after prediction step.
    
    Arguments
    kf -- A Kalman filter object.
    
    Keyword Arguments
    true_trajector -- Plot the true trajectory if provided.
    observations   -- Plot the observations if provided.
    
    Returns an axes object.
    """
    fig, ax = plt.subplots()
    if true_trajectory is not None:
        ax.plot(
            true_trajectory[:n+1, 0],
            true_trajectory[:n+1, 1],
            'md-',
            label='True trajectory'
        )
    if observations is not None:
        ax.plot(
            observations[n,0],
            observations[n,1],
            'x',
            label='Observation'
        )
    plot_mean_and_ellipse(
        kf.x,
        kf.P,
        ax,
        style=".",
        color="green",
        label="After prediction step."
    )
    return ax

Now we can do the filtering. You can process one time step at a time; this is what you would do if the data points were arriving one by one. Here is how:

# DO NOT RERUN THIS WITHOUT RERUNNING THE INITIALIZATION CODE IN THE PREVIOUS
# CODE BLOCK
for n in range(1, num_steps):
    # Predict step (notice that you also need to pass the control (if there is any))
    kf.predict(u=us[n])
    # Make a figure every time step (increase the modulus to plot less often)
    if n % 1 == 0:
        ax = plot_after_prediction_step(
            kf,
            true_trajectory=true_trajectory,
            observations=observations
        )
    # Update step
    kf.update(observations[n])
    if n % 1 == 0:
        plot_mean_and_ellipse(
            kf.x,
            kf.P,
            ax,
            style="d",
            color="red",
            label="After observation step."
        )
        ax.set_xlim(-3.0, 0.5)
        plt.legend(loc='best', frameon=False)
        sns.despine(trim=True)
(Output: one figure per time step, each showing the true trajectory so far, the current observation, and the filter's mean and uncertainty ellipse after the prediction and after the update step.)

Notice that the filter is very uncertain at the beginning. As observations accumulate, the estimates gradually become better and better.

The other way to run the filter is with all the data at once. This is called a batch filter. Here is how:

# We need to reset the initial conditions
kf.x = mu0
kf.P = V0
# Here is the code that runs the batch:
means, covs, _, _ = kf.batch_filter(observations, us=us)

This returns the means and the covariances that you would have gotten at each timestep:

means
array([[-5.02156091e-03,  6.98533990e-02,  2.03662129e-03,
         3.08780507e-02],
       [-8.68173329e-04,  5.47903106e-02,  6.73175371e-03,
         4.98556683e-03],
       [-7.53078129e-02,  1.83764749e-02, -6.61662625e-02,
        -2.59391569e-02],
       [-9.47022508e-02, -1.14097576e-02, -5.99395579e-02,
        -3.56191950e-02],
       [-1.86669233e-01, -1.00489374e-01, -1.03815811e-01,
        -8.03101031e-02],
       [-2.18537235e-01, -1.06781379e-01, -9.69436322e-02,
        -6.48012446e-02],
       [-2.57369659e-01, -1.20926141e-01, -9.37440163e-02,
        -6.05706289e-02],
       [-2.67683371e-01, -1.43590842e-01, -7.53568940e-02,
        -6.18151720e-02],
       [-3.18625708e-01, -1.72890968e-01, -7.71975500e-02,
        -6.40408013e-02],
       [-3.30541143e-01, -1.92135370e-01, -6.05543097e-02,
        -5.84660113e-02],
       [-3.55277250e-01, -2.51019580e-01, -5.41012212e-02,
        -6.84127704e-02],
       [-3.46948231e-01, -2.93132434e-01, -3.71468226e-02,
        -6.71010871e-02],
       [-3.60573317e-01, -3.45301541e-01, -3.65275562e-02,
        -7.04893903e-02],
       [-3.95757608e-01, -3.55133909e-01, -4.80029217e-02,
        -5.63911852e-02],
       [-4.12644260e-01, -3.44159817e-01, -4.98697422e-02,
        -3.91418609e-02],
       [-4.18682258e-01, -3.69436999e-01, -4.55794404e-02,
        -4.45978076e-02],
       [-4.48948505e-01, -3.97303496e-01, -5.04248483e-02,
        -5.18107628e-02],
       [-4.78450264e-01, -4.17011447e-01, -5.07735248e-02,
        -5.38298508e-02],
       [-4.92472673e-01, -4.49494492e-01, -4.17114845e-02,
        -5.92269024e-02],
       [-5.05994848e-01, -4.64494415e-01, -3.34745583e-02,
        -5.27767112e-02],
       [-5.39343820e-01, -4.95374401e-01, -3.67603819e-02,
        -5.18199867e-02],
       [-5.58626978e-01, -5.15917987e-01, -3.56137370e-02,
        -4.46967440e-02],
       [-6.02424375e-01, -5.21259949e-01, -4.86264798e-02,
        -3.24378163e-02],
       [-6.34289441e-01, -5.51657659e-01, -5.60035055e-02,
        -3.57547186e-02],
       [-6.67645574e-01, -6.08560549e-01, -6.33658907e-02,
        -5.29730194e-02],
       [-6.76143777e-01, -6.30348757e-01, -5.71816929e-02,
        -5.38390198e-02],
       [-7.29909972e-01, -6.17970350e-01, -6.98344132e-02,
        -4.12556375e-02],
       [-7.31583410e-01, -6.45060195e-01, -5.36217141e-02,
        -4.88620562e-02],
       [-7.98007649e-01, -6.68552944e-01, -6.70546967e-02,
        -5.13869987e-02],
       [-8.59986039e-01, -6.39558589e-01, -7.46077975e-02,
        -2.72577539e-02],
       [-9.07352097e-01, -6.52559415e-01, -7.50023207e-02,
        -2.40417201e-02],
       [-9.34239516e-01, -6.41620129e-01, -6.87742574e-02,
        -9.15599789e-03],
       [-9.57119686e-01, -6.59100573e-01, -6.52422604e-02,
        -1.00935115e-02],
       [-1.05139003e+00, -6.63838989e-01, -9.64881035e-02,
        -7.01853972e-03],
       [-1.09831608e+00, -6.42272360e-01, -1.00906577e-01,
         4.04540089e-03],
       [-1.11164497e+00, -6.51995152e-01, -8.85717090e-02,
        -4.07584835e-03],
       [-1.14393882e+00, -6.37373188e-01, -8.48257070e-02,
        -1.48048892e-03],
       [-1.20849733e+00, -6.81286226e-01, -9.30516804e-02,
        -2.52843820e-02],
       [-1.27410601e+00, -6.85076725e-01, -9.74262862e-02,
        -2.43177833e-02],
       [-1.29979705e+00, -7.12529138e-01, -8.22685216e-02,
        -3.10655589e-02],
       [-1.33664939e+00, -7.36455212e-01, -7.63341764e-02,
        -3.18299044e-02],
       [-1.38052005e+00, -7.77668039e-01, -7.73056121e-02,
        -3.82364977e-02],
       [-1.45473052e+00, -7.79831815e-01, -9.45393761e-02,
        -2.60005976e-02],
       [-1.51447855e+00, -7.90794484e-01, -1.04090128e-01,
        -2.21626328e-02],
       [-1.62098053e+00, -8.11882203e-01, -1.33117496e-01,
        -2.65776368e-02],
       [-1.67915202e+00, -8.48613619e-01, -1.33461970e-01,
        -3.98598578e-02],
       [-1.72387644e+00, -9.00366123e-01, -1.25297367e-01,
        -5.86558313e-02],
       [-1.80197063e+00, -9.26371361e-01, -1.30567118e-01,
        -6.19450886e-02],
       [-1.85004459e+00, -9.40354171e-01, -1.18928849e-01,
        -5.73878397e-02],
       [-1.93605304e+00, -9.93855501e-01, -1.25640545e-01,
        -6.83333138e-02]])

And here is an alternative way to visualize your uncertainty about the state at all times:

def plot_kf_estimates(means, covs):
    """Plot estimates of the state with 95% credible intervals."""
    y_labels = ['$x_1$', '$x_2$', '$x_3$', '$x_4$']

    dpi = 150
    
    res_x = 1024
    res_y = 768

    w_in = res_x / dpi
    h_in = res_y / dpi

    fig, ax = plt.subplots(4, 1)
    fig.set_size_inches(w_in, h_in)

    times = Dt * np.arange(num_steps + 1)

    for j in range(4):
        ax[j].set_ylabel(y_labels[j])
    ax[-1].set_xlabel('$t$ (time)')

    for j in range(4):
        ax[j].plot(
            times[0:num_steps],
            true_trajectory[0:num_steps, j],
            'b.-'
        )
        ax[j].plot(
            times[1:num_steps],
            means[1:num_steps, j],
            'r.'
        )
        ax[j].fill_between(
            times[1:num_steps],
            (
                means[1:num_steps, j]
                - 2.0 * np.sqrt(covs[1:num_steps, j, j])
            ),
            (
                means[1:num_steps, j]
                + 2.0 * np.sqrt(covs[1:num_steps, j, j])
            ),
            color='red',
            alpha=0.25
        )                
        if j < 2:
            ax[j].plot(
                times[1:num_steps],
                observations[:num_steps - 1, j],
                'go'
            )

Here is how to use it:

plot_kf_estimates(means, covs)
(Output: four stacked panels, one per state component, showing the true trajectory, the filtered means, the 95% credible bands, and the position observations.)

Questions#

  • Rerun the code a couple of times to observe different trajectories.

  • Double the process noise variance \(\sigma_q^2\). What happens?

  • Double the measurement noise variance \(\sigma_r^2\). What happens?

  • Zero-out the control vector \(\mathbf{u}_{0:n-1}\). What happens?

Smoothing#

Let’s finish with some smoothing.

x, P, K, Pp = kf.rts_smoother(means, covs)
plot_kf_estimates(x, P)
(Output: the same four panels as before, now with the smoothed means and their credible bands.)
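To see what an RTS smoother does under the hood, here is a minimal self-contained sketch: a scalar random walk is filtered forward and then smoothed backward with the Rauch-Tung-Striebel recursion. All parameters here are made up, and everything is scalar, so the gains are plain divisions:

```python
import numpy as np

rng = np.random.default_rng(0)
F, Q, H, R = 1.0, 0.01, 1.0, 0.25  # random walk observed in noise
T = 30
x = np.cumsum(rng.normal(0.0, np.sqrt(Q), T))  # true states
z = x + rng.normal(0.0, np.sqrt(R), T)         # noisy measurements

# Forward (filtering) pass
mf, Pf = np.zeros(T), np.zeros(T)
m, P = 0.0, 1.0
for t in range(T):
    m, P = F * m, F * P * F + Q           # predict
    K = P * H / (H * P * H + R)           # Kalman gain
    m, P = m + K * (z[t] - H * m), (1.0 - K * H) * P
    mf[t], Pf[t] = m, P

# Backward (RTS smoothing) pass
ms, Ps = mf.copy(), Pf.copy()
for t in range(T - 2, -1, -1):
    Pp = F * Pf[t] * F + Q                # predicted covariance
    G = Pf[t] * F / Pp                    # smoother gain
    ms[t] = mf[t] + G * (ms[t + 1] - F * mf[t])
    Ps[t] = Pf[t] + G * (Ps[t + 1] - Pp) * G
```

Because the backward pass folds in future observations, the smoothed variances `Ps` are never larger than the filtered variances `Pf`; at the final step the two coincide.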
help(kf.rts_smoother)
Help on method rts_smoother in module filterpy.kalman.kalman_filter:

rts_smoother(Xs, Ps, Fs=None, Qs=None, inv=<function inv at 0x10a33a160>) method of filterpy.kalman.kalman_filter.KalmanFilter instance
    Runs the Rauch-Tung-Striebal Kalman smoother on a set of
    means and covariances computed by a Kalman filter. The usual input
    would come from the output of `KalmanFilter.batch_filter()`.
    
    Parameters
    ----------
    
    Xs : numpy.array
       array of the means (state variable x) of the output of a Kalman
       filter.
    
    Ps : numpy.array
        array of the covariances of the output of a kalman filter.
    
    Fs : list-like collection of numpy.array, optional
        State transition matrix of the Kalman filter at each time step.
        Optional, if not provided the filter's self.F will be used
    
    Qs : list-like collection of numpy.array, optional
        Process noise of the Kalman filter at each time step. Optional,
        if not provided the filter's self.Q will be used
    
    inv : function, default numpy.linalg.inv
        If you prefer another inverse function, such as the Moore-Penrose
        pseudo inverse, set it to that instead: kf.inv = np.linalg.pinv
    
    
    Returns
    -------
    
    x : numpy.ndarray
       smoothed means
    
    P : numpy.ndarray
       smoothed state covariances
    
    K : numpy.ndarray
        smoother gain at each step
    
    Pp : numpy.ndarray
       Predicted state covariances
    
    Examples
    --------
    
    .. code-block:: Python
    
        zs = [t + random.randn()*4 for t in range (40)]
    
        (mu, cov, _, _) = kalman.batch_filter(zs)
        (x, P, K, Pp) = rts_smoother(mu, cov, kf.F, kf.Q)
help(kf.batch_filter)
Help on method batch_filter in module filterpy.kalman.kalman_filter:

batch_filter(zs, Fs=None, Qs=None, Hs=None, Rs=None, Bs=None, us=None, update_first=False, saver=None) method of filterpy.kalman.kalman_filter.KalmanFilter instance
    Batch processes a sequences of measurements.
    
     Parameters
     ----------
    
     zs : list-like
         list of measurements at each time step `self.dt`. Missing
         measurements must be represented by `None`.
    
     Fs : None, list-like, default=None
         optional value or list of values to use for the state transition
         matrix F.
    
         If Fs is None then self.F is used for all epochs.
    
         Otherwise it must contain a list-like list of F's, one for
         each epoch.  This allows you to have varying F per epoch.
    
     Qs : None, np.array or list-like, default=None
         optional value or list of values to use for the process error
         covariance Q.
    
         If Qs is None then self.Q is used for all epochs.
    
         Otherwise it must contain a list-like list of Q's, one for
         each epoch.  This allows you to have varying Q per epoch.
    
     Hs : None, np.array or list-like, default=None
         optional list of values to use for the measurement matrix H.
    
         If Hs is None then self.H is used for all epochs.
    
         If Hs contains a single matrix, then it is used as H for all
         epochs.
    
         Otherwise it must contain a list-like list of H's, one for
         each epoch.  This allows you to have varying H per epoch.
    
     Rs : None, np.array or list-like, default=None
         optional list of values to use for the measurement error
         covariance R.
    
         If Rs is None then self.R is used for all epochs.
    
         Otherwise it must contain a list-like list of R's, one for
         each epoch.  This allows you to have varying R per epoch.
    
     Bs : None, np.array or list-like, default=None
         optional list of values to use for the control transition matrix B.
    
         If Bs is None then self.B is used for all epochs.
    
         Otherwise it must contain a list-like list of B's, one for
         each epoch.  This allows you to have varying B per epoch.
    
     us : None, np.array or list-like, default=None
         optional list of values to use for the control input vector;
    
         If us is None then None is used for all epochs (equivalent to 0,
         or no control input).
    
         Otherwise it must contain a list-like list of u's, one for
         each epoch.
    
    update_first : bool, optional, default=False
         controls whether the order of operations is update followed by
         predict, or predict followed by update. Default is predict->update.
    
     saver : filterpy.common.Saver, optional
         filterpy.common.Saver object. If provided, saver.save() will be
         called after every epoch
    
     Returns
     -------
    
     means : np.array((n,dim_x,1))
         array of the state for each time step after the update. Each entry
         is an np.array. In other words `means[k,:]` is the state at step
         `k`.
    
     covariance : np.array((n,dim_x,dim_x))
         array of the covariances for each time step after the update.
         In other words `covariance[k,:,:]` is the covariance at step `k`.
    
     means_predictions : np.array((n,dim_x,1))
         array of the state for each time step after the predictions. Each
         entry is an np.array. In other words `means[k,:]` is the state at
         step `k`.
    
     covariance_predictions : np.array((n,dim_x,dim_x))
         array of the covariances for each time step after the prediction.
         In other words `covariance[k,:,:]` is the covariance at step `k`.
    
     Examples
     --------
    
     .. code-block:: Python
    
         # this example demonstrates tracking a measurement where the time
         # between measurement varies, as stored in dts. This requires
         # that F be recomputed for each epoch. The output is then smoothed
         # with an RTS smoother.
    
         zs = [t + random.randn()*4 for t in range (40)]
         Fs = [np.array([[1., dt], [0, 1]] for dt in dts]
    
         (mu, cov, _, _) = kf.batch_filter(zs, Fs=Fs)
         (xs, Ps, Ks) = kf.rts_smoother(mu, cov, Fs=Fs)