
pykalman's Introduction

pykalman

Welcome to pykalman, the dead-simple Kalman Filter, Kalman Smoother, and EM library for Python.

Installation

For a quick installation:

pip install pykalman

Alternatively, you can install from source:

pip install .

Usage

from pykalman import KalmanFilter
import numpy as np
kf = KalmanFilter(transition_matrices = [[1, 1], [0, 1]], observation_matrices = [[0.1, 0.5], [-0.3, 0.0]])
measurements = np.asarray([[1,0], [0,0], [0,1]])  # 3 observations
kf = kf.em(measurements, n_iter=5)
(filtered_state_means, filtered_state_covariances) = kf.filter(measurements)
(smoothed_state_means, smoothed_state_covariances) = kf.smooth(measurements)

Also included is support for missing measurements:

from numpy import ma
measurements = ma.asarray(measurements)
measurements[1] = ma.masked   # measurement at timestep 1 is unobserved
kf = kf.em(measurements, n_iter=5)
(filtered_state_means, filtered_state_covariances) = kf.filter(measurements)
(smoothed_state_means, smoothed_state_covariances) = kf.smooth(measurements)

And for non-linear dynamics, via the UnscentedKalmanFilter:

from pykalman import UnscentedKalmanFilter
ukf = UnscentedKalmanFilter(lambda x, w: x + np.sin(w), lambda x, v: x + v, transition_covariance=0.1)
(filtered_state_means, filtered_state_covariances) = ukf.filter([0, 1, 2])
(smoothed_state_means, smoothed_state_covariances) = ukf.smooth([0, 1, 2])

And for online state estimation:

for t in range(1, 3):
    filtered_state_means[t], filtered_state_covariances[t] = \
        kf.filter_update(filtered_state_means[t-1], filtered_state_covariances[t-1], measurements[t])

And for numerically robust "square root" filters:

from pykalman.sqrt import CholeskyKalmanFilter, AdditiveUnscentedKalmanFilter
kf = CholeskyKalmanFilter(transition_matrices = [[1, 1], [0, 1]], observation_matrices = [[0.1, 0.5], [-0.3, 0.0]])
ukf = AdditiveUnscentedKalmanFilter(lambda x, w: x + np.sin(w), lambda x, v: x + v, observation_covariance=0.1)

Examples

Examples of all of pykalman's functionality can be found in the scripts in the examples/ folder.

pykalman's People

Contributors

duckworthd, gliptak, jonathanng, mbalatsko, nils-werner, pierre-haessig


pykalman's Issues

Question on the usage on financial time series

Hi, I am trying to apply KalmanFilter to a one-dimensional time series.
ts is the one-dimensional time series:

kf = KalmanFilter(em_vars='all')
kf = kf.em(ts)
kf.initial_state_mean = ts[0]
(smoothed_state_means, smoothed_state_covariances) = kf.smooth(ts)

After that I plot ts and smoothed_state_means on the same figure,

[figure_1: original time series vs. smoothed state means]

As can be seen, the green line is the original time series and the blue one is the smoothed_state_means generated by the KalmanFilter. I am new to this field; may I know why there is a big drop at the first time step?

Missing Data

In the documentation, under the KalmanFilter.filter() description, we find:

observations corresponding to times [0...n_timesteps-1]. If X is a masked array and any of X[t] is masked, then X[t] will be treated as a missing observation.

My question is: why should we ignore the whole row, i.e., X[t], instead of just the masked elements? If I'm not mistaken, the Kalman filter is able to fill in those missing elements with expected values.

I'm asking because I'm building an application that makes special use of that capability of a Kalman Filter.
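For what it's worth, pykalman's filter() and smooth() do drop the whole observation vector when any element is masked, but filter_update() accepts a per-step observation covariance, so one workaround (a sketch, not a built-in feature) is to drive the filter step by step and inflate the variance of the missing components so they carry essentially no weight:

import numpy as np
from pykalman import KalmanFilter

kf = KalmanFilter(
    transition_matrices=np.eye(2),
    observation_matrices=np.eye(2),
)

state_mean, state_cov = np.zeros(2), np.eye(2)

# Suppose only the first component is measured at this timestep. Instead of
# masking the whole row, keep the partial observation and give the missing
# component a huge variance so it carries (almost) no weight in the update.
partial_obs = np.array([1.2, 0.0])     # second entry is a placeholder
obs_cov = np.diag([0.1, 1e12])         # 1e12 ~= "effectively unobserved"

state_mean, state_cov = kf.filter_update(
    state_mean, state_cov,
    observation=partial_obs,
    observation_covariance=obs_cov,
)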

masked array error

loglikelihoods[i] = kf.loglikelihood(data.observations) incurs a masked-array error in the scipy linalg module.

This needs to be changed to loglikelihoods[i] = kf.loglikelihood(data.observations.data)

How to set up this model

Hi,

how should I set up the following model with pykalman

Y_t = Beta_t * X_t + Normal(0, Q_t)
Beta_t = Beta_{t-1} + Normal(0, R_t)

The second one should be the state equation. Y_t and X_t are two time-series of stock prices.
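One possible encoding (a sketch under the stated model, not an official recipe): treat Beta_t as the hidden state with a random-walk transition and fold X_t into time-varying observation matrices of shape [n_timesteps, 1, 1]. The series below are synthetic stand-ins:

import numpy as np
from pykalman import KalmanFilter

# Synthetic stand-ins for the two price series (replace with the real data).
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=200)) + 100.0
Y = 0.5 * X + rng.normal(scale=0.1, size=200)
n_timesteps = len(Y)

# State equation:       Beta_t = Beta_{t-1} + N(0, R)   -> identity transition
# Observation equation: Y_t    = Beta_t * X_t + N(0, Q) -> the "observation
# matrix" at time t is simply [[X_t]], i.e. time-varying, shape [n_timesteps, 1, 1].
observation_matrices = X.reshape(n_timesteps, 1, 1)

kf = KalmanFilter(
    transition_matrices=np.eye(1),
    observation_matrices=observation_matrices,
    transition_covariance=0.01 * np.eye(1),   # R (a guess, to tune)
    observation_covariance=1.0 * np.eye(1),   # Q (a guess, to tune)
    initial_state_mean=np.zeros(1),
    initial_state_covariance=np.ones((1, 1)),
)

beta_means, beta_covs = kf.filter(Y.reshape(-1, 1))   # beta_means[t, 0] estimates Beta_t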

Somewhat unclear on matrix specification

I've perhaps been misusing pykalman for a few projects of mine -- I think most of my misunderstanding is demonstrated by the following:

import numpy as np
import pykalman

N = np.random.random((1000, 10))
C = np.random.random((10, 3))
sub = N.dot(C)
kf1 = pykalman.KalmanFilter(transition_matrices=np.eye(3), observation_matrices=np.eye(3),
                            initial_state_mean=sub[0,], initial_state_covariance=np.zeros((3, 3)))
kf1_states = kf1.filter(sub)[0]
kf2 = pykalman.KalmanFilter(transition_matrices=np.eye(3), observation_matrices=C,
                            initial_state_mean=sub[0,], initial_state_covariance=np.zeros((3, 3)))
kf2_states = kf2.filter(N)[0]

Both filters start with identical internal initial states. In the first, I've already reduced the dimensionality to 3 states and my A/O matrices are just the identity over time. In the second, I'm letting this happen during each step.

I would expect these to generate at least similar values, but the second is an order of magnitude off. What am I mis-specifying here? I understand that the error terms in (1) and (2) are of different dimension, but it's not obvious why this should matter.

EM for multiple trials

Hello

I've been experimenting with using EM for multiple training examples, for example for the case where we have a labeled data set with multiple training sequences.

I've used your code as a base to do this extension and was just wondering whether you were planning on doing something similar, or perhaps already did.
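For reference, a rough warm-start heuristic that works with the current API (this is not a proper multi-trial EM, since the sufficient statistics are not pooled across trials): fit each trial in turn, reusing the parameters learned so far as the starting point. The trials list here is hypothetical:

import numpy as np
from pykalman import KalmanFilter

# `trials` stands in for a list of [n_timesteps_i, n_dim_obs] training sequences.
trials = [np.random.randn(50, 2) for _ in range(5)]

kf = KalmanFilter(n_dim_state=2, n_dim_obs=2)
for trial in trials:
    # warm-start: each call continues from the previously learned parameters
    kf = kf.em(trial, n_iter=3)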

Step-dependent functions

In order to supply step-dependent transition functions I should create an array of these functions as stated in documents:

transition_functions : function or [n_timesteps-1] array of functions

However, extraction of the function for a particular step is done using pykalman.standard._last_dims(..., ndims=1)[0], which returns the whole array (since ndims=1) and takes its first element, so at every step only the first function is used.

Providing an array of 1-element arrays of functions works, but the documentation is confusing.
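A sketch of that workaround, for anyone who lands here (the functions and noise values are made up; the cleaner fix belongs in _last_dims itself):

import numpy as np
from pykalman import UnscentedKalmanFilter

# Wrap each step's transition function in its own 1-element list, so that
# _last_dims(...)[0] picks out the function for step t instead of always
# returning the first one.
f1 = lambda state, noise: state + noise
f2 = lambda state, noise: 2.0 * state + noise
g = lambda state, noise: state + noise

ukf = UnscentedKalmanFilter(
    transition_functions=[[f1], [f2]],   # one 1-element list per transition
    observation_functions=g,
    transition_covariance=0.1,
    observation_covariance=0.1,
    initial_state_mean=0.0,
    initial_state_covariance=1.0,
)

# filtering 3 observations now applies f1 for the first transition and f2
# for the second (per the report above)
filtered_means, filtered_covs = ukf.filter([0.0, 1.0, 2.0])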

Example does not execute

I have the following installed:

NumPy 1.6.1
SciPy 0.11.0

on a Windows Vista (32-bit) OS with Python 2.7

I used your recommendation to install pykalman

easy_install pykalman

The following folder structure was created when this was executed:

D:\python27\Lib\site-packages\pykalman-0.9.2-py2.7.egg
\EGG-INFO
\pykalman
\datasets
\data
\descr
\sqrt
\tests
\tests

I then tried to execute plot_em.py (from the link on the pykalman homepage), but got the following errors:

D:\python27\lib\site-packages\scipy\io\matlab\mio4.py:15: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
  from mio_utils import squeeze_element, chars_to_strings
D:\python27\lib\site-packages\scipy\io\matlab\mio4.py:15: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility
  from mio_utils import squeeze_element, chars_to_strings
D:\python27\lib\site-packages\scipy\io\matlab\mio5.py:96: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility
  from mio5_utils import VarReader5
D:\python27\lib\site-packages\scipy\io\matlab\mio5.py:96: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility
  from mio5_utils import VarReader5

The warnings appear when the following statement is executed in base.py:

from scipy import io

What can be done to fix this problem?

Question on effect of EM

Hi, I am trying to understand the role of EM.
I have performed Kalman filtering on a time series ts, with and without EM.
I first initialized my KalmanFilter as follows:

kf = KalmanFilter(transition_offsets = 0, observation_offsets=0,observation_covariance =      all_df.ix['2011-1-4'].close.var())

so the observation_covariance has been initialized to the previous day's variance.

Later I run EM and print out the parameters of kf:

kf = kf.em(all_df.ix['2011-1-4'].close)

transition_matrices [[ 1.]]
observation_matrices [[ 1.]]
transition_covariance [[ 24325.02604088]]
observation_covariance [[ 67736.02509203]]
transition_offsets [0]
observation_offsets [0]
initial_state_mean 3185.4
initial_state_covariance [[ 0.93721432]]

The figure with EM looks like this:

[figure: filtered/smoothed means with EM]

I ran the program again without the kf = kf.em(all_df.ix['2011-1-4'].close) line:

[figure: filtered/smoothed means without EM]

and the parameters,

transition_matrices None
observation_matrices None
transition_covariance None
observation_covariance 240.239337949
transition_offsets 0
observation_offsets 0
initial_state_mean 3185.4
initial_state_covariance None

My question is: why does the first run (with EM) have both the filtered and smoothed means so close to the original time series, while the second run, which has a smaller observation_covariance, has its filtered and smoothed means further away from the original time series?

EM optimisation

Hi,

Is it possible to optimize only the diagonal elements of the observation_covariance using the EM algorithm?

Thank you
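There is no built-in option for this as far as I can tell; one workaround sketch is to run EM one iteration at a time and project the learned observation covariance back onto its diagonal after each step (X is hypothetical data):

import numpy as np
from pykalman import KalmanFilter

X = np.random.randn(100, 2)   # stand-in observations

kf = KalmanFilter(n_dim_state=2, n_dim_obs=2,
                  em_vars=['observation_covariance'])

for _ in range(10):           # 10 "outer" EM iterations
    kf = kf.em(X, n_iter=1)
    # keep only the diagonal of the learned observation covariance
    kf.observation_covariance = np.diag(np.diag(kf.observation_covariance))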

Problem in filter_update() in standard.py

First of all, thank you for pykalman! It's a very nice piece of work.

I did run into an easily fixable problem, however. In filter_update(), you have this code:

        (transition_matrices, transition_offsets, transition_covariance,
         observation_matrices, observation_offsets, observation_covariance,
         initial_state_mean, initial_state_covariance) = (
            self._initialize_parameters()
        )

The intent is to set self variables for use as defaults for absent arguments. The problem is that this code stomps on arguments transition_covariance and observation_covariance and the result is that no matter what you pass in, the defaults are used. The least intrusive way to fix this is by changing the code to this:

        (transition_matrices, transition_offsets, self.transition_covariance,
         observation_matrices, observation_offsets, self.observation_covariance,
         initial_state_mean, initial_state_covariance) = (
            self._initialize_parameters()
        )

Thanks again!
Emilio

Argument Parsing

Clean up argument parsing in filter, smooth, and filter_update methods.

Code _em_observation_covariance

Hi All,

I'm new to this and not an expert on Kalman filters. While reading through the code of the standard Kalman filter, I saw that in the EM section, in the function _em_observation_covariance, the observation covariance R is computed with the transition matrix A. However, in the reference (Abbeel, Pieter, "Maximum Likelihood, EM", http://www.cs.berkeley.edu/~pabbeel/cs287-fa11/slides/Likelihood_EM_HMM_Kalman-v2.pdf, p. 22) and in the description added to the code, the observation covariance is computed with the observation matrix C. Is this correct?
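For reference, the usual EM M-step update for the observation covariance uses the observation matrix C rather than the transition matrix A. Writing \mu_{t|T} and \Sigma_{t|T} for the smoothed state means and covariances and d for the observation offset, the update (as I recall it from the standard derivation) is

R_{\text{new}} = \frac{1}{T} \sum_{t=1}^{T} \left[ \left(y_t - C\,\mu_{t|T} - d\right)\left(y_t - C\,\mu_{t|T} - d\right)^{\top} + C\,\Sigma_{t|T}\,C^{\top} \right]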

Thanks

Best,

Help with quick model setup

I'm a bit rusty when it comes to setting up these matrices...

I'm fitting a model of the form:

y_t = β0_t + β1_t * x_t + β2_t * x_t + e0_t,  where e0_t ∼ N(0, σ0²)
β0_{t+1} = β0_t + e1_t,  where e1_t ∼ N(0, σ1²)
β1_{t+1} = β1_t + e2_t,  where e2_t ∼ N(0, σ2²)
β2_{t+1} = β2_t + e3_t,  where e3_t ∼ N(0, σ3²)
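One way this could be set up in pykalman (a sketch assuming the two slope terms multiply two distinct regressors, here called x1 and x2; all data and noise values below are made up): the state is [β0_t, β1_t, β2_t] with an identity transition, and the regressors enter through time-varying observation matrices.

import numpy as np
from pykalman import KalmanFilter

# Hypothetical regressors and response (the real data would replace these).
rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=0.1, size=n)

# Observation matrix at time t is [1, x1_t, x2_t]  ->  shape [n, 1, 3].
H = np.stack([np.ones(n), x1, x2], axis=1).reshape(n, 1, 3)

kf = KalmanFilter(
    transition_matrices=np.eye(3),           # the betas follow random walks
    observation_matrices=H,                  # time-varying observation model
    transition_covariance=1e-4 * np.eye(3),  # state noise (a guess, to tune)
    observation_covariance=np.eye(1),        # variance of e0_t (a guess)
    initial_state_mean=np.zeros(3),
    initial_state_covariance=np.eye(3),
)

beta_means, beta_covs = kf.filter(y.reshape(-1, 1))   # beta_means[t] = [β0_t, β1_t, β2_t]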

Square Root Kalman Smoother

Implemented a version of the Kalman smoother that propagates the Cholesky decomposition of the covariance matrix.

Vectorized 1D (temporal) Kalman smoothing on 2D movie frames?

I'm interested in using Kalman smoothing to de-noise movies. At the moment I'm only interested in filtering in the time dimension, treating each pixel as an independent timeseries, i.e. one observation and one state dimension.

I initially wrote my own naive vectorized implementation of Kalman smoothing, then discovered your fancier version with the EM algorithm. I'm wondering whether there's a vectorized way to apply your Kalman smoother to all of the movie pixels simultaneously, since looping over them would be horrendously slow. I can't see how to do this, since X is always assumed to be [n_observations, n_dim_states].

Non-symmetric covariance matrix after EM fitting

Hi there! I'm new to pykalman and Kalman filters in general, so I may be doing something wrong. But I am pretty sure the covariance matrix should always be symmetric. If it's not, eigenvalues may be complex and things get weird. (I found this when computing the confidence ellipsoid for smoothed data)

The following code produces a non-symmetric initial covariance matrix; the asymmetry gets worse with larger n_iter (try 10, 20, 30, 40, 50, 100) or larger numbers of samples. It looks like a float precision problem; I am OK with unreliable information at high iterations but this asymmetry throws a wrench in things.

I imagine we could fix this by making the matrix symmetric after each iteration. Interested in your thoughts.

Thanks!
-Evan

import numpy as np
import numpy.ma as ma
import pykalman
dt = 1.

# define transition function F as in \bar x = F \cdot x
transition_matrix = np.array([
        [1, dt, dt*dt/2., dt*dt*dt/6., 0, 0,  0,        0 ],
        [0, 1,  dt,       dt*dt/2.,    0, 0,  0,        0 ],
        [0, 0,  1,        dt,          0, 0,  0,        0 ],
        [0, 0,  0,        1,           0, 0,  0,        0 ],
        [0, 0,  0,        0,           1, dt, dt*dt/2., dt*dt*dt/6. ],
        [0, 0,  0,        0,           0, 1,  dt,       dt*dt/2.    ],
        [0, 0,  0,        0,           0, 0,  1,        dt          ],
        [0, 0,  0,        0,           0, 0,  0,        1 ]
    ])

# define matrix which transforms state into measurements;
# our state is x, y, dx/dt, and dy/dt
observation_matrix = np.array([
        [1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 1, 0, 0],
    ])

# not really sure what this means; maybe let EM figure it out?
transition_covariance = np.eye(8) * 10.

# This is our variance in position and velocity. If std-dev of position is 10 meters, then
#100 m^2 is the variance. But speed seems to be known a little (a lot?) better.
observation_covariance = np.diag([200., 200., 8., 8.])

kf = pykalman.KalmanFilter(
    n_dim_state=8,
    n_dim_obs=4,
    transition_matrices=transition_matrix,
    observation_matrices=observation_matrix,
    transition_covariance=transition_covariance,
    observation_covariance=observation_covariance,
    em_vars=['transition_covariance', 'initial_state_mean', 'initial_state_covariance']
)

measurements = ma.MaskedArray([
       [  5.27477432255369727500e+05,   5.03764540228472929448e+06,   0.00000000000000000000e+00,
          6.21999979019165039062e+00],
       [  5.27477414331637672149e+05,   5.03764949726117122918e+06,   0.00000000000000000000e+00,
          6.09000015258789062500e+00],
       [  5.27477389067118521780e+05,   5.03765526935885101557e+06,   0.00000000000000000000e+00,
          6.07000017166137695312e+00],
       [  5.27477358017556718551e+05,   5.03766236313823424280e+06,   0.00000000000000000000e+00,
          6.03999996185302734375e+00],
       [  5.27477370752291753888e+05,   5.03766877081946097314e+06,   9.77669604515315898707e-02,
          6.02920758903960862796e+00],
       [  5.27477440341255976819e+05,   5.03767462179020419717e+06,   9.97125696816208162421e-02,
          6.14919170106664569886e+00],
       [  5.27477508913885336369e+05,   5.03768038927771802992e+06,   9.97125696816208162421e-02,
          6.14919170106664569886e+00],
       [  5.27477562812059884891e+05,   5.03768492094157077372e+06,   9.84154968615613273686e-02,
          6.06920229304862068886e+00],
       [  5.27477570536027313210e+05,   5.03768557030001003295e+06,   9.84154968615613273686e-02,
          6.06920229304862068886e+00],
       [  5.27477570536027313210e+05,   5.03768557030001003295e+06,   9.82533588934719759322e-02,
          6.05920337865912728148e+00]])

kf.initial_state_mean = np.array([
        measurements[0, 0],
        measurements[0, 2],
        0.0,
        0.,
        measurements[0, 1],
        measurements[0, 3],
        0.,
        0.,
    ])

kf.em(measurements, n_iter=50)
np.set_printoptions(precision=3, linewidth=120)
print(kf.initial_state_covariance)

Output:

array([[  4.827e-01,  -4.122e-02,  -6.912e-03,   3.051e-03,  -1.660e-02,   4.739e-03,  -1.753e-03,   8.911e-04],
       [ -3.962e-02,   9.683e-02,  -5.243e-02,   1.129e-02,   1.396e-04,  -4.835e-04,   1.561e-04,   3.462e-04],
       [ -8.120e-03,  -5.200e-02,   5.937e-02,  -1.725e-02,   2.004e-03,   1.206e-03,  -4.209e-03,   1.251e-03],
       [  3.233e-03,   1.148e-02,  -1.782e-02,   8.169e-03,   5.355e-04,   3.772e-04,   8.248e-04,  -4.479e-04],
       [  1.953e-02,  -4.290e-03,   1.119e-03,  -7.555e-04,   4.844e-01,  -4.070e-02,  -6.510e-03,   2.944e-03],
       [ -1.334e-04,   5.095e-04,  -1.539e-04,  -3.632e-04,  -3.899e-02,   9.679e-02,  -5.233e-02,   1.127e-02],
       [ -2.107e-03,  -1.324e-03,   4.377e-03,  -1.307e-03,  -8.132e-03,  -5.206e-02,   5.933e-02,  -1.720e-02],
       [ -4.134e-04,  -3.451e-04,  -8.956e-04,   4.622e-04,   3.401e-03,   1.153e-02,  -1.781e-02,   8.157e-03]])
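A workaround along the lines suggested above, continuing from the script (it does not address the underlying numerical issue, it just projects the learned matrices back onto exact symmetry after every iteration):

def symmetrize(P):
    """Project a nearly-symmetric matrix back onto exact symmetry."""
    return 0.5 * (P + P.T)

# run EM one iteration at a time and re-symmetrize the learned matrices
for _ in range(50):
    kf.em(measurements, n_iter=1)
    kf.initial_state_covariance = symmetrize(kf.initial_state_covariance)
    kf.transition_covariance = symmetrize(kf.transition_covariance)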

Interpreting output of 3d data.

I have data in the form [(varx1, vary1, varz1), ..., (varx100, vary100, varz100)].

There are more than 100 values; I have given 100 here as an example.

I'm looking at spatio-temporal data.

The output I get after running kalmanfilter.smooth or filter is a 2d list. I'm not sure how to interpret it. Can you please guide me?
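For reference, the layout is fixed: the first return value is an [n_timesteps, n_dim_state] array of state means and the second is an [n_timesteps, n_dim_state, n_dim_state] array of state covariances. A small sketch with stand-in data:

import numpy as np
from pykalman import KalmanFilter

data = np.random.randn(100, 3)    # stand-in for the (varx, vary, varz) triples

kf = KalmanFilter(n_dim_state=3, n_dim_obs=3)
means, covariances = kf.smooth(data)

print(means.shape)        # (100, 3):    row t is the smoothed state at time t
print(covariances.shape)  # (100, 3, 3): covariances[t] is its uncertainty
print(means[:, 0])        # smoothed estimate of the first component over time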

Test failing

I've been trying to update the Fedora package, and some tests seem to be failing:

+ nosetests pykalman/tests/
..E./builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/standard.py:1397: UserWarning: transition_offsets has 2 dimensions now; after fitting, it will have dimension 1
  warnings.warn(warn_str)
E............
======================================================================
ERROR: test_kalman_fit (pykalman.tests.test_standard.KalmanFilterTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/tests/test_standard.py", line 128, in test_kalman_fit
    loglikelihoods[i] = kf.loglikelihood(self.data.observations)
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/standard.py", line 1474, in loglikelihood
    predicted_state_means, predicted_state_covariances, Z
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/standard.py", line 170, in _loglikelihoods
    predicted_observation_covariance[np.newaxis, :, :]
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/utils.py", line 73, in log_multivariate_normal_density
    cv_sol = solve_triangular(cv_chol, (X - mu).T, lower=True).T
  File "/usr/lib64/python2.7/site-packages/scipy/linalg/basic.py", line 155, in solve_triangular
    b1 = _asarray_validated(b, check_finite=check_finite)
  File "/usr/lib64/python2.7/site-packages/scipy/_lib/_util.py", line 138, in _asarray_validated
    raise ValueError('masked arrays are not supported')
ValueError: masked arrays are not supported

======================================================================
ERROR: test_kalman_pickle (pykalman.tests.test_standard.KalmanFilterTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/tests/test_standard.py", line 183, in test_kalman_pickle
    loglikelihood = kf.loglikelihood(X)
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/standard.py", line 1474, in loglikelihood
    predicted_state_means, predicted_state_covariances, Z
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/standard.py", line 170, in _loglikelihoods
    predicted_observation_covariance[np.newaxis, :, :]
  File "/builddir/build/BUILD/pykalman-2aeb4ad80f9dcc4ea182331e33bda7ea4866548e/pykalman/utils.py", line 73, in log_multivariate_normal_density
    cv_sol = solve_triangular(cv_chol, (X - mu).T, lower=True).T
  File "/usr/lib64/python2.7/site-packages/scipy/linalg/basic.py", line 155, in solve_triangular
    b1 = _asarray_validated(b, check_finite=check_finite)
  File "/usr/lib64/python2.7/site-packages/scipy/_lib/_util.py", line 138, in _asarray_validated
    raise ValueError('masked arrays are not supported')
ValueError: masked arrays are not supported

----------------------------------------------------------------------
Ran 17 tests in 0.747s

FAILED (errors=2)

Possible minor mistake

On line 385 of unscented.py, is there a reason you are using points_pred.weights_mean rather than points_pred.weights_covariance? I think it should be weights_covariance, though the difference is pretty minor.

fully observed LDS

Hi pykalman team!

I followed the issue discussion in scikit-learn and was sad to see this module was not implemented there in the end. It looks like this is a nice implementation with additional nice extensions to the standard KF. It appears there hasn't been much activity here in a while; is this project still active? One feature I'd like is the fully observed (supervised) linear dynamical system, where the observations and states are both given in training. Training is then trivial in comparison to EM for the KF: it's just linear regression, but it is a particularly useful case. It's commonplace in my field, brain-computer interfaces. It would be nice to have it in the pykalman framework; I am interested in implementing it if you would be interested in merging it.

Ben
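Not part of pykalman, but for reference, a minimal sketch of the fully observed case described above: with known states Z and observations X, the transition and observation matrices fall out of two least-squares fits (all names and data here are hypothetical):

import numpy as np

# Hypothetical training data: Z holds the known states ([n_timesteps, n_dim_state])
# and X the observations ([n_timesteps, n_dim_obs]).
rng = np.random.default_rng(0)
Z = np.cumsum(rng.normal(size=(200, 2)), axis=0)
X = Z @ np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 1.0]]).T + rng.normal(scale=0.1, size=(200, 3))

# Transition matrix A: regress z_{t+1} on z_t.
W, _, _, _ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
A = W.T
transition_covariance = np.cov((Z[1:] - Z[:-1] @ A.T).T)

# Observation matrix C: regress x_t on z_t.
V, _, _, _ = np.linalg.lstsq(Z, X, rcond=None)
C = V.T
observation_covariance = np.cov((X - Z @ C.T).T)

The fitted A, C, and covariances can then be passed to KalmanFilter(transition_matrices=A, observation_matrices=C, ...) when filtering new observations.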

Unable to load robot.mat

Trying to load robot.mat results in the following error.

ValueError: Unknown mat file type, version 58, 47

This is not an issue; it happened due to improper downloading of robot.mat, as it isn't included in the package when you do pip install.

Wrong pypi format

Just started using this library and it's awesome. However, it's currently only installable with easy_install, not with pip. I think you created the wrong type of package when you uploaded it to PyPI. You want to run something like python setup.py sdist upload; I think you ran something like python setup.py bdist_egg upload. pip can't find the package because it looks for .tar.gz packages, not .egg packages.

Also, the url parameter in setup.py is missing the http:// prefix, so the link from the PyPI page is broken. Thanks!

Possibility of including control input

I am new to using Kalman filters, and I have noticed that most introductory texts seem to drop the control input to the system early on, saying it is easy to include it later; I have not seen a method to include it in pykalman so far. I tried including the control input as an observation with zero noise, but then I get an error that a matrix is not positive definite.

Any hints on how best to include control input?

PS: I am looking at using the additive unscented Kalman filter.
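One workaround with the standard linear KalmanFilter (a sketch, not a dedicated control-input API): transition_offsets may be time-varying, so a known control sequence u_t with gain B can be folded in as b_t = B u_t. For the additive UKF, the transition function could in principle close over the control sequence, although the step-dependent-function issue reported elsewhere in this tracker may get in the way.

import numpy as np
from pykalman import KalmanFilter

# Hypothetical 1-D constant-velocity model with a known acceleration input u_t.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
B = np.array([[0.5 * dt ** 2], [dt]])     # control (acceleration) gain
u = np.ones((99, 1))                      # known control inputs, n_timesteps - 1 of them

transition_offsets = u @ B.T              # shape (99, 2): one offset per transition

kf = KalmanFilter(
    transition_matrices=A,
    observation_matrices=np.array([[1.0, 0.0]]),   # we observe position only
    transition_offsets=transition_offsets,
    transition_covariance=0.01 * np.eye(2),
    observation_covariance=np.array([[1.0]]),
)

measurements = np.random.randn(100, 1)    # stand-in position observations
means, covariances = kf.filter(measurements)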

Add support for custom residual function

The UKF currently only supports subtraction for calculating the residual of an observation (e.g. here). This can cause issues when the difference between observations is non-linear, e.g. for angles. It would be great if users could provide an optional residual function which would default to vector subtraction.

Spectral Learning

I noticed that you've started an implementation/port of Boots' spectral learning work -- is this something you plan on continuing? I've been working on porting some of his stuff to python but I'm interested in doing it in a way that uses online SVD.

Dependencies for documentation

In the installation guide https://pykalman.github.io/#installation the following is stated

  • Sphinx (for generating documentation)
  • numpydoc (for generating documentation)

Can someone explain to me what documentation these packages are needed for? If I just want to use pykalman, can I do it without installing Sphinx and numpydoc (i.e., are they optional)?

pykalman with Cython

Hi all,

I'm using pykalman to filter thousands of time series, to a point where performance is becoming an issue. Is there a way to compile pykalman using cython to make it run as natively as possible?

Cheers

Python3 compatibility?

When trying to run any example in python 3, I get the following error:

ImportError: No module named 'datasets'

in Python/3.3/lib/python/site-packages/pykalman/__init__.py

Is pykalman only compatible with Python 2?

Unscented example can't execute

Hi, I am using Python 2.7, numpy 1.7.1 and scipy 0.12.0 on Windows 7 32-bit.
When running the unscented examples, plot_additive.py and plot_filter.py, I am getting the error below:

  File "C:\Users\ssss\workspace\pykalman\pykalman\unscented.py", line 818, in sample
    self._initialize_parameters()

 File "C:\Users\ssss\workspace\pykalman\pykalman\unscented.py", line 721, in _initialize_parameters
    processed = preprocess_arguments([arguments, defaults], converters)

 File "C:\Users\ssss\workspace\pykalman\pykalman\utils.py", line 146, in preprocess_arguments
     argval = converters[argname](argval)

 TypeError: int() argument must be a string or a number, not 'mtrand.RandomState'

Any idea why this is happening? Thanks!

Non-linear filters not time-dependent

Though I may well be misunderstanding the intended use of pykalman, I can't get time-dependence to work properly, as passing an array of functions to the filter results in it only ever using the first element of the array. Below is code exhibiting this for the sample() function; filter() also behaves similarly.

Linear filters work fine, though, and this seems to be because transition functions (as well as observation functions) are obtained using the _last_dims() method imported from the linear filters, which seems to be defined solely with matrices in mind.

from pykalman import UnscentedKalmanFilter
import numpy as np

def get_time_dependent_filter():
    F1 = np.eye(2)
    F2 = np.array([[0, 1],
                   [1, 0]])
    H = np.eye(2)
    eps = 0.000000001
    Q = np.eye(2)*eps
    R = np.eye(2)*eps
    x0 = np.array([1, 0])

    f1 = lambda state, noise : np.dot(F1, state)
    f2 = lambda state, noise : np.dot(F2, state)
    g = lambda state, noise : np.dot(H, state)

    return UnscentedKalmanFilter(
        transition_functions = [f1, f2],
        observation_functions = g,
        transition_covariance = Q,
        observation_covariance = R,
        initial_state_covariance = Q,
        initial_state_mean = x0)

def test_ukf_time_dependence():
    ukf = get_time_dependent_filter()
    f1 = ukf.transition_functions[0]
    f2 = ukf.transition_functions[1]
    x0 = ukf.initial_state_mean

    states_ukf = ukf.sample(3, x0)[0]
    states = np.array([x0, f1(x0, 0), f2(f1(x0, 0), 0)])

    print(states_ukf)
    print(states)

    assert (np.isclose(states_ukf, states, atol=0.0001).all())

if __name__=="__main__":
    test_ukf_time_dependence()

Time-dependent transition covariance

Hi,

The help suggests that filtering/smoothing also works with time-dependent transition covariances (it is possible to give the transition covariance as an array of shape [n_timesteps-1, n_state, n_state]). However, it seems that the transition covariance is always set to the first given value at line 371 of standard.py:

transition_covariance = _last_dims(transition_covariance, t - 1)

Is this intentional? Renaming the left hand side would probably fix the issue?

Time-dependent observation covariance

[From documentation]
observation_covariance : [n_dim_obs, n_dim_obs] array-like
Also known as R. observation covariance matrix for times [0...n_timesteps-1]

I'd like to use time-dependent observation covariance matrices. Also there seems to be a typo, it should be [n_dim_obs, n_dim_state] or the other way around.

DeprecationWarning for `getargspec` when invoking KalmanFilter.em()

When invoking KalmanFilter().em() on Python 3.5, I'm receiving this deprecation warning about the use of inspect.getargspec(), which should be replaced by inspect.signature().

python-3.5.1.amd64\lib\site-packages\pykalman\utils.py:111: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  args = inspect.getargspec(obj.__init__)[0]

KalmanFilter docstring error

The docstring of class KalmanFilter states the shape of parameter 'observation_matrices' as [n_timesteps, n_dim_obs, n_dim_obs] or [n_dim_obs, n_dim_obs].

The correct shape should be [n_timesteps, n_dim_obs, n_dim_state] or [n_dim_obs, n_dim_state].

_filter_correct parameters

Hi there!
I have done a forecast using ARIMA and collected the forecast data and the actual (real) data in a DataFrame. I have used univariate historical data of active power, and I assume there was no error during measurement, so my observation covariance is 0.
I took into account that all the parameters in these functions will be scalar.
I now want to correct my forecast in order to have a lower error. Here is what I did, treating observations as my real data and forecast as my forecast data (both in a DataFrame):

num_steps = len(observations) - 1

_filter_correct(
  observation_matrix         = np.eye(num_steps*2)*1,
  observation_covariance     = np.eye(num_steps*2)*0,
  observation_offset         = np.eye(num_steps*1)*0,
  predicted_state_mean       = forecast.values,
  predicted_state_covariance = np.eye(num_steps*2)*0.08,
  observation                = observations.values
)

I had 0.08 as the overall error of my forecast, so I used it as the transition covariance. I have no offset since there are no other parameters or noise I am taking into account.
Am I doing something wrong?


How to run Kalman Filter on a single observation?

Hi all,

I would really appreciate it if someone could let me know how to use the Kalman filter with only one observation. However, if I provide only a single observation, I encounter the following error:
"ValueError: could not broadcast input array from shape (2,2) into shape (2,1)" .

Bests
@duckworthd
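I am not certain this matches the poster's exact setup, but a common cause of that broadcast error is the shape of the input: filter() expects an [n_timesteps, n_dim_obs] array even when n_timesteps is 1, and for a single observation it is often simpler to call filter_update() directly. A sketch with hypothetical dimensions:

import numpy as np
from pykalman import KalmanFilter

kf = KalmanFilter(n_dim_state=2, n_dim_obs=2)

obs = np.array([1.0, 2.0])

# one correction step starting from a prior state estimate (values are made up)
mean, cov = kf.filter_update(
    np.zeros(2),   # prior state mean
    np.eye(2),     # prior state covariance
    observation=obs,
)

# alternatively, filter() works if the single observation is shaped (1, n_dim_obs)
means, covs = kf.filter(obs.reshape(1, -1))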

Support continuous time models

They say there's no harm in asking, so: if you could add support for continuous-time models, that would be super. It makes a difference when your observations aren't equally spaced in time.

Wrong initialization of the predicted state mean and covariance in the Kalman filter

In the module standard.py, the predicted state mean and covariance are initialized to the initial state mean and the initial state covariance, respectively.
This is not in accordance with the "standard" specification of the filter (see for example the initialization proposed at http://en.wikipedia.org/wiki/Kalman_filter#Example_application.2C_technical), where it is the filtered state mean and covariance that are assigned these initial values.

filter RSSI with pykalman

Can somebody explain to me how I can use pykalman to filter RSSI values from a Bluetooth Low Energy device?
I need to filter the RSSI at runtime.
Any suggestion would help.
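Not an official recipe, just a sketch of one common approach: model the RSSI as a scalar random-walk state and call filter_update() as each new reading arrives at runtime. All noise values below are guesses to tune:

import numpy as np
from pykalman import KalmanFilter

kf = KalmanFilter(
    transition_matrices=np.eye(1),            # RSSI modelled as a random walk
    observation_matrices=np.eye(1),
    transition_covariance=0.01 * np.eye(1),   # how fast the "true" RSSI drifts (tune)
    observation_covariance=4.0 * np.eye(1),   # radio measurement noise (tune)
    initial_state_mean=np.array([-60.0]),     # a typical starting RSSI in dBm (guess)
    initial_state_covariance=10.0 * np.eye(1),
)

state_mean = kf.initial_state_mean
state_cov = kf.initial_state_covariance

def on_new_rssi(rssi):
    """Update the estimate with one new RSSI reading (call at runtime)."""
    global state_mean, state_cov
    state_mean, state_cov = kf.filter_update(
        state_mean, state_cov, observation=np.array([rssi]))
    return state_mean[0]

# example: on_new_rssi(-63), on_new_rssi(-61), ...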
