
particles's People

Contributors

adriencorenflos, armavica, finncatling, hai-dang-dau, maxjcohen, nchopin, postylem

particles's Issues

Missing feature: Multivariate normal distribution with a different covariance matrix for each particle

Ah, sorry, my bad. One way to fix my code:

def multi_dist(xp, beta, gamma, dt):  # xp means X_{t-1}, defines the BM parts
    locS = xp['S'] - beta*xp['S']*xp['I']*dt
    locI = xp['I'] + (beta*xp['S']*xp['I'] - gamma*xp['I'])*dt
    d = {'S': dists.Normal(loc=locS,
                           scale=(1/10)*np.sqrt(beta*xp['S']*xp['I']*dt)),
         'I': dists.Cond(lambda x: dists.Normal(loc=locI - x['S'] + locS,
                                                scale=(1/10)*np.sqrt(beta*xp['S']*xp['I']*dt)))
        }
    return dists.StructDist(d)

where basically I replaced $B_1$ by $S$ minus its expectation.

The current version of MvNormal does not allow for a covariance matrix that varies across the particles. I implemented this for a colleague, but I don't like it much because there is a loop over $n$, so it may be slow:


import numpy as np
from numpy import random
from scipy import stats
from particles import distributions as dists


class Generalized_MV_Normal(dists.ProbDist):
    """Multivariate normal, with dim >= 2, allowing for a different cov matrix
    for each particle.
    """
    def __init__(self, loc=None, cov=None):
        self.loc = loc
        self.cov = cov
        self.dim = self.cov.shape[-1]

    def rvs(self, size=1):
        x = np.empty((size, self.dim))
        for n in range(size):
            x[n, :] = random.multivariate_normal(self.loc[n, :], 
                                                 self.cov[n, :, :])
        return x

    def logpdf(self, x):
        N = x.shape[0]
        l = np.empty(N)
        for n in range(N):
            l[n] = stats.multivariate_normal.logpdf(x[n], mean=self.loc[n, :],
                                                    cov=self.cov[n, :, :])
        return l

Since this is the second time someone has asked for something like this, I am going to open an issue and try to think of ways to make the above code more efficient (numba?).

Originally posted by @nchopin in #54 (comment)
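One loop-free alternative (just a sketch, not part of the library) is to draw all particles at once via a batched Cholesky factorisation, since np.linalg.cholesky broadcasts over a leading axis of matrices:

import numpy as np

def batched_mvn_rvs(loc, cov, rng=None):
    """Draw one multivariate normal sample per particle without a Python loop.

    loc: (N, d) array of means; cov: (N, d, d) array of covariance matrices,
    assumed positive definite.
    """
    rng = np.random.default_rng() if rng is None else rng
    chol = np.linalg.cholesky(cov)      # (N, d, d) lower-triangular factors
    z = rng.standard_normal(loc.shape)  # (N, d) iid N(0, 1) draws
    return loc + np.einsum('nij,nj->ni', chol, z)

The logpdf could be vectorised in the same spirit with np.linalg.slogdet and a batched solve, avoiding the loop over $n$ entirely.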

Information request regarding particle attribute and predictive steps

  1. What is not clear in the documentation is whether the X attribute of that object contains weighted or unweighted particles. If they are weighted, then I assume the array in attribute W contains the weights that are applied to the importance-sampled particles to get X, correct?
  2. Is it possible to perform a predictive step for particle filtering with any of the algorithms in the library for sequential problems? I have tried it with the next() method of particles.SMC objects, but that seems to be the regular update step, similar to the one in the SIR algorithm.

Implement fixed-lag smoothing (or more advanced ways to save history)

Option store_history makes it possible to record the complete "history" of the particle system: the particles X_t^n, their weights W_t^n, and the ancestor variables A_t^n at each time t = 0, ..., T. It would be nice to make it possible to store the history only at certain times (see also the usage sketch after this list), e.g.:

  • the last k time steps, in order to do fixed-lag smoothing
  • every k-th step: this could be useful e.g. for IBIS or SMC^2, where you may want to inspect the posterior at certain time steps (but not all of them).
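For reference, a rolling window of the most recent time steps can already be requested by passing an int as store_history (this creates a RollingParticleHistory, as noted in the QMC issue further down). A minimal usage sketch, assuming a Feynman-Kac object my_fk is already defined:

import particles

# keep only the particles, weights and ancestors of the last 10 time steps
pf = particles.SMC(fk=my_fk, N=100, store_history=10)
pf.run()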

Add an improper distribution class

A lot of SSMs start with a propagation step without any update. At the moment, using particles for this is only feasible via a hack (e.g., a Gaussian with very large noise). Adding an improper distribution that always returns 1/N would (I think) result in the right algorithm (with no bias in the normalizing constant estimate).
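A minimal sketch of what such a class could look like (hypothetical; the class name and the choice of returning zeros from rvs are placeholders, not library code):

import numpy as np
from particles import distributions as dists

class Improper(dists.ProbDist):
    """Flat (improper) distribution: constant log-density, so weights based on
    it normalise to 1/N for every particle."""
    def rvs(self, size=1):
        # placeholder draw; any value works since the density is flat
        return np.zeros(size)
    def logpdf(self, x):
        return np.zeros_like(x)  # log of a constant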

WDYT?

Documentation

Firstly, thanks for contributing this, your efforts are appreciated!

I've been trying to test this library and struggled to make headway due to

  1. Missing documentation (not self-contained)

  2. Example code is often non-functional (missing imports/definitions that are critical to understanding it), docstrings are missing arguments, and some docstrings are missing entirely (every function and class should have one).

  3. The library seems designed to process complete data sets, and can't be used as is for live filtering?

Some examples of the above:
1. The documentation often refers to 'the book' and isn't very self-contained. It also often explains things using a particular mathematical notation without defining that notation. In technical writing it's generally better, when possible, to use full words in addition to symbols rather than replacing words with symbols. For example, rather than saying "M0(dx0) is the N(0,1) distribution", you could say "The initial distribution M0(dx0) is in this case the normal distribution N(0,1)". That way, if it isn't immediately apparent what 'M' or 'N' are meant to be to someone who just picked up the library or doesn't have the book handy, there is enough information to go on, or to search for more (e.g. Wikipedia on the normal distribution).

Examples of missing docstrings:
All of these functions from particles.core.SMC are missing docstrings: compute_summaries(),
generate_particles(), next(), resample_move(), resample_move_qmc(). Some of them, like next(), may be self-explanatory for an experienced Python user, but I have no idea what resample_move_qmc() does.

Missing explanation of logG:
Neither the SMC nor the FeynmanKac class says it requires the logG function to be implemented - just M0, M and G (see here:
https://particles-sequential-monte-carlo-in-python.readthedocs.io/en/latest/_autosummary/particles.core.FeynmanKac.html#particles.core.FeynmanKac )
but then I get
NotImplementedError: method/property logG missing in class my_fk_model
logG would, presumably, be log(G), which the documentation does state - but nothing more. If it's just log(G), why is it needed at all (i.e. if G is given, can't you just compute log(G)?)? In the few examples I could find where logG is used, it is implemented as a piecewise function that's either 0 or -infinity, and I'm not sure why. The documentation should explain what the purpose of logG (in addition to G) is, and that it is required along with the other functions mentioned.

2. If you look at the intro in README.md, the example code does not include import statements - what is ssm, what is dists, etc.? I had to find elsewhere in the docs that at some point you did import ... as ssm so I could translate what you meant (see the import sketch after this list).

3. I've been trying to work around this by adding an observations list to my FeynmanKac class, so I can add an observation and update T before calling next(), then use the time step parameter t to retrieve the latest observation for weighting functions. It feels kinda hackish and like I'm not using the library as intended.
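Regarding point 2, the imports that the README example appears to assume (matching the aliases used in the issues below) are presumably:

import particles
from particles import state_space_models as ssm
from particles import distributions as dists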

I'm coming from having just tested pfilter. The documentation and approach of that one made a lot more sense to me, but it was pretty lacking in features, so I was hoping particles would work out a bit better. I'm not a filtering expert and have only a basic knowledge of particle filters; I'm not trying to become an expert or anything. I just wanted to get something working, then test out different types of algorithms/resamplings, etc.

Feature request: parallelisation over the particles?

Hi there,

This is a great package, thanks for providing it!

Having read the docs and parts of your book, I've been wanting to ask a question about the parallelisation of your SMC routines. I see that it's possible to run multiple different (or the same) SMC simulations with 'multiSMC'; however, is it possible to parallelise over the particles within a single run? For example, if I have $1000$ particles, can I utilise $1000$ compute cores to parallelise over the particles? I notice you mention this briefly in your book (Chapter 19.2) but it is not discussed any further. Is this something that is possible to implement? I was thinking about having a go at implementing this myself, but I thought I would ask first in case there was a reason it has not yet been tackled.

For context, my application using 'Particles' is a state space model (with $\mathcal{O}(10^3)$ states), with many particles, and forward/measurement models that take $\mathcal{O}(10^1 - 10^2)$ seconds to run. Essentially I have a two-dimensional scalar field (where the spatial grid points each represent a state) to infer over time. Please do let me know if this is something that is possible or if I should perhaps be looking at another package or approach!

Restricting sampling of parameters

Hello again,

I am still trying to implement particle filters for the following SDE-model:

$$ \begin{cases} dS(t) = -\alpha S(t)I(t)dt+\frac{1}{\sqrt{100}}\sqrt{\alpha S(t)I(t)}dB_1(t) \\ dI(t) = \alpha S(t)I(t)-\gamma I(t)dt-\frac{1}{\sqrt{100}}\sqrt{\alpha S(t)I(t)}dB_1(t)+\frac{1}{\sqrt{100}}\sqrt{\gamma I(t)}dB_2(t) \end{cases} $$

For $X=(S,I)^T$ this can be written as

$$ dX(t) = \mu(X(t))dt+\sigma(X(t))dB(t) $$

with

$$ \mu(X)=\left(\begin{array}{c} -\alpha X[1]X[2] \\ \alpha X[1]X[2]-\gamma X[2] \end{array}\right) \text{ and } \sigma(X)=\frac{1}{\sqrt{100}}\left(\begin{array}{cc} \sqrt{\alpha X[1]X[2]} & 0\\ -\sqrt{\alpha X[1]X[2]} & \sqrt{\gamma X[2]} \end{array}\right) $$

The symmetric product of the diffusion coefficient would then be the positive (semi-)definite matrix

$$ \Sigma(X)=\frac{1}{100}\left(\begin{array}{cc} \alpha X[1]X[2] & -\alpha X[1]X[2]\\ -\alpha X[1]X[2] & \alpha X[1]X[2]+\gamma X[2] \end{array}\right) $$

I already saw the new multivariate distribution that deals with covariance matrices varying across particles, thanks for that. I use a similar, self-implemented one.
My problem now is that when I run the following code for the guided particle filter:

import warnings; warnings.simplefilter('ignore')  # hide warnings

# standard libraries
from matplotlib import pyplot as plt
import numpy as np
import seaborn as sb
import pandas as pd
import scipy
import scipy.stats as stats
from scipy import linalg
import code

from collections import OrderedDict

# modules from particles
import particles  # core module
from particles import distributions as dists  # where probability distributions are defined
from particles import state_space_models as ssm  # where state-space models are defined
from particles.collectors import Moments
from particles import mcmc
from particles import kalman

from visual_checks import sir_trajectory_check


# New try without increasing dimensions

def multi_dist(xp, beta, gamma, dt):  # xp means X_{t-1}, defines the BM parts
    loc = np.empty_like(xp)
    loc[:,0] = xp[:,0] + np.array([-np.exp(beta)*xp[:,0]*xp[:,1], np.exp(beta)*xp[:,0]*xp[:,1]-np.exp(gamma)*xp[:,1]])[0]*dt
    loc[:,1] = xp[:,1] + np.array([-np.exp(beta)*xp[:,0]*xp[:,1], np.exp(beta)*xp[:,0]*xp[:,1]-np.exp(gamma)*xp[:,1]])[1]*dt
    cov = np.zeros((xp.shape[0], xp.shape[1], xp.shape[1]))
    cov[:,0,0] = np.exp(beta)*xp[:,0]*xp[:,1]
    cov[:,0,1] = -np.exp(beta)*xp[:,0]*xp[:,1]
    cov[:,1,0] = -np.exp(beta)*xp[:,0]*xp[:,1]
    cov[:,1,1] = np.exp(beta)*xp[:,0]*xp[:,1]+np.exp(gamma)*xp[:,1]
    cov = (dt/100)*cov 
    return dists.MvNormal_new(loc=loc, cov=cov, trunc_rv=True)

 
class SIRModel(ssm.StateSpaceModel):
    default_params = {'beta': -1.60944,
                      'gamma': -2.9957,
                      'dt': 0.1,
                      'sigma': 0.01}

    dx = 2 #dimension of x
    N = 100 #number of particles to use

    # @property

    def f_next(self, xp):
        loc = np.empty_like(xp)
        loc[:,0] = xp[:,0] + np.array([-np.exp(self.beta)*xp[:,0]*xp[:,1], np.exp(self.beta)*xp[:,0]*xp[:,1]-np.exp(self.gamma)*xp[:,1]])[0]*self.dt
        loc[:,1] = xp[:,1] + np.array([-np.exp(self.beta)*xp[:,0]*xp[:,1], np.exp(self.beta)*xp[:,0]*xp[:,1]-np.exp(self.gamma)*xp[:,1]])[1]*self.dt
        return loc

    def SigX(self, xp):
        out = np.zeros((self.N, self.dx, self.dx))
        out[:,0,0] = np.exp(self.beta)*xp[:,0]*xp[:,1]
        out[:,0,1] = -np.exp(self.beta)*xp[:,0]*xp[:,1]
        out[:,1,0] = -np.exp(self.beta)*xp[:,0]*xp[:,1]
        out[:,1,1] = np.exp(self.beta)*xp[:,0]*xp[:,1]+np.exp(self.gamma)*xp[:,1]
        out = (self.dt/100)*out
        return out

    def SigY(self):
        out = np.zeros((self.N, self.dx, self.dx))
        out[:,0,0] = self.sigma*np.array([1]*100)
        out[:,1,1] = self.sigma*np.array([1]*100)
        return out

    def SigOpt(self, Sx, Sy):
        K = np.empty_like(Sx)
        for i in range(K.shape[0]):
            if np.all(Sx[i]==0): # I=0 makes the SigX = 0
                K[i] = Sy[i]
            elif np.any(Sx[i]==0): #S=0 so just some of Sx is zero
                SxInv = 1/Sx[i] # elementwise inverse
                SxInv[SxInv == np.inf] = 0 # set 0 before to 0 after
                K[i] = np.linalg.inv(SxInv+np.linalg.inv(Sy[i]))
            else:
                K[i] = np.linalg.inv(np.linalg.inv(Sx[i])+np.linalg.inv(Sy[i]))
        return K

    def predmean(self, t, xp, data):
        SigmaX = self.SigX(xp)
        SigmaY = self.SigY()
        SigmaOpt = self.SigOpt(SigmaX, SigmaY)
        f = self.f_next(xp)
        out = np.empty_like(xp)
        for i in range(out.shape[0]):
            if np.all(SigmaX[i]==0): # I=0 makes the SigX = 0 and no dynamics in the process, so reflect data
                out[i] = data[t]
            elif np.any(SigmaX[i]==0): # S=0 so just some of Sx is zero
                SxInv = 1/SigmaX[i] # elementwise inverse
                SxInv[SxInv == np.inf] = 0 # set 0 before to 0 after
                out[i] = np.dot(SigmaOpt[i], np.dot(SxInv, f[i])+np.dot(np.linalg.inv(SigmaY[i]), data[t]))
            else:
                out[i] = np.dot(SigmaOpt[i], np.dot(np.linalg.inv(SigmaX[i]), f[i])+np.dot(np.linalg.inv(SigmaY[i]), data[t]))
        return out

    def PX0(self):
        return multi_dist(np.array([[0.96, 0.04]]), self.beta, self.gamma, self.dt)
    def PX(self, t, xp):
        return multi_dist(xp, self.beta, self.gamma, self.dt)
    # def PY(t, xp, x, sigma):
    #     d = {'Y1': dists.Normal(loc=x[:,0], scale=sigma),
    #          'Y2': dists.Normal(loc=x[:,1], scale=sigma)
    #         }
    #     return dists.StructDist(d) 
    # def PY(self, t, xp, x):
    #     d = {'Y': dists.Normal(loc=x[:,0], scale=self.sigma)}
    #     return dists.StructDist(d)    
    
    def PY(self, t, xp, x):
        return dists.MvNormal_new(loc=x,
                                  cov=np.array([(self.sigma**2)*np.eye(2)]*x.shape[0]),
                                  trunc_rv=True
                                  )

    def proposal0(self, data):
            return multi_dist(np.array([[0.96, 0.04]]), self.beta, self.gamma, self.dt)

    def proposal(self, t, xp, data):
        SigmaX = self.SigX(xp)
        SigmaY = self.SigY()
        mean = self.predmean(t, xp, data)
        cov = self.SigOpt(SigmaX, SigmaY) 
        return dists.MvNormal_new(loc=mean, cov=cov, trunc_rv=True)



prior_dict = {'beta': dists.LogD(dists.TruncNormal(mu=0.2, sigma=0.1, a=1e-5, b=1.0)),
              'gamma': dists.LogD(dists.TruncNormal(mu=0.05, sigma=0.03, a=1e-5, b=1.0)),
              'sigma': dists.TruncNormal(mu=.01, sigma=0.005, a=1e-5, b=1.0)
              }

my_prior = dists.StructDist(prior_dict)

# true value is 0.2 and 0.05
data = pd.read_csv('/home/vinc777/masterthesis/two_variant_model/own_simulation/results/SIR_1000_simulation_noisy.csv')
data_array = np.array(data)

pmmh = mcmc.PMMH(ssm_cls=SIRModel, prior=my_prior, data=data_array, Nx=100, niter=10000, fk_cls=ssm.GuidedPF)
# pmmh.run()

pmmh.step0()
for i in range(100):
    try:
        pmmh.step(i)
    except:
        code.interact(banner=str(i), local=locals())

# save results
results = pmmh.chain.theta
results_df = pd.DataFrame(results, columns=['beta', 'gamma', 'sigma'])
results_df.to_csv('output/SIR_guided'+str(10000)+'iterations.csv')

I eventually get the error message

    return self.ssm.proposal(t, xp, self.data).rvs(size=xp.shape[0])
  File "/home/vinc777/masterthesis/two_variant_model/thesis_venv/lib/python3.8/site-packages/particles/distributions.py", line 1019, in rvs
    x[n, :] = np.maximum(0, random.multivariate_normal(self.loc[n, :], self.cov[n, :, :]))
  File "mtrand.pyx", line 4132, in numpy.random.mtrand.RandomState.multivariate_normal
  File "<__array_function__ internals>", line 180, in svd
  File "/home/vinc777/masterthesis/two_variant_model/thesis_venv/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 1648, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
  File "/home/vinc777/masterthesis/two_variant_model/thesis_venv/lib/python3.8/site-packages/numpy/linalg/linalg.py", line 97, in _raise_linalgerror_svd_nonconvergence
    raise LinAlgError("SVD did not converge")
numpy.linalg.LinAlgError: SVD did not converge

So I still encounter problems with the covariance matrix. In this case I guess it is because the parameter values are quite close to or below zero. That is why I already put truncated and log-transformed priors on them, to ensure that they don't "destroy" my covariance matrix.
But looking at the pmmh.chain.theta that is created before I get the error, I see that the parameters are at first always zero and then take values on the order of $10^{-60}$ or even much smaller. This makes my covariance matrix either fail the decomposition or become singular, since the values are too close together.

So my question is: how does the chain sample the parameter vector $\theta=(\beta, \gamma, \sigma)^T$, and is there a way to restrict this to some particular space, so that I can avoid too-small values, for example?

Thanks again for your ideas and help.

HALFLOG2PI in file distributions.py

In the file particles/particles/distributions.py, line 191: the error is self-explanatory. It leads to a wrong logpdf function in the MvNormal class.

Recognition of auxiliary filters

In file core.py, lines 185-188, an auxiliary particle filter is recognised by testing for the logetat attribute. However, at line 322, the attribute logeta of self.fk is accessed instead.

This could lead to APF FeynmanKac objects not being executed properly if they have a logeta attribute but not logetat. This is the case for APF objects created via state_space_models.py.

It seems the correct fix is to use logeta at line 188 (for consistency with state_space_models.py), but more testing may be needed (e.g. against existing code).

Problem with missing data

According to this tutorial, missing data can be modelled as a FlatNormal, so I did that for my first observations. The algorithm I run is Bootstrap, followed by SMC.

When I run the algorithm, all the weights are nan at every time step and the result is meaningless. To work around the problem I return the distribution Dirac(loc=np.zeros_like(x)) for the first observations. The algorithm then runs well and the results are nice, but I don't think this is the expected behaviour with FlatNormal.

===========

I have another question. I fit my experimental data to my model using cmdstanpy, and I plan to use particles to filter online data in production. The output of Stan is a set of parameter samples, so I think I should run the particle filtering with the different parameter samples fitted in Stan. Does that make sense? Is there a way to run the same model with different parameters in particles? I can use a for loop, but those are slow in Python. I am not a Bayesian, just a user, so these may be naive questions.

Thank you for this amazing package!

Bayesian inference for some parameters.

I have trouble with parameter estimation. I would like to use PMMH to get an estimate of some (but not all) of my model's parameters. To begin, I tried that for the stochastic volatility model from the basic tutorial. Instead of defining three priors for mu, sigma and rho, I only defined priors for mu and sigma and handed them over to the PMMH object:

prior_dict = {'mu':dists.Normal(),  'sigma': dists.Gamma(a=1., b=1.)}
my_prior = dists.StructDist(prior_dict)
pmmh = mcmc.PMMH(ssm_cls=StochVol, prior=my_prior, data=data, Nx=50, niter = 1000)

This only works if I define default parameters for the StochVol model - at least for rho, for which I did not set up a prior.

class StochVol(ssm.StateSpaceModel):
    default_params = {'rho': 0.9}

    def PX0(self):  # Distribution of X_0
        return dists.Normal(loc=self.mu, scale=self.sigma / np.sqrt(1. - self.rho ** 2))

    def PX(self, t, xp):  # Distribution of X_t given X_{t-1}=xp (p=past)
        return dists.Normal(loc=self.mu + self.rho * (xp - self.mu), scale=self.sigma)

    def PY(self, t, xp, x):  # Distribution of Y_t given X_t=x (and possibly X_{t-1}=xp)
        return dists.Normal(loc=0., scale=np.exp(x))

For my model, I use an __init__ constructor instead of default_params, since my model parameters are a bit more involved. I don't understand how I can define priors for only some of my model parameters in that case. What I tried was something analogous to

pmmh = mcmc.PMMH(ssm_cls=StochVol(rho=0.9), prior=my_prior, data=data, Nx=50, niter = 1000)

but this fails, since PMMH expects a class, rather than an object.

My question is: Is it possible to hand over priors for some of the parameters without introducing default parameters? Or do I have to introduce default parameters?

Fixed-lag smoothing crashes with QMC

If store_history is an int, a history object of type RollingParticleHistory is created.
This object doesn't have the function save_h_order, which is called when qmc=True, causing a crash.

(ignore the line numbers, as I have other non-related modifications in core.py that affect them)

~/particles/particles/core.py in __next__(self)
    443             self.setup_auxiliary_weights()  # APF
    444             if self.qmc:
--> 445                 self.resample_move_qmc()
    446             else:
    447                 self.resample_move()

~/particles/particles/core.py in resample_move_qmc(self)
    407         h_order = hilbert.hilbert_sort(self.X)
    408         if self.hist:
--> 409             self.hist.save_h_order(h_order)
    410         self.A = h_order[rs.inverse_cdf(u[tau, 0], self.aux.W[h_order])]
    411         self.Xp = self.X[self.A]

AttributeError: 'RollingParticleHistory' object has no attribute 'save_h_order'

bug: module HMM

Module: hmm
function: BaumWelch.filt_step

Calculation of logpyt (log of the marginal density of Y_t given Y_0, ..., Y_{t-1}) seems incorrect, as reported by a user.

Example given in the Kalman file not working

The following example given in the documentation of the kalman module does not work; there is an inconsistent-shape problem.
ssm = kalman.MVLinearGauss(F=np.ones((1, 2)), G=np.eye(2), covX=np.eye(2), covY=.3)

Offline smoothing breaks with states with more than one dimension

I just started using this library, and I'm no expert in PFs, so excuse me if I'm making any wrong assumptions. But it seems I've found a bug where offline smoothing breaks for states with more than one dimension.
This happens with backward_sampling, both the linear-cost and O(N^2) variants, as well as with backward_sampling_qmc.

When the state has multiple variables (i.e., IndepProd distribution), trying to run smoothing fails with the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-242-3ee6f0106081> in <module>
----> 1 smooth_trajectories = pf.hist.backward_sampling(100)

venv/lib/python3.7/site-packages/particles/smoothing.py in backward_sampling(self, M, linear_cost, return_ar)
    177             ar = self._backward_sampling_ON(M, idx)
    178         else:
--> 179             self._backward_sampling_ON2(M, idx)
    180         # When M=1, we want a list of states, not a list of arrays containing
    181         # one state

venv/lib/python3.7/site-packages/particles/smoothing.py in _backward_sampling_ON2(self, M, idx)
    221             for t in reversed(range(self.T - 1)):
    222                 lwm = (self.wgt[t].lw + self.model.logpt(t + 1, self.X[t],
--> 223                                                      self.X[t + 1][idx[t + 1, m]]))
    224                 idx[t, m] = rs.multinomial_once(rs.exp_and_normalise(lwm))
    225 

venv/lib/python3.7/site-packages/particles/state_space_models.py in logpt(self, t, xp, x)
    342     def logpt(self, t, xp, x):
    343         """PDF of X_t|X_{t-1}=xp"""
--> 344         return self.ssm.PX(t, xp).logpdf(x)
    345 
    346     def upper_bound_trans(self, t):

venv/lib/python3.7/site-packages/particles/distributions.py in logpdf(self, x)
    834 
    835     def logpdf(self, x):
--> 836         return sum([d.logpdf(x[:, i]) for i, d in enumerate(self.dists)])
    837 
    838     def ppf(self, u):

venv/lib/python3.7/site-packages/particles/distributions.py in <listcomp>(.0)
    834 
    835     def logpdf(self, x):
--> 836         return sum([d.logpdf(x[:, i]) for i, d in enumerate(self.dists)])
    837 
    838     def ppf(self, u):

IndexError: too many indices for array

It appears that the logpdf function is expecting an array of shape (1, d) in the last argument (x), but an array of shape (d,) is being passed to it (d being number of state variables).

I tried modifying the _backward_sampling_ON2 function by adding a reshape, as such:
Where it is currently:
lwm = (self.wgt[t].lw + self.model.logpt(t + 1, self.X[t], self.X[t + 1][idx[t + 1, m]]))
I replaced with:
lwm = (self.wgt[t].lw + self.model.logpt(t + 1, self.X[t], self.X[t + 1][idx[t + 1, m]].reshape(1, -1)))

This simple fix seems to have fixed it, and the function doesn't crash anymore.

I imagine this "fix" will create problems in other situations, so it's not really a fix, but this does suggest that there is some problem here.

multiSMC functions not ok

results = particles.multiSMC(fk=fk_model, N=100, nruns=30, qmc={'SMC':False, 'SQMC':True})

C:\ProgramData\Anaconda3\envs\particles\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
91 try:
92 reduction.dump(prep_data, to_child)
---> 93 reduction.dump(process_obj, to_child)
94 finally:
95 set_spawning_popen(None)

C:\ProgramData\Anaconda3\envs\particles\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)
61
62 #

PicklingError: Can't pickle <function _identity at 0x000002D7B4A67A60>: it's not the same object as particles.core._identity

Distribution sometimes "collapses"

I think this is a fairly common problem in PFs, from what I've read, but the model I'm testing is really simple: location and velocity make up the two values in the state. Each step has some Gaussian noise added to both its location and velocity. Gaussian noise corrupts the observations.
I've made a google colab to demonstrate.

The problem I have is it sometimes seems to 'lose' the true latent function, e.g. the velocity state seems to sort of get 'stuck' on a single value, if that makes sense? (the random seed choice in the colab creates such an example).
[figure: demo]

I'm new to particle filtering, and quite new to monte-carlo approaches. This seems a simple problem that should be quite soluble.
Do I need to somehow tell it to resample more? The particles don't look like they're "replaced", when I zoom in & plot their trajectories from pf.hist.X:
[figure: demo2]

Or maybe in reality the transition noise scale needs to be bigger than the true noise?

I'm aware this isn't a bug or issue with your software (thanks for writing it by the way!) but maybe there are settings or configurations I'm not aware of in your tool (e.g. how/when it resamples etc).

Make mcmc.PMMH work with any Feynman-Kac class

By default, PMMH runs the bootstrap filter associated with the considered state-space model. It is possible to use a different algorithm by setting argument fk_cls to another Feynman-Kac class; however, this works only for FK classes that have the same structure as state_space_models.Bootstrap or state_space_models.GuidedPF (taking as arguments data and ssm, for the state-space model). What if the user wants to specify an FK class with a different structure?

Handling missing observations

I'm working with a two-leg model of (human) motion. The model has an 18-dimensional state space:

  • two dimensional position of hip
  • four angles: vertical - left femur, left femur - left fibula, vertical - right femur, right femur - right fibula
  • first and second derivatives of the above six quantities

The observation space is 36 dimensional:

  • four IMUs at femur and fibula, left and right. Each IMU measures three linear accelerations and three angular velocities.
  • two pressure sensors at the heel, left and right

While the measurements of the IMUs come in fairly regularly, the pressure sensors provide no information as long as the corresponding foot has no floor contact. This means I have to handle missing data; concretely, my data array contains NaNs.

I'm not sure how to implement this. My first intuition was to handle two cases in the PY method:

def PY(self, t, xp, x):
    if data_is_complete:
        return dists.MvNormal(loc=self.state_to_observation(x), cov=self.H)
    else:
        return dists.MvNormal(loc=self.state_to_observation_restricted(x), cov=self.H)

However, at this level (when defining my state space model class), I don't have any information about my data with which to define the condition data_is_complete. In the proposal method this is possible, because there my data is accessible.

So far, my only solution is to include my data in the initialization of my StateSpaceModel (see the sketch below), which is certainly not ideal. Also, I don't want to mess around in the particles code.
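Concretely, that workaround looks something like the following sketch (TwoLegModel, state_to_observation, its restricted variant and self.H are placeholders taken from the description above, not library code):

import numpy as np
from particles import distributions as dists
from particles import state_space_models as ssm

class TwoLegModel(ssm.StateSpaceModel):
    def __init__(self, data=None, **kwargs):
        super().__init__(**kwargs)
        self.data = data  # kept on the model so PY can check for NaNs

    def PY(self, t, xp, x):
        if not np.any(np.isnan(self.data[t])):
            return dists.MvNormal(loc=self.state_to_observation(x), cov=self.H)
        # fall back to the restricted observation map when sensors are missing
        return dists.MvNormal(loc=self.state_to_observation_restricted(x), cov=self.H)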

Is there a better way to handle missing data?

Edit: I added the code for my proposal distribution. Since my distributions are multivariate Gaussian, I use an EKF for my proposal. Having my data available in the proposal method, I handle missing data the following way:

    def compute_ekf_proposal(self, xp, data_t):
        x_hat = self.state_transition(xp)
        df = self.compute_observation_derivatives(x_hat)

        innovation_inv = np.linalg.inv(np.matmul(df, np.matmul(self.Q, np.transpose(df, (0, 2, 1)))) + self.H)
        kalman_gain = np.matmul(self.Q, np.matmul(np.transpose(df, (0, 2, 1)), innovation_inv))
        prediction_err = np.nan_to_num(data_t - self.state_to_observation(x_hat), nan=0.0)

        mu = x_hat + np.einsum('ijk, ik -> ij', kalman_gain, prediction_err)
        sigma = np.matmul(np.eye(self.dim_states) - np.matmul(kalman_gain, df), self.Q)
        return mu, sigma

    def proposal(self, t, xp, data):
        mean, kalman_covs = self.compute_ekf_proposal(xp, data[t])
        covar = self.factor_proposal * np.mean(kalman_covs, axis=0)
        return dists.MvNormal(loc=mean, cov=covar)

Automate upper_bound_log_pt

This isn't a bug but rather a suggestion. Currently the user needs to implement the method upper_bound_log_pt(self, t) of the class StateSpaceModel themselves in order to use smoothing algorithms. However, most models in this package have Gaussian transitions, for which the upper bound could be calculated automatically.
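For a Gaussian transition kernel, the log-density is maximised at its mean, so the bound could be computed automatically along these lines (a sketch under that assumption, not library code; a real implementation would read the covariance off the model's PX):

import numpy as np

def gaussian_upper_bound_log_pt(cov):
    """Upper bound of a d-dimensional Gaussian log-density with covariance cov,
    attained at the mean: -0.5 * (d * log(2*pi) + log det(cov))."""
    cov = np.atleast_2d(cov)
    d = cov.shape[-1]
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet)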

Forum Creation

I am very interested in this topic and I have been exploring the library in my free time over the past months. I find it hard to find people knowledgeable in the topic; maybe a forum for questions could be created.

In the meantime, I am asking my question here anyway:

As far as I understand, all particle filters rely on the assumption that we have a model (even a vague one) of the underlying process. Has anyone tried to learn the model directly from data and then apply particle filters? I have a very complicated process with many observations and I think particle filters are the answer, but creating a model is basically impossible.

Mixture error?

I think line 793 should be
self.k = self.pk.shape[-1].

Even if I'm wrong, the following code makes this package unhappy:

class MyModel(ssm.StateSpaceModel):
    def PX0(self):
        return dists.Normal(loc=1.0, scale=0.1)
    def PX(self, t, xp):
        return dists.Mixture([0.95, 0.05],
                             dists.Normal(loc=xp, scale=0.01),
                             dists.Normal(loc=xp, scale=1.0))
    def PY(self, t, xp, x):
        return dists.Normal(loc=x, scale=0.1)

model = MyModel()
true_states, data = model.simulate(100)

Bug in BayesianVS_gprior

In BayesianVS_gprior, def set_constants(self): is unintentionally indented and therefore defined only within the scope of the constructor. It should be unindented to comply with the BayesianVS interface.

E.g:

class BayesianVS_gprior(BayesianVS):
    """
    Same model as parent class, except:
    beta | sigma^2 ~ N(0, g sigma^2 (X'X)^-1)

    """

    def __init__(self, data=None, prior=None, nu=4.0, lamb=None, g=None, jitted=False):
        self.g = self.n if g is None else g
        super().__init__(
            data=data, prior=prior, nu=nu, lamb=lamb, iv2=0.0, jitted=jitted
        )

    def set_constants(self):
        self.coef_len = 0.5 * np.log(1 + self.g)
        self.coef_log = 0.5 * (self.n + self.nu)
        self.coef_in_log = self.nu * self.lamb + self.yty
        self.gogp1 = self.g / (self.g + 1.0)

Bug: numpy import missing in collectors.py

numpy is used in the online smoother classes but it is not being imported in the module.

(sorry for not opening a PR, but the fix is trivial and I don't have the repo ready at hand)

multiSMC seeding does not work with scipy frozen distribution

The code and its comments are self-explanatory. Try running it.

import particles
import scipy.stats as st
import numpy as np

class MyFK(particles.FeynmanKac):
    def __init__(self):
        self.distribution_of_M0 = st.norm(loc=2, scale=3)

    def M0(self, N):
        res = self.distribution_of_M0.rvs(size=N)
        print(res)
        return res

    def M(self, t, xp):
        pass

    def logG(self, t, xp, x):
        return np.zeros(len(x))

    def done(self, smc):
        return smc.t >= 1

fk = MyFK()
pf = particles.multiSMC(nruns=2, nprocs=2, fk=fk, N=5)
# so, seeding done by `particles` has no effect ???

# What is surprising is that, in single core setting, scipy does indeed use numpy seeds
my_dist = st.norm(loc=2, scale=3)

np.random.seed(1234)
print(my_dist.rvs(size=5))

np.random.seed(1234)
print(my_dist.rvs(size=5)) # same as above

Module smc_samplers: 'random_walk' or 'random walk'?

In the module smc_samplers.py, at lines 383 and 477, the documentation says that type_prop is 'random_walk'. However, at lines 364 and 366, the variable type_prop is set to or compared with 'random walk' instead.

Bug in state_space_models.AuxiliaryPF

Bug pointed out by @jorgemcgomes in module state_space_models:
missing argument in AuxiliaryPF.logetat and BootstrapAuxiliaryPF.logetat.
Consequence: numerical results obtained when running an APF (auxiliary PF, where the resampling weights are modified by a certain function eta_t(x_t)) are currently incorrect.

Categorical distribution

File distributions.py, the Categorical logpdf is currently defined as

class Categorical(DiscreteDist):
    """Categorical distribution.

    Parameter
    ---------
    p:  (k,) or (N,k) float array
        vector(s) of k probabilities that sum to one
    """
    def __init__(self, p=None):
        self.p = p

    def logpdf(self, x):
        return np.log(self.p[x])

The logpdf function here does not seem to correctly support "array distributions" used as Markov kernels in state space models. For instance

from particles.distributions import Categorical
import numpy as np

multi_parameters = np.array([[0.1, 0.9],
                             [0.2, 0.8]])
multi_dist = Categorical(p=multi_parameters)
res = multi_dist.logpdf([0,1])
print(res)
# [[-2.30258509 -0.10536052]
#  [-1.60943791 -0.22314355]]
expected_res = [np.log(0.1), np.log(0.8)]
print(expected_res)  # [-2.3025850929940455, -0.2231435513142097]

This also causes smoothing for HMMs to fail, e.g.:

import numpy as np
from particles import hmm, SMC
from particles.state_space_models import Bootstrap

tm = np.array([[0.9, 0.1], [0.2, 0.8]])
my_hmm = hmm.GaussianHMM(mus=np.array([0., 1.]), sigmas=np.ones(2),
                         trans_mat=tm)
fk = Bootstrap(ssm=my_hmm, data=my_hmm.simulate(10)[1])
pf = SMC(fk, store_history=True)
pf.run()
res = pf.hist.backward_sampling_mcmc(100)  # error!
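A possible fix for the logpdf (a sketch of the idea, not the library's actual implementation) is to index each particle's own probability row when p is two-dimensional:

import numpy as np

def categorical_logpdf(p, x):
    """Handle both a shared (k,) probability vector and per-particle (N, k) rows."""
    p, x = np.asarray(p), np.asarray(x)
    if p.ndim == 1:
        return np.log(p[x])
    # one probability row per particle: pick p[n, x[n]] for each n
    return np.log(p[np.arange(p.shape[0]), x])

With the first example above, this returns [log 0.1, log 0.8], which matches expected_res.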

Potential Bug in SMC2

In the first step (t=0) of SMC2, the code seems to wrongly compute both G(x_0) (in the current_target function of _M0) and G(x_1) in the logG function.

If so, the code could be fixed by changing "self.current_target(0, self.init_Nx)(x0)" to "self.current_target(-1, self.init_Nx)(x0)" in the _M0 function on line 1134 of the smc_samplers.py script.

Kalman smoother does not work

from particles import kalman

ssm = kalman.LinearGauss()
x, y = ssm.simulate(25)
kf = kalman.Kalman(ssm=ssm, data=y)
kf.filter()  # OK
kf.smoother()  # => error

Callback : checkpoints

Firstly, thank you for contributing this library. Both the books and the code help me a lot.

I am trying to use the SMC2 method to fit my model of interest. However, the method always takes 12+ hours for multiSMC with 25 runs, which often makes the Python kernel shut itself down. (I have tried to decrease N, but I realized that the library could be improved here.)

It would be nice for people (including me) who do not have permanent access to computational resources to be able to save the state of a run occasionally, before their kernel gets kicked off for any reason, by using a callback.

The idea of model.fit(epochs=EPOCHS, callbacks=[model_checkpoint_callback]) comes from TensorFlow, a library for neural networks that often requires large amounts of computational resources and long training times.

Tensorflow docs : https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint
Example of callback : https://www.pyimagesearch.com/2021/06/30/how-to-use-the-modelcheckpoint-callback-with-keras-and-tensorflow/

Bayesian inference with multidimensional prior

When running the PMMH algorithm with verbose non-zero and a prior for a non-scalar parameter, there's a bug in the print_progress method in the MCMC class:

    def print_progress(self, n):
        params = self.chain.theta.dtype.fields.keys()
        msg = 'Iteration %i' % n
        if hasattr(self, 'nacc') and n > 0:
            msg += ', acc. rate=%.3f' % (self.nacc / n)
        for p in params:
            msg += ', %s=%.3f' % (p, self.chain.theta[p][n])
        print(msg)

In the case of a prior for a non-scalar parameter, self.chain.theta[p][n] is an array and one has to use another kind of string formatting.
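One way around it (a sketch using a hypothetical helper, not the library's code) is to format scalar and array parameters differently:

import numpy as np

def format_param(value):
    """Format a scalar parameter with three decimals, or an array parameter
    via np.array2string, so the progress message also works for non-scalar priors."""
    if np.ndim(value) == 0:
        return '%.3f' % value
    return np.array2string(np.asarray(value), precision=3)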

NB: it's not much of a problem; if verbose=0, or if one can factor the multidimensional parameter into scalar parameters, everything works fine.

nsteps in backward_sampling_mcmc

File smoothing.py, class ParticleHistory. Implementation of the method backward_sampling_mcmc does not correctly take into account the parameter nsteps (currently it has no effect at all).

Improve accuracy for np.log(1 + ... )

In the file "./particles/resampling.py", line 272: the use of np.log(1 + ...) should be replaced by np.log1p(...). The documentation of numpy says that np.log1p(x) is more accurate than np.log(1+x). Here is my experiment:

np.log(1 + np.exp(-3))
Out[66]: 0.04858735157374196
np.log1p(np.exp(-3))
Out[67]: 0.04858735157374206

`d` should be `self.dim` in `GenericRWHM`

Thanks for this excellent library!

A bug crept into GenericRWHM in v0.3, where d was renamed to self.dim but is still used later in the class, despite no longer being defined.

SMC^2 State Inference

Thank you for developing such a helpful library!

I had a quick question on the implementation of the SMC^2 algorithm for state and parameter inference. In running the algorithm for my case study, I generate an SMC2 sampler object on the states and use it to run the SMC algorithm on the parameters, in general just using the standard options as described in the accompanying textbook.

The reports generated in the SMC run give me summaries on the inferred parameters, but I'm unable to find any information (particles, weights, moments, etc.) on the inferred states. Is there any way that I can recover this information?

SSM for SIR-Model

Dear all,

My goal is to do parameter inference for the SIR-model, which I described with the following SDE system:

$$ \begin{cases} dS(t) = -\alpha S(t)I(t)dt+\frac{1}{\sqrt{100}}\sqrt{\alpha S(t)I(t)}dB_1(t) \\ dI(t) = \alpha S(t)I(t)-\gamma I(t)dt-\frac{1}{\sqrt{100}}\sqrt{\alpha S(t)I(t)}dB_1(t)+\frac{1}{\sqrt{100}}\sqrt{\gamma I(t)}dB_2(t) \end{cases} $$

For $X=(S,I)^T$ this can be written as

$$ dX(t) = \mu(X(t))dt+\sigma(X(t))dB(t) $$

with

$$ \mu(X)=\left(\begin{array}{c} -\alpha X[1]X[2] \\ \alpha X[1]X[2]-\gamma X[2] \end{array}\right) \text{ and } \sigma(X)=\frac{1}{\sqrt{100}}\left(\begin{array}{cc} \sqrt{\alpha X[1]X[2]} & 0\\ -\sqrt{\alpha X[1]X[2]} & \sqrt{\gamma X[2]} \end{array}\right) $$

The symmetric product of the diffusion coefficient would then be the positive definite matrix

$$ \Sigma(X)=\frac{1}{100}\left(\begin{array}{cc} \alpha X[1]X[2] & -\alpha X[1]X[2]\\ -\alpha X[1]X[2] & \alpha X[1]X[2]+\gamma X[2] \end{array}\right) $$

I use a simple Euler-scheme discretisation to make it accessible as an autoregressive process of order 1, such that I can specify it in terms of distributions.

$$ S_t = S_{t-1}-\beta S_{t-1}I_{t-1}\,dt+\frac{1}{\sqrt{N}}\sqrt{\beta S_{t-1}I_{t-1}}\sqrt{dt}\,Z_1 $$

$$ I_t = I_{t-1}+(\beta S_{t-1}I_{t-1}-\gamma I_{t-1})\,dt-\frac{1}{\sqrt{N}}\sqrt{\beta S_{t-1}I_{t-1}}\sqrt{dt}\,Z_1+\frac{1}{\sqrt{N}}\sqrt{\gamma I_{t-1}}\sqrt{dt}\,Z_2 $$

with $Z_1, Z_2\sim\mathcal{N}(0,1)$ being independent.
Or, in different notation, one can write $X=(S,I)^T$ as being multivariate normally distributed:

$$
X_t\sim \text{MultiNormal}\left(X_{t-1}+\mu(X_{t-1})\,dt,\ \sigma(X_{t-1})\sqrt{dt}\right)
$$

Since particle MCMC methods would be my favourite choice to perform inference, I tried to implement this model as an SSM and ended up with:

def multi_dist(xp, beta, gamma, dt):  # xp means X_{t-1}, defines the BM parts
    d = {'B1': dists.Normal(loc=0, scale=(1/10)*np.sqrt(beta*xp['S']*xp['I']*dt)),
         'B2': dists.Normal(loc=0, scale=(1/10)*np.sqrt(beta*xp['S']*xp['I']*dt)),
         'S': dists.Cond(lambda x: dists.Dirac(xp['S']-beta*xp['S']*xp['I']*dt + x['B1'] )),
         'I': dists.Cond(lambda x: dists.Dirac( xp['I'] + (beta*xp['S']*xp['I']-gamma*xp['I'])*dt-x['B1']+x['B2']))
        }
    return dists.StructDist(d)

class SIRModel(ssm.StateSpaceModel):
    default_params = {'beta': 0.08,
                      'gamma': 0.02,
                      'dt': 0.1,
                      'sigma': 0.01}

    def PX0(self):
        return multi_dist({'S': 0.96, 'I': 0.04}, self.beta, self.gamma, self.dt)
    def PX(self, t, xp):
        return multi_dist(xp, self.beta, self.gamma, self.dt)
    def PY(self, t, xp, x):
        return dists.Normal(loc=x['I'], scale=self.sigma)  # whatever

SIR = SIRModel()
results, observations = SIR.simulate(3000)

This approach worked (at least for the simulation; I did not try the inference yet), but it increases the dimension of the vector $x$ by the number of transitions in the system.
Since my overall goal is to perform inference for multivariate models with a far larger number of compartments and transitions, I would like to avoid this.

So my question is whether there is a smarter way to implement this multivariate process as a state-space model, based on multivariate normal distributions or something else.

Thanks for any advice
Vincent

Error in guided particle filtering for pre-implemented stochastic volatility model.

I am currently trying to implement a guided filter for an SDE model. In order to do so, I am trying to understand better how one needs to set the proposal distribution. So far I understood that it gets as input $xp$, which is an $(N,d)$-shaped array with $N$ being the number of particles and $d$ the dimension of the state space, and that the output should be the distribution of $X_t|X_{t-1}$.
To better understand this I tried to execute the pre-implemented stochastic volatility model and ended up with several questions and one inexplicable error message.
The code I tried to run is:

# standard libraries
from matplotlib import pyplot as plt
import numpy as np
import seaborn as sb
import pandas as pd
import scipy
import scipy.stats as stats
from scipy import linalg

from collections import OrderedDict

# modules from particles
import particles  # core module
from particles import distributions as dists  # where probability distributions are defined
from particles import state_space_models as ssm  # where state-space models are defined
from particles.collectors import Moments
from particles import mcmc
from particles import kalman


class StochVol(ssm.StateSpaceModel):
    r"""Univariate stochastic volatility model.
    .. math::
        X_0 & \sim N(\mu, \sigma^2/(1-\rho^2)) \\
        X_t & = \mu + \rho(X_{t-1}-\mu) + \sigma U_t, \quad U_t\sim N(0,1) \\
        Y_t|X_t=x_t & \sim N(0, e^{x_t}) \\
    """
    default_params = {'mu': -1.02, 'rho': 0.9702, 'sigma': .178}
    # values taken from Pitt & Shephard (1999)

    def sig0(self):
        """std of X_0"""
        return self.sigma / np.sqrt(1. - self.rho**2)

    def PX0(self):
        return dists.Normal(loc=self.mu, scale=self.sig0())

    def EXt(self, xp):
        """compute E[x_t|x_{t-1}]"""
        return (1. - self.rho) * self.mu + self.rho * xp

    def PX(self, t, xp):
        return dists.Normal(loc=self.EXt(xp), scale=self.sigma)

    def PY(self, t, xp, x):
        return dists.Normal(loc=0., scale=np.exp(0.5 * x))

    def _xhat(self, xst, sig, yt):
        return xst + 0.5 * sig**2 * (yt**2 * np.exp(-xst) - 1.)

    def proposal0(self, data):
        # Pitt & Shephard
        return dists.Normal(loc=self._xhat(0., self.sig0(), data[0]),
                            scale=self.sig0())

    def proposal(self, t, xp, data):
        # Pitt & Shephard
        return dists.Normal(loc=self._xhat(self.EXt(xp),
                                           self.sigma, data[t]),
                            scale=self.sigma)

fk_guided = ssm.GuidedPF(ssm=StochVol)

And I got the error:

TypeError                                 Traceback (most recent call last)
/home/vinc777/masterthesis/two_variant_model/particles/guided_filter.ipynb Cell 29 in <module>
----> [1](vscode-notebook-cell://wsl%2Bubuntu/home/vinc777/masterthesis/two_variant_model/particles/guided_filter.ipynb#X40sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0) fk_guided = ssm.GuidedPF(ssm=StochVol)

File ~/src/particles/particles/state_space_models.py:321, in Bootstrap.__init__(self, ssm, data)
    319 self.ssm = ssm
    320 self.data = data
--> 321 self.du = self.ssm.PX0().dim

TypeError: PX0() missing 1 required positional argument: 'self'

This error seems illogical to me and I don't know how to resolve it, since normally `self` is not an argument that needs to be passed.

Now the other questions:

The distribution returned by proposal should just generate one particle $X_t^i$ given its ancestor $X_{t-1}^i$, correct? What I don't understand is where the transition from the $(N,d)$-shaped input $xp$ to the just $d$-dimensional $xp^i$ happens, and how the $i$-th particle is then generated from its ancestor. Does this mean that the mean and cov given to the distribution are $(N,d)$- resp. $(N,d,d)$-shaped, or just $d$- or $d\times d$-dimensional?

Thanks!
