delfi's People

Contributors

dgreenberg, jajcayn, jan-matthis, janfb, kaandocal, mnonnenm, ppjgoncalves

delfi's Issues

Saving/restoring posteriors

Hi all, I'm playing with delfi as an alternative to my old GA methods. I usually run an optimization overnight on the cluster/external nodes (lots of cores and powerful GPUs), then download the results and run the best fit locally.

I understand that I can take the posteriors[-1] object, run a sample generator, and save samples. However, is it possible to serialize all parameters used in the generator (ideally in JSON format) and reconstruct the generator somewhere else?

It seems I can save the inference object and reconstruct it on a local computer using delfi.io, but I cannot find a way to get back posteriors from it without calling the run() function again.

Could you please provide an example of how to save and restore the result of delfi's inference: posteriors?
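One way to do this, sketched under the assumption that the final posterior is a delfi MoG (with mixture weights a and Gaussian components xs, as in delfi's GaussianMixture class); a MAF-based posterior would instead require saving the network weights:

import json
import numpy as np
import delfi.distribution as dd

# save: extract the mixture parameters and write them as JSON
post = posteriors[-1]
spec = {'a': post.a.tolist(),
        'ms': [x.m.tolist() for x in post.xs],
        'Ss': [x.S.tolist() for x in post.xs]}
with open('posterior.json', 'w') as f:
    json.dump(spec, f)

# restore: rebuild the MoG elsewhere from the JSON file
with open('posterior.json') as f:
    spec = json.load(f)
post = dd.MoG(a=np.asarray(spec['a']),
              ms=[np.asarray(m) for m in spec['ms']],
              Ss=[np.asarray(S) for S in spec['Ss']])
samples = post.gen(1000)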

Using uniform priors in a MoG results in an error

I want to use SNPE on a more involved problem, and as a first step I constructed this toy problem of fitting a multidimensional Gaussian.
If I use a Gaussian prior everything works quite well, but when I switch to a uniform prior, defined via the MixedDistribution class, I get an error message at the beginning of training in round 2.


ValueError Traceback (most recent call last)
in ()
54 # define and run SNPE
55 inf_snpe = SNPE(generator=g, n_components=1, n_hiddens=[10], obs=xo, pilot_samples=100)
---> 56 logs, tds, posteriors = inf_snpe.run(n_train=[1000, 1000], n_rounds=10, stop_on_nan=True)
57 posterior = posteriors[-1]

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/inference/SNPE.py in run(self, n_train, n_rounds, epochs, minibatch, round_cl, stop_on_nan, proposal, monitor, **kwargs)
180 verbose = '(round {}) '.format(self.round) if self.verbose else False
181
--> 182 trn_data = self.gen(n_train_round, prior_mixin=self.prior_mixin, verbose=verbose)
183 n_train_round = trn_data[0].shape[0]
184

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/inference/BaseInference.py in gen(self, n_samples, n_reps, prior_mixin, verbose)
108 """
109 verbose = self.verbose if verbose is None else verbose
--> 110 params, stats = self.generator.gen(n_samples, prior_mixin=prior_mixin, verbose=verbose)
111
112 # z-transform params and stats

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/generator/BaseGenerator.py in gen(self, n_samples, n_reps, skip_feedback, prior_mixin, minibatch, keep_data, verbose)
106 skip_feedback=skip_feedback,
107 prior_mixin=prior_mixin,
--> 108 verbose = verbose)
109
110 # Run forward model for params (in batches)

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/generator/BaseGenerator.py in draw_params(self, n_samples, skip_feedback, prior_mixin, verbose)
52 proposed_param = self.prior.gen(n_samples=1) # dim params,
53 else:
---> 54 proposed_param = self.proposal.gen(n_samples=1)
55
56 # check if parameter vector is valid

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/distribution/mixture/StudentsTMixture.py in gen(self, n_samples)
64
65 ns = [np.sum((ii == i).astype(int)) for i in range(self.n_components)]
---> 66 samples = [x.gen(n) for x, n in zip(self.xs, ns)]
67 samples = np.concatenate(samples, axis=0)
68 self.rng.shuffle(samples)

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/distribution/mixture/StudentsTMixture.py in (.0)
64
65 ns = [np.sum((ii == i).astype(int)) for i in range(self.n_components)]
---> 66 samples = [x.gen(n) for x, n in zip(self.xs, ns)]
67 samples = np.concatenate(samples, axis=0)
68 self.rng.shuffle(samples)

~/Documents/Ribbon_PR_project/Fitting_New/fitting_git/DELFI/DELFI_package/delfi-master/delfi/distribution/StudentsT.py in gen(self, n_samples)
65 u = self.rng.chisquare(self.dof, n_samples) / self.dof
66 y = self.rng.multivariate_normal(np.zeros(self.ndim),
---> 67 self.S, (n_samples,))
68 return self.m + y / np.sqrt(u)[:, None]

mtrand.pyx in mtrand.RandomState.multivariate_normal()

/usr/local/lib/python3.5/dist-packages/scipy/linalg/decomp_svd.py in svd(a, full_matrices, compute_uv, overwrite_a, check_finite, lapack_driver)
107
108 """
--> 109 a1 = _asarray_validated(a, check_finite=check_finite)
110 if len(a1.shape) != 2:
111 raise ValueError('expected matrix')

/usr/local/lib/python3.5/dist-packages/scipy/_lib/_util.py in _asarray_validated(a, check_finite, sparse_ok, objects_ok, mask_ok, as_inexact)
236 raise ValueError('masked arrays are not supported')
237 toarray = np.asarray_chkfinite if check_finite else np.asarray
--> 238 a = toarray(a)
239 if not objects_ok:
240 if a.dtype is np.dtype('O'):

/usr/local/lib/python3.5/dist-packages/numpy/lib/function_base.py in asarray_chkfinite(a, dtype, order)
459 if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
460 raise ValueError(
--> 461 "array must not contain infs or NaNs")
462 return a
463

ValueError: array must not contain infs or NaNs

I guess it is because of NaNs in the covariance matrix of the MoG I want to fit.
But am I doing something wrong? Or is this due to another problem I am not aware of?

Here is my code to reproduce the error:


import numpy as np
import scipy as scp
import scipy.stats  # makes scp.stats available below

import delfi
import delfi.distribution as dd
from delfi.inference import SNPE
from delfi.simulator.BaseSimulator import BaseSimulator
from delfi.summarystats import Identity
from delfi.generator import Default
from delfi.distribution import MixedDistribution


#  define simulator

class MultiDGauss(BaseSimulator):
    """
    draw a random number from a multidim gaussian with Id Cov matrix
    ----------
    params: 1dim array, mean of the distribution marginals
    seed :  int or None
            If set, randomness is seeded
    """
    
    def __init__(self, dim=1, seed=None):   
        super().__init__(dim_param=dim, seed=seed)
        self.dim = dim
        
        
    def gen_single(self, params):
        """
        draw one sample
        ------
        params : 1 dim array
        """        
        params = np.asarray(params).reshape(-1)
        assert params.ndim == 1
        assert params.shape[0] == self.dim_param

        sample = scp.stats.norm.rvs(loc=params, size=self.dim)  # one draw per dimension

        return {'data': sample.reshape(-1)}

# define model
dimensions = 2
m = MultiDGauss(dim=dimensions)

# define prior
pgauss = delfi.distribution.Gaussian(m=np.zeros(dimensions), S=np.eye(dimensions) * 5)
puniform = MixedDistribution([dd.Uniform(lower=[-5], upper=[5])] * dimensions)


# define summary statistics
s = Identity()

# initialize 
# choose prior
prior = puniform  # or: pgauss
g = Default(model=m, prior=prior, summary=s)

# 'observed' value
xo = np.array([[-1,2]])

# check if data has right dimension
if np.shape(xo)[1] != dimensions:
    print('observed data does not match the dimension of parameters.')
else:
    # define and run SNPE
    inf_snpe = SNPE(generator=g, n_components=1, n_hiddens=[10], obs=xo, pilot_samples=100)
    logs, tds, posteriors = inf_snpe.run(n_train=[1000, 1000], n_rounds=10, stop_on_nan=True)
    posterior = posteriors[-1]

observation dimensions

Hi,
I was wondering whether you have considered amending the code to allow inference with multiple observations, or whether I am missing how to do this. Thank you for your help!

As a simple example, I'm simulating taking a sample from a normal population with mean mu and standard deviation sigma.
The sample has summary statistics mean xbar and standard deviation s.
The sample size is small (n=10), so we would benefit from the observed data being able to include multiple samples. When I try to make the observed data consist of multiple samples, I get this error:


AssertionError Traceback (most recent call last)
in ()
10 n_mades=n_mades,
11 prior_norm=prior_norm,
---> 12 density=density)
13 # train
14 log, _, posterior = res.run(

~/delfi/delfi/inference/APT.py in __init__(self, generator, obs, prior_norm, pilot_samples, reg_lambda, seed, verbose, add_prior_precision, Ptol, **kwargs)
67 if self.obs.ndim == 1:
68 self.obs = self.obs.reshape(1, -1)
---> 69 assert self.obs.shape[0] == 1
70
71 if np.any(np.isnan(self.obs)):

AssertionError:

Here is the code I used to generate this:

import numpy as np

def SampleSimulator(parameters, N=1e6, seed=None):
    """ Sample simulator
    Simulates taking a sample of size 10 from population of size N with mean mu and standard dev sigma 
    Returns sample as 1d np.array of length 10

    Parameters
    ------------------- 
    mu : float
        pop mean  
    sigma : float 
        pop standard deviation   
    N : int
        population size 
    seed : int
    """
    if seed is not None:
        np.random.seed(seed=seed)
    else:
        np.random.seed()
    
    # generate population
    mu, sigma = parameters
    N = np.uint64(N)
    population = np.random.normal(mu, sigma, N)
    
    # take a random sample of size 10 from the population without replacement
    sam = np.random.choice(population, 10, replace=False)

    return sam

from delfi.simulator.BaseSimulator import BaseSimulator

class Sampler(BaseSimulator):
    def __init__(self, N, seed=None):
        """ Sample simulator
        Simulates taking a sample of size 10 from population of size N with mean mu and standard dev sigma 
        Returns sample as 1d np.array of length 10
    
        Parameters
        -------------------
        N : int
            population size  
        seed : int or None
            If set, randomness across runs is disabled
        """
        dim_param = 2

        super().__init__(dim_param=dim_param, seed=seed)
        self.N = N
        self.SampleSimulator = SampleSimulator

    def gen_single(self, param_set):
        """Forward model for simulator for single parameter set

        Parameters
        ----------
        params : list or np.array, 1d of length dim_param
            Parameter vector

        Returns
        -------
        dict : dictionary with data
            The dictionary must contain a key data that contains the results of
            the forward run. Additional entries can be present.
        """
        params = np.asarray(param_set)

        assert params.ndim == 1, 'params.ndim must be 1'

        sim_seed = self.gen_newseed()
        states = self.SampleSimulator(params, self.N, seed=sim_seed)
        
        return {'data': states,
                'N': self.N}

## priors ##
import delfi.distribution as dd

seed_p = 2
prior_min = np.array([0,0])
prior_max = np.array([40,8])
prior = dd.Uniform(lower=prior_min, upper=prior_max,seed=seed_p)

## summary stats ##
from delfi.summarystats.BaseSummaryStats import BaseSummaryStats
from scipy import stats as spstats

class SampleStats(BaseSummaryStats):
    """SummaryStats class for the sample from a normal distribution

    Calculates summary statistics
    """
    def __init__(self, n_summary=2, seed=None):
        """See SummaryStats.py for docstring"""
        super(SampleStats, self).__init__(seed=seed)
        self.n_summary = n_summary

    def calc(self, repetition_list):
        """Calculate summary statistics

        Parameters
        ----------
        repetition_list : list of dictionaries, one per repetition
            data list, returned by `gen` method of Simulator instance

        Returns
        -------
        np.array, 2d with n_reps x n_summary
        """
        stats = []
        for r in range(len(repetition_list)):
            samp = np.transpose(repetition_list[r]['data'])
            xbar = np.mean(samp)
            s = np.std(samp)
            ss = np.array([xbar, s])
            stats.append(ss)

        return np.asarray(stats)

## generator ##
import delfi.generator as dg

N = 1e6

# summary statistics hyperparameters
n_summary = 2

seed_m = 3
m = Sampler(N, seed=seed_m)
s = SampleStats(n_summary = n_summary)
g = dg.Default(model=m, prior=prior, summary=s)

## true parameters and respective labels ##
true_params = np.array([28, 2.9])       
labels_params = ['mu', 'sigma']

# observed data: simulation given true parameters
#### this is the multiple observations #####
obs = []
for i in range(0,25):
    obs.append(m.gen_single(true_params))

## summary stats for >1 observation ##
obs_stats = s.calc(obs)

## Hyperparameters ##
seed_inf = 1
pilot_samples = 2000
# training schedule
n_train = 2000
n_rounds = 1
# fitting setup
minibatch = 256
epochs = 100
val_frac = 0.05
# network setup
n_hiddens = [50,50]
# convenience
prior_norm = True
# MAF parameters
density = 'maf'
n_mades = 5

## Inference ##
import delfi.inference as infer

# inference object
res = infer.APT(g,
                  obs=obs_stats,
                  n_hiddens=n_hiddens,
                  seed=seed_inf,
                  pilot_samples=pilot_samples,
                  n_mades=n_mades,
                  prior_norm=prior_norm,
                  density=density)
# train
log, _, posterior = res.run(
                    n_train=n_train,
                    n_rounds=n_rounds,
                    minibatch=minibatch,
                    epochs=epochs,
                    silent_fail=False,
                    proposal='prior',
                    val_frac=val_frac,
                    verbose=True,)
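One workaround, sketched under the assumption that pooling the samples is statistically acceptable here (it is not a built-in delfi feature): collapse the 25 per-sample summary rows into a single observation, so that APT's obs.shape[0] == 1 assertion holds. For consistency, the simulator would then have to generate 25 samples per parameter draw and pool its summary statistics the same way.

# hypothetical pooling of the 25 summary-statistic rows into one observation
obs_stats_pooled = np.mean(obs_stats, axis=0, keepdims=True)  # shape (1, n_summary)

# then pass obs=obs_stats_pooled to infer.APT in place of obs_stats above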

Class inheritance

Hi
I'm wondering why you require class inheritance for the generator etc. You could just require that the generator object has a gen_single() method that returns a dictionary, and so on. In Python, this is usually the preferred way to implement what other languages use interfaces for, and it would make implementing generators a bit easier. You can always check that a generator has the desired attribute using hasattr.

BTW I also don't see that delfi.generator.Default.Default has gen_single.
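The duck-typing alternative suggested above can be sketched in a few lines (a hypothetical helper, not part of delfi):

def validate_simulator(sim):
    # duck typing: accept any object exposing a callable gen_single(),
    # instead of requiring it to subclass BaseSimulator
    if not callable(getattr(sim, 'gen_single', None)):
        raise TypeError('simulator must provide a gen_single(params) method')
    return sim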

[Code Quality] Incomplete Repo information

Your repo does not comply with the standards we defined in the lab.
Make sure for your repo to have:

  • a description including the github handle of the owner
  • a > 3 line README.md

If you don't update your repo, it will be disabled and then archived.

Theano type conversion error in CDELFI

Thank you for this excellent implementation of Papamakarios and Murray's paper on likelihood-free inference. I'm having an issue running the code with more than one component, for example:

from delfi.inference import Basic, CDELFI, SNPE
inf_basic = CDELFI(generator=g, obs=x_test.reshape(1,-1), n_components=2, n_hiddens=[24], svi=False)
log, train_data, _ = inf_basic.run(n_train=1000, epochs=200)

Returns:
TypeError: ('GpuArrayType<None>(float32, matrix) cannot store a value of dtype float64 without risking loss of precision.', 'Container name "None"')

This does not happen for only one component in the mixture. Any ideas?

Thanks!
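A plausible direction rather than a confirmed fix: the message suggests float64 values being written into float32 GPU arrays, which commonly traces back to Theano's floatX setting or to float64 inputs. A sketch of two things worth trying (x_test is the array from the snippet above):

import numpy as np

# 1) cast the observation (and training data) to float32 to match the GPU storage
x_test = x_test.astype(np.float32)

# 2) pin Theano to float32 before it is imported, e.g. in ~/.theanorc:
# [global]
# floatX = float32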

Broken pipe error while running the tutorial

I am using the Windows version, and while running the Hodgkin-Huxley example from the tutorial with Spyder (the exact same code as the tutorial) I receive this error:


Traceback (most recent call last):

File "E:\courses\mini-project\HH.py", line 441, in
density=density)

File "c:\users\mosi\delfi\delfi\inference\APT.py", line 65, in init
verbose=verbose, **kwargs) # initializes network

File "c:\users\mosi\delfi\delfi\inference\BaseInference.py", line 61, in init
params, stats = generator.gen(1, skip_feedback=True, verbose=False)

File "c:\users\mosi\delfi\delfi\generator\MPGenerator.py", line 197, in gen
return self.run_model(params, skip_feedback=skip_feedback, verbose=verbose, **kwargs)

File "c:\users\mosi\delfi\delfi\generator\MPGenerator.py", line 210, in run_model
self.start_workers()

File "c:\users\mosi\delfi\delfi\generator\MPGenerator.py", line 134, in start_workers
w.start()

File "C:\Users\mosi\anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)

File "C:\Users\mosi\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)

File "C:\Users\mosi\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)

File "C:\Users\mosi\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)

File "C:\Users\mosi\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)

BrokenPipeError: [Errno 32] Broken pipe


I don't know exactly what the problem is, but I guess it may be related to my CPU or to Python's multiprocessing implementation on Windows.

I tried limiting the number of threads to one with the following code, but that didn't solve the problem either:


import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["VECLIB_MAXIMUM_THREADS"] = "1"


Another thing I tried was reducing the number of generator processes to 1 (my machine has 2 cores), which didn't work either, but added one more thing to the error output:


Exception ignored in: <function MPGenerator.__del__ at 0x000001B6432E4828>
Traceback (most recent call last):
File "c:\users\mosi\delfi\delfi\generator\MPGenerator.py", line 299, in __del__
self.stop_workers()
File "c:\users\mosi\delfi\delfi\generator\MPGenerator.py", line 139, in stop_workers
if self.workers is None:
AttributeError: 'MPGenerator' object has no attribute 'workers'


I would be grateful if anyone could share ideas about this issue.
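A likely culprit rather than a confirmed diagnosis: on Windows, multiprocessing starts workers with the "spawn" method, which re-imports the main script in every child process; without an entry-point guard, that re-execution breaks the pipe while the parent is pickling the worker (exactly the frames in the traceback above). A minimal sketch, with main as a hypothetical wrapper for the tutorial code:

import multiprocessing

def main():
    # put the entire Hodgkin-Huxley tutorial script inside this function
    pass

if __name__ == '__main__':
    multiprocessing.freeze_support()  # no-op except for frozen executables
    main()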

Two moons - discrepancy with APT/SNPE-C paper

theta[i, 0] = c * q[0] - s * q[1]

Inverting equation (26) in the paper "Automatic Posterior Transformation for Likelihood-free Inference", we get:

p_0 - \frac{|\theta_0 + \theta_1|}{\sqrt{2}} = x_0
p_1 + \frac{-\theta_0 + \theta_1}{\sqrt{2}} = x_1

Moving sides and defining the right-hand sides as q's:

\frac{|\theta_0 + \theta_1|}{\sqrt{2}} = p_0 - x_0 =: q_0
\frac{-\theta_0 + \theta_1}{\sqrt{2}} = x_1 - p_1 =: q_1
\frac{\theta_0 + \theta_1}{\sqrt{2}} = \pm q_0

Subtracting/adding to cancel the variables, we get:

\sqrt{2}\,\theta_1 = \pm q_0 + q_1
\sqrt{2}\,\theta_0 = \pm q_0 - q_1

Now c is simply 1/\sqrt{2} and s is -1/\sqrt{2}, so it comes out that the code is putting \theta_1 into theta[i, 0] and -\theta_0 into theta[i, 1].
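A quick numerical check of this inversion (a sketch; here p stands in for the (r, a)-dependent term of eq. (26)):

import numpy as np

theta = np.array([0.3, -0.7])
p = np.array([0.1, 0.05])

# forward model of eq. (26)
x0 = p[0] - np.abs(theta[0] + theta[1]) / np.sqrt(2)
x1 = p[1] + (-theta[0] + theta[1]) / np.sqrt(2)

# inversion as derived above
q0, q1 = p[0] - x0, x1 - p[1]
s0 = np.sign(theta[0] + theta[1]) * q0       # resolve the +/- sign of q0
theta0 = (s0 - q1) / np.sqrt(2)
theta1 = (s0 + q1) / np.sqrt(2)
assert np.allclose([theta0, theta1], theta)  # the indices land as derived, not as in the code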

Scaling of input and output to the inference algorithm

Hi, thanks for providing this nicely structured inference module.

I would like to use the package to infer parameters of an inter-spike-interval (ISI) distribution. When I do inference on a single parameter of the distribution, I get a nice posterior for a parameter that varies between 0 and 1, but for another parameter that varies over a larger range, 200 to 2000, the posterior is located at far smaller values.

Do you have an idea why this might happen? Could this be a scaling issue, i.e. some internal scaling to keep values in a certain range?

Below is a short overview of the code snippets used. Basically, when I mask all but the w parameter, inference seems to work nicely; when doing the same for tau_e, inference goes wrong.

Simulator

# Full ISI distribution with four parameters
number_of_params = 4
duration = 40000
isi_simulator = DistributionBasedISIGenerator(number_of_params, MixedLimitCylceAndFixPointISI, duration)

# Mask all but one parameter, either w or tau_e (here tau_e)
tau_lc = 100 # fixed param
sigma_lc = 10 # fixed param
tau_e = 250 # parameter between 200 and 400 
w = 0.5 # parameter between 0 and 1

masked_isi_simulator = MaskedSimulator(sim=isi_simulator, mask=np.array([False, False, False, True]), obs=np.array([w, tau_lc, sigma_lc, tau_e]))

Prior

prior = Uniform(lower=[200], upper=[400])
# or Uniform(lower=[0], upper=[1]) when masking all but w

Summary statistics

import numpy as np
from scipy import stats

from summary_stats.isi_stats import ISIStats
our_isi_summary_stats = [np.mean, stats.variation]
s = ISIStats(our_isi_summary_stats, input_is_spike_train=False)

Generator

from delfi.generator import Default
g = Default(masked_isi_simulator, prior, s)

The generated data looks fine in both cases; that is, the input is sampled from the given prior and the output lies in the expected range.

params, isi_stats = g.gen(1000)

Basic inference

from delfi.inference import Basic
inf = Basic(generator=g, n_components=1, n_hiddens=[10])

I then train the network with about 1000 samples and evaluate it on a data point. When masking all but w, I use w=0.5 and get the expected Gaussian centered close to 0.5. When masking all but tau_e, I feed in tau_e = 300 and get a Gaussian that is far off, centered at 0.
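One way to test the scaling hypothesis (a sketch with a hypothetical wrapper class; delfi's Default generator may insist on BaseSimulator subclasses): run inference on a unit-range parameter and undo the scaling inside the simulator, so the network only ever sees values of order one.

lo, hi = 200.0, 400.0  # tau_e range

class UnitScaledSimulator:
    """Expose a simulator whose parameter is rescaled to [0, 1]."""
    def __init__(self, sim):
        self.sim = sim
        self.dim_param = sim.dim_param

    def gen_single(self, params):
        # map the unit-range parameter back to [lo, hi] before simulating
        return self.sim.gen_single(lo + (hi - lo) * np.asarray(params))

g = Default(UnitScaledSimulator(masked_isi_simulator),
            Uniform(lower=[0.0], upper=[1.0]), s)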

Is a low sampling acceptance rate normal when sampling with a CMAF density and atomic proposals?

Hi, thanks in advance for your attention. I ran into a new problem when simulating the HH model with 7 parameters. During the 2nd round, drawing parameters was quite slow because the majority of the sampled parameters were out of range. Is this normal, or do you have any suggestions to solve the problem?

Also, the problem occurs in other models as well. Looking forward to your suggestions.
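One knob visible in delfi's own call chain (the tracebacks elsewhere on this page show BaseInference.gen passing a prior_mixin argument through to the generator) is to draw some fraction of each round's parameters directly from the prior, which by construction cannot fall out of range. A sketch, assuming the attribute is settable on the inference object:

inf.prior_mixin = 0.2  # hypothetical usage: ~20% of draws come from the prior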

Getting-started example gives a runtime error because of MKL version 2018

Hey,
I tried the example from the "Getting started" page and got the following runtime error:

home-path/.local/lib/python3.6/site-packages/theano/configdefaults.py in check_mkl_openmp()
   1250         import mkl
   1251         if '2018' in mkl.get_version_string():
-> 1252             raise RuntimeError('To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environement.')
   1253     except ImportError:
   1254         raise RuntimeError("""

RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environement.

when running the following code:

from delfi.inference import Basic

inf_basic = Basic(generator=g, n_components=2, n_hiddens=[10])

which I think is related to this Theano issue: Theano/Theano#6568

Downgrading to MKL 2017 via "conda install mkl=2017" solved the issue for me, so maybe you want to put that into your dependencies?
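Alternatively, the fix named in the error message itself works without downgrading, provided the variable is set before Theano is first imported:

import os
os.environ["MKL_THREADING_LAYER"] = "GNU"  # must run before `import theano`

from delfi.inference import Basic  # imports theano under the hood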

Cheers,
Andrej

Mixture weights are NaNs for MoGs

Hi,

When I use a mixture of Gaussians with at least 2 components and set svi=True in SNPE, the mixture weights a sometimes become NaNs. I assume this happens in the softmax function, but I'm not sure. The error is reproducible and does not happen (with the same data) if I have only 1 component. It also does not happen when I set svi=False.

Best,
Jonathan

Here is the full error message:

~/berens/python_libs/delfi/delfi/inference/SNPE.py in run(self, n_train, n_rounds, epochs, minibatch, round_cl, stop_on_nan, proposal, text_verbose, monitor, load_trn_data, save_trn_data, append_trn_data, init_trn_data_folder, **kwargs)
    247             if text_verbose: print('Done!')
    248             try:
--> 249                 posteriors.append(self.predict(self.obs))
    250             except np.linalg.LinAlgError:
    251                 posteriors.append(None)

~/berens/python_libs/delfi/delfi/inference/BaseInference.py in predict(self, x, deterministic)
    350         """
    351         x_zt = (x - self.stats_mean) / self.stats_std
--> 352         posterior = self.network.get_mog(x_zt, deterministic=deterministic)
    353         return posterior.ztrans_inv(self.params_mean, self.params_std)
    354 

~/berens/python_libs/delfi/delfi/neuralnet/NeuralNet.py in get_mog(self, stats, deterministic)
    300         Us = [comps['U' + str(i)][0] for i in range(self.n_components)]
    301 
--> 302         return dd.MoG(a=a, ms=ms, Us=Us, seed=self.gen_newseed())
    303 
    304     def reseed(self, seed):

~/berens/python_libs/delfi/delfi/distribution/mixture/GaussianMixture.py in __init__(self, a, ms, Ps, Us, Ss, xs, seed)
     50                 ndim=np.asarray(
     51                     ms[0]).ndim,
---> 52                 seed=seed)
     53 
     54             if Ps is not None:

~/berens/python_libs/delfi/delfi/distribution/mixture/BaseMixture.py in __init__(self, a, ncomp, ndim, seed)
     36             self.rng = np.random.RandomState()
     37 
---> 38         self.discrete_sample = Discrete(p=self.a, seed=self.gen_newseed())
     39 
     40     @abc.abstractmethod

~/berens/python_libs/delfi/delfi/distribution/Discrete.py in __init__(self, p, seed)
     19         p = np.asarray(p)
     20         assert p.ndim == 1, 'p must be a 1-d array'
---> 21         assert np.isclose(np.sum(p), 1), 'p must sum to 1 but sum is ' + str(np.sum(p)) + ' with elements ' + str(p)
     22         self.p = p
     23 

AssertionError: p must sum to 1 but sum is nan with elements [nan nan nan]

Questions / enhancement proposals

Dear developers,

I started playing around with delfi after I heard a wonderful talk by Jacob Macke on neuromatch. I ran the tutorial and went over the code and I have a couple of questions/enhancement proposals:

  • I'm not sure what your plans are for the future of delfi, but as you probably already know, Theano is deprecated, so for future maintenance you should switch computational backends. Do you have any plans to do that? I have worked with MDNs in TensorFlow 2+ and they are very easy to implement; also, tensorflow-probability implements masked autoregressive flows, which should be directly usable. I mention this because even today I had problems installing Theano on both macOS and Ubuntu, so it is starting to become a problem.

  • when I try to plot, I get errors because matplotlib renamed normed=True in histograms to density=True. I was able to fix this in my local install (see the one-line sketch after this list); would you like me to open a PR?

  • it would be really helpful to have an option to save the posteriors and/or the trained weights of the network. In many cases I would like to train the net to get the posterior, but then be able to come back to my posteriors for plotting, or to the saved state of the net in order to train it a bit more.

  • I wanted to plot my posterior using viz.plot_pdf after using SNPE-C with MAF, and although the MAFconditional object resembles a distribution thanks to its gen and eval methods, the eval method has a different calling signature and thus cannot be plotted with plot_pdf. Is this expected?
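The local fix mentioned in the second bullet is essentially a one-line rename in delfi's plotting code (a self-contained sketch; the exact call site in delfi is not shown here):

import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, just for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.hist(np.random.randn(1000), bins=30, density=True)  # was: normed=True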

Thanks a lot,
best regards!

N.
