
celerite's Introduction

This project has been superseded by celerite2. It will continue to be maintained at a basic level, but no new features will be added, and since I have limited capacity you're encouraged to check out the new version.

celerite: Scalable 1D Gaussian Processes in C++, Python, and Julia

Read the documentation at: celerite.rtfd.io.


The Julia implementation is being developed in a different repository: ericagol/celerite.jl. Issues related to that implementation should be opened there.

If you make use of this code, please cite the following paper:

@article{celerite,
    author = {{Foreman-Mackey}, D. and {Agol}, E. and {Angus}, R. and
              {Ambikasaran}, S.},
     title = {Fast and scalable Gaussian process modeling
              with applications to astronomical time series},
      year = {2017},
   journal = {AJ},
    volume = {154},
     pages = {220},
       doi = {10.3847/1538-3881/aa9332},
       url = {https://arxiv.org/abs/1703.09710}
}

celerite's People

Contributors

andycasey, dependabot[bot], dfm, eggplantbren, ericagol, farr, guillochon, iancze, phaustin, ruthangus, vandalt


celerite's Issues

Time scaling with J for Julia version of celerite

I looked at the J-scaling of the Julia version, and the NR band-diagonal solver scales like J^3 for large J; clearly not optimal! The plot below is for N = 1024 data points and can be reproduced with time_julia_nexp.jl.

[figure: NR solver time vs. J]

We need to look into optimized band-diagonal solvers.
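For reference, banded LU via LAPACK costs O(N J^2) for bandwidth J, so an optimized band-diagonal solver should beat the J^3 scaling above. A minimal sketch with SciPy's solve_banded (the system here is random, purely for illustration):

```python
import numpy as np
from scipy.linalg import solve_banded

# Sketch: solve a diagonally dominant banded system with LAPACK's banded LU
# (dgbsv), whose cost is O(N * J^2) for bandwidth J.
N, J = 1024, 4
rng = np.random.default_rng(0)
ab = rng.standard_normal((2 * J + 1, N))  # banded storage: J sub- and J super-diagonals
ab[J] += 10.0                             # strengthen the diagonal for conditioning
b = rng.standard_normal(N)
x = solve_banded((J, J), ab, b)
```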

Comments from Megan Bedell

  • Discuss smaller uncertainty on the rotation period
  • Vet for missing commas. Sentences with "but"s
  • IPython notebook for simulated data examples
  • Respond to email

Changes to make in the appendix

1). Equation (A2) needs to have (a_j - ib_j) multiplying the second exponential.
2). In the second sentence of the 5th paragraph ("We introduced complex auxiliary vectors..."), define r_n = 1/2(u_n - iv_n) to get rid of the factors of 1/2 in equations (A3), (A6) & (A7) (and delete the 1/2 factors in those three).
3). The indexing of r_n (and hence u_n, v_n) needs to be decreased by one to start at one rather than two, so after the definition of r_n = 1/2(u_n - iv_n) the index n should be over {2, ..., N-1} and r_N = 0. Likewise, equations (A6) & (A7) should be for n from 1 to N-1. I'm unclear on whether the indices in the equations should be increased by one, but I think that they should.
4). The diagonal term d in equation (A3) should probably be renamed since we are already using d_j to represent the imaginary component of the damping coefficient (i.e. the oscillation frequency).
5). The definition of y_{ex} needs to get rid of the factors of 1/2.

Consider definitions of S_0

See #43.

The peak amplitude of the SHO model is currently a function of Q. This introduces covariances between Q and S_0 so it might be better to re-define as:

S_0 -> S_0 * (4*Q^2 - 1) / (4*Q^4)

or simply

S_0 -> S_0 / Q^2
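A quick numerical check of the first rescaling, assuming the SHO power spectrum S(w) = sqrt(2/pi) S_0 w_0^4 / ((w^2 - w_0^2)^2 + w_0^2 w^2 / Q^2): with S_0 -> S_0 (4Q^2 - 1)/(4Q^4), the peak amplitude becomes independent of Q.

```python
import numpy as np

# The SHO PSD peaks at sqrt(2/pi) * S0 * 4Q^4/(4Q^2 - 1), so rescaling
# S0 -> S0 * (4Q^2 - 1)/(4Q^4) removes the Q-dependence of the peak.
def sho_psd(w, S0, w0, Q):
    return np.sqrt(2 / np.pi) * S0 * w0**4 / ((w**2 - w0**2)**2 + w0**2 * w**2 / Q**2)

w = np.linspace(0.01, 10.0, 200000)
w0, S0 = 1.0, 1.0
peaks = []
for Q in [1.0, 3.0, 10.0]:
    S0_resc = S0 * (4 * Q**2 - 1) / (4 * Q**4)
    peaks.append(sho_psd(w, S0_resc, w0, Q).max())
# each entry of `peaks` is sqrt(2/pi) * S0, independent of Q
```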

Quality factor

@ericagol: you mentioned that the Q-factor should be dimensionless. I think it should actually be d/c or d/2c. This has the right units and it is (omega_0) / (delta omega), as it should be!
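A sketch of the claim, assuming the usual SHO characteristic roots s = -omega_0/(2Q) +/- i*omega_0*sqrt(1 - 1/(4Q^2)), so that c = omega_0/(2Q) and d = omega_0*sqrt(1 - 1/(4Q^2)) for Q > 1/2:

```python
import numpy as np

# Then d/(2c) = sqrt(Q^2 - 1/4): dimensionless, and -> Q for large Q.
w0 = 2.7
for Q in [1.0, 5.0, 50.0]:
    c = w0 / (2 * Q)
    d = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    ratio = d / (2 * c)  # = sqrt(Q^2 - 1/4), independent of w0
    assert np.isclose(ratio, np.sqrt(Q**2 - 0.25))
```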

Reference(s) to add

http://arxiv.org/abs/1608.01549 x

Asteroseismology with TESS:
http://arxiv.org/abs/1608.01138 added

http://arxiv.org/abs/cond-mat/0504025 x

http://arxiv.org/abs/0709.0262 x

http://www.pnas.org/content/110/43/17259.full x

Activity correction in RV with GPs:
http://mnras.oxfordjournals.org/content/452/3/2269.short added

RV analysis of Corot-7 with GPs:
http://adsabs.harvard.edu/abs/2014MNRAS.443.2517H added

http://arxiv.org/abs/1609.06680 x

http://arxiv.org/abs/1609.06129 x

Flickering in a CV modeled with GPs:
http://arxiv.org/abs/1609.06143 added

GPs used to regress flux against time, x- & y- centroid:
http://adsabs.harvard.edu/abs/2016MNRAS.459.2408A added

Asteroseismology made easy (AME):
https://arxiv.org/abs/1404.2099 x

Variability of red giants:
https://arxiv.org/abs/1610.08688 x

Modeling stellar granulation:
http://adsabs.harvard.edu/abs/2014A&A...570A..41K x

Asteroseismology with Kepler:
https://arxiv.org/abs/1611.08776 x

Gravitational waves:
https://arxiv.org/abs/1509.04066

Kepler companion stars:
https://arxiv.org/abs/1612.02392 x

Damped, stochastically-driven harmonic oscillator model for stellar variability:
http://adsabs.harvard.edu/abs/1990ApJ...364..699A added

Solar oscillations viewed in reflection from Neptune:
http://iopscience.iop.org/article/10.3847/2041-8213/833/1/L13/pdf x

GPs used to model Proxima Cen b RVs:
https://arxiv.org/abs/1612.03786 x

LSST variables: quasars, Miras, modeled with DRW/QPO:
https://arxiv.org/abs/1612.04834 added

Time series analysis:
https://books.google.com/books?isbn=0898719240 x
https://books.google.com/books?isbn=3319298542 x

Asteroseismic modeling of full Kepler dataset:
https://arxiv.org/abs/1612.08990 x

Press (1978) discussion of 'flicker' noise (aka 1/f, or pink noise):
http://adsabs.harvard.edu/abs/1978ComAp...7..103P x

Asteroseismology in binaries:
https://arxiv.org/abs/1612.09408 x

Non-linear chaos in variable stars:
http://www.sciencedirect.com/science/article/pii/S0167278915300269?np=y x

Bayesian inference of asteroseismology - DIAMONDS:
https://arxiv.org/abs/1408.2515 added

Relation between granulation timescale & surface gravity:
http://adsabs.harvard.edu/abs/2016SciA....250654K added

Relation of granulation & flicker:
http://adsabs.harvard.edu/abs/2014ApJ...781..124C x

Relation of granulation & oscillation:
http://adsabs.harvard.edu/abs/2014A&A...570A..41K x

Amplitude of solar-like oscillations:
http://adsabs.harvard.edu/abs/2008ApJ...682.1370K x
http://adsabs.harvard.edu/abs/2011ApJ...743..143H x

Comparison of granulation theory to Kepler data:
http://adsabs.harvard.edu/abs/2013A&A...559A..40S x
http://adsabs.harvard.edu/abs/2013A&A...559A..39S x

Bayesian inference of asteroseismic amplitudes:
http://adsabs.harvard.edu/abs/2013MNRAS.430.2313C added

Bayesian inference of stellar parameters from asteroseismic frequencies:
https://arxiv.org/abs/1701.04791 x

Joel Hartman's VARTOOLS exoplanet light curve simulation/analysis software uses GPs
to generate noise (I'm not sure if they use GPs for analysis, though):
http://www.astro.princeton.edu/~jhartman/vartools.html x
http://www.sciencedirect.com/science/article/pii/S221313371630049X x

GPs for measuring time delays in multiply-imaged gravitationally lensed
quasars (which is what Press & Rybicki did in the first place):
https://arxiv.org/abs/1208.5598 added

GPs for characterizing variability of brown dwarfs:
https://arxiv.org/abs/1703.01245 x

Choose parameterization

I think that for my purposes, the best parameterization would be: log_amplitude, log_qfactor, and log_frequency. In this parameterization, alpha = exp(log_amplitude), beta.real = exp(-log_qfactor), and beta.imag = 2*pi*exp(log_frequency).
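A sketch of the mapping (the function and variable names here are illustrative, not an API proposal):

```python
import numpy as np

# Proposed parameterization: log_amplitude, log_qfactor, log_frequency, with
# alpha = exp(log_amplitude), beta.real = exp(-log_qfactor),
# beta.imag = 2*pi*exp(log_frequency). Working in logs keeps alpha > 0 and
# beta.real > 0 automatically.
def to_natural(log_amplitude, log_qfactor, log_frequency):
    alpha = np.exp(log_amplitude)
    beta = np.exp(-log_qfactor) + 2j * np.pi * np.exp(log_frequency)
    return alpha, beta

alpha, beta = to_natural(0.0, 1.0, -1.0)
```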

What would complex amplitudes be used for?

Prepare to send manuscript to collaborators and friends

Things to do before we send the manuscript around:

  • Give some more motivation in the intro and add some citations
  • Make benchmark plots comparing the different solvers and write descriptions of the tests.
  • Move discussion of Sturm's theorem to appendix? I'm not sure that it's worth including this in the main text because I'm not actually sure that we want to advocate for using Sturm's theorem but I do think that it's important to have some discussion of how to choose valid parameters. Perhaps this section should include the analytic results for small J.
  • Include examples of inference with simulated data: (a) a known celerite process where our model will be correct, and (b) another QP model where the model is wrong but we can show that we still reproduce the correct PSD and period measurement.
  • Finish text for examples with real data.
  • Add discussion of comparison to other methods including limitations of celerite. Interpretability, etc. Maybe make a benchmark plot with HODLR and CARMA included.
  • Cut kernel approximation section? @ericagol: I think that if we want something like this we should just include an example where we simulate data from one of these standard processes and demonstrate that we can reproduce it using celerite. I'm not a fan of fitting the autocorrelation function.
  • Edit and expand summary section.
  • Add a section about parameterization and API? Optional.

Are covariances with celerite terms complete wrt to some larger covariance family?

First of all, this is great - I've been working on a collaboration with folks here at NOAO, and I think we might have a chance to use some of this.

My question might be betraying my unfamiliarity with the area, but I'm curious about the tradeoffs in the choice of using celerite covariance terms to begin with. Given sufficiently many terms in the sums in Eqs. 7, 8, 9 (I think you call these celerite terms), does this formulation converge to some larger-scale covariance family?

More concretely, if you think of Nystrom methods as a sequence of progressively higher-rank approximations to any desired covariance function, then maybe there's a similar comparison to be made with a progressively larger number of celerite terms. That would allow more direct comparisons of the computational tradeoffs of:

  • working with exactly the covariance function you want but getting an approximate solution
  • working with not exactly the covariance function you want but getting an exact solution

Hopefully my comment makes sense. Thanks for sharing all of this work in the open, too.

[FYI] Conda-forge install with existing scipy

[This isn't a bug or issue with celerite, just letting you know in case you get bug reports from others!]

In a Python 3.6 conda environment with numpy and scipy already installed, I tried:

% conda install -c conda-forge celerite

This seemed to run fine and installed the following, replacing numpy and scipy:

The following NEW packages will be INSTALLED:

    blas:        1.1-openblas                    conda-forge
    celerite:    0.1.3-np112py36_blas_openblas_0 conda-forge [blas_openblas]
    libgfortran: 3.0.0-0                         conda-forge
    openblas:    0.2.19-1                        conda-forge

The following packages will be UPDATED:

    numpy:       1.12.1-py36_0                               --> 1.12.1-py36_blas_openblas_200      conda-forge [blas_openblas]
    scipy:       0.19.0-np112py36_0                          --> 0.19.0-np112py36_blas_openblas_200 conda-forge [blas_openblas]

Replacing numpy forces the scipy update, but the newly installed version of scipy from conda-forge seems broken:

% python -c "import scipy.optimize"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/adrian/anaconda/envs/comoving-rv/lib/python3.6/site-packages/scipy/optimize/__init__.py", line 232, in <module>
    from .optimize import *
  File "/Users/adrian/anaconda/envs/comoving-rv/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 37, in <module>
    from .linesearch import (line_search_wolfe1, line_search_wolfe2,
  File "/Users/adrian/anaconda/envs/comoving-rv/lib/python3.6/site-packages/scipy/optimize/linesearch.py", line 18, in <module>
    from scipy.optimize import minpack2
ImportError: dlopen(/Users/adrian/anaconda/envs/comoving-rv/lib/python3.6/site-packages/scipy/optimize/minpack2.cpython-36m-darwin.so, 2): Library not loaded: /Users/ray/mc-x64-3.5/conda-bld/gcc-4.8_1477649012852/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac/lib/libgcc_s.1.dylib
  Referenced from: /Users/adrian/anaconda/envs/comoving-rv/lib/python3.6/site-packages/scipy/optimize/minpack2.cpython-36m-darwin.so
  Reason: image not found

It seems like this is fixed just by doing a conda install scipy to replace with the default channel scipy.

Thanks for providing a conda install option!

Implement the R&P method as a special case

For a single real term, the original R&P method should be somewhat more efficient. I've been playing around with it a bit today but it's not entirely trivial (but still simple) to implement for heteroskedastic noise so I'm going to leave it for now and get back to the paper. But this might be a nice thing to implement some day!
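The structural reason the R&P method is efficient for a single real term can be checked numerically: with no white noise, the inverse of the exponential-kernel matrix is exactly tridiagonal, which is the sparsity the Rybicki & Press recursion exploits (adding heteroskedastic white noise breaks this exact sparsity, hence the extra bookkeeping mentioned above).

```python
import numpy as np

# Single real term k(tau) = a * exp(-c * tau), no white noise:
# the precision matrix is exactly tridiagonal.
a, c = 1.3, 0.7
t = np.linspace(0.0, 10.0, 8)
K = a * np.exp(-c * np.abs(t[:, None] - t[None, :]))
Kinv = np.linalg.inv(K)
off = np.abs(np.arange(8)[:, None] - np.arange(8)[None, :]) > 1
# entries outside the tridiagonal band vanish (to round-off)
```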

Derivative of GP

Rajpaul et al. claim that various transformations applied to GPs are GPs; this is important, for example, in detrending RV data due to stellar activity by modeling the RV and activity signals (e.g. log(R'_HK) & BIS). We should add this capability to the Generalized RP code if possible. They used this formalism to show that Alpha Cen b was not a planet, but instead is due to a combination of stellar variability & the window function. They complain about the lack of scalability of GPs, so this may be a place we can make an impact if we can figure out how to incorporate the kernel time derivatives into our formalism. This may introduce sub-matrices into the formalism, but I'm unsure how the multiple kernel components & time-series components will couple; it also introduces sinusoids (but I think it's the sinusoid of the absolute value of the time differences, so the kernel is still symmetric). Another issue is that the exponential kernel is not differentiable at zero offset; however, the approximate Matern kernel is singly differentiable, which may solve this problem.
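A small symbolic check of the differentiability point, using the approximate Matern-3/2 form (1 + c*tau)*exp(-c*tau) as a stand-in:

```python
import sympy as sp

# The exponential kernel has a kink at zero lag (the GP is not differentiable),
# while the approximate Matern-3/2 kernel is once differentiable, so the
# derivative process exists and has finite variance -k''(0) = c^2.
tau, c = sp.symbols('tau c', positive=True)
k_exp = sp.exp(-c * tau)                   # exponential kernel, tau >= 0
k_m32 = (1 + c * tau) * sp.exp(-c * tau)   # approximate Matern-3/2 kernel

d_exp = sp.limit(sp.diff(k_exp, tau), tau, 0, '+')  # -c: one-sided slopes disagree
d_m32 = sp.limit(sp.diff(k_m32, tau), tau, 0, '+')  # 0: smooth at zero lag
var_deriv = -sp.limit(sp.diff(k_m32, tau, 2), tau, 0, '+')  # c**2
```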

Fully specify examples in paper

The referee (totally correctly!) asks us to include a full specification of the data size, parameters, priors, initialization method, and convergence diagnostics for each example. We could also include some discussion of computational cost. I'm totally on board with this so I'll do it. I feel like it might distract from the discussion that we have now so I'm thinking of putting it in an appendix. We'll see how that reads...

Equation (A4)

I'm confused by equation (A4). Equations (A2) & (A3) both give lower limits for a_jc_j, so I don't see how these turn into an upper limit that appears in equation (A4). In fact, if b_jd_j >0, then the condition a_jc_j < |b_jd_j| will cause the \omega^2 term to be negative, which will cause the power spectrum to go negative in the limit of \omega -> \infty.

Summary section

  • Tighten language in summary section - too speculative? Change title? Or add a new section/subsection? We've already demonstrated that GPs are applicable; this is another approach. Summary -> discussion?

Matern kernel limit

The limit of Q=1/2 that produces the Matern kernel can be taken from the sinh/cosh side to use just the exponential kernel components which slightly speeds up the solve time. One caveat to check is whether this is more stable.
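A numerical sketch of the limit, assuming the SHO kernel from the paper up to an overall normalization: both the cos/sin (Q > 1/2) and cosh/sinh (Q < 1/2) branches approach (1 + omega_0 tau) exp(-omega_0 tau) as Q -> 1/2, and taking the limit from the cosh/sinh side uses only real exponential components.

```python
import numpy as np

# Assumed (unnormalized) SHO kernel:
#   Q > 1/2:  exp(-w0*tau/(2Q)) * (cos(eta*w0*tau) + sin(eta*w0*tau)/(2*eta*Q))
#   Q < 1/2:  same with cos/sin -> cosh/sinh,  eta = sqrt(|1 - 1/(4Q^2)|).
def sho_kernel(tau, w0, Q):
    eta = np.sqrt(abs(1.0 - 1.0 / (4.0 * Q**2)))
    f, g = (np.cos, np.sin) if Q > 0.5 else (np.cosh, np.sinh)
    arg = eta * w0 * tau
    return np.exp(-w0 * tau / (2 * Q)) * (f(arg) + g(arg) / (2 * eta * Q))

tau = np.linspace(0.0, 10.0, 500)
w0 = 1.0
below = sho_kernel(tau, w0, 0.4999)  # cosh/sinh (real exponential) branch
above = sho_kernel(tau, w0, 0.5001)  # cos/sin branch
limit = (1 + w0 * tau) * np.exp(-w0 * tau)  # Matern-like Q = 1/2 limit
```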

Terminology for each complex exponential term

  • Rather than damped sinusoid, perhaps "damped harmonic"?
  • Rather than "parameterized by a mean function", perhaps "consisting of a mean function"
  • the simplest possible example -> a simple example
  • To demonstrate motivate this method -> To demonstrate this method
  • After equations (17) and (34) we should perhaps remind the reader that we are assuming a mean function of zero for these examples, or we can replace y with r_theta.
  • Notation ambiguity alert: in section 4, x_i refers to the solution of K^{-1} y, while in section 2 it refers to the independent variable.

Comments from Will Farr

  • rational function fit to get the CARMA parameters
  • physical interpretation of CARMA models
  • flexibility of CARMA PSD vs. celerite. e.g. f^{-10}
  • respond to email

Simulating GP

I'm having trouble figuring out how to simulate a GP by simulating sums of exponential processes. Some components have a negative amplitude, like the approximate Matern 3/2 kernel, so the square root would be imaginary when taking the Cholesky decomposition (i.e. that component's kernel is not positive definite). This is despite the fact that the summed kernel is positive definite.
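A dense-matrix illustration of the issue and a workaround: the negative-amplitude component is not positive semi-definite on its own, but the summed kernel is, so one Cholesky factorization of the full covariance matrix still yields samples (the kernel below is the approximate Matern-3/2 form discussed elsewhere in this thread).

```python
import numpy as np

# Approximate Matern-3/2 kernel as a sum of two exponentials:
#   k(tau) = c/(c - z)*exp(-z*tau) - z/(c - z)*exp(-c*tau),  z ~ 0.99*c.
# The second component has a negative coefficient, so the components cannot
# be simulated independently -- but the sum is positive definite.
c = 1.0
z = 0.99 * c
t = np.linspace(0.0, 10.0, 200)
dt = np.abs(t[:, None] - t[None, :])
K_pos = c / (c - z) * np.exp(-z * dt)   # positive-coefficient component
K_neg = -z / (c - z) * np.exp(-c * dt)  # negative-coefficient component: not PSD
K = K_pos + K_neg                       # summed kernel: positive definite

L = np.linalg.cholesky(K + 1e-12 * np.eye(len(t)))  # tiny jitter for round-off
y = L @ np.random.default_rng(1).standard_normal(len(t))  # one GP sample path
```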

Figure 7 high frequency noise

Figure 7 seems to show high-frequency correlated noise in the maximum likelihood model plotted in the top panel. I'm wondering if this is due to the fact that the rotation kernel, equation (63), has a non-zero derivative at zero time-lag. One possibility would be to replace the exp(-c*\tau) term with the approximate Matern-3/2 term, c/(c-z)exp(-z\tau)-z/(c-z)exp(-c\tau), with, say, z ~ 0.99 c. Of course this would slow down the solver by a factor of ~4 since J would be doubled. Is this just an aesthetic issue, or is this trading off with some of the white noise?

Comments from Dennis Stello

  • Fig 9: indicate the mode IDs (spherical degree, l) at each of the tick marks indicating the freq location of the modes - added discussion instead.
  • Fig 9: plot a smoothed version of the power spectrum on top of the raw spectrum in the middle panel and perhaps also in the top panel
  • Respond to the email

Power spectrum estimation with celerite

In light of our conversation yesterday, perhaps we should emphasize power-spectrum estimation more in the paper. This is effectively what your stellar-oscillation experiments demonstrate, and as you mentioned, is a better way to approach power-spectrum estimation compared with Lomb-Scargle or FFT. The celerite method makes the Bayesian approach more computationally tractable (assuming J isn't too large!). Perhaps a modified title could be:

"Fast and scalable Gaussian process modeling with applications
to astronomical time series and power spectrum estimation"

We may want to add a short section to discuss this, and compare with other techniques for power spectrum estimation. This is related to Andrew Gordon Wilson's non-parametric (i.e. Gaussian-mixture) decomposition of power spectra, but with a different kernel basis.

Running python scripts

Hi Dan -

I just tried to run your rotation.py script, but the genrp install failed. The red portion of the messages after running pip install genrp are:

Failed building wheel for genrp

Command "/Users/ericagol/anaconda/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/tw/fq977lgn1ks_n0jwcd3tl1mc0000gn/T/pip-build-yez5p7ls/genrp/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/tw/fq977lgn1ks_n0jwcd3tl1mc0000gn/T/pip-s_z2dq2h-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/tw/fq977lgn1ks_n0jwcd3tl1mc0000gn/T/pip-build-yez5p7ls/genrp

I'm using version 8.0.2 of pip.

-Eric

Description of Cholesky solver technique

The new section on the Cholesky solver technique looks great! A couple of ideas/questions:

1). Should we describe how the vectors change when we have a mixture of complex and real components?

2). Add equations for simulation: y = L sqrt(D) {\cal N}(0,I) (i.e. a vector of N normal deviates with mean zero & covariance of the identity matrix).

3). Add in description of prediction in O(N J^2).
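Item 2 can be illustrated densely, using SciPy's generic LDL^T factorization as a stand-in for the celerite factorization (which produces L and D in O(N J^2)):

```python
import numpy as np
from scipy.linalg import ldl

# With K = L D L^T, a GP sample is y = L sqrt(D) n with n ~ N(0, I),
# since Cov(y) = L sqrt(D) sqrt(D) L^T = K.
t = np.linspace(0, 10, 50)
K = np.exp(-0.5 * np.abs(t[:, None] - t[None, :]))  # any PSD kernel matrix
lu, d, perm = ldl(K, lower=True)   # K = lu @ d @ lu.T
A = lu @ np.sqrt(d)                # A = L * sqrt(D), so A @ A.T == K
n = np.random.default_rng(0).standard_normal(len(t))
y = A @ n                          # one simulated sample path
```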

Minor edit

Just after equation (22), I think it should read "Equation (18) through Equation (22)"

citations

1). model residual citation - introductory paragraph;
2). bayesian estimation of power spectrum - second to last paragraph in summary

Switch to Cholesky solver for the paper

@sivaramambikasaran and I have decided that it does make sense to switch the paper to using the Cholesky factorization. I'll take a shot at drafting a discussion and ask for feedback.

I think that the current discussion of the extended matrix version is worth keeping as an appendix so I'll move it there for now, but I'm open to removing it.

Problem reading header in transit_sample.py - fixed with conda install

I'm having trouble reading the data using fitsio. I updated the fitsio package using pip. I'm using python 3.5. Here's the error:

>>> data, hdr = fitsio.read("data/kplr001430163-2013011073258_llc.fits",
...                         header=True)
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 103, in read
    h = fits[item].read_header()
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 1387, in read_header
    return FITSHDR(self.read_header_list(), convert=True)
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 3822, in __init__
    self.add_record(r, convert=convert)
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 3847, in add_record
    record = FITSRecord(record_in, convert=convert)
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 4129, in __init__
    self.set_record(record, convert=convert)
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 4163, in set_record
    self.set_record(record['card_string'])
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 4151, in set_record
    card = FITSCard(record)
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 4245, in __init__
    self.set_card(card_string)
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 4269, in set_card
    self._set_as_key()
  File "/Users/ericagol/anaconda/lib/python3.5/site-packages/fitsio/fitslib.py", line 4291, in _set_as_key
    res = _fitsio_wrap.parse_card(card_string)
OSError: FITSIO status = 204: keyword value is undefined

I tried reinstalling fitsio using conda:
$ conda install -c openastronomy fitsio=0.9.8

(as found here: https://anaconda.org/OpenAstronomy/fitsio)

and it works!

Rename

Genrp is a really stupid name. I think we should consider re-branding before publication. Let's keep track of ideas here. Here are a few to get us started:

  • signpost
  • flagpole
  • flagship

Riffing on "scalable/fast/flexible GP".

Comments from Andrew Gordon Wilson

  • discuss SKI/KISS-GP since the approaches are quite complementary, where SKI achieves O(N) scaling [and O(1) test time predictions] through exploiting kernel structure in conjunction with kernel interpolation. SKI also seems quite natural for physical applications. I think kernel interpolation could be combined with your approach, which would be particularly useful for faster test-time predictions.
  • 1D Gauss-Markov processes (similar to CARMA models, perhaps?) : yes. OU process == Gauss-Markov
  • Respond to email

Comments from Jake VanderPlas

  • Mention N^3 in abstract
  • In intro: include a high-level summary of the algorithmic tricks the method depends on (e.g. band solver, extended system, etc.)
  • Emphasize physical interpretation more in abstract
  • Sect 4.2: explicit formula for the size of K_{ext}
  • Sect 4.2: explicit NNZ/sparsity numbers
  • Sect 7.3: another potential application, Wang 2012
  • typography for "celerite term" vs "celerite"
  • Respond to email

Fourier transform of damped cosine

The damped cosine is the sum of two exponentials, so the Fourier transform is the sum of two Lorentzians. This needs to be fixed in the paper.
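A numerical check of the corrected statement: the transform of exp(-c|tau|) cos(d tau) is c/(c^2 + (w - d)^2) + c/(c^2 + (w + d)^2), i.e. two Lorentzians centered at +/- d.

```python
import numpy as np
from scipy.integrate import quad

# The kernel is even, so its Fourier transform reduces to
#   2 * Int_0^inf exp(-c*tau) * cos(d*tau) * cos(w*tau) dtau,
# which should equal the sum of two Lorentzians.
c, d = 0.7, 2.0
for w in [0.0, 1.0, 2.0, 5.0]:
    val, _ = quad(lambda tau: 2 * np.exp(-c * tau) * np.cos(d * tau) * np.cos(w * tau),
                  0, 60, limit=200)  # integrand is negligible beyond tau ~ 60
    lorentzians = c / (c**2 + (w - d)**2) + c / (c**2 + (w + d)**2)
    assert np.isclose(val, lorentzians)
```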

Comments from Ian Czekala

  • Section 2: "parameterized by the parameters"
  • Pg. 3: "The physical interpretation of this model..." the phrasing of this sentence doesn't sound quite right. Perhaps try breaking it up into two sentences?
  • Appendix: 'Defined as expected the squared amplitude'
  • Respond to email

Typo in equation 3 of appendix?

I get:

\sum_{j=1}^p \alpha_i l_{k,j}^R + d \, x_k + \sum_{j=1}^p \left[
\gamma_{k,j}^R r_{k+1,j}^R + \gamma_{k,j}^I r_{k+1,j}^I\right] = b_k,

– I don't agree with your factors of 2 but maybe I'm missing something!

Problem running transit_sample.py with import command

When running transit_sample.py with the import command, the running hangs with no CPU activity:

import transit_sample.py
Initial log-likelihood: 1620.8609237832466
Maximum log-likelihood: 1681.4668190922685
MCMC save file exists. Overwrite? (type 'yes'): yes
Running MCMC sampling...

I tried running two other ways:
1). I copied the lines of transit_sample.py to the python prompt, and then it ran fine. The one strange thing is that the eight threads that were spawned only took ~30% of the CPU for each thread.

2). I ran it from the shell prompt:

figures$ python transit_sample.py
Initial log-likelihood: 1620.8609237832466
Maximum log-likelihood: 1681.4668190922685
MCMC save file exists. Overwrite? (type 'yes'): yes
Running MCMC sampling...
15%|#################7 | 2198/15000 [01:00<04:59, 42.71it/s]

This also runs fine, but also only gives ~30% of the CPU available for each thread.

In the import case that didn't run, the eight python threads all appeared to show up, but they accumulated zero runtime, so I'm wondering if it is an issue with the multi-threading allocation?

Add posterior predictive checks

The referee asks us to include posterior predictive checks for the examples in the paper. It sounds like they would be satisfied by plots showing residuals between the data and the posterior model predictions.
