theogf / BayesianQuadrature.jl
Is there anything we can't make Bayesian?
License: MIT License
Generally one is concerned with expensive functions, so not saving the function evaluations does not seem optimal.
I would suggest renaming BMC.
Bayesian Monte Carlo, the simplest approach to Bayesian quadrature. It assumes that the prior is Gaussian and that the integrand is as well:
https://github.com/theogf/BayesianQuadrature.jl/blob/d621e84566cbcafa7cbe13a6036873479b19471d/src/integrators/bmc.jl#LL1-L7
It is true that it is motivated by the example from the paper, and it should be referenced (I suggest in the README). It is misleading to use the name BMC in the quadrature part (BMC refers more to how to sample the states and then use BQ).
I would instead refer to the distribution (or even the kernel-distribution pair), where the distribution is the one we are importance re-weighting "against" (e.g. the Gaussian in the simple (SE kernel - Gaussian) case).
See also Section 4.2, "Tractable and Intractable Kernel Means", in:
Briol, F.-X., Oates, C. J., Girolami, M., Osborne, M. A., & Sejdinovic, D. (2019). Probabilistic integration: A role in statistical computation? Statistical Science, 34(1), 1-22.
Remark: the idea is having kernel-distribution pairs with a closed-form kernel mean.
The (SE kernel - Gaussian) case is called Bayes–Hermite quadrature:
O'Hagan, A. (1991). Bayes–Hermite quadrature. Journal of Statistical Planning and Inference, 29:245-260.
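For concreteness, here is the standard closed form this pairing relies on (as derived in O'Hagan 1991 and reviewed in Briol et al. 2019), written with the SE kernel's lengthscale l and variance σ²:

```latex
% Kernel mean of the SE kernel k(x, x') = \sigma^2 \exp(-\|x - x'\|^2 / (2 l^2))
% under a Gaussian measure p = \mathcal{N}(\mu, \Sigma):
z(x) = \int k(x, x')\, p(x')\, \mathrm{d}x'
     = \sigma^2 \, \bigl| I + \Sigma / l^2 \bigr|^{-1/2}
       \exp\!\Bigl( -\tfrac{1}{2}\, (x - \mu)^\top (\Sigma + l^2 I)^{-1} (x - \mu) \Bigr)
```

Both the kernel mean z(x) and its further integral against p stay in closed form, which is exactly what makes the (SE kernel, Gaussian) pair tractable.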
Since the math is a bit heavy, we should have proper docs; the README is too limited.
Bayesian quadrature is quite unlike other quadrature packages, so we need to think about how to approach it.
On top of the likelihood/prior input, there are two important parameters at play. I think for simplicity it is better to pass these two separately, as there is no real benefit to embedding the first one in the second.
Using the logarithm of the integrand in the BayesModel provides a more stable way of computing functions of the integrand (e.g. logjoint); a minimal sketch of why the log form matters is below.
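A minimal sketch using plain Distributions.jl (not this package's API): the density itself underflows Float64 in even moderately high dimension, while its logarithm stays finite.

```julia
using Distributions, LinearAlgebra

d = 100
p = MvNormal(zeros(d), I)   # standard normal in 100 dimensions
x = 4 .* ones(d)

pdf(p, x)     # 0.0 -- the density underflows Float64
logpdf(p, x)  # ≈ -891.9 -- the log-density is perfectly representable
```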
Maybe move the one from the README to the docs. Add other examples
The estimated variance is sometimes too small, so the Normal distribution can't be constructed.
```
ERROR: LoadError: ArgumentError: Normal: the condition σ >= zero(σ) is not satisfied.
Stacktrace:
 [1] macro expansion
   @ ~/.julia/packages/Distributions/cNe2C/src/utils.jl:6 [inlined]
 [2] #Normal#112
   @ ~/.julia/packages/Distributions/cNe2C/src/univariate/continuous/normal.jl:37 [inlined]
 [3] Normal(μ::Float64, σ::Float64)
   @ Distributions ~/.julia/packages/Distributions/cNe2C/src/univariate/continuous/normal.jl:37
 [4] quadrature(bquad::BayesQuad{SqExponentialKernel, Float64, Float64}, model::BayesModel{ZeroMeanDiagNormal{Tuple{Base.OneTo{Int64}}}, typeof(log_f)}, samples::Vector{Vector{Float64}})
   @ BayesianQuadrature ~/.julia/packages/BayesianQuadrature/eNnqB/src/bayesquads/bayesquad.jl:58
 [5] (::BayesQuad{SqExponentialKernel, Float64, Float64})(rng::MersenneTwister, model::BayesModel{ZeroMeanDiagNormal{Tuple{Base.OneTo{Int64}}}, typeof(log_f)}, sampler::PriorSampling; x_init::Vector{Any}, nsamples::Int64, callback::Nothing)
   @ BayesianQuadrature ~/.julia/packages/BayesianQuadrature/eNnqB/src/interface.jl:10
 [6] top-level scope
   @ ~/Documents/uni/master/master-thesis-code-base/src/min_example_normal.jl:13
 [7] include(fname::String)
   @ Base.MainInclude ./client.jl:444
 [8] top-level scope
   @ REPL[8]:1
```
The minimal example reproducing this:

```julia
using BayesianQuadrature
using Distributions
using KernelFunctions
using Random

rng = Random.MersenneTwister(42)
p_0 = MvNormal(ones(2))                       # for now the prior must be an MvNormal
log_f(x) = logpdf(MvNormal(0.5 * ones(2)), x) # log-integrand
m = BayesModel(p_0, log_f)
bquad = BayesQuad(SEKernel(); l=10.0, σ=1.0)  # SE kernel with lengthscale l, variance σ
sampler = PriorSampling()
p_I, _ = bquad(rng, m, sampler; nsamples=200) # returns a Normal distribution
@show p_I
```
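One possible guard (a hypothetical workaround, not the package's actual fix): clamp the estimated variance at zero before constructing the Normal, since a tiny negative value here is floating-point noise from the variance computation.

```julia
using Distributions

# Hypothetical helper: tolerate a slightly negative variance estimate.
function safe_normal(μ::Real, σ²::Real)
    return Normal(μ, sqrt(max(σ², zero(σ²))))
end

safe_normal(1.0, -1e-17)  # Normal(μ=1.0, σ=0.0) instead of an ArgumentError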
We could probably find some inspiration for the API in this other implementation of Bayesian quadrature: http://www.probabilistic-numerics.org/en/latest/api/quad.html
Soss, Turing, etc. However, we would first need an (adapted) auto-weighting of the prior.
We should allow the possibility of giving a series of already-evaluated samples (x, y) before even sampling; a hypothetical sketch is below.
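A sketch of what this could look like, reusing rng, log_f, m, bquad, and sampler from the minimal example above. The x_init keyword is visible in the stack trace above, but y_init is an assumption, not the current interface:

```julia
# Hypothetical: seed the quadrature with points we have already paid to evaluate.
x_init = [randn(rng, 2) for _ in 1:10]
y_init = log_f.(x_init)  # reuse the expensive evaluations
p_I, _ = bquad(rng, m, sampler; x_init=x_init, y_init=y_init, nsamples=200)
```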
We should have an automatic fallback for when the prior is not Gaussian:
f(x) * p(x) -> (f(x) * p(x)) / q(x) * q(x),
where q(x) is an MvNormal. We can of course have the default q(x) = N(0, I), but there are easy heuristics to get a better proposal given p(x); a sketch follows.
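A rough sketch of the reweighting in plain Julia (the particular prior, integrand, and proposal here are illustrative assumptions, not package API):

```julia
using Distributions, LinearAlgebra

# Non-Gaussian prior p and some integrand f
p = product_distribution(fill(TDist(3.0), 2))
f(x) = exp(-sum(abs2, x))

# Gaussian proposal q -- the default N(0, I) suggested above
q = MvNormal(zeros(2), I)

# Reweighted integrand: (f(x) * p(x)) / q(x), now integrated against q
g(x)     = f(x) * pdf(p, x) / pdf(q, x)
log_g(x) = log(f(x)) + logpdf(p, x) - logpdf(q, x)  # log form for stability
```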
Implementation of real-valued integrands with the type GeneralModel (BayesModel assumes positive integrands, see #13).
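A hypothetical type sketch of the distinction (the name follows the issue; the fields are assumptions):

```julia
# BayesModel stores the log-integrand, which forces f = exp(log_f) > 0.
# A GeneralModel would store the integrand itself, allowing negative values.
struct GeneralModel{Tp,Tf}
    prior::Tp
    f::Tf  # real-valued integrand, no positivity constraint
end
```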
How should samples be stored? Since we need to go over them iteratively and do not necessarily know their type in advance, it is not possible to have type-stable storage in either the sampler or the integrator.
One package to look at for inspiration is https://github.com/TuringLang/AbstractMCMC.jl
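One pattern from that ecosystem, via BangBang's push!!: start with a container concretely typed by the first sample and re-bind to a wider element type only when a later sample doesn't fit.

```julia
using BangBang  # push!! returns a widened vector when the element doesn't fit

function collect_samples(draw, n)
    samples = [draw()]                     # concretely typed by the first sample
    for _ in 2:n
        samples = push!!(samples, draw())  # widens the eltype only if necessary
    end
    return samples
end

collect_samples(() -> rand() < 0.5 ? 1 : 2.0, 10)  # vector widened to hold both
```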
I feel a callback function would be nice to have. I think we can simply do it by passing it to the AbstractMCMC.sample method; a rough sketch is below. I will make a PR for it.
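A rough sketch of what such a callback might look like. The argument list is an assumption patterned after AbstractMCMC's callback keyword (check its docs for the exact signature); the stack trace above shows the integrator already threads a callback keyword.

```julia
# Hypothetical: log progress every 50 iterations.
function progress_callback(rng, model, sampler, sample, state, iteration)
    iteration % 50 == 0 && @info "BQ iteration $iteration"
end

p_I, _ = bquad(rng, m, sampler; nsamples=200, callback=progress_callback)
```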