
Soss


Soss is a library for probabilistic programming.

Getting started

Soss is an officially registered package, so to add it to your project you can type

]add Soss

within the Julia REPL, and you are ready to use Soss. If it fails to precompile, it could be due to one of the following:

  • You may have gotten an old version due to compatibility restrictions with your current environment. Should that happen, create a new folder for your Soss project, launch a Julia session inside it, type
]activate .

and start again. More information on Julia projects is available here.

  • You may have set up PyCall to use your own Python distribution. If that is the case, make sure to install the missing Python dependencies listed in the precompilation error. More information on PyCall's Python version is available here.
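
Putting these steps together, a fresh-project setup looks like this (the folder name is just an example):

$ mkdir SossProject && cd SossProject
$ julia

julia> ]
(@v1) pkg> activate .
(SossProject) pkg> add Soss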

Let's jump right in with a simple linear model:

using Soss

m = @model X begin
    β ~ Normal() |> iid(size(X, 2))   # one standard-normal coefficient per column of X
    y ~ For(eachrow(X)) do x
        Normal(x' * β, 1)             # unit-variance noise around the linear predictor x'β
    end
end;

In Soss, models are first-class and function-like, and "applying" a model to its arguments gives a joint distribution.

Just a few of the things we can do in Soss:

  • Sample from the (forward) model
  • Condition a joint distribution on a subset of parameters
  • Have arbitrary Julia values (yes, even other models) as inputs or outputs of a model
  • Build a new model for the predictive distribution, for assigning parameters to particular values

Let's use our model to build some fake data:

julia> import Random; Random.seed!(3)

julia> X = randn(6,2)
6×2 Array{Float64,2}:
  1.19156    0.100793  
 -2.51973   -0.00197414
  2.07481    1.00879   
 -0.97325    0.844223  
 -0.101607   1.15807   
 -1.54251   -0.475159  
julia> truth = rand(m(X=X));

julia> pairs(truth)
pairs(::NamedTuple) with 3 entries:
  :X => [1.19156 0.100793; -2.51973 -0.00197414; … ; -0.101607 1.15807; -1.5425…
  :β => [0.0718727, -0.51281]
  :y => [0.100793, -2.51973, 2.07481, 0.844223, 1.15807, -0.475159]
julia> truth.β
2-element Array{Float64,1}:
  0.07187269298745927
 -0.5128103336795292 
julia> truth.y
6-element Array{Float64,1}:
  0.10079289135480324
 -2.5197330871745263 
  2.0748097755419757 
  0.8442227439533416 
  1.158074626662026  
 -0.47515878362112707

And now let's pretend we don't know β, and have the model figure it out. Posterior samples are often easier to work with in terms of particles (built using MonteCarloMeasurements.jl):

julia> post = dynamicHMC(m(X=truth.X), (y=truth.y,));

julia> particles(post)
(β = Particles{Float64,1000}[0.538 ± 0.26, 0.775 ± 0.51],)
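
Each entry of the result is a Particles object, which behaves much like a number with attached uncertainty. A minimal sketch of summarizing it, assuming MonteCarloMeasurements.jl's standard Statistics overloads:

julia> using Statistics

julia> mean.(particles(post).β)   # posterior mean of each coefficient

julia> std.(particles(post).β)    # posterior standard deviation of each coefficient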

For model diagnostics and prediction, we need the predictive distribution:

julia> pred = predictive(m, :β)
@model (X, β) begin
        y ~ For(eachrow(X)) do x
                Normal(x' * β, 1)
            end
    end

This requires X and β as inputs, so we can do something like the following for a posterior predictive check (PPC):

julia> ppc = [rand(pred(;X=truth.X, p...)).y for p in post];

julia> truth.y - particles(ppc)
6-element Array{Particles{Float64,1000},1}:
 -0.534 ± 0.55
 -1.28 ± 1.3  
  0.551 ± 0.53
  0.918 ± 0.91
  0.624 ± 0.63
  0.534 ± 0.53

These play a role similar to that of residuals in a non-Bayesian approach (there's plenty more detail to go into, but that's for another time).
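
For instance, one quick check is whether each residual's central interval covers zero. A sketch, assuming MonteCarloMeasurements.jl extends Statistics.quantile to Particles:

julia> res = truth.y - particles(ppc);

julia> [quantile(r, 0.025) < 0 < quantile(r, 0.975) for r in res]   # true where the 95% interval covers zero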

With some minor modifications, we can put this into a form that allows symbolic simplification:

julia> m2 = @model X begin
           N = size(X, 1)           # number of observations
           k = size(X, 2)           # number of predictors
           β ~ Normal() |> iid(k)   # iid standard-normal prior on the coefficients
           yhat = X * β             # linear predictor, computed once
           y ~ For(N) do j
               Normal(yhat[j], 1)
           end
       end;

julia> symlogpdf(m2)
-0.918938533204673*N - 0.918938533204673*k
  - 0.5 * Σ_{_j1 = 1}^{N} (y[_j1] - 1.0*yhat[_j1])^2
  - 0.5 * Σ_{_j1 = 1}^{k} β[_j1]^2

There's clearly some redundant computation within the sums, so it helps to expand:

julia> symlogpdf(m2) |> expandSums |> foldConstants
-0.918938533204673*N - 0.918938533204673*k
  - 0.5 * Σ_{_j1 = 1}^{N} (y[_j1] - 1.0*yhat[_j1])^2
  - 0.5 * Σ_{_j1 = 1}^{k} β[_j1]^2

We can use the symbolic simplification to speed up computations:

julia> using BenchmarkTools

julia> @btime logpdf($m2(X=X), $truth)
  1.863 μs (47 allocations: 1.05 KiB)
-15.84854642585797

julia> @btime logpdf($m2(X=X), $truth, $codegen)
  288.658 ns (5 allocations: 208 bytes)
-15.848546425857968

What's Really Happening Here?

Under the hood, rand and logpdf specify different ways of "running" the model.

rand turns each v ~ dist into v = rand(dist), finally outputting the NamedTuple of all values it has seen.

logpdf steps through the same program, but instead accumulates a log-density. It begins by initializing _ℓ = 0.0. Then at each step, it turns v ~ dist into _ℓ += logpdf(dist, v), before finally returning _ℓ.

Note that I said "turns into" instead of "interprets". Soss uses GG.jl to generate specialized code for a given model, inference primitive (like rand and logpdf), and type of data.
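
To make this concrete, here is a rough hand-written sketch of what the specialized methods for m could look like. The names rand_m and logpdf_m are hypothetical, and the real generated code differs in its details:

# Hypothetical sketch only; the actual code is generated by Soss via GG.jl.
function rand_m(X)
    β = rand(Normal() |> iid(size(X, 2)))              # v ~ dist becomes v = rand(dist)
    y = rand(For(x -> Normal(x' * β, 1), eachrow(X)))
    return (X = X, β = β, y = y)                       # NamedTuple of all values seen
end

function logpdf_m(X, β, y)
    _ℓ = 0.0                                           # initialize the log-density
    _ℓ += logpdf(Normal() |> iid(size(X, 2)), β)       # v ~ dist becomes _ℓ += logpdf(dist, v)
    _ℓ += logpdf(For(x -> Normal(x' * β, 1), eachrow(X)), y)
    return _ℓ
end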

This idea can be used in much more complex ways. weightedSample is a sort of hybrid between rand and logpdf. For data that are provided, it increments a _ℓ using logpdf. Unknown values are sampled using rand.
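
In the same hand-written style, a hypothetical sketch of that hybrid for m, with y observed and β unknown:

function weightedSample_m(X, y)
    _ℓ = 0.0
    β = rand(Normal() |> iid(size(X, 2)))                       # unknown, so sample it with rand
    _ℓ += logpdf(For(x -> Normal(x' * β, 1), eachrow(X)), y)    # observed, so score it with logpdf
    return _ℓ, (X = X, β = β, y = y)
end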

julia> ℓ, proposal = weightedSample(m(X=X), (y=truth.y,));

julia> ℓ
-33.647614702926504

julia> proposal.β
2-element Array{Float64,1}:
 -1.216679880035586  
  0.42410088891060693

Again, there's no runtime check needed for this. Each of these is compiled the first time it is called, so future calls are very fast. Functions like this are great to use in tight loops.

To Do

We need a way to "lift" a "Distribution" (without parameters, so really a family) to a Model, or one with parameters to a JointDistribution.

Models are "function-like", so a JointDistribution should sometimes be usable as a value; m1(m2(args)) should work.

This also means m1 ∘ m2 should be fine.

Since inference primitives are specialized for the type of data, we can include methods for Union{Missing, T} data. PyMC3 has something like this, but for us it will be better since we know at compile time whether any data are missing.

There's a return available in case you want a result other than a NamedTuple, but it's a little fiddly still. I think whether the return is respected or ignored should depend on the inference primitive. And some will also modify it, similar to how a state monad works. Likelihood weighting is an example of this.

Rather than having lots of functions for inference, anything that's not a primitive should (I think for now at least) be a method of... let's call it sample. This should always return an iterator, so we can combine results after the fact using tools like IterTools, ResumableFunctions, and Transducers.

The situation just described is for generating a sequence of samples from a single distribution. But we may also have models with a sequence of distributions, either observed or sampled, or a mix. This can be something like Haskell's iterateM, though we need to think carefully about the specifics.

We already have a way to merge models; we should look into intersection as well.

We need ways to interact with Turing and Gen. Some ideas:

  • Turn a Soss model into an "outside" (Turing or Gen) model
  • Embed outside models as a black box in a Soss model, using their methods for inference

Stargazers over time

[Stargazers-over-time chart]

Contributors

cscherrer, sethaxen, thautwarm, github-actions[bot], catethos, baggepinnen, juliatagbot, tlienart, vargonis, willtebbutt
