Comments (9)

syclik commented on May 18, 2024

+1. I'm having trouble with probably the same system of equations.

On Sun, Nov 9, 2014 at 1:04 AM, Bob Carpenter [email protected]
wrote:

It would be nice to have a way of limiting the number of steps used in the
ODE integrator. The integrator could throw an exception if it hasn't
converged within tolerance within a preconfigured upper bound of steps.

As is, the integrator can effectively hang while trying to achieve a
desired tolerance (now hardcoded at 1e-6 for both relative and absolute).

This looks like it may be difficult with Boost odeint, at least the way
we're calling it now. Maybe when we swap in an integrator that can handle
stiffer systems, this problem will go away.


Reply to this email directly or view it on GitHub
stan-dev/stan#1127.

from math.

syclik commented on May 18, 2024

From @betanalpha on November 9, 2014 21:50

Exceptions during sampling are bad mojo all around (it would imply
that within the typical set the problem is stiff and early termination
isn’t going to help fix that). But if this is just during warmup then I
agree that it will be helpful. For the moment, however, you can also
help the chain along by setting algorithm=hmc stepsize=0.001 or something
similar. This was necessary to get the toy PK/PD models fitting quickly.


syclik commented on May 18, 2024

From @bob-carpenter on November 9, 2014 22:37

I managed to get things stabilized with stronger priors.
The posterior is still really fat and the Metropolis I'm
comparing against didn't get anywhere near the posterior
uncertainty I calculated in Stan (and that's with the strong
priors). So it's another one of these fairly overparameterized
and underdetermined systems with a broad range of parameter
values that produce the same data predictions.

The basic setup is a two compartment soil carbon model with
decomposition out of each compartment and transfer between
the compartments. The twist is that only their sum is
measured, including for the initial condition. So there's
a parameter gamma in [0,1] that determines the initial split
in carbon between the two compartments.

After warmup, it cranks along at 2000 iterations in 90 seconds, but it's
only getting effective sample sizes of around 50 and tree depths are
mostly 7 or 8, but it hits 11 every few hundred iterations.

I'm hitting n_divergent__ = 1 every 20 or 30 iterations after
warmup, but the tree depths aren't high there. Is this a problem?
And what exactly is it reporting? Is it something like one of the
functions throwing an exception? This happens even when I set
stepsize to 0.001 and adapt_delta=0.95 (I'm doing this in RStan).

Should we be setting the initial stepsize lower in general? I'd be
happy to trade speed for more stability in general. It takes about
twice as long to run with stepsize 0.001 and adapt_delta=0.95.

  • Bob


syclik commented on May 18, 2024

From @betanalpha on November 9, 2014 23:08

I managed to get things stabilized with stronger priors.
The posterior is still really fat and the Metropolis I'm
comparing against didn't get anywhere near the posterior
uncertainty I calculated in Stan (and that's with the strong
priors). So it's another one of these fairly overparameterized
and underdetermined systems with a broad range of parameter
values that produce the same data predictions.

The basic setup is a two compartment soil carbon model with
decomposition out of each compartment and transfer between
the compartments. The twist is that only their sum is
measured, including for the initial condition. So there's
a parameter gamma in [0,1] that determines the initial split
in carbon between the two compartments.

Ouch, that’s going to hurt.

After warmup, it cranks along at 2000 iterations in 90 seconds, but it's
only getting effective sample sizes of around 50 and tree depths are
mostly 7 or 8, but it hits 11 every few hundred iterations.

I'm hitting n_divergent__ = 1 every 20 or 30 iterations after
warmup, but the tree depths aren't high there. Is this a problem?
And what exactly is it reporting? Is it something like one of the
functions throwing an exception? This happens even when I set
stepsize to 0.001 and adapt_delta=0.95 (I'm doing this in RStan).

n_divergent is incremented only when the integrator jumps to a NaN,
nothing else. This is indicative of a neighborhood in the typical set
that you can’t explore, which biases the resulting expectations; it may
require a much higher average acceptance probability (try 0.999).

Should we be setting the initial stepsize lower in general? I'd be
happy to trade speed for more stability in general. It takes about
twice as long to run with stepsize 0.001 and adapt_delta=0.95.

Matt had originally designed the adaptation to be pretty aggressive,
which is fine provided that overaggressive step sizes don’t cause
freezes (jump too far into a bad place -> stiff ODE -> freeze), a
fairly new problem that arrived with the introduction of ODEs.

Setting the stepsize initially just tweaks the starting guess for the
step size, essentially adapting from below instead of above. I don’t
think it’s worth changing the defaults yet, but we should watch the
performance of these models carefully.
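The "jump too far into a bad place" failure mode has a minimal illustration: leapfrog on a standard Gaussian target, i.e. potential U(q) = q^2/2, is stable only for step sizes below 2, and past that threshold the energy error grows geometrically, which is how a trajectory races off toward infinity (and eventually NaN) and gets flagged as divergent. A toy sketch, with arbitrary step counts and starting values:

```python
# Leapfrog on a 1-D standard Gaussian target. For step size eps < 2 the
# energy error stays bounded; for eps > 2 it explodes geometrically.

def leapfrog_energy_error(eps, n_steps=50, q0=1.0, p0=1.0):
    """Run n_steps of leapfrog and return |H_final - H_initial|."""
    grad = lambda q: q                       # dU/dq for U(q) = q^2/2
    hamiltonian = lambda q, p: 0.5 * p * p + 0.5 * q * q
    q, p = q0, p0
    h0 = hamiltonian(q, p)
    for _ in range(n_steps):
        p -= 0.5 * eps * grad(q)             # half momentum step
        q += eps * p                         # full position step
        p -= 0.5 * eps * grad(q)             # half momentum step
    return abs(hamiltonian(q, p) - h0)
```

At eps = 0.1 the energy error stays tiny; at eps = 2.1 it explodes within a few dozen steps. A smaller initial stepsize (or a higher adapt_delta target) buys stability at the cost of speed, as discussed above.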


syclik commented on May 18, 2024

From @bob-carpenter on November 10, 2014 00:18

On Nov 9, 2014, at 6:08 PM, Michael Betancourt [email protected] wrote:

...

The basic setup is a two compartment soil carbon model with
decomposition out of each compartment and transfer between
the compartments. The twist is that only their sum is
measured, including for the initial condition. So there's
a parameter gamma in [0,1] that determines the initial split
in carbon between the two compartments.

Ouch, that’s going to hurt.

To make matters worse, there's serious measurement error in the
data (it involves sucking gas out of jars and putting it through
a chromatograph). So I think what they really want is a measurement
error model. Some of the data's even "illegal" in the sense that
they start with 7.7 mg C g-1 and then get measurements of 8.1 mg C g-1
loss (or evolved, as they call it).

...

n_divergent is incremented only when the integrator jumps to a NaN,
nothing else. This is indicative of a neighborhood in the typical set
that you can’t explore that biases the resulting expectations — it may
require a much higher average acceptance probability (try 0.999).

Thanks --- that makes sense.

...

  • Bob


syclik commented on May 18, 2024

From @betanalpha on November 10, 2014 0:36

To make matters worse, there's serious measurement error in the
data (it involves sucking gas out of jars and putting it through
a chromatograph). So I think what they really want is a measurement
error model. Some of the data's even "illegal" in the sense that
they start with 7.7 mg C g-1 and then get measurements of 8.1 mg C g-1
loss (or evolved, as they call it).

Put some measurement modeling on it! Until there is a consistent measurement
model, arguing precise details of the physics model is somewhat irrelevant and we
certainly won’t be able to make solid criticisms of the full model posterior.
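A minimal sketch of the measurement-error point, with entirely made-up rates and noise scale: the latent cumulative loss can never exceed the initial stock C0, but a noisy chromatograph reading of it can, so an observed 8.1 against an initial 7.7 is only "illegal" if the noise is ignored.

```python
import math
import random

# Noisy readings of a latent cumulative carbon loss C0 * (1 - exp(-k*t)).
# All rates and the noise scale are invented for illustration.

def simulate_observations(C0=7.7, k=0.3, sigma=0.4,
                          times=(1.0, 2.0, 4.0, 8.0), seed=1):
    """Return one noisy observation of the true loss at each time."""
    rng = random.Random(seed)
    obs = []
    for t in times:
        true_loss = C0 * (1.0 - math.exp(-k * t))      # latent state
        obs.append(true_loss + rng.gauss(0.0, sigma))  # noisy reading
    return obs
```

The corresponding model would treat the true loss as latent and give each observation a noise term, e.g. y_obs ~ normal(C0 * (1 - exp(-k * t)), sigma), so "impossible" readings contribute likelihood instead of violating a hard constraint.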


syclik commented on May 18, 2024

From @andrewgelman on November 10, 2014 01:03

Measurement error models are totally under-used in statistics (and I’m an example of it!)
A


syclik commented on May 18, 2024

I'm hijacking this issue and renaming it to "stiff ode solver."


syclik commented on May 18, 2024

Duplicate of #179.

