sciml / ordinarydiffeq.jl

High performance ordinary differential equation (ODE) and differential-algebraic equation (DAE) solvers, including neural ordinary differential equations (neural ODEs) and scientific machine learning (SciML)

Home Page: https://diffeq.sciml.ai/latest/

License: Other

Language: Julia (100.0%)
Topics: ordinary-differential-equations, differentialequations, ode, event-handling, adaptive, high-performance, differential-equations, scientific-machine-learning, hacktoberfest, julia, sciml

ordinarydiffeq.jl's Introduction

OrdinaryDiffEq.jl

Badges: Zulip chat (https://julialang.zulipchat.com, #sciml-bridged), Global Docs, codecov, build status, ColPrac (Contributor's Guide on Collaborative Practices for Community Packages), SciML Code Style

OrdinaryDiffEq.jl is a component package in the DifferentialEquations ecosystem. It holds the ordinary differential equation solvers and utilities. While it is completely independent and usable on its own, users interested in using this functionality should check out DifferentialEquations.jl.

Installation

Assuming that you already have Julia correctly installed, it suffices to install OrdinaryDiffEq.jl in the standard way:

import Pkg;
Pkg.add("OrdinaryDiffEq");

API

OrdinaryDiffEq.jl is part of the SciML common interface, but can be used independently of DifferentialEquations.jl. The only requirement is that the user passes an OrdinaryDiffEq.jl algorithm to solve. For example, we can solve the ODE tutorial from the docs using the Tsit5() algorithm:

using OrdinaryDiffEq
f(u, p, t) = 1.01 * u
u0 = 1 / 2
tspan = (0.0, 1.0)
prob = ODEProblem(f, u0, tspan)
sol = solve(prob, Tsit5(), reltol = 1e-8, abstol = 1e-8)
using Plots
plot(sol, linewidth = 5, title = "Solution to the linear ODE with a thick line",
    xaxis = "Time (t)", yaxis = "u(t) (in μm)", label = "My Thick Line!") # legend=false
plot!(sol.t, t -> 0.5 * exp(1.01t), lw = 3, ls = :dash, label = "True Solution!")

That example uses the out-of-place syntax f(u, p, t), while the in-place syntax (more efficient for systems of equations) is shown in the Lorenz example:

using OrdinaryDiffEq
function lorenz!(du, u, p, t)
    du[1] = 10.0(u[2] - u[1])
    du[2] = u[1] * (28.0 - u[3]) - u[2]
    du[3] = u[1] * u[2] - (8 / 3) * u[3]
end
u0 = [1.0; 0.0; 0.0]
tspan = (0.0, 100.0)
prob = ODEProblem(lorenz!, u0, tspan)
sol = solve(prob, Tsit5())
using Plots;
plot(sol, idxs = (1, 2, 3));

Very fast static array versions can be specifically compiled to the size of your model. For example:

using OrdinaryDiffEq, StaticArrays
function lorenz(u, p, t)
    SA[10.0(u[2] - u[1]), u[1] * (28.0 - u[3]) - u[2], u[1] * u[2] - (8 / 3) * u[3]]
end
u0 = SA[1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
prob = ODEProblem(lorenz, u0, tspan)
sol = solve(prob, Tsit5())

For "refined ODEs", like dynamical equations and SecondOrderODEProblems, refer to the DiffEqDocs. For example, in DiffEqTutorials.jl we show how to solve equations of motion using symplectic methods:

function HH_acceleration!(dv, v, u, p, t)
    x, y = u
    dx, dy = dv
    dv[1] = -x - 2x * y
    dv[2] = y^2 - y - x^2
end
initial_positions = [0.0, 0.1]
initial_velocities = [0.5, 0.0]
tspan = (0.0, 100.0)
prob = SecondOrderODEProblem(HH_acceleration!, initial_velocities, initial_positions, tspan)
sol2 = solve(prob, KahanLi8(), dt = 1 / 10);

Other refined forms are IMEX and semi-linear ODEs (for exponential integrators).
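For instance, a minimal IMEX sketch looks like the following (the split right-hand side, problem, and solver choice here are illustrative, not taken from the docs):

using OrdinaryDiffEq
# Split the right-hand side du/dt = f1(u) + f2(u,t): f1 is treated implicitly,
# f2 explicitly, so a stiff part does not force tiny explicit steps.
f1(u, p, t) = -100.0 * u        # stiff part (handled implicitly)
f2(u, p, t) = 10.0 * sin(t)     # nonstiff part (handled explicitly)
prob = SplitODEProblem(f1, f2, 1.0, (0.0, 1.0))
sol = solve(prob, KenCarp4())   # an IMEX additive Runge-Kutta method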

Available Solvers

For the list of available solvers, please refer to the DifferentialEquations.jl ODE Solvers, Dynamical ODE Solvers, and the Split ODE Solvers pages.

ordinarydiffeq.jl's People

Contributors

arnostrouwen, ayushinav, biswajitghosh98, chriselrod, chrisrackauckas, cwittens, d-netto, deeepeshthakur, devmotion, dextorious, dgan181, erikqqy, gaurav-arya, gstein3m, henrylangner, huanglangwen, jd-lara, junpenggao233, kanav99, mseeker1340, oscardssmith, ranocha, saurabhkgp21, shreyas-ekanathan, sipah00, theozeud, tkf, utkarsh530, vpuri3, yingboma


ordinarydiffeq.jl's Issues

Add `@reexport using DiffEqBase`

It would be a good and non-invasive change that would allow one to simply `using OrdinaryDiffEq` to solve the problem.

I could do a PR for it, since it needs the addition of only 2 lines; however, I would also have to change REQUIRE. Maybe it's best for the owner to do that, since tagging new versions is super strict about the REQUIRE files (as I've come to know).
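For reference, the proposed two-line change would look roughly like this in src/OrdinaryDiffEq.jl (a sketch, assuming Reexport.jl is added as a dependency):

using Reexport
@reexport using DiffEqBase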

Make Rosenbrocks cache-changeable

Currently they are not cache-changeable because they need to resize the Jacobian parts and deal with the reshaping. Fixing this will allow there to be stiff solvers for problems that change size.

Making threaded methods compatible with Unitful units

DP5Threaded fails on the units tests. Its only difference from DP5 is that it changes loops that look like:

for i in eachindex(u)
  tmp[i] = a1*k1[i] + a2*k2[i] + a3*k3[i]
end

to

Threads.@threads for i in eachindex(u)
  f(tmp,i,a1,k1,a2,k2,a3,k3)
end
f(tmp,i,a1,k1,a2,k2,a3,k3) = (tmp[i] = a1*k1[i] + a2*k2[i] + a3*k3[i])

i.e. the same code (with a function barrier for threading type issues).

@ajkeller34

Reduce memory footprint of RK methods

There are tricks that can be done to reduce the memory footprint of the RK methods. Many times a certain k vector is unused after a step, and there's no need to allocate memory for it at every step. Instead, some of the ks can actually point to the same piece of memory (but can be named differently for easy maintenance, since there's no cost).

In the current setup, this is as simple as identifying the vectors for which this can be done and doing something like k13 = k3, and getting rid of the line where k13 is first allocated. Care needs to be taken to ensure that the folded vectors aren't used in interpolations. Hairer's dopri5 and dop853 implementations have some examples of where this can be done (though it's hard to know it's there unless you already know about it!)
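A toy illustration of the aliasing trick with throwaway names (not the actual cache code):

k3  = zeros(3)    # stage vector that is dead after this point in the step
k13 = k3          # alias instead of `k13 = zeros(3)`: no new allocation
k13 .= 1.0        # writes land in the shared buffer
@assert k3 === k13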

Stepsize numerical stability

User @Datseris reported the following test code (modified)

using JLD, OrdinaryDiffEq, DiffEqBase, ODE

t = 0.0:0.01:1000.0
const B = 0.01
d0 = 0.3   #antidot diameter
c = 0.1  #cutoff distance, must be 0 < c < 0.5-d0/2 !!!

function velocitydervs_e0(x, y, d0::Float64, c::Float64)
  δ = 4
  # Calculate quadrant of (x,y):
  U = Uy = Ux = 0.0
  xtilde = x - round(x);  ytilde = y - round(y)
  ρ = sqrt(xtilde^2 + ytilde^2)
  # Calculate derivatives and potential:
  if ρ < 0.5*d0 + c
    U = (1/(c^4))*(0.5*d0 + c - ρ)^δ #potential
    sharedfactor = -(4*(c + d0/2 - ρ)^(δ-1))/(c^δ*ρ)
    Ux = sharedfactor*xtilde
    Uy = sharedfactor*ytilde
  end
  return U, Ux, Uy
end

function potentialenergy_e0(x::Float64, y::Float64, d0::Float64, c::Float64)
  δ = 4
  # Calculate quadrant of (x,y):
  xtilde = x - round(x);  ytilde = y - round(y)
  ρ = sqrt(xtilde^2 + ytilde^2)
  # Check distance:
  if ρ > 0.5*d0 + c
    pot = 0.0
  else
    pot = (1/(c^δ))*(0.5*d0 + c - ρ)^δ
  end
  return pot
end

function initial(potentialenergy::Function)
  var0 = zeros(Float64, 4)
  var0[1] = rand()
  var0[2] = rand()
  # Check if the initial condition is valid:
  pot0 = potentialenergy(var0[1], var0[2])
  while pot0 >= 0.99
    var0[1] = rand()
    var0[2] = rand()
    pot0 = potentialenergy(var0[1], var0[2])
  end

  θ0 = -π + 2*π*rand()
  var0[3] = cos(θ0)
  var0[4] = sin(θ0)
  return var0
end

velocitydervs = (x, y) -> velocitydervs_e0(x, y, d0, c)
potentialenergy = (x, y) -> potentialenergy_e0(x, y, d0, c)

function derivatives!(t, u, du)
  x = u[1]; y = u[2]; vx = u[3]; vy = u[4]
  pot, U_x, U_y = velocitydervs(x, y)
  der = (vx*U_y - vy*U_x + 2*B)/(1-pot)

  du[1] = u[3]; du[2] = u[4]
  du[3] = +u[4]*der; du[4] = -u[3]*der
  #@show u,du
end

reltol = 0.0
abstol = 1e-9
using ODEInterfaceDiffEq
u0 = initial(potentialenergy)
@show "newrun"
prob = ODEProblem(derivatives!,u0, (t[1],6.0)) # try other maxes
sol2 = solve(prob, dopri5(),
            abstol = 1e-1, reltol = 0, maxiters = typemax(Int))
sol = solve(prob, DP5(),
            abstol = 1e-1, reltol = 0, maxiters = typemax(Int),dt=sol2.t[2])#,qmax=10,qmin=0.1)

sol.t[3]
sol2.t[3]
sol.t[4]
sol2.t[4]

(sol2.t[3] - sol2.t[2])/(sol2.t[4] - sol2.t[3])

In this test code, when run to t=1000, DP5() can sometimes be unstable, while dopri5() is not. But the behavior is bizarre. dopri5() does okay because (sol2.t[3] - sol2.t[2])/(sol2.t[4] - sol2.t[3]) == 1 whenever the error is sufficiently low. The issue with this problem is that, if the stepsize is just a little too large, then it steps into a region where the potential quickly goes to infinity. Therefore there are some almost absurd variations which allow dt=0.03 to have EEst = 1e-14, while dt=0.3 makes EEst = 1e14.

This behavior confuses the different solvers in different ways. DP5() oscillates a bit, usually finds a good dt, but then has a chance of overshooting. This is because it's following the error estimate. dopri5 is funny. When the EEst gets so low, for some reason its FAC11 = EEst ^ beta1 seems to compute 0 (a floating point issue Julia doesn't have but this Fortran STD library has? A fastmath issue? Who knows...). This means that when the error estimate is essentially zero, it gives it zero gain and does not increase the stepsize. This makes dopri5 almost fixed-stepsize, which is terrible but works here. This is bad behavior for almost every problem, but here we go, this is a counterexample.

What is there to learn from this? Well, I am not going to copy dopri5()'s behavior here since that seems to be a fluke which almost always is not a good thing. That said, how can this be fixed?

For one, the user should be solving this problem with a symplectic integrator. The problem is inherently badly behaved when energy is not conserved, so it should be solved with an integrator that will conserve energy.

Secondly, there's an easy user-fix by just setting dtmax.

sol = solve(prob, DP5(),
            abstol = 1e-9, reltol = 0, maxiters = typemax(Int),dt=sol2.t[2],dtmax=0.2)

That works well.

Lastly, the only actionable item is that there is a slight difference in the error estimator calculation between the two. DP5 calculates the next u, then utilde, and does u - utilde. dopri5 takes the difference at the coefficient level, so it computes dt*(E1*k[1] + ... ). This is slightly more numerically stable when dt is orders of magnitude smaller than the error. So we should make this change. However, it actually makes no real difference on this problem, and correcting for this didn't fix anything. So I'll do this when I have time, but since it's just an error estimator which only holds to a low order, the numerical error is far below what it can actually read, so in almost every case it's a junk difference.
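A toy illustration of the difference (made-up two-stage coefficients, not a real tableau):

uprev, dt = 1.0, 1e-10
k1, k2 = 2.0, 2.0 + 1e-13
b1, b2, bt1, bt2 = 0.5, 0.5, 0.4, 0.6
# DP5-style: form u and utilde, then subtract; the tiny dt*k terms are absorbed
# into uprev before the subtraction, so precision is lost to cancellation.
u      = uprev + dt*(b1*k1 + b2*k2)
utilde = uprev + dt*(bt1*k1 + bt2*k2)
est_subtract = u - utilde
# dopri5-style: difference the coefficients first, then scale by dt.
est_coeffs = dt*((b1 - bt1)*k1 + (b2 - bt2)*k2)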

Just thought I should document this. So... this is one case where dopri5's default stepping is a little better than ours, but this peculiar behavior makes it do worse on almost every problem... @Datseris I'm heading off to JuliaCon, so just set a dtmax and call it a day. We should talk about getting you set up with symplectic integrators though.

Set event_occured to false at the end of @ode_event

Hi @ChrisRackauckas,

It turns out that I could not add more than one event to the callback function. After looking at the code, I realized that the problem is that event_occured is set to false in the @ode_callback macro, but the @ode_event macro does not set it back to false at the end. Hence, if we have:

callback = @ode_callback begin
    @ode_event event_f1 ...
    @ode_event event_f2 ...
end

Then event_f2 will always be executed when event_f1 happened. A workaround is to manually set event_occured to false between those two calls:

callback = @ode_callback begin
    @ode_event event_f1 ...
    event_occured = false
    @ode_event event_f2 ...
end

See an example here:

https://gist.github.com/ronisbr/a89021585266af326df7c77d1f0e9943

By the way, I think event_occured is misspelled (it should be event_occurred).

Element-wise absolute and relative tolerances for in-place functions

Hi!

According to the documentation, it should be possible to use element-wise absolute and relative tolerances:

  • abstol: Absolute tolerance in adaptive timestepping. Defaults to 1e-6. In addition, tolerances can be specified element-wise by passing a vector whose size matches u0.
  • reltol: Relative tolerance in adaptive timestepping. Defaults to 1e-3. In addition, tolerances can be specified element-wise by passing a vector whose size matches u0.

However, the following example throws an error on the master branch:

julia> using OrdinaryDiffEq
julia> f(t,u,du) = begin
       du[1] = -u[1]
       du[2] = t
       end
julia> prob = ODEProblem(f,[1.,1.],(0.,10.))
julia> sol = solve(prob,Tsit5(); abstol=[1e-6,1e-6])
ERROR: MethodError: Cannot `convert` an object of type Array{Float64,1} to an object of type Float64
This may have arisen from a call to the constructor Float64(...),                                                    
since type constructors fall back to convert methods.                                                                
Stacktrace:
 [1] ode_determine_initdt(::Array{Float64,1}, ::Float64, ::Float64, ::Float64, ::Array{Float64,1}, ::Float64, ::DiffEqBase.#ODE_DEFAULT_NORM, ::DiffEqBase.ODEProblem{Array{Float64,1},Float64,true,#f,Void,UniformScaling{Int64}}, ::Int64) at /home/david/.julia/v0.6/OrdinaryDiffEq/src/initdt.jl:9
 [2] #init#134(::Int64, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Void, ::Bool, ::Void, ::Bool, ::Bool, ::Bool, ::Float64, ::Bool, ::Rational{Int64}, ::Array{Float64,1}, ::Void, ::Int64, ::Rational{Int64}, ::Rational{Int64}, ::Bool, ::Rational{Int64}, ::Rational{Int64}, ::Int64, ::Float64, ::Float64, ::DiffEqBase.#ODE_DEFAULT_NORM, ::DiffEqBase.#ODE_DEFAULT_ISOUTOFDOMAIN, ::DiffEqBase.#ODE_DEFAULT_UNSTABLE_CHECK, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Int64, ::String, ::DiffEqBase.#ODE_DEFAULT_PROG_MESSAGE, ::Void, ::Void, ::Bool, ::Bool, ::Array{Any,1}, ::DiffEqBase.#init, ::DiffEqBase.ODEProblem{Array{Float64,1},Float64,true,#f,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.BS3, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at /home/david/.julia/v0.6/OrdinaryDiffEq/src/solve.jl:137
 [3] (::DiffEqBase.#kw##init)(::Array{Any,1}, ::DiffEqBase.#init, ::DiffEqBase.ODEProblem{Array{Float64,1},Float64,true,#f,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.BS3, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at ./<missing>:0
 [4] #solve#133(::Array{Any,1}, ::Function, ::DiffEqBase.ODEProblem{Array{Float64,1},Float64,true,#f,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.BS3, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at /home/david/.julia/v0.6/OrdinaryDiffEq/src/solve.jl:6
 [5] (::DiffEqBase.#kw##solve)(::Array{Any,1}, ::DiffEqBase.#solve, ::DiffEqBase.ODEProblem{Array{Float64,1},Float64,true,#f,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.BS3, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at ./<missing>:0 (repeats 2 times)

It similarly fails when one tries to specify element-wise relative tolerances. Moreover, with a non-in-place variant of f a solution is computed (also with different solvers like BS3).

In my branch I added some quick fixes for the in-place example. For Tsit5 and BS3, changing only the calculation of the initial time step was not sufficient; I had to correct the error calculations at every other time step as well. I assume one might need to fix these calculations for other solvers too, but unfortunately I haven't had time to look into that yet. At least for this toy example my fixes do not cause any performance issues, though there might be another, better solution to the problem.
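For reference, the kind of calculation such a fix touches is the scaled error norm; schematically (an illustrative sketch with made-up numbers, not the package's internal code):

# With vector-valued abstol/reltol, the error scale must be computed per component.
err = [1e-7, 2e-7]                       # made-up local error estimate
u, uprev = [1.0, 1.0], [0.9, 1.1]
abstol, reltol = [1e-6, 1e-6], [1e-3, 1e-3]
scaled = @. abs(err) / (abstol + max(abs(u), abs(uprev)) * reltol)
EEst = sqrt(sum(abs2, scaled) / length(scaled))   # RMS norm of the scaled error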

solve seems to be ignoring dtmin

Hi @ChrisRackauckas,

I am trying to use this package with a lower limit for dt. However, it seems that this limit is not being used. Consider, please, the following code:

https://gist.github.com/ronisbr/99cc40134d9a5551738e22622b47c3f4

which is the same as the last one you sent to me, but with display(t) in the dynamics function.

I set dtmin = 0.5, but I am seeing at the output the following:

0.0
0.02
0.0
0.02297442656090463
0.034461639841356945
0.09189770624361852
0.10210856249290945
0.11487213280452314
0.11487213280452314
0.11487213280452314
0.28244333504772334
0.3662289361693234
0.7851569417773239
0.8596330316631905
0.952728144020524
0.952728144020524
0.952728144020524
1.1731621821946665
1.283379201281738
1.8344642967170945
1.9324349803500467
2.054898334891237
2.054898334891237
2.054898334891237
2.3309932248453284
2.4690406698223737
3.1592778947076017
3.281986734687198
3.435372784661693
3.435372784661693
3.435372784661693
3.735038328945375
3.8848711010872155
4.634034961796419
4.767219648144723
4.9337005060801005
4.9337005060801005
4.9337005060801005
4.946960404864081
4.95359035425607
4.98674010121602
4.992633389564456
5.0
5.0
5.0
5.132598987839799
5.198898481759699
5.530395951359196
5.589328834843552
5.662994939198995
5.662994939198995
5.662994939198995
5.868122275443408
5.970685943565614
6.483504284176646
6.574671989174162
6.688631620421058
6.688631620421058
6.688631620421058
6.950905296336846
7.08204213429474
7.737726324084211
7.854292402269007
8.0
8.0
8.0
8.306606564893508
8.459909847340262
9.226426259574033
9.362695843971148
9.53303282446754
9.53303282446754
9.53303282446754
9.626426259574032
9.673122977127278
9.906606564893508
9.948114758274171
10.0
10.0

It seems that the time step is sometimes much smaller than 0.5. Am I doing something wrong?

Using the iterators has a slight performance disadvantage

So after a quite long and tedious process, I refactored the code to get rid of (almost) all @def macro usage, put things into an ODEIntegrator type, and rewrote everything to use this interface. What this means is that events/callbacks will have full access to everything (in fact, one can even think of the ODE method itself as an "event").

Once I got here, I realized that putting an iterator interface on it wasn't far off, so I went and did that. Because of the testing, I know that there are no performance degradations to get to this setup, BUT using the iterator interface directly has a performance hit.

Here's what I mean, the setup is as follows.

#Initialize
integrator = init(prob,BS3();dt=1//2^(4),tstops=[0.5],saveat=0:0.01:1)
# Can do 12 iterations
for i in take(integrator,12) end
# Alternative iterator syntax due to @pwl
while !done(integrator)
  @show integrator.u
end

# Manual step
step(integrator)
# Interpolate, plot recipes, etc. all work on the integrator
integrator(0.5) # Interpolate / Extrapolate from the current interval
plot(integrator) # Plot the current interval
@unpack tprev,uprev,t,u,dt = integrator # Pull out standard information about the interval
integrator.abstol = 1e-6 # Modify options

Now, you would hope at this point that solve becomes init and then while !done(integrator) end, but sadly, there is a small performance disadvantage to actually using the iterator interface.

Here it is using a dummy loop which is essentially the iterator interface: (https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/solve.jl#L244)

(benchmark screenshot: bench_pre_iter)

And here's actually using the iterator interface

(benchmark screenshot: bench_post_iter)

The first has no performance degradations compared to before the change, so yay, I'm definitely keeping this style. But I'd hope to make the iterator interface usage just as fast. As you can see, I've gotten the gap down to not too much, but I want to find out what that last little bit is. I wonder if @mauro3 or @pwl will know where I should look.

Limit a state variable using callbacks

Hi @ChrisRackauckas ,

I was thinking about control problems using this package, like you asked me. Currently, I can't think of a model that would make things as easy to implement as your models. However, there are some interesting points we can adjust for control system simulations.

Today, I was wondering how I can impose a limit on a state variable using callbacks. For example, let's consider the example about the moving particle I posted on my blog. Maybe I want to define a maximum allowed velocity. I tried to implement this with callbacks using the following code:

# Condition function.
function condition_vel_limit(t,u,integrator)
    vl - abs(u[2])
end

function vel_limit!(integrator)
    integrator.u[2] = sign(integrator.u[2])*vl-0.001
end

where vl is the velocity limit.

It is working fine, but notice the 0.001. Without it, the callback does not work. I suspect it is because of the way the condition function is evaluated. The full source code is here:

https://gist.github.com/ronisbr/ff3e8899cb19926d0d3fb9cedd5d0c8a

The result I got is:

(figure: simulation result)

Is there a better way to implement this limit using callbacks?
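For reference, the pieces above would be wired together roughly like this (a sketch against the current ContinuousCallback signature; vl and the commented-out solve are placeholders):

using OrdinaryDiffEq
vl = 1.0                                         # placeholder velocity limit
condition(u, t, integrator) = vl - abs(u[2])     # zero exactly when |velocity| hits vl
affect!(integrator) = (integrator.u[2] = sign(integrator.u[2]) * vl)
cb = ContinuousCallback(condition, affect!)
# sol = solve(prob, Tsit5(), callback = cb)      # pass via the callback keyword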

IMEX Methods

Rosenbrock23/32 addsteps!

Right now, after an event, it adds to k using the Hermite form, not the Rosenbrock form, which causes odd gaps in the interpolant. An addsteps! dispatch for the Rosenbrock equations needs to be implemented.

Strong Stability Preserving RK Methods

This was brought up by @ranocha. These methods are good for discretizations of hyperbolic PDEs.

http://epubs.siam.org/doi/abs/10.1137/07070485X
http://www.worldscientific.com/worldscibooks/10.1142/7498
http://www.cfm.brown.edu/people/sg/SSPpage/sspsite/erk_nl.html

The RK methods can be implemented like any other RK method. However, the higher order RK methods can use something called a limiter. Since this is defined by the parts of the ODEProblem, it can be a toggle in the algorithm. Example:

SSPRK3(limiter=:highly_conservative)

These will need a little bit of extra documentation for that, but that should be fine. The limiter only needs to do things in the perform_step method, so nothing about the implementation needs to be specific. The RK method without the limiters should be added first, and tested, and then the limiters should be added.

A side note for making this simpler to work with. The symbol should be saved as a type parameter in the algorithm. A helper function should probably be added in alg_utils.jl that looks kind of like this:

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/alg_utils.jl#L22

which is used to grab the symbol from the algorithm's type information. This will allow the branches for/not-for using the limiter to be compile-time information and compile away depending on the algorithm choice.
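A hypothetical sketch of that pattern (throwaway names, not the actual OrdinaryDiffEq types):

struct SSPRK3{limiter} end                          # limiter symbol stored in the type
SSPRK3(; limiter = :none) = SSPRK3{limiter}()
limiter_symbol(::SSPRK3{limiter}) where {limiter} = limiter

alg = SSPRK3(limiter = :highly_conservative)
limiter_symbol(alg) === :highly_conservative        # recovered from the type, no runtime lookup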

From what I can see, there are no specific dense outputs for these, and so a simple Hermite polynomial should be sufficient. These do not have an error estimator from what I can see, which means they may not be able to be adaptive.

Conversion of optional value for solve

The maxiters option for solve receives an integer. One correct instruction is

solve(prob, maxiters=Int(1e6))

I was biased to use 1e6, but this gives a method-matching error that is not informative. Converting the value to an integer (with an appropriate message when that's not possible) would help.
Cheers.
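For reference, the kind of massage suggested would look roughly like this (an illustrative sketch, not the package's code):

maxiters = 1e6                          # what a user is tempted to pass
maxiters = isinteger(maxiters) ? Int(maxiters) : throw(ArgumentError("maxiters must be a whole number"))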

Complex time parameters

It appears that solve sometimes produces complex time parameters. Here is a sample that illustrates the issue:

# Test.jl
module Test
using DifferentialEquations

function hamiltonian(q::Real,U::Real)
    return SymTridiagonal([(q+k)^2 for k=-10.:10.],fill(U/2.,(20,)))
end

function solver(U::Real=1.)
    function D(t,u)
        println("In D")
        println(t)
        return -im*hamiltonian(t,U)*u
    end
    u0 = complex([-10.:10.;])
    prob = ODEProblem(D,u0,promote(0.,1.))
    sol = solve(prob,dt=1./100.)
    return prob,sol
end

end

The println calls show what time parameter is being passed to the derivative function D. The details of the hamiltonian function are unimportant, except that it expects to get real arguments.

You can fix the problem by hand by only passing the real part of the time parameter to hamiltonian, as follows:

# Test.jl
module Test
using DifferentialEquations

function hamiltonian(q::Real,U::Real)
    return SymTridiagonal([(q+k)^2 for k=-10.:10.],fill(U/2.,(20,)))
end

function solver(U::Real=1.)
    function D(t,u)
        println("In D")
        println(t)
        return -im*hamiltonian(real(t),U)*u
    end
    u0 = complex([-10.:10.;])
    prob = ODEProblem(D,u0,promote(0.,1.))
    sol = solve(prob,dt=1./100.)
    return prob,sol
end

end

Another problem in this latter example, which is apparently related to the above issue, is that plotting the solution fails:

julia> p,s = Test.solver();

julia> plot(s)
ERROR: MethodError: no method matching isless(::Float64, ::Complex{Float64})
Closest candidates are:
  isless(::Float64, ::Float64) at float.jl:283
  isless(::AbstractFloat, ::AbstractFloat) at operators.jl:40
  isless(::Real, ::AbstractFloat) at operators.jl:41
  ...
 in expand_extrema!(::Plots.Extrema, ::Complex{Float64}) at C:\Users\swan\.julia\v0.5\Plots\src\axes.jl:239
 in expand_extrema!(::Plots.Axis, ::Array{Complex{Float64},1}) at C:\Users\swan\.julia\v0.5\Plots\src\axes.jl:262
 in expand_extrema!(::Plots.Subplot{Plots.PlotlyBackend}, ::Dict{Symbol,Any}) at C:\Users\swan\.julia\v0.5\Plots\sr
c\axes.jl:289
 in _expand_subplot_extrema(::Plots.Subplot{Plots.PlotlyBackend}, ::Dict{Symbol,Any}, ::Symbol) at C:\Users\swan\.j
ulia\v0.5\Plots\src\pipeline.jl:361
 in _process_seriesrecipe(::Plots.Plot{Plots.PlotlyBackend}, ::Dict{Symbol,Any}) at C:\Users\swan\.julia\v0.5\Plots
\src\pipeline.jl:392
 in _plot!(::Plots.Plot{Plots.PlotlyBackend}, ::Dict{Symbol,Any}, ::Tuple{DiffEqBase.ODESolution{Array{Array{Comple
x{Float64},1},1},Array{Float64,1},Array{Array{Array{Complex{Float64},1},1},1},DiffEqBase.ODEProblem{Array{Complex{F
loat64},1},Float64,false,Test.#D#3{Float64}},OrdinaryDiffEq.Tsit5,OrdinaryDiffEq.InterpolationData{Test.#D#3{Float6
4},Array{Array{Complex{Float64},1},1},Array{Float64,1},Array{Array{Array{Complex{Float64},1},1},1},OrdinaryDiffEq.T
sit5ConstantCache{Complex{Float64}}}}}) at C:\Users\swan\.julia\v0.5\Plots\src\plot.jl:227
 in plot(::DiffEqBase.ODESolution{Array{Array{Complex{Float64},1},1},Array{Float64,1},Array{Array{Array{Complex{Flo
at64},1},1},1},DiffEqBase.ODEProblem{Array{Complex{Float64},1},Float64,false,Test.#D#3{Float64}},OrdinaryDiffEq.Tsi
t5,OrdinaryDiffEq.InterpolationData{Test.#D#3{Float64},Array{Array{Complex{Float64},1},1},Array{Float64,1},Array{Ar
ray{Array{Complex{Float64},1},1},1},OrdinaryDiffEq.Tsit5ConstantCache{Complex{Float64}}}}) at C:\Users\swan\.julia\
v0.5\Plots\src\plot.jl:46

Complex comparison in ode_determine_initdt

It seems some changes were made which now has ode_determine_initdt compare complex numbers on line 10:

...
if d₀ < T0(1//10^(5)) || d₁ < T1(1//10^(5))
...

I can't figure out what T0 and T1 are however.

Fix `resize!` methods

The integrator resize! methods do not work right now; they still need to be implemented. Thus the growing cell population example from the docs is currently broken. I will fix that while working on MultiScaleModels.jl.

Allow `uEltype` to be an immutable vector, e.g. `SVector` from `StaticArrays.jl`

I'm experimenting with systems of PDEs. After a semidiscretisation, I would like to store the solution as a vector of some (immutable) vectors, e.g. SVector{2,Float64} from StaticArrays.jl. A minimal working example is

using StaticArrays
using DiffEqBase, OrdinaryDiffEq

u0 = zeros(SVector{2,Float64}, 2) + 1

f = (t,u,du) -> du .= u
ode = ODEProblem(f, u0, (0.,1.))
sol = solve(ode, Euler(), dt=1.e-2)

This results in

No precise constructor found. Length of input was 1 while length of StaticArrays.SVector{2,Float64} is 2.

 in #init#84(::Int64, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Void, ::Bool, ::Void, ::Bool, ::Bool, ::Bool, ::Float64, ::Bool, ::Rational{Int64}, ::Void, ::Void, ::Int64, ::Rational{Int64}, ::Rational{Int64}, ::Bool, ::Rational{Int64}, ::Rational{Int64}, ::Int64, ::Float64, ::Float64, ::DiffEqBase.#ODE_DEFAULT_NORM, ::DiffEqBase.#ODE_DEFAULT_ISOUTOFDOMAIN, ::DiffEqBase.#ODE_DEFAULT_UNSTABLE_CHECK, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Int64, ::String, ::DiffEqBase.#ODE_DEFAULT_PROG_MESSAGE, ::Void, ::Void, ::Bool, ::Bool, ::Array{Any,1}, ::DiffEqBase.#init, ::DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##1#2,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Euler, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at /home/hendrik/.julia/v0.5/OrdinaryDiffEq/src/solve.jl:123
 in (::DiffEqBase.#kw##init)(::Array{Any,1}, ::DiffEqBase.#init, ::DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##1#2,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Euler, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at ./<missing>:0
 in #solve#83(::Array{Any,1}, ::Function, ::DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##1#2,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Euler, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at /home/hendrik/.julia/v0.5/OrdinaryDiffEq/src/solve.jl:6
 in (::DiffEqBase.#kw##solve)(::Array{Any,1}, ::DiffEqBase.#solve, ::DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##1#2,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Euler, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at ./<missing>:0 (repeats 2 times)

Here, the problem is uEltype(1). Thus, the problem is essentially the assumption that uEltype is a scalar.

A first workaround may be to define a suitable constructor as follows

using StaticArrays
using DiffEqBase, OrdinaryDiffEq

(::Type{SVector{2,T}}){T}(val::Real) = SVector{2,T}(val, val)

u0 = zeros(SVector{2,Float64}, 2) + 1

f = (t,u,du) -> du .= u
ode = ODEProblem(f, u0, (0.,1.))
sol = solve(ode, Euler(), dt=1.e-2)

However, this results in

indexed assignment not defined for StaticArrays.SVector{2,Float64}

 in copy!(::StaticArrays.SVector{2,Float64}, ::StaticArrays.SVector{2,Float64}) at ./multidimensional.jl:616
 in recursivecopy! at /home/hendrik/.julia/v0.5/RecursiveArrayTools/src/utils.jl:2 [inlined]
 in recursivecopy!(::Array{StaticArrays.SVector{2,Float64},1}, ::Array{StaticArrays.SVector{2,Float64},1}) at /home/hendrik/.julia/v0.5/RecursiveArrayTools/src/utils.jl:7
 in apply_step! at /home/hendrik/.julia/v0.5/OrdinaryDiffEq/src/integrators/integrator_utils.jl:247 [inlined]
 in loopheader! at /home/hendrik/.julia/v0.5/OrdinaryDiffEq/src/integrators/integrator_utils.jl:10 [inlined]
 in solve!(::OrdinaryDiffEq.ODEIntegrator{OrdinaryDiffEq.Euler,Array{StaticArrays.SVector{2,Float64},1},Float64,Float64,Float64,Array{Array{StaticArrays.SVector{2,Float64},1},1},DiffEqBase.ODESolution{StaticArrays.SVector{2,Float64},2,Array{Array{StaticArrays.SVector{2,Float64},1},1},Void,Void,Array{Float64,1},Array{Array{Array{StaticArrays.SVector{2,Float64},1},1},1},DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##3#4,Void,UniformScaling{Int64}},OrdinaryDiffEq.Euler,OrdinaryDiffEq.InterpolationData{##3#4,Array{Array{StaticArrays.SVector{2,Float64},1},1},Array{Float64,1},Array{Array{Array{StaticArrays.SVector{2,Float64},1},1},1},OrdinaryDiffEq.EulerCache{Array{StaticArrays.SVector{2,Float64},1},Array{StaticArrays.SVector{2,Float64},1}}}},Array{StaticArrays.SVector{2,Float64},1},##3#4,Void,OrdinaryDiffEq.EulerCache{Array{StaticArrays.SVector{2,Float64},1},Array{StaticArrays.SVector{2,Float64},1}},OrdinaryDiffEq.DEOptions{StaticArrays.SVector{2,Float64},Float64,Float64,Float64,DiffEqBase.#ODE_DEFAULT_NORM,DiffEqBase.CallbackSet{Tuple{},Tuple{}},DiffEqBase.#ODE_DEFAULT_ISOUTOFDOMAIN,DiffEqBase.#ODE_DEFAULT_PROG_MESSAGE,DiffEqBase.#ODE_DEFAULT_UNSTABLE_CHECK,DataStructures.BinaryHeap{Float64,DataStructures.LessThan},Void,Void}}) at /home/hendrik/.julia/v0.5/OrdinaryDiffEq/src/solve.jl:306
 in #solve#83(::Array{Any,1}, ::Function, ::DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##3#4,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Euler, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at /home/hendrik/.julia/v0.5/OrdinaryDiffEq/src/solve.jl:7
 in (::DiffEqBase.#kw##solve)(::Array{Any,1}, ::DiffEqBase.#solve, ::DiffEqBase.ODEProblem{Array{StaticArrays.SVector{2,Float64},1},Float64,true,##3#4,Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Euler, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at ./<missing>:0 (repeats 2 times)

Hence, the assumption that uEltype is a scalar appears again.

However, if I use an algorithm from ODE.jl,

using StaticArrays
using DiffEqBase, OrdinaryDiffEq, ODE

u0 = zeros(SVector{2,Float64}, 2) + 1

f = (t,u,du) -> du .= u
ode = ODEProblem(f, u0, (0.,1.))
sol = solve(ode, ode4(), dt=1.e-2)

sol[end]

everything works fine; the result is

2-element Array{StaticArrays.SVector{2,Float64},1}:
 [2.70833,2.70833]
 [2.70833,2.70833]

However, I would really like to use the algorithms from OrdinaryDiffEq.jl - of course. I can invest some work and time to get things done. Since the implicit assumption uEltype <: Number seems to be widespread, some guidance from more experienced developers of the DiffEq suite would be very nice.

What do you think of something like uScalarType instead of uEltype for tolerances etc.? Additionally, the copying in RecursiveArrayTools would have to be adapted.

in-place odeinterpolant

SciML/DifferentialEquations.jl#66

odeinterpolant functions are already defined. What needs to happen is that they need to change to odeinterpolant! and devectorize the calculation. Then a new odeinterpolant should just make the vector and call that. Then the event handling function should define the vector once and use odeinterpolant! instead for all of the calculations. To get it allocation-free, the callback function should instead make a call-overloaded type with the interpolant cache already in there. Then the event handling is allocation free!

Note that nothing is difficult here since everything is already written, things just need to be re-named and a new argument with a mutating vector needs to be put in front.
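Purely as an illustration of the renaming pattern (the argument list here is hypothetical and is not OrdinaryDiffEq's actual interpolant signature):

function odeinterpolant!(out, Θ, y₀, y₁)
    @. out = (1 - Θ) * y₀ + Θ * y₁   # placeholder linear interpolant
    return out
end
# The allocating form just builds the vector once and calls the in-place one:
odeinterpolant(Θ, y₀, y₁) = odeinterpolant!(similar(y₀), Θ, y₀, y₁)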

Nested containers in the system function

Currently, using a nested container such as a Vector{SVector{2,Float64}} produces the following error when using a method with an adaptive timestep:

ERROR: DimensionMismatch("Cannot left-divide transposed vector by matrix")
Stacktrace:
 [1] /(::Array{Float64,1}, ::Array{Float64,1}) at .\linalg\generic.jl:823
 [2] broadcast_t(::Function, ::Type{Any}, ::Tuple{Base.OneTo{Int64}}, ::CartesianRange{CartesianIndex{1}}, ::Array{Array{Float64,1},1}, ::Array{Array{Float64,1},1}) at .\broadcast.jl:256
 [3] broadcast_c at .\broadcast.jl:319 [inlined]
 [4] broadcast at .\broadcast.jl:434 [inlined]
 [5] ode_determine_initdt(::Array{Array{Float64,1},1}, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::DiffEqBase.#ODE_DEFAULT_NORM, ::DiffEqBase.ODEProblem{Array{Array{Float64,1},1},Float64,false,DiffEqBase.ParameterizedFunction{false,BacterialEnsembles.#ensemble_dynamics#2,Tuple{Int64,Float64}},Void,UniformScaling{Int64}}, ::Int64) at C:\Users\admin\.julia\v0.6\OrdinaryDiffEq\src\initdt.jl:57
 [6] #init#134(::Int64, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Void, ::Bool, ::Void, ::Bool, ::Bool, ::Bool, ::Float64, ::Bool, ::Rational{Int64}, ::Void, ::Void, ::Int64, ::Rational{Int64}, ::Rational{Int64}, ::Bool, ::Rational{Int64}, ::Rational{Int64}, ::Int64, ::Float64, ::Float64, ::DiffEqBase.#ODE_DEFAULT_NORM, ::DiffEqBase.#ODE_DEFAULT_ISOUTOFDOMAIN, ::DiffEqBase.#ODE_DEFAULT_UNSTABLE_CHECK, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool, ::Int64, ::String, ::DiffEqBase.#ODE_DEFAULT_PROG_MESSAGE, ::Void, ::Void, ::Bool, ::Bool, ::Array{Any,1}, ::DiffEqBase.#init, ::DiffEqBase.ODEProblem{Array{Array{Float64,1},1},Float64,false,DiffEqBase.ParameterizedFunction{false,BacterialEnsembles.#ensemble_dynamics#2,Tuple{Int64,Float64}},Void,UniformScaling{Int64}}, ::OrdinaryDiffEq.Tsit5, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Type{Val{true}}) at C:\Users\admin\.julia\v0.6\OrdinaryDiffEq\src\solve.jl:137

The problem does not affect methods with a fixed timestep.

TRBDF2

A good implicit solver. We have all of the components to make this easily. Just need the time.

Extra FSAL after internals modification by continuous events

After a continuous event occurs, the k vector needs to be repopulated. However, what isn't taken into account is that this may re-calculate the FSAL. There's a later stage where the FSAL re-calculates that step, leading to an extra function evaluation. However, the flag is still necessary because of the possibility that discrete callbacks will perform a modification. A very convoluted setup could probably get rid of this one function evaluation, which shows up every so often in event-based code. I don't know if it's worth working on (in many algorithms, it's <5% of the function calls that occur during that step, so at most you could get a 5% gain if 100% of the algorithm was spent in function calls... and only on steps with a continuous event, mind you), but it's documented in case anyone wants to give it a go.

Allocating when resizing Jacobians

The way that Jacobians are currently resized actually causes a reallocation every time, as pointed out by @ScottPJones. A workaround is to create a wrapper type with a vector and indexing like a matrix (row = mod, column = integer division), and define the appropriate dispatches on this (and make it an AbstractMatrix). This will then cause fewer allocations when resizing, since it will allow the normal resize! behavior.
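A hypothetical sketch of that wrapper idea (not the actual implementation): back a square matrix with a plain Vector so resizing can reuse the buffer via resize!.

struct VecBackedMatrix{T} <: AbstractMatrix{T}
    data::Vector{T}
    n::Base.RefValue{Int}                # current side length
end
VecBackedMatrix{T}(n::Int) where {T} = VecBackedMatrix{T}(zeros(T, n*n), Ref(n))
Base.size(A::VecBackedMatrix) = (A.n[], A.n[])
Base.getindex(A::VecBackedMatrix, i::Int, j::Int) = A.data[(j - 1)*A.n[] + i]
Base.setindex!(A::VecBackedMatrix, v, i::Int, j::Int) = (A.data[(j - 1)*A.n[] + i] = v)
function Base.resize!(A::VecBackedMatrix, n::Int)
    resize!(A.data, n*n)                 # grows or shrinks the existing buffer
    A.n[] = n
    return A
end

J = VecBackedMatrix{Float64}(3)
resize!(J, 5)                            # no fresh Matrix allocation on resize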

Interpolation (dense) output changes the order of the time points supplied

At the moment the dense interpolation routines change the order of the time points supplied (by sorting the input vector in place, hence silently modifying the input). This is unexpected and surprising and can easily lead to errors. Moreover, with the new Julia vectorisation syntax you might expect that sol([1.0, 0.5]) is the same as sol.([1.0, 0.5]), but it is not.

Whilst this might not be a common usage, I have used it repeatedly when randomly sampling from a simulated trajectory.

Cannot use

I just updated with Pkg.update() and I got

INFO: Precompiling module OrdinaryDiffEq.
WARNING: could not import DiffEqBase.solve! into OrdinaryDiffEq
WARNING: could not import DiffEqBase.init into OrdinaryDiffEq
WARNING: could not import DiffEqBase.step! into OrdinaryDiffEq
WARNING: could not import DiffEqBase.cache_iter into OrdinaryDiffEq
WARNING: could not import DiffEqBase.terminate! into OrdinaryDiffEq
WARNING: could not import DiffEqBase.get_du into OrdinaryDiffEq
WARNING: could not import DiffEqBase.get_dt into OrdinaryDiffEq
WARNING: could not import DiffEqBase.get_proposed_dt into OrdinaryDiffEq
WARNING: could not import DiffEqBase.modify_proposed_dt! into OrdinaryDiffEq
WARNING: could not import DiffEqBase.u_modified! into OrdinaryDiffEq
WARNING: could not import DiffEqBase.savevalues! into OrdinaryDiffEq
ERROR: LoadError: LoadError: too many parameters for type ExplicitRKTableau
 in include_from_node1(::String) at .\loading.jl:488 (repeats 2 times)
 in macro expansion; at .\none:2 [inlined]
 in anonymous at .\<missing>:?
 in eval(::Module, ::Any) at .\boot.jl:234
 in process_options(::Base.JLOptions) at .\client.jl:239
 in _start() at .\client.jl:318
while loading C:\Users\John\.julia\v0.5\OrdinaryDiffEq\src\alg_utils.jl, in expression starting on line 22
while loading C:\Users\John\.julia\v0.5\OrdinaryDiffEq\src\OrdinaryDiffEq.jl, in expression starting on line 28
ERROR: LoadError: LoadError: LoadError: Failed to precompile OrdinaryDiffEq to C:\Users\John\.julia\lib\v0.5\OrdinaryDiffEq.ji.
 in compilecache(::String) at .\loading.jl:593
 in require(::Symbol) at .\loading.jl:422
 in include_from_node1(::String) at .\loading.jl:488
 in eval(::Module, ::Any) at .\boot.jl:234
 in require(::Symbol) at .\loading.jl:415
 in include_from_node1(::String) at .\loading.jl:488
 in eval(::Module, ::Any) at .\boot.jl:234
 in require(::Symbol) at .\loading.jl:415
 in include_from_node1(::String) at .\loading.jl:488
while loading C:\Users\John\.julia\v0.5\DifferentialEquations\src\DifferentialEquations.jl, in expression starting on line 12

Force simulation to stop

Hi @ChrisRackauckas,

I have a veeery long simulation that I am analyzing by Monte Carlo. I am interested only in the convergence time, which I can compute. In order to save execution time, is there any way to force the solver to stop when a variable becomes true?
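One way to do this (a sketch; the convergence test below is a placeholder) is to call terminate!(integrator) from inside a callback, which ends the solve early:

using OrdinaryDiffEq
converged(u, t, integrator) = abs(u[1]) < 1e-8       # hypothetical stopping criterion
stop!(integrator) = terminate!(integrator)
cb = DiscreteCallback(converged, stop!)
prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1e6))
sol = solve(prob, Tsit5(), callback = cb)
sol.t[end]   # far less than 1e6: the solve stopped once the test held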

Wrong array access for interpolation out of integration limits

The current master fails if the solution is evaluated beyond the end of the integration:

tprob = ODEProblem((t,x)->x, 1.0, (0.0,1.0))
tsol = solve(tprob, Euler(), dt=0.1)
tsol(2)

In the current master version this actually leads to a segfault. If you remove all the @inbounds macros in OrdinaryDiffEq/src/dense/generic_dense.jl, the code above will instead give

ERROR: BoundsError: attempt to access 12-element Array{Int64,1} at index [13]
in ode_interpolation at /home/fedor/.julia/v0.5/OrdinaryDiffEq/src/dense/generic_dense.jl:222 [inlined]


Use extrapolation for implicit method initial guesses

This is a common strategy that usually makes them converge much more quickly. Since we have the integrator object, integrator(t+dt) will do the extrapolation for the initial condition. This just needs to be implemented and tested. It is very simple if you're already familiar with the integrator interface, and is a great beginner issue!
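A sketch of the idea in integrator-interface terms (the problem and solver here are placeholders; the real change would live inside the implicit methods):

using OrdinaryDiffEq
integrator = init(ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0)), Tsit5())
step!(integrator)                                  # take one step so an interval exists
guess = integrator(integrator.t + integrator.dt)   # extrapolated value: a predictor for the next implicit solve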

time-dependent Rosenbrock errors with autodifferentiation

How do I access the time variable when specifying an ODE with time-varying rates? Here is a simple example:

using DiffEqBase
using OrdinaryDiffEq

function myode(t,u,du)
    du[1] = -t
end

u0 = [0.0]
tspan = (0.0,1.0)
prob = ODEProblem(myode,u0,tspan)
sol = solve(prob,Rosenbrock23())

The above gives:

MethodError: Cannot `convert` an object of type ForwardDiff.Dual{1,Float64} to an object of type Float64
This may have arisen from a call to the constructor Float64(...),
since type constructors fall back to convert methods.

in myode(::ForwardDiff.Dual{1,Float64}, ::Array{Float64,1}, ::Array{Float64,1}) ...
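A workaround on the solver side (continuing the example above; autodiff is a documented option for the Rosenbrock methods) is to turn off ForwardDiff so that t stays a plain Float64:

sol = solve(prob, Rosenbrock23(autodiff = false))   # uses finite differencing instead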

Residual Control RK4

I am planning to add a residual defect control to the RK4 to make it adaptive and serve as a basis for state-dependent delay equations.
