I optimized the ODE solvers around these types a bit. As a test case, I took a linear system on the embryo from the tests. The embryo has 4 layers and a total length of 13, so it uses very small leaf nodes. This means that, with a cheap problem and a deep hierarchy, it should accentuate any differences and give an upper bound on the cost of using a MultiScaleModel.
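For context, `em` comes from the package's test suite. A hierarchy like it can be sketched roughly as follows; the type names and exact nesting here are illustrative (following the style of the current MultiScaleArrays.jl API), not the actual test setup:

```julia
using MultiScaleArrays

# Leaf node: holds the actual state values.
struct Cell{B} <: AbstractMultiScaleArrayLeaf{B}
    values::Vector{B}
end

# Intermediate level of the hierarchy.
struct Tissue{T<:AbstractMultiScaleArray,B<:Number} <: AbstractMultiScaleArray{B}
    nodes::Vector{T}
    values::Vector{B}
    end_idxs::Vector{Int}
end

# Top level of the hierarchy.
struct Embryo{T<:AbstractMultiScaleArray,B<:Number} <: AbstractMultiScaleArrayHead{B}
    nodes::Vector{T}
    values::Vector{B}
    end_idxs::Vector{Int}
end

# Build up the hierarchy from small leaves.
cell   = Cell([1.0, 2.0, 3.0])
tissue = construct(Tissue, deepcopy([cell, cell]))
em     = construct(Embryo, deepcopy([tissue, tissue]))
```

The whole structure then acts like one flat vector to the solver, which is what makes the head-to-head comparison against a contiguous array meaningful.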
I then benchmarked the same problem solved once as a MultiScaleModel and once as a plain contiguous array of length 13:
prob = ODEProblem(f,em,(0.0,1500.0))
@benchmark sol1 = solve(prob,Tsit5(),save_timeseries=false)
prob = ODEProblem(f,em[:],(0.0,1500.0))
@benchmark sol2 = solve(prob,Tsit5(),save_timeseries=false)
to get the results:
BenchmarkTools.Trial:
memory estimate: 230.28 kb
allocs estimate: 10920
--------------
minimum time: 16.469 ms (0.00% GC)
median time: 16.942 ms (0.00% GC)
mean time: 17.033 ms (0.46% GC)
maximum time: 26.385 ms (33.77% GC)
--------------
samples: 294
evals/sample: 1
time tolerance: 5.00%
memory tolerance: 1.00%
BenchmarkTools.Trial:
memory estimate: 196.37 kb
allocs estimate: 10409
--------------
minimum time: 7.500 ms (0.00% GC)
median time: 7.846 ms (0.00% GC)
mean time: 7.907 ms (0.51% GC)
maximum time: 13.730 ms (42.01% GC)
--------------
samples: 632
evals/sample: 1
time tolerance: 5.00%
memory tolerance: 1.00%
This puts an upper bound on the cost of using MultiScaleModels at just over 2x. But this is without saving: saving the timeseries is more costly with a MultiScaleModel (MMM). Measuring the cost with saving at every step:
prob = ODEProblem(f,em,(0.0,1500.0))
@benchmark sol1 = solve(prob,Tsit5())
prob = ODEProblem(f,em[:],(0.0,1500.0))
@benchmark sol2 = solve(prob,Tsit5())
BenchmarkTools.Trial:
memory estimate: 3.20 mb
allocs estimate: 43891
--------------
minimum time: 26.454 ms (0.00% GC)
median time: 27.738 ms (0.00% GC)
mean time: 28.962 ms (4.21% GC)
maximum time: 39.190 ms (27.02% GC)
--------------
samples: 173
evals/sample: 1
time tolerance: 5.00%
memory tolerance: 1.00%
BenchmarkTools.Trial:
memory estimate: 1.52 mb
allocs estimate: 21060
--------------
minimum time: 9.414 ms (0.00% GC)
median time: 9.910 ms (0.00% GC)
mean time: 10.252 ms (3.11% GC)
maximum time: 18.033 ms (35.72% GC)
--------------
samples: 488
evals/sample: 1
time tolerance: 5.00%
memory tolerance: 1.00%
This puts the upper bound at around 3x. So the maximal cost of the abstraction is about 2x-3x for ODEs. It's actually lower for SDEs and DDEs because more of the calculation in those domains is spent on interpolation and noise generation, which mostly avoid the costs of MMMs. So the final product is something where the abstraction cost is less than the performance difference between OrdinaryDiffEq and other packages, meaning that using MMMs in OrdinaryDiffEq should still be slightly faster than using other packages with contiguous arrays. I think this counts as well optimized, so my goal now is to make this all compatible with the solvers for stiff equations, since that will have a large impact.
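Once that stiff-solver compatibility lands, the hope is that switching over would just be a matter of swapping the algorithm choice; a sketch of what that usage would presumably look like (this does not work yet, and the algorithm choice here is only an example):

```julia
using OrdinaryDiffEq

prob = ODEProblem(f, em, (0.0, 1500.0))
# Rosenbrock23 is one of OrdinaryDiffEq's stiff solvers; making calls like
# this work on a MultiScaleModel state is the compatibility goal described above.
sol = solve(prob, Rosenbrock23())
```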
@zahachtah I think you might be interested in these results.