Comments (19)
@lxuechen Benchmarks look rather encouraging for the BrownianInterval. e.g. see below. (Every benchmark exhibits the same behaviour.)
I would note that the number of time steps in the benchmarks is very low (only 100). I expect that with a larger number of time steps BrownianPath will overtake the BrownianInterval.
from torchsde.
This is encouraging! Would you have an idea as to where the gain is coming from? I would guess it's coming from 1) using `__slots__`, and 2) explicitly modeling increments.
How much different are the plots when these bms are plugged into the solvers?
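On the `__slots__` guess: a minimal, self-contained sketch of why it helps (the class names here are illustrative, not torchsde's actual internals). Slotted instances drop the per-instance `__dict__`, which saves memory and makes attribute access a touch faster -- both relevant when a tree of interval nodes is traversed on every query.

```python
import sys

# A node as it might appear in an interval tree; names are illustrative.
class NodeWithDict:
    def __init__(self, start, end):
        self.start = start
        self.end = end

class NodeWithSlots:
    __slots__ = ("start", "end")  # fixed attribute layout, no __dict__

    def __init__(self, start, end):
        self.start = start
        self.end = end

plain = NodeWithDict(0.0, 1.0)
slotted = NodeWithSlots(0.0, 1.0)

# Slotted instances have no __dict__, so they are smaller and attribute
# lookups skip a dictionary indirection.
assert not hasattr(slotted, "__dict__")
assert sys.getsizeof(slotted) < sys.getsizeof(plain) + sys.getsizeof(plain.__dict__)
```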
from torchsde.
I can think of a couple other things for certain cases:

- For sequential access: the algorithm used in `BrownianInterval` is reasonably smart about where it starts searching; IIRC `BrownianPath` doesn't use any hinting about where to start searching.
- For random access with `BrownianPath`: not having to do list inserts, which are O(n) on average.
- For the GPU: not having to do a `.to()` copy, and instead initialising right on the GPU via `device=`. TBH this is really just a bug in the other implementations, though.
The equivalent plots for the solvers look pretty much identical; I just picked one at random.
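To illustrate the first two points with a toy sketch (pure Python, not torchsde's actual code): sequential queries tend to land near the previous query, so starting the search from the last-used index is usually far cheaper than searching from scratch, while keeping a sorted list updated with inserts pays an O(n) shift each time.

```python
import bisect

ts = [i / 100 for i in range(101)]  # sorted evaluation times

def search_with_hint(ts, t, hint):
    """Scan outward from `hint`; cheap when t is near the previous query."""
    i = hint
    while i + 1 < len(ts) and ts[i + 1] <= t:
        i += 1
    while i > 0 and ts[i] > t:
        i -= 1
    return i

# Sequential queries: each hint is the previous result, so each step is O(1)-ish.
hint = 0
for t in (0.10, 0.11, 0.12):
    hint = search_with_hint(ts, t, hint)
assert ts[hint] <= 0.12 < ts[hint + 1]

# By contrast, keeping a sorted list up to date with `insort` pays an O(n)
# element shift on every insert, even though finding the position is O(log n).
cache = []
for t in (0.5, 0.1, 0.9):
    bisect.insort(cache, t)  # O(n) memmove per insert
assert cache == [0.1, 0.5, 0.9]
```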
from torchsde.
Several side comments:

> Switch to interval interface.

This breaks backwards compatibility. It'd be nice if we could have `tb` as an optional argument for BPath and BTree for now.

> Removed 'trapezoidal_approx' as option from SRK solvers

This line might be wrong if the second output of `bm` is H in James' thesis, since I_k0 isn't exactly the space-time Levy area.

A side comment is that I don't totally support merging the SRK methods for now. But I think we should be able to get rid of the tableaus folder if you find it disturbing.

Also the docstring needs to change for that file.
from torchsde.
> Several side comments:
>
> > Switch to interval interface.
>
> This breaks backwards compatibility. It'd be nice if we could have `tb` as an optional argument for BPath and BTree for now.

Done.

> > Removed 'trapezoidal_approx' as option from SRK solvers
>
> This line might be wrong if the second output of `bm` is H in James' thesis, since I_k0 isn't exactly the space-time Levy area.

I'll add that to the bug list!

> A side comment is that I don't totally support merging the SRK methods for now. But I think we should be able to get rid of the tableaus folder if you find it disturbing.

The SRK methods actually aren't merged at the moment; for now at least I've taken the reasonably straightforward approach of unifying as much of the easy code as I can and leaving their `step`s separate. The tableaus folder doesn't bother me. :)

> Also the docstring needs to change for that file.

Done!
from torchsde.
> I can think of a couple other things for certain cases:
>
> - For sequential access: the algorithm used in `BrownianInterval` is reasonably smart about where it starts searching; IIRC `BrownianPath` doesn't use any hinting about where to start searching.
> - For random access with `BrownianPath`: not having to do list inserts, which are O(n) on average.
> - For the GPU: not having to do a `.to()` copy, and instead initialising right on the GPU via `device=`. TBH this is really just a bug in the other implementations, though.
>
> The equivalent plots for the solvers look pretty much identical; I just picked one at random.

I see. For GPUs the copying part makes sense. The list inserts argument doesn't seem the most convincing to me, since blist should have O(log n) complexity.
Overall seems like awesome progress!
Good point about the blist.
Thanks. I'm expecting to try and tackle most of the bugs first. I've just seen your email so I'll continue responding over there.
> I've left the test and examples alone as I suspect you'll be much more familiar with how those should go than I am.

I'd recommend we first test BrownianInterval and check its correctness. This actually should have been done before rewriting the solvers. The problem is that right now I can't run the rate-inspection diagnostics, since the BInterval is raising errors.

> I think all the 'SDE' things are the bits that I'm least familiar with, and that you're most familiar with. Can you handle them?

I can do this.
> > I've left the test and examples alone as I suspect you'll be much more familiar with how those should go than I am.
>
> I'd recommend we first test BrownianInterval and check its correctness. This actually should have been done before rewriting the solvers. The problem is that right now I can't run the rate-inspection diagnostics, since the BInterval is raising errors.

One of the reasons I had it in a separate branch originally!

What error is being raised?

> > I think all the 'SDE' things are the bits that I'm least familiar with, and that you're most familiar with. Can you handle them?
>
> I can do this.

Excellent, thank you!
If my derivation is correct then I think the term needed in the SRK should be:
I_k0 = (t - s) * (H_{s,t} + 0.5 W_{s,t})
But it's also 1am where I am. Does that look about right to you?
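For reference, here is a sketch of where that identity comes from, assuming the usual definitions (the notation below is my assumption and may differ from James' thesis):

```latex
% Assumed definitions:
%   W_{s,t} = W_t - W_s,
%   I_{k0}  = \int_s^t (W_r - W_s)\,\mathrm{d}r
%     (the iterated integral of W against time),
%   H_{s,t} = \frac{1}{t-s}\int_s^t
%     \Bigl(W_r - W_s - \tfrac{r-s}{t-s}\,W_{s,t}\Bigr)\mathrm{d}r
%     (space-time Levy area: mean deviation of the path from its chord).
% Using \int_s^t \frac{r-s}{t-s}\,\mathrm{d}r = \frac{t-s}{2}:
\begin{align*}
(t-s)\,H_{s,t}
  &= \int_s^t (W_r - W_s)\,\mathrm{d}r - \frac{t-s}{2}\,W_{s,t}
   = I_{k0} - \frac{t-s}{2}\,W_{s,t},\\
\therefore\quad
I_{k0} &= (t-s)\Bigl(H_{s,t} + \tfrac{1}{2}\,W_{s,t}\Bigr).
\end{align*}
```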
> If my derivation is correct then I think the term needed in the SRK should be:
>
> I_k0 = (t - s) * (H_{s,t} + 0.5 W_{s,t})
>
> But it's also 1am where I am. Does that look about right to you?

Yes, this is right. `I_k0` is exactly the U_{s,t} term in the note I sent you, and the conversion is eq (4) in that note.
I think, for the moment, I wouldn't worry too much about rewriting/refactoring more of the existing code.

The thing that I'm more worried about is whether the new `BrownianInterval` is resilient enough to stand up to numerical tests. The idea here is that I'm seeing subtle issues introduced by this new data structure and the various rewrites, and I don't want to move forward before we properly test `BrownianInterval`.

This also relates to a habit of mine where I'd like to run the test suite every time I make a minor change, so that I know exactly what could go wrong.
Here are tests that I think would benefit us going forward for `BrownianInterval` (and even for BPath and BTree when I add in the Levy area approximations):

- Shape and dtype tests for all cases -- whether or not the Levy area is returned.
- The method `to`, i.e. moving to a new device or dtype, should transfer/convert all existing tensors -- including those in the cache if possible.
- Determinism of the implementation -- a test like this, but slightly more complicated. It should involve checking that the spawned increments and Levy area approximations are the same even when the cache is tiny and things are eventually forgotten.
- Normality tests and moment-matching tests: this one could be tricky. Is there a way to set the Brownian motion value at a specific time, like this, so that we could test the conditional moments?

I think once these are in place, I'd have much more confidence going forward.
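The determinism property can be sketched in a few lines of pure Python (all names here are hypothetical toys, not torchsde's API): if each increment is generated from a seed plus the query interval itself, then re-deriving a forgotten interval after a cache eviction must reproduce exactly the same sample.

```python
import random

def increment(seed, s, t):
    """Deterministic N(0, t - s) sample for the interval [s, t].

    Keyed on the seed *and* the interval, so the value does not depend on
    query order or on whether a cache entry was evicted in between.
    (Toy sketch only; numeric tuple hashes are not salted, so this is
    reproducible across runs.)
    """
    rng = random.Random(hash((seed, s, t)))
    return rng.gauss(0.0, (t - s) ** 0.5)

seed = 42
first = increment(seed, 0.25, 0.5)

# Simulate a tiny cache: forget everything, then re-query the same interval.
second = increment(seed, 0.25, 0.5)
assert first == second  # bitwise identical despite the "eviction"

# Different intervals give different draws.
assert increment(seed, 0.5, 0.75) != first
```

A real test would do the same through the public API, with the cache size forced down so that eviction actually happens.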
Another note: is there a way we can simplify the interface? Right now the user is basically forced to manually supply levy_area_approx='davie'/'foster'/'space-time' just in order to use SRK. Using a high-order scheme shouldn't be this difficult!

Also, does Davie's approximation have any advantages over Foster's? I thought Foster's version is more accurate, at the cost of one extra random number generation, which is totally reasonable considering the costs of the other components.
Your concerns about tests are reasonable - bugs and tests are definitely the next things to handle.

I agree that having to specify the Levy area to use SRK should be fixed; this is already on the bug list. I'm not sure what the best way to handle this is.

I think my preferred fix would be to upgrade the default `sdeint(..., bm=None)` case to initialise a Brownian object with the appropriate Levy area, and regard the explicit `bm=something` case as more of a power user's case, for which it's up to them to do things correctly. I also like the ability to then switch between the Davie/Foster approximations independently of the solver, in particular as better approximations may be introduced in the future.

The alternative option is slightly messier, but still doable. Brownian* should be fixed to a particular choice of Levy area approximation, as doing otherwise introduces headaches: what if it's queried with one choice, and then a different one? Having the choice of Levy area approximation be specifiable by the solver, rather than on `__init__`, then means that some sort of "deferred `__init__`" is necessary, which is called when Brownian* is passed into sdeint. I think that would only be used to set the Levy area approximation flag, and nothing else, so this is definitely doable, but I'm not a fan of having partially initialised objects wandering around.

Care to make a judgement one way or the other?

Davie's approximation is slightly faster than Foster's, by a bit more than just the random number generation (there's a bit of algebra too), but AFAIK there's not a big difference between them however you slice it. But for forward compatibility with even better approximations in the future, I'd avoid code layouts for which Foster gets baked in as the only option.
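As a rough sketch of that preferred fix (every name below is invented for illustration; this is not torchsde's actual API): the `bm=None` path would look up what Levy-area information the chosen method needs and construct the default Brownian object accordingly, leaving the explicit-`bm` path untouched.

```python
# Hypothetical mapping from solver method to the Levy-area information it
# needs; increment-only methods need nothing extra.
LEVY_AREA_REQUIRED = {
    "euler": None,
    "milstein": None,
    "srk": "space-time",  # SRK needs (at least) space-time Levy area
}

def default_bm_kwargs(method, levy_area_approx="foster"):
    """Constructor kwargs for the default Brownian object in sdeint(bm=None).

    Users passing bm=... explicitly remain responsible for setting the
    Levy-area flag themselves (the "power user" path); this only covers
    the default case.
    """
    required = LEVY_AREA_REQUIRED[method]
    if required is None:
        return {}
    return {"levy_area_approximation": levy_area_approx}

assert default_bm_kwargs("euler") == {}
assert default_bm_kwargs("srk") == {"levy_area_approximation": "foster"}
assert default_bm_kwargs("srk", "davie") == {"levy_area_approximation": "davie"}
```

Keeping the approximation choice (`davie`/`foster`/...) as a separate argument from the requirement lookup is what lets better approximations be slotted in later without touching the solvers.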
> I think my preferred fix would be to upgrade the default `sdeint(..., bm=None)` case to initialise a Brownian object with the appropriate Levy area, and regard the explicit `bm=something` case as more of a power user's case, for which it's up to them to do things correctly. I also like the ability to then switch between the Davie/Foster approximations independently of the solver, in particular as better approximations may be introduced in the future.

I agree with this, and the plan seems nice.

> Davie's approximation is slightly faster than Foster's, by a bit more than just the random number generation (there's a bit of algebra too), but AFAIK there's not a big difference between them however you slice it. But for forward compatibility with even better approximations in the future, I'd avoid code layouts for which Foster gets baked in as the only option.

SGTM.
@lxuechen This is labelled as Ito-specific:

torchsde/torchsde/_core/base_sde.py, line 53 in 133a745

Is there any part of that which actually relies on that though? It looks like it's just general drift/diffusion operations.
> Is there any part of that which actually relies on that though? It looks like it's just general drift/diffusion operations.
This is removed in #20 and #21.