
HJBSolver.jl's People

Contributors

anriseth


HJBSolver.jl's Issues

Fix Julia 0.5 Travis warnings and errors

WARNING: Method definition objective(Any) in module HJBSolver at /home/travis/.julia/v0.5/HJBSolver/src/policyiteration.jl:46 overwritten at /home/travis/.julia/v0.5/HJBSolver/src/policyiteration.jl:62.
WARNING: Method definition objective(Any) in module HJBSolver at /home/travis/.julia/v0.5/HJBSolver/src/policyiteration.jl:62 overwritten at /home/travis/.julia/v0.5/HJBSolver/src/policyiteration.jl:86.
Constant policy approximation, Merton
WARNING: could not attach metadata for @simd loop.
 82.104381 seconds (416.48 M allocations: 12.671 GB, 1.27% gc time)
2 facts verified.
Policy iteration, Merton
ERROR: LoadError: LoadError: UndefVarError: hamiltonian not defined
 in updatepol!(::Array{Float64,1}, ::Array{Float64,1}, ::HJBSolver.HJBOneDim{Float64}, ::Float64, ::LinSpace{Float64}, ::Float64) at /home/travis/.julia/v0.5/HJBSolver/src/policyiteration.jl:45

Generalise the boundary value problem

Currently only Dirichlet boundary conditions are supported.
Generalise this, e.g. by using the limiting value of the PDE expression on certain parts of the boundary instead of prescribed data.
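One concrete way to read "limiting value of the PDE" is to evaluate the spatial derivatives at the boundary with one-sided stencils, so the PDE itself closes the system there instead of Dirichlet data. A minimal sketch in Python; the function name and the first-order backward stencil are illustrative, not HJBSolver.jl API:

```python
def vx_at_right_boundary(v, dx):
    """First-order one-sided (backward) difference for v_x at x_max.

    Stands in for the 'limiting value of the PDE' idea: rather than
    fixing v at the boundary, the PDE is evaluated there using only
    interior values. Illustrative sketch, not HJBSolver.jl API.
    """
    return (v[-1] - v[-2]) / dx
```

For example, with grid values `[0.0, 1.0, 3.0]` and `dx = 0.5` this returns `(3.0 - 1.0) / 0.5 = 4.0`; a second-order one-sided stencil would use three interior points instead of two.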

Use second-order finite differences when possible

Make the code use second-order finite differences for the v_x term whenever possible. One option is to follow the approach from the paper below.

http://epubs.siam.org/doi/abs/10.1137/060675186

@article{wang2008maximal,
  title={Maximal use of central differencing for Hamilton-Jacobi-Bellman PDEs in finance},
  author={Wang, J and Forsyth, Peter A},
  journal={SIAM Journal on Numerical Analysis},
  volume={46},
  number={3},
  pages={1580--1601},
  year={2008},
  publisher={SIAM}
}
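The strategy in Wang and Forsyth (2008) is to use central differencing for the v_x term wherever the discretisation stays monotone (nonnegative off-diagonal coefficients) and fall back to first-order upwinding only where it does not. A sketch of that per-node test in Python; the names `a` (diffusion) and `b` (drift) are illustrative stand-ins, not HJBSolver.jl's coefficients:

```python
def vx_coefficients(a, b, dx):
    """Off-diagonal coefficients for a*v_xx + b*v_x at an interior node.

    Returns the coefficients multiplying v[i-1] and v[i+1], using
    second-order central differencing for v_x when both coefficients
    stay nonnegative (so the scheme remains monotone), and first-order
    upwinding otherwise. This mirrors the central-as-much-as-possible
    idea of Wang & Forsyth (2008); an illustrative sketch only.
    """
    lo = a / dx**2 - b / (2 * dx)   # coefficient on v[i-1], central
    hi = a / dx**2 + b / (2 * dx)   # coefficient on v[i+1], central
    if lo >= 0 and hi >= 0:
        return lo, hi, "central"
    # Upwind: shift the whole drift term onto the inflow side.
    if b >= 0:
        return a / dx**2, a / dx**2 + b / dx, "upwind"
    return a / dx**2 - b / dx, a / dx**2, "upwind"
```

Central differencing is kept exactly when |b| dx <= 2a, so the second-order stencil is used everywhere except strongly drift-dominated nodes.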

Store value and policy arrays in forward time

Currently the value and policy arrays are stored in backward time, so v[:, 1] represents the value function at t=T and v[:, end] represents the value function at t=0.

Redo this so that v[:, 1] holds the value function at t=0 and v[:, end] the value function at t=T.
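One low-cost fix is to keep the backward-in-time sweep as it is and flip the time axis of the stored arrays once at the end. A sketch with NumPy (the array shapes are hypothetical; HJBSolver.jl stores v as space × time):

```python
import numpy as np

# Hypothetical layout: v[i, n] is the value at space node i, time index n.
# The backward solver fills columns from t=T down to t=0, so column 0
# currently corresponds to t=T.
v_backward = np.array([[3.0, 2.0, 1.0],
                       [6.0, 5.0, 4.0]])

# Reverse the time axis once after the sweep: column 0 is now t=0.
v_forward = v_backward[:, ::-1]
```

The same one-line flip applies to the policy array, so no change to the solver loop itself is needed.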

Optimise the whole control-vector at once?

Currently the policy iteration approach loops over each x-value and optimises the control only at that position.
Is it possible (and faster) to run a single, larger optimisation over the controls at all x-values instead, perhaps by summing hamiltonian(i,j) over all indices i (and j in 2D)?
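The joint problem could look like the sketch below: sum the per-node Hamiltonian into one scalar objective over the full control vector and hand it to a single optimiser call. The quadratic `hamiltonian` here is a made-up separable stand-in, not the package's function, so this only illustrates the shape of the idea:

```python
import numpy as np
from scipy.optimize import minimize

# Toy grid and a separable stand-in for the per-node Hamiltonian; the
# real hamiltonian(i, alpha) depends on v and the model coefficients.
x = np.linspace(0.0, 1.0, 5)

def hamiltonian(i, alpha):
    return (alpha - x[i]) ** 2

def total_objective(alphas):
    # Summing over every grid node turns the per-node optimisations
    # into one joint optimisation over the whole control vector.
    return sum(hamiltonian(i, alphas[i]) for i in range(len(x)))

res = minimize(total_objective, np.zeros_like(x))
```

For a separable Hamiltonian like this the joint minimiser coincides with the per-node minimisers, so the interesting question is purely one of speed: whether vectorised evaluation of the summed objective beats the Julia loop over x-values once the per-node problems are cheap.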
