
fastad's Issues

Some general qs. about FastAD (new to C++)

I sent these questions in an email to Dr. Yang, but I thought I would post them here too.

I have read the paper on FastAD and I am very interested in using FastAD for my algorithm.

I was wondering: does FastAD work with Rcpp? If so, how can I install it? I think it should be possible, but I just wanted to check (I'm new to C++).

I have used the "autodiff" library (https://autodiff.github.io/), but I have found it to be not much faster than numerical differentiation for my application. Have you used it before? I noticed the paper didn't benchmark against it.

Also, I was wondering: is it possible to compute a gradient w.r.t. a std::vector filled with Eigen matrices (or any other 3D or higher-dimensional structure)? Or do all the parameters need to be packed into a vector or matrix and then reformatted back into the container needed for the rest of the model afterwards?

Is it possible to use FastAD just within a standard function (rather than in "int main()" etc.)? I'm new to C++ and have just been using functions for everything (also using it through R via Rcpp).
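For what it's worth, here is a minimal sketch of the last point, assuming only the README-style API shown further down this page (Var, bind, autodiff, get_adj). Nothing ties FastAD to main(): the whole computation can live inside an ordinary function, which is also the shape an Rcpp-exported wrapper would take.

#include <fastad>

// Sketch: differentiate sin at x0 inside a plain function.
double d_sin(double x0)
{
    ad::Var<double, ad::scl> x(x0);
    auto expr = ad::bind(ad::sin(x));
    ad::autodiff(expr);   // forward + backward pass
    return x.get_adj();   // should equal cos(x0)
}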

Non-templated expression container?

Is there a standard way to store a FastAD expression in a "generic" expression object?

The BaseExpr class is templated on the derived type, which means it's not suitable for use in a class that can store any FastAD expression.

Maybe a variant of the std::any concept?
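One possible direction, sketched under the assumption that a bound expression is copyable and that autodiff(expr) returns the value as in the README example below. It trades the zero-overhead expression templates for one indirection per call.

#include <fastad>
#include <functional>
#include <utility>

// Type-erased wrapper (illustrative, not part of FastAD): any bound
// expression is hidden behind a std::function, so heterogeneous
// expressions can be stored in one container.
struct AnyExpr
{
    std::function<double()> run;  // runs autodiff, returns the value

    template <class BoundExpr>
    explicit AnyExpr(BoundExpr expr)
        : run([e = std::move(expr)]() mutable { return ad::autodiff(e); })
    {}
};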

Reduce standard to C++14

The library currently requires C++17 or later. It would be nice to have it support C++14 as well.

Conditional Statements

Some functions may not be continuous or differentiable at a point, but can be made so by analytic continuation. For example, sin(x)/x is undefined at x = 0, but can be made continuous by defining the function at x = 0 to be 1.

Implement conditional-statements such that an expression may be evaluated differently depending on input x.
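A plain-C++ illustration of the extension being discussed; a conditional AD expression would need to select a branch (and its derivative) in the same way.

#include <cmath>

// sinc(x) = sin(x)/x for x != 0, extended continuously with sinc(0) = 1.
double sinc(double x)
{
    return x == 0.0 ? 1.0 : std::sin(x) / x;
}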

LeafNode new member function: reset_value_ptr, reset_adj_ptr

Currently, when LeafNodes get copied, their internal value and adjoint pointers get copied and still point to the original's value and adjoint. This is usually the desired behavior, but sometimes they may end up pointing to garbage memory. We should be able to reset these pointers to point to the node's own value and adjoint.
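A minimal sketch of the proposed members; the member names and layout here are illustrative, not FastAD's actual LeafNode.

template <class T>
struct LeafNode
{
    T value_;
    T adj_;
    T* value_ptr_ = &value_;  // after a copy, these still point into
    T* adj_ptr_ = &adj_;      // the node that was copied from

    void reset_value_ptr() { value_ptr_ = &value_; }
    void reset_adj_ptr()   { adj_ptr_ = &adj_; }
};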

Remove armadillo dependency

Remove the armadillo dependency by self-implementing the matrix class.

CMake, the include/fastad_bits/ files, and the test/ files should not reference armadillo or the USE_ARMA preprocessor flag at all.

How to get nth derivative?

Vec<double> a, b {1,2,3};
auto expr = (a = b*b*b+b);

How do I get the 3rd derivative of a w.r.t. b (d^3a/db^3)?
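For reference, a worked check of the derivatives under the elementwise reading a = b^3 + b: a single reverse pass yields the first derivative, 3b^2 + 1; the second is 6b and the third is the constant 6. Higher orders would need some form of nesting or repeated differentiation rather than one autodiff call.

#include <iostream>

int main()
{
    // Elementwise a = b^3 + b:
    //   da/db     = 3b^2 + 1  (what one reverse pass yields)
    //   d^2a/db^2 = 6b
    //   d^3a/db^3 = 6         (constant)
    const double bs[] = {1.0, 2.0, 3.0};
    for (double b : bs)
        std::cout << 3 * b * b + 1 << " "  // 4 13 28
                  << 6 * b << " "          // 6 12 18
                  << 6.0 << "\n";
}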

Var<T> move constructor implementation

This issue is related to #18.

Should Var move its contents by copying the base DualNum and re-pointing the pointers to the current object's base members? Otherwise, the default move constructor will leave them pointing to de-allocated memory.
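A sketch of the first option, with assumed member names (value_, adj_ and the corresponding pointers); the real Var layout may differ.

#include <utility>

template <class T>
struct Var : DualNum<T>
{
    T* value_ptr_;
    T* adj_ptr_;

    // Move the small DualNum base, then re-point the view pointers at
    // this object's own members so they never reference the moved-from
    // Var's (possibly de-allocated) storage.
    Var(Var&& other) noexcept
        : DualNum<T>(std::move(other)),
          value_ptr_(&this->value_),
          adj_ptr_(&this->adj_)
    {}
};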

Extending black-scholes example

It was pretty straightforward to extend the example to also return the derivative with respect to the volatility parameter ("vega") (see here for the Rcpp-wrapped example).

But when I (somewhat mechanically) try to extend it to the time (and rate) derivatives (after taking care of the actual expression, i.e. using ad::sqrt() and ad::exp()), I am hitting a wall in the brace initialiser. Even in the simpler case of just adding "tau":

black_scholes.cpp: In function ‘Rcpp::NumericMatrix black_scholes(double, double, double, double, double)’:
black_scholes.cpp:66:25: error: cannot convert ‘<brace-enclosed initializer list>’ to ‘ad::core::ValueAdjView<double, ad::scl>::ptr_pack_t’ {aka ‘ad::util::PtrPack<double>’}
   66 |     call_expr.bind_cache({del_buf_c.data(), veg_buf_c.data(), tau_buf_c.data()});
      |     ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

with the candidate being

   94 |     ptr_pack_t bind_cache(ptr_pack_t begin)
      |                           ~~~~~~~~~~~^~~~~

I must be doing something obviously wrong. I glanced at the documentation and saw the richer vector- and matrix-valued expressions, but here I think I just want a larger list of scalars. Is that possible?
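An untested guess, based only on the README-style API shown elsewhere on this page: the candidate signature takes a single ptr_pack_t (one pointer pack), not a brace list of three buffer pointers, so letting bind() size and own the cache may sidestep the issue entirely.

// Let the library allocate and bind the cache itself instead of
// hand-packing separate buffers into bind_cache.
auto bound = ad::bind(call_expr);
double price = ad::autodiff(bound);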

Using FastAD where the Scalar type is a template parameter?

Is it possible to use FastAD reverse mode to compute the derivative of an algorithm

template <class Scalar> Scalar f( const std::vector<Scalar>& x)

where f(x) uses only the +, -, *, /, and = operators on Scalar objects.
If so, is there an example that demonstrates this?
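To pin down the shape being asked about, here is a concrete instance of such an f. With Scalar = double this is a plain function; the open question is whether a FastAD type can be substituted for Scalar (FastAD builds expression templates, so presumably the expression would need to be constructed and differentiated explicitly rather than instantiating f directly).

#include <cstddef>
#include <vector>

// Generic algorithm using only +, -, *, / and = on Scalar.
template <class Scalar>
Scalar f(const std::vector<Scalar>& x)
{
    Scalar s = x[0];
    for (std::size_t i = 1; i < x.size(); ++i)
        s = s * x[i] + x[i] / x[0] - x[0];
    return s;
}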

adding convenience operators

Hey,
I am trying out your lib and it seems nice. I think it would be nice to have some convenience operators like += etc. Also, I was wondering why it is not possible to use plain floating-point values in arithmetic expressions involving ad::Var. I think it would be nice to be able to do something like

Var<double> a, b{2};
a = 5. * b * b;
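A guess at a workaround for the literal case, assuming the library exposes a constant() helper for wrapping raw values (worth checking against the actual API):

Var<double> a, b{2};
// Hypothetical: wrap the raw literal so it participates in the
// expression template instead of mixing a plain double with Var.
auto expr = (a = ad::constant(5.) * b * b);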

Problem using example in readme

I am getting the following messages when I try to compile one of the FastAD examples:

temp.cpp: In function ‘int main()’:
temp.cpp:12:22: error: no matching function for call to ‘autodiff(ad::core::ExprBind<ad::core::BinaryNode<ad::core::Add, ad::core::UnaryNode<ad::core::Sin, ad::VarView<double, ad::scl> >, ad::core::UnaryNode<ad::core::Cos, ad::VarView<double, ad::vec> > > >&)’
   12 |     auto f = autodiff(expr_bound);
      |              ~~~~~~~~^~~~~~~~~~~~
In file included from include/fastad_bits/reverse/core.hpp:7,
                 from include/fastad_bits/reverse.hpp:3,
                 from include/fastad:4,
                 from temp.cpp:1:

Below are the steps to reproduce this:

Step 1:

git clone https://github.com/JamesYang007/FastAD.git fastad.git

Step 2:

cd fastad.git ; ./setup.sh

Step 3:
Create the file temp.cpp with the following contents (from one of the FastAD readme examples):

#include <fastad>
#include <iostream>

int main()
{
    using namespace ad;

    Var<double, scl> x(2);
    Var<double, vec> v(5);

    auto expr_bound = bind(sin(x) + cos(v));
    auto f = autodiff(expr_bound);

    std::cout << x.get_adj() << std::endl;
    std::cout << v.get_adj(2,0) << std::endl;

    return 0;
}

Step 4:

g++ temp.cpp -o temp -I include -I libs/eigen-3.3.7/build/include/eigen3

New Features

  • Discrete distributions:
    • Binomial
    • Poisson
  • Continuous distributions:
    • Beta
    • Chi-squared
    • Exponential
    • F
    • Gamma
    • t
  • Simplex distributions:
    • Dirichlet
  • Covariance distributions:
    • Inverse Wishart

Enable CMake integration with FetchContent

It would be very convenient to enable CMake integration with FetchContent, like so:

include(FetchContent)
FetchContent_Declare(
        FastAD
        GIT_REPOSITORY https://github.com/JamesYang007/FastAD
        GIT_TAG v3.2.1
        GIT_SHALLOW TRUE
        GIT_PROGRESS TRUE)
FetchContent_MakeAvailable(FastAD)

However, it would require setting FASTAD_ENABLE_TEST to OFF by default in the CMakeLists.
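For reference, the default could also be overridden from the consuming project without touching FastAD's CMakeLists, along these lines (assuming FASTAD_ENABLE_TEST is a cache option):

# Pre-set the cache entry before the sub-project is configured.
set(FASTAD_ENABLE_TEST OFF CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(FastAD)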

Improvement to Forward mode

Hi @JamesYang007!

This repo seems pretty nice, especially the philosophy laid out in your paper arguing the benefit of having a pair of pointers to matrices rather than a matrix of dual numbers.

I had been using https://github.com/autodiff/autodiff for a while, which overloads Eigen's scalar type (i.e. the latter approach) to use a matrix of dual numbers, and I think there is quite a bit of overhead (and cache misses) compared to the reference function (matrix of doubles). I wanted to test out your repo, but realised that it has mainly been focusing on reverse mode rather than forward mode (which is the focus of https://github.com/autodiff/autodiff). Do you have any plans to make some of the implementations in https://github.com/JamesYang007/FastAD/tree/master/include/fastad_bits/reverse/core applicable to both modes? Beyond that, it seems like forward mode right now only works with scalar types (float/double) rather than matrix/vector types?

Finally, one huge benefit of https://github.com/autodiff/autodiff is that it is immediately applicable to a lot of existing functions/solvers (since it uses a custom scalar type to do autodiff), while your approach requires manual work to implement different operators (e.g. all the ad::XXX custom operators rather than the Eigen::XXX operators), and https://github.com/autodiff/autodiff immediately works with any function that takes a templated Eigen argument. Do you have any thoughts on that? (One possible approach I had thought of: extend MatrixBase, rather than using a custom scalar type, to keep track of the adjoints of the input variables.)

Supporting mat transpose.

Matrix transpose is very common in scientific computing. If we want to support it, can we write a function in reverse/core/unary.hpp that directly transposes the underlying values and adjoints?

I would like to make a PR but I'm not sure if the above idea is correct.
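A conceptual sketch of the math such a node needs (not FastAD's actual node interface): if Y = X^T, the forward pass transposes the values and the backward pass transposes the incoming adjoints, since dL/dX = (dL/dY)^T.

#include <Eigen/Dense>

// Illustrative only; buffer handling in FastAD's real unary nodes differs.
struct TransposeNode
{
    Eigen::MatrixXd* x_val; Eigen::MatrixXd* x_adj;  // input views
    Eigen::MatrixXd y_val; Eigen::MatrixXd y_adj;    // output storage

    void feval() { y_val = x_val->transpose(); }    // transpose values
    void beval() { *x_adj += y_adj.transpose(); }   // transpose adjoints
};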

Jacobian

Can we get an example for when we have an array of functions rather than a single one? This is quite a common case, e.g. when we need to compute a Jacobian. I think I have some idea of how to do that, but I am not sure it would be efficient.
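A minimal sketch of one way to do it with only the README API from this page (Var, bind, autodiff, get_adj): one reverse pass per output, rebuilding the variables for each row under the assumption that freshly constructed Vars start with zero adjoints. It is correct but not efficient, which matches the concern above.

#include <fastad>
#include <iostream>

// Jacobian of F(x0, x1) = (x0 * x1, sin(x0)), one row per reverse pass.
int main()
{
    using namespace ad;
    double jac[2][2];
    for (int row = 0; row < 2; ++row) {
        Var<double, scl> x0(1.), x1(2.);  // fresh vars => zero adjoints
        if (row == 0) {
            auto expr = bind(x0 * x1);
            autodiff(expr);
        } else {
            auto expr = bind(sin(x0));
            autodiff(expr);
        }
        jac[row][0] = x0.get_adj();
        jac[row][1] = x1.get_adj();
    }
    std::cout << jac[0][0] << " " << jac[0][1] << "\n"   // 2 1
              << jac[1][0] << " " << jac[1][1] << "\n";  // cos(1) 0
}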

Delta Function

In practice, it may be useful to have delta functions.

Implement a delta function.

Remove CRTP and Replace with Concepts

There are many uses of CRTP, such as ADExpression, that can be simplified and made more robust using self-implemented concepts.

Types to consider:

  • ADExpression
  • LeafNode
  • UnaryNode
  • BinaryNode
  • EqNode
  • GlueNode
  • SumNode
  • ForEach
  • ProdNode

There are also meta-programming tools, such as is_glue_eq and glue_size, that should be changed or moved.
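A sketch of what a "self-implemented concept" could look like in C++17 (the standard the library currently targets), using the detection idiom; the probed members here are illustrative, not the actual node interface.

#include <type_traits>
#include <utility>

// Detects "expression-like" types by probing for illustrative members.
template <class T, class = void>
struct is_ad_expr : std::false_type {};

template <class T>
struct is_ad_expr<T, std::void_t<decltype(std::declval<T&>().feval()),
                                 decltype(std::declval<T&>().beval())>>
    : std::true_type {};

template <class T>
inline constexpr bool is_ad_expr_v = is_ad_expr<T>::value;

// Usage in place of a CRTP base constraint:
// static_assert(is_ad_expr_v<MyNode>, "MyNode must model an AD expression");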

Checkpoint

Create a new AD expression responsible for representing an already-computed AD expression.
