
calculus.jl's Introduction

Calculus.jl


Introduction

The Calculus package provides tools for working with the basic calculus operations of differentiation and integration. You can use the Calculus package to produce approximate derivatives by several forms of finite differencing, or to produce exact derivatives using symbolic differentiation. Numerical evaluation of definite integrals has moved out of the package (see the Numerical Integration section below).

API

Most users will want to work with a limited set of basic functions:

  • derivative(): Use this for functions from R to R
  • second_derivative(): Use this for functions from R to R
  • Calculus.gradient(): Use this for functions from R^n to R
  • hessian(): Use this for functions from R^n to R
  • differentiate(): Use this to perform symbolic differentiation
  • simplify(): Use this to perform symbolic simplification
  • deparse(): Use this to get the usual infix representation of an expression

Usage Examples

There are a few basic approaches to using the Calculus package:

  • Use finite-differencing to evaluate the derivative at a specific point
  • Use higher-order functions to create new functions that evaluate derivatives
  • Use symbolic differentiation to produce exact derivatives for simple functions

Direct Finite Differencing

using Calculus

# Compare with cos(0.0)
derivative(sin, 0.0)
# Compare with cos(1.0)
derivative(sin, 1.0)
# Compare with cos(pi)
derivative(sin, float(pi))

# Compare with [cos(0.0), -sin(0.0)]
Calculus.gradient(x -> sin(x[1]) + cos(x[2]), [0.0, 0.0])
# Compare with [cos(1.0), -sin(1.0)]
Calculus.gradient(x -> sin(x[1]) + cos(x[2]), [1.0, 1.0])
# Compare with [cos(pi), -sin(pi)]
Calculus.gradient(x -> sin(x[1]) + cos(x[2]), [float(pi), float(pi)])

# Compare with -sin(0.0)
second_derivative(sin, 0.0)
# Compare with -sin(1.0)
second_derivative(sin, 1.0)
# Compare with -sin(pi)
second_derivative(sin, float(pi))

# Compare with [-sin(0.0) 0.0; 0.0 -cos(0.0)]
hessian(x -> sin(x[1]) + cos(x[2]), [0.0, 0.0])
# Compare with [-sin(1.0) 0.0; 0.0 -cos(1.0)]
hessian(x -> sin(x[1]) + cos(x[2]), [1.0, 1.0])
# Compare with [-sin(pi) 0.0; 0.0 -cos(pi)]
hessian(x -> sin(x[1]) + cos(x[2]), [float(pi), float(pi)])

Higher-Order Functions

using Calculus

g1 = derivative(sin)
g1(0.0)
g1(1.0)
g1(float(pi))

g2 = Calculus.gradient(x -> sin(x[1]) + cos(x[2]))
g2([0.0, 0.0])
g2([1.0, 1.0])
g2([float(pi), float(pi)])

h1 = second_derivative(sin)
h1(0.0)
h1(1.0)
h1(float(pi))

h2 = hessian(x -> sin(x[1]) + cos(x[2]))
h2([0.0, 0.0])
h2([1.0, 1.0])
h2([float(pi), float(pi)])

Symbolic Differentiation

using Calculus

differentiate("cos(x) + sin(x) + exp(-x) * cos(x)", :x)
differentiate("cos(x) + sin(y) + exp(-x) * cos(y)", [:x, :y])

Numerical Integration

The Calculus package no longer provides routines for univariate numerical integration. Use QuadGK.jl instead.
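
A minimal example, assuming the QuadGK package is installed; quadgk returns the integral estimate together with an error bound:

using QuadGK

val, err = quadgk(sin, 0, pi)   # val ≈ 2.0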

Credits

Calculus.jl is built on contributions from:

  • John Myles White
  • Tim Holy
  • Andreas Noack Jensen
  • Nathaniel Daw
  • Blake Johnson
  • Avik Sengupta
  • Miles Lubin

And draws inspiration and ideas from:

  • Mark Schmidt
  • Jonas Rauch

calculus.jl's People

Contributors

ararslan, aviks, chrisrackauckas, dependabot[bot], eriktaubeneck, garrison, giordano, iainnz, inkydragon, ivarne, jakeconnor, jeff-regier, joehuchette, johnmyleswhite, jrevels, juliatagbot, keno, magistere, mlubin, musm, mzaffalon, powerdistribution, ranocha, rened, rgiordan, stevengj, tanmaykm, timholy, viralbshah, yuyichao


calculus.jl's Issues

Do you want to mentor Equations.jl for JSoC?

Hello Mr. White! This package shares similarities with the Equations package, and I have been helped by you previously, both through blog posts and discussion.

I am therefore wondering if you would like to mentor not only the development of Julia code but also the associated course creation and (developing) developer hiring within the enveloping project, hopefully to be featured in this year's JSoC.

As stated in the group: if the project is accepted for JSoC, all compensation will be pledged toward expanding the project, and you will be given veto power over any suggested expenses.

Support for complex numbers

It would be nice if the package supported complex numbers. It is probably difficult to check whether the derivative exists, and which path to take in the complex plane when computing it. However, if you make a default derivative path (for example, along the real axis), it should be good enough (I think) for DifferentialEquations.jl, where derivatives are used to calculate the Jacobian in implicit ODE integrators. (@ChrisRackauckas is using finite_difference! and finite_difference_jacobian! in DifferentialEquations.jl.)
If it were also possible to indicate the curve in the complex plane along which to take the derivative, that would be awesome. Thanks.

julia> using Calculus; derivative(sin,0.0)
0.9999999999938886
julia> derivative(sin,0.0 + 0im)
ERROR: MethodError: no method matching eps(::Type{Complex{Float64}})
Closest candidates are:
  eps(::Date) at dates/types.jl:231
  eps(::DateTime) at dates/types.jl:230
  eps() at float.jl:499
  ...
 in finite_difference(::Base.#sin, ::Complex{Float64}, ::Symbol) at /home/kosh/.julia/v0.5/Calculus/src/finite_difference.jl:27
 in derivative(::Function, ::Complex{Float64}) at /home/kosh/.julia/v0.5/Calculus/src/derivative.jl:10
 in eval_user_input(::Any, ::Base.REPL.REPLBackend) at ./REPL.jl:64
 in macro expansion at ./REPL.jl:95 [inlined]
 in (::Base.REPL.##3#4{Base.REPL.REPLBackend})() at ./event.jl:68

Proper `epsilon` for complex?

It seems that the finite difference routines would "just work" if epsilon were changed to the "correct value".
Unless there's some kind of norm argument, I would propose changing each part using the current epsilon: for example, if epsilon1 comes from real(x) and epsilon2 from imag(x), then epsilon = epsilon1 + im*epsilon2.

Would this be fine? This definition would make it safe for the case where a user is just "packing" floats into complex numbers, which is common in physical applications. If this is acceptable I'll make a PR; a sketch is below.
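
A minimal sketch of the proposed rule, keeping the forward-rule scaling per component (the helper name is illustrative, not part of Calculus.jl):

function complex_epsilon(x::Complex{T}) where T<:AbstractFloat
    e1 = sqrt(eps(T)) * max(one(T), abs(real(x)))   # step from the real part
    e2 = sqrt(eps(T)) * max(one(T), abs(imag(x)))   # step from the imaginary part
    return e1 + im * e2
end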

Error while determining the derivative at end points

I am new to Julia, so please pardon me if the following is too trivial. I am solving an ODE, and I am interested in the first and second derivatives of the solution. In my problem the derivatives go into the RHS of a second ODE; there is one-way coupling between the two sets of ODEs.

I tried using Calculus.derivative(u, x) etc. in the second ODE, where x is a vector and u is the solution from the first ODE. I get the error: "Solution interpolation cannot extrapolate before the first timepoint. Either start solving earlier or use the local extrapolation from the integrator interface."

I am guessing that the derivative is being calculated at the first point using central differences, hence the problem (we do not know the solution for x < x0 when the x-range runs from x0 to xf). Is there a way of extrapolating the derivative calculated at points close to x0 (with a similar procedure at the end point), while the derivative at the rest of the points is determined using central differences? The error message suggests a way using the "integrator interface", but I do not understand this very well.

I just checked a similar function in MATLAB, and it appears that the "gradient" function there calculates the central-difference numerical derivative, except at the edges, where it calculates a one-sided derivative (see the sketch below). Is this the issue here?

PS: I have raised the same issue under JuliaMath/DifferentialEquations.jl. I am not sure which of the two places is the right one for this question.
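
For reference, a minimal sketch of that MATLAB-style scheme for sampled data u on a grid x: central differences in the interior, one-sided differences at the endpoints (an illustrative helper, not part of Calculus.jl):

function sampled_gradient(u::AbstractVector, x::AbstractVector)
    n = length(u)
    g = similar(u, Float64)
    g[1] = (u[2] - u[1]) / (x[2] - x[1])              # forward at the left edge
    g[n] = (u[n] - u[n-1]) / (x[n] - x[n-1])          # backward at the right edge
    for i in 2:n-1
        g[i] = (u[i+1] - u[i-1]) / (x[i+1] - x[i-1])  # central in the interior
    end
    return g
end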

Calculus.hessian is not symmetric

julia> @time h10 = Calculus.hessian(RB.mylike, RB.troubleParamsSim1.raw)
 209.557745 seconds (2.43 G allocations: 83.867 GB, 10.31% gc time)
 10x10 Array{Float64,2}:
    566.715     -206.282      -50.7658   …    3135.03        -89.8427
   -206.282      566.715       17.7032       -1993.75         47.1135
    -50.7658      17.7032     143.437        -1163.21         33.7016
     17.7032     -50.7658     -47.7584         282.533         1.3897
    182.291     -122.875      -12.5876       -2943.26         27.7885
     -2.95777     -5.09084      2.97638  …    -452.285        15.1723
    -45.7494      12.5752      39.2448           0.613878     24.8811
  -1942.38       804.496      450.315       -35319.9        1935.28
   3135.03     -1993.75     -1163.21         56057.6       -2498.87
    -89.8427      47.1135      33.7016       -2498.87       1041.54

 julia> issym(h10)
 false

Closer inspection of h10 shows all the errors are extremely small; the largest is 4e-8.

My original assumption was that this was the result of numerical noise from calculating the derivatives in different orders. But the code (finite_difference_hessian! in finite_difference.jl) appears to compute only the upper triangle,

   for i = 1:n
       #code omitted
        # and i+1:n appears to parse as (i+1):n, which is right
        for j = i+1:n

and then ends with

    Base.LinAlg.copytri!(H,'U')

So maybe the problem is in copytri!.
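
A cheap workaround while the cause is investigated: since the asymmetry is at the level of round-off noise (~4e-8 here), symmetrize the result explicitly (a sketch for current Julia):

using LinearAlgebra

Hsym = Symmetric((h10 + h10') / 2)
issymmetric(Hsym)   # true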

Julia Version 0.3.0-prerelease using "pi" gives an error

Hi,
Julia Version 0.3.0-prerelease+3679 (2014-06-14 10:06 UTC) Commit d0eac34* (1 day old master) x86_64-linux-gnu
Just loaded Calculus, trying to reproduce the README examples. Using numbers seems to work OK:
julia> # Compare with cos(1.0)
derivative(x -> sin(x), 1.0)
0.5403023058631036
But "pi" is a problem:
julia> # Compare with cos(pi)
derivative(x -> sin(x), pi)
ERROR: no method eps(Type{MathConst{:π}})
in finite_difference at /home/osman/.julia/v0.3/Calculus/src/finite_difference.jl:27
in derivative at /home/osman/.julia/v0.3/Calculus/src/derivative.jl:12

Am I missing some important library?
Thanks in advance,
Osman

Tag a new release

It would be great if someone could tag a new release. There are a lot of deprecation warnings emitted from the current release that have been fixed on master.

stabilize derivative_rules?

The DualNumbers package now uses Calculus's unexported derivative_rules. It seems like this list could be useful to other packages in general, so I'd propose exporting it, or at least promising to keep it as stable as possible.

cc: @ivarne @scidom

Incremental compilation may be broken for this module?

I'm using Calculus.jl in FEMBasis.jl to calculate partial derivatives of interpolation polynomials symbolically before code generation. During precompilation of the package, I get a lot of the following messages:

WARNING: eval from module Calculus to FEMBasis:    
Expr(:call, :*, 0.5, 1)
  ** incremental compilation may be broken for this module **
WARNING: eval from module Calculus to FEMBasis:    
Expr(:call, :*, 0.5, 1)
  ** incremental compilation may be broken for this module **
WARNING: eval from module Calculus to FEMBasis:    
Expr(:call, :*, -0.5, 1)
  ** incremental compilation may be broken for this module **
# ... continues thousands of lines ...

For whole build log, see https://travis-ci.org/JuliaFEM/FEMBasis.jl/jobs/418381139.

Here are the lines where I use Calculus:

Is this an issue with FEMBasis or with Calculus? How do I solve it, or should I even be worried about it?

Macro to differentiate function symbolically

Given a function that is simple enough, I'd like to be able to differentiate it symbolically by simply putting a macro in front of it. It could work something like this:

cubefunc_diff = @differentiate function cubefunc(x)
    return x * x * x
end

# cubefunc_diff(3) would return derivative evaluated at 3

I've started trying to implement this myself, but I'm not sure how best to proceed, or whether what I'm proposing is even a good idea. Would this be useful? A sketch of one possible approach follows.
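
A hedged sketch of how such a macro could work for single-argument, short-form definitions (the implementation and naming are illustrative, not an existing API):

using Calculus

macro differentiate(ex)
    ex.head == :(=) || error("expected a short-form definition, e.g. f(x) = x * x * x")
    sig, rhs = ex.args
    fname, var = sig.args[1], sig.args[2]
    # Short-form definitions wrap the body in a block; unwrap it.
    body = (rhs isa Expr && rhs.head == :block) ?
           last(filter(a -> !(a isa LineNumberNode), rhs.args)) : rhs
    dbody = Calculus.differentiate(body, var)
    esc(:($(Symbol(fname, "_diff"))($var) = $dbody))
end

@differentiate cubefunc(x) = x * x * x
cubefunc_diff(3)   # 27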

differentiate error

For some reason:

differentiate(:((-2.0 * (1 - a)^-2.0)),:b)
ERROR: no method +()
in simplify at /home/zac/.julia/v0.3/Calculus/src/symbolic.jl:97
in simplify at /home/zac/.julia/v0.3/Calculus/src/symbolic.jl:101
in differentiate at /home/zac/.julia/v0.3/Calculus/src/differentiate.jl:20

Adding

+() = 0

fixes this, but I'm not quite sure what's failing here.

[PkgEval] Calculus may have a testing issue on Julia 0.2 (2014-06-16)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.2

  • On 2014-06-15 the testing status was Tests pass.
  • On 2014-06-16 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

INFO: Cloning cache of Calculus from git://github.com/johnmyleswhite/Calculus.jl.git
INFO: Installing Calculus v0.1.4
INFO: REQUIRE updated.
ERROR: test error during :((norm(-(Calculus.finite_difference_hessian(fx,[0.0,0.0]),[$(Expr(:row, :(-(sin(0.0))), 0.0)),$(Expr(:row, 0.0, :(-(cos(0.0)))))]))<0.001))
copytri! not defined
 in finite_difference_hessian! at /home/idunning/pkgtest/.julia/v0.2/Calculus/src/finite_difference.jl:265
 in finite_difference_hessian at /home/idunning/pkgtest/.julia/v0.2/Calculus/src/finite_difference.jl:276
 in anonymous at test.jl:53
 in do_test at test.jl:37
 in include at boot.jl:238
at /home/idunning/pkgtest/.julia/v0.2/Calculus/test/finite_difference.jl:53
at /home/idunning/pkgtest/.julia/v0.2/Calculus/test/runtests.jl:20
INFO: REQUIRE updated.

Calculus.gradient return an Array of Float64 instead of Array of Type T

The Calculus module is great for performing finite differences for the derivative, Jacobian, and Hessian.

But when I tried to use it with my own floating-point number type, it failed, and I tracked down why.

julia> using PDFPs

julia> Base.float(x::PDFP) = x

julia> using Calculus

julia> Calculus.derivative( x->x^2 , PDFP(7) )  # returns 14
PDFP(0, 1, [1, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

julia> Calculus.gradient(x -> sin(x[1]) + cos(x[2]), [PDFP(0), PDFP(0)])
2-element Array{Float64,1}:
 0.99999999999
 0.0 

Now Calculus.gradient returns an Array of Float64 instead of an Array of PDFP.

I have tracked the problem down to https://github.com/JuliaMath/Calculus.jl/blob/master/src/finite_difference.jl.

The problem is in the line "g = Vector{Float64}(undef, length(x))":

function finite_difference(f,
                           x::AbstractVector{T},
                           dtype::Symbol = :central) where T <: Number
    # Allocate memory for gradient
    g = Vector{Float64}(undef, length(x))

    # Mutate allocated gradient
    finite_difference!(f, float(x), g, dtype)

    # Return mutated gradient
    return g
end

I think it should have been

g = Vector{eltype(x)}(undef, length(x))

Remove call to `eval` in `simplify`

I have some code where I want to use Calculus.jl to differentiate symbolic expressions inside a @generated function. This currently does not work because after differentiating Calculus.jl calls simplify, which in turn often ends up calling eval. As of right now @generated functions are not allowed to call eval.

I'm opening this issue to see if those who know the internals here better think it would be possible/feasible to remove the call to eval inside simplify?

ref: EconForge/Dolang.jl#31 (comment)

finite_difference routines temporarily mutate inputs

tl;dr: The finite_difference methods for the gradient and Jacobian (but not the Hessian) temporarily mutate the input vector x in order to compute the derivatives. I think this is unwise, since there is no way to know inside these functions whether the input function f holds a reference to x, and the user will not suspect this will happen. A copy of x should be made to make these routines safer.

More details for those interested:

This recently happened to me and it was difficult to track down. It occurred because I called the curve_fit routine from LsqFit.jl:

function curve_fit(model::Function, xpts, ydata, p0; kwargs...)

and inputted the same array for ydata and p0. I know this seems unusual but it makes sense for the problem I was solving (I can provide details if you want). This caused the fit to fail because curve_fit makes a function that includes ydata as a closure that is used in finite_difference_jacobian!. I don't see how a user of curve_fit would suspect that somewhere under the hood an input array will be mutated (even temporarily).

I see that the Calculus.jl routines are going to be superseded by FiniteDiff.jl, and this concern affects routines there as well (for computing the gradient and hessian; I don't see a jacobian routine there).

Thanks for your consideration.
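
A sketch of the proposed fix: perturb a private copy, so that any reference to x held inside f (e.g. via a closure, as in curve_fit) never observes the mutation (illustrative, not the package's code):

function safe_central_gradient(f, x::AbstractVector{T}) where T<:AbstractFloat
    xt = copy(x)                 # never touch the caller's array
    g = similar(x)
    for i in eachindex(x)
        h = cbrt(eps(T)) * max(one(T), abs(x[i]))
        xt[i] = x[i] + h
        fp = f(xt)
        xt[i] = x[i] - h
        fm = f(xt)
        g[i] = (fp - fm) / (2h)
        xt[i] = x[i]             # restore before the next coordinate
    end
    return g
end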

Gradients, second derivatives and hessians at pi not working

You might need some fixes with README 😄
Related issues #40, #52: float(pi) could fix this:

julia> using Calculus

julia> h1 = second_derivative(sin)
#6 (generic function with 1 method)

julia> h1(0.0)
0.0

julia> h1(1.0)
-0.841471649579559

julia> h1(pi)
ERROR: MethodError: no method matching eps(::Type{Irrational{:π}})
Closest candidates are:
  eps(::Dates.Time) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.3\Dates\src\types.jl:387
  eps(::Dates.Date) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.3\Dates\src\types.jl:386
  eps(::Dates.DateTime) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.3\Dates\src\types.jl:385
  ...
Stacktrace:
 [1] finite_difference(::Calculus.var"#1#3"{typeof(sin),Symbol}, ::Irrational{:π}, ::Symbol) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\finite_difference.jl:27
 [2] finite_difference_hessian(::Function, ::Function, ::Irrational{:π}, ::Symbol) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\finite_difference.jl:224
 [3] (::Calculus.var"#6#8"{typeof(sin),Calculus.var"#1#3"{typeof(sin),Symbol},Symbol})(::Irrational{:π}) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\derivative.jl:27
 [4] top-level scope at REPL[5]:1

julia> h1(float(pi))
-1.3553766145945872e-7

julia> g2 = Calculus.gradient(x -> sin(x[1]) + cos(x[2]))
#2 (generic function with 1 method)

julia> g2([0.0, 0.0])
2-element Array{Float64,1}:
 0.9999999999868854
 0.0

julia> g2([1.0, 1.0])
2-element Array{Float64,1}:
  0.5403023058631036
 -0.8414709847974693

julia> g2([pi, pi])
ERROR: MethodError: no method matching Irrational{:π}(::Int64)
Closest candidates are:
  Irrational{:π}(::T) where T<:Number at boot.jl:718
  Irrational{:π}() where sym at irrationals.jl:18
  Irrational{:π}(::Complex) where T<:Real at complex.jl:37
  ...
Stacktrace:
 [1] convert(::Type{Irrational{:π}}, ::Int64) at .\number.jl:7
 [2] zero(::Type{Irrational{:π}}) at .\number.jl:266
 [3] float(::Array{Irrational{:π},1}) at .\float.jl:896
 [4] (::Calculus.var"#2#4"{var"#3#4",Symbol})(::Array{Irrational{:π},1}) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\derivative.jl:5
 [5] top-level scope at REPL[9]:1

julia> h2 = hessian(x -> sin(x[1]) + cos(x[2]))
#7 (generic function with 1 method)

julia> h2([0.0, 0.0])
2×2 Array{Float64,2}:
 -7.5693e-7   0.0     
  0.0        -0.999999

julia> h2([1.0, 1.0])
2×2 Array{Float64,2}:
 -0.841472   0.0
  0.0       -0.540303

julia> h2([pi, pi])
ERROR: MethodError: no method matching Irrational{:π}(::Int64)
Closest candidates are:
  Irrational{:π}(::T) where T<:Number at boot.jl:718
  Irrational{:π}() where sym at irrationals.jl:18
  Irrational{:π}(::Complex) where T<:Real at complex.jl:37
  ...
Stacktrace:
 [1] convert(::Type{Irrational{:π}}, ::Int64) at .\number.jl:7
 [2] zero(::Type{Irrational{:π}}) at .\number.jl:266
 [3] float(::Array{Irrational{:π},1}) at .\float.jl:896
 [4] (::Calculus.var"#2#4"{var"#5#6",Symbol})(::Array{Irrational{:π},1}) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\derivative.jl:5
 [5] finite_difference_jacobian(::Calculus.var"#2#4"{var"#5#6",Symbol}, ::Array{Irrational{:π},1}, ::Symbol) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\finite_difference.jl:197
 [6] finite_difference_hessian(::Function, ::Function, ::Array{Irrational{:π},1}, ::Symbol) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\finite_difference.jl:284
 [7] (::Calculus.var"#7#9"{var"#5#6",Calculus.var"#2#4"{var"#5#6",Symbol},Symbol})(::Array{Irrational{:π},1}) at C:\Users\islent\.julia\packages\Calculus\mbqhh\src\derivative.jl:29
 [8] top-level scope at REPL[15]:1

Error when taking derivative of pi

julia> derivative(x -> sin(x), pi)

ERROR: no method eps(DataType,)
in finite_difference at /Users/smackesey/.julia/Calculus/src/finite_difference.jl:27
in derivative at /Users/smackesey/.julia/Calculus/src/derivative.jl:12

This also happens when you pass an integer in place of pi.

replace methods

Hi,
Most mathematical languages have various replace methods and rules for working with expressions; see, e.g., refs [1] and [2].
What do you think? Would it be useful to add some here?
Please see my work in ref [3]; if possible, I would like to push it here.
Thanks

references:
1- http://reference.wolfram.com/language/tutorial/ApplyingTransformationRules.html
2- http://reference.wolfram.com/language/ref/Replace.html
3- https://github.com/DANA-Laboratory/ThermophysicalCalculation.jl/blob/master/src/Replace.jl

Second derivative of e^x and further

I haven't looked at the source to check what's probably happening, but it seems like there's some weird problem with repeated differentiation of e^x, for example where x = 1:

julia> f(x) = e^x
# methods for generic function f
f(x) at none:1

julia> f'(1.0)
2.7182818284828385

julia> f''(1.0)
2.718281392750713 # should still be e, losing some precision

julia> f'''(1.0)
2.250004016796959 # worsens

julia> f''''(1.0)
-41284.749998126055
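
Each prime is another central difference with a step of roughly cbrt(eps()), so truncation and round-off error compound at every order; by the third and fourth derivatives the result is dominated by noise. For simple closed-form functions, symbolic differentiation avoids the problem (a hedged alternative, writing exp(x) for e^x):

using Calculus

differentiate("exp(x)", :x)   # :(exp(x)), exact at every order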

Calculus breaks Base.gradient function with ambiguous methods

Since a recent update, there seems to be a problem with the gradient function.

The following works fine:

julia> gradient([1:3;])
3-element Array{Float64,1}:
 1.0
 1.0
 1.0

julia> using Calculus

julia> gradient([1:3;])
3-element Array{Float64,1}:
 1.0
 1.0
 1.0

However, if Calculus is used before calling gradient, you get

julia> using Calculus

julia> gradient([1:3;])
ERROR: MethodError: gradient(::Array{Int64,1}, ::Array{Int64,1}) is ambiguous. Candidates:
  gradient(f, x::Union{Array{T,1}, T}) where T<:Number in Calculus at /home/david/.julia/v0.6/Calculus/src/derivative.jl:17
  gradient(F::AbstractArray{T,1} where T, h::Array{T,1} where T) in Base.LinAlg at linalg/dense.jl:186
Possible fix, define
  gradient(::AbstractArray{T,1} where T, ::Array{T<:Number,1})
Stacktrace:
 [1] gradient(::Array{Int64,1}) at ./linalg/generic.jl:282

Even directly calling the function from base with Base.LinAlg.gradient throws the above error.
I am using v"0.3.1", but this also happens when pinning to v"0.3.0". Older versions fail to precompile with Julia 0.6.2.

analytic Jacobian

I wrote counterparts to the existing finite_difference_jacobian for the case where the function of interest is analytically differentiable. Happy to submit a PR if there is interest; I would just need to know the best place to put it.

function analytic_jacobian(fdot::Vector{Function}, x::Vector{T}) where T<:Number
    f_x = fdot[1](x)
    J = Matrix{Float64}(undef, length(f_x), length(x))
    J[:, 1] = f_x
    for i = 2:length(fdot)
        J[:, i] = fdot[i](x)
    end
    J
end

"""
    `jacobian(fdot::Vector{Function}) -> g(x::Vector)`

Given a function `f` whose partial derivatives are `fdot`, return a function
`g(x)` which itself returns the Jacobian of `f` at `x`.
"""
function jacobian(fdot::Vector{Function})
    g(x::Vector) = analytic_jacobian(fdot, x)
    return g
end

Chain Rule

I think there is a bug related to the chain rule.
This could be a nice tool for methods which use gradients.

differentiate(:((X - MU)^2), :X)
:(^(-(X,MU),2))
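
For reference, the chain rule gives d/dX (X - MU)^2 = 2 * (X - MU), so (up to simplification) the expected output is:

differentiate(:((X - MU)^2), :X)   # expected: :(2 * (X - MU))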

problems with finite difference

I think there is a problem with the central-difference code.
Here is a simple example:

julia> Calculus.derivative(x -> x/(x+ 1.4424183196362515e-9),2e-8,:central)
-39.33717713979761
julia> Calculus.derivative(x -> x/(x+ 1.4424183196362515e-9),2e-8,:forward)
1.8509290259770826e6

We know that the analytical derivative is

1.4424183196362515e-9/(x + 1.4424183196362515e-9)^2

which at x = 2e-8 is 3.137210795286552e6, similar to the forward-difference result (at least the same order of magnitude), but there is a 5-orders-of-magnitude difference compared to the central-difference result, which is supposed to be 2nd-order accurate!
I looked through the source code, and I think the problem is with the macro @centralrule

julia> @centralrule 2e-8 epsilon
6.0554544523933395e-6

in comparison

julia> @forwardrule 2e-8 epsilon
1.4901161193847656e-8

The step given by @centralrule is clearly unacceptably large (2 orders of magnitude greater than x!). I don't quite understand why the step in @centralrule is taken as the cube root of eps() while @forwardrule uses the square root of eps().

macro forwardrule(x, e)
    x, e = esc(x), esc(e)
    quote
        $e = sqrt(eps(eltype($x))) * max(one(eltype($x)), abs($x))
    end
end

macro centralrule(x, e)
    x, e = esc(x), esc(e)
    quote
        $e = cbrt(eps(eltype($x))) * max(one(eltype($x)), abs($x))
    end
end

In this case an appropriate step should be smaller than about 1e-10, which neither of these two methods gives. Why not just use eps() as the step size?
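
A hedged note on the design, with a small demo (not the package's rule): a forward difference has total error O(h) + O(eps/h), minimized near h ~ sqrt(eps); a central difference has error O(h^2) + O(eps/h), minimized near h ~ cbrt(eps). The real problem above is the max(one(...), abs(x)) scaling: for x = 2e-8 the step is scaled by 1 rather than by |x|, so it dwarfs x itself.

f = x -> x / (x + 1.4424183196362515e-9)
x = 2e-8
h = cbrt(eps()) * abs(x)       # scale by |x| instead of max(1, |x|)
(f(x + h) - f(x - h)) / (2h)   # ≈ 3.137e6, matching the analytic derivative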

[PkgEval] Calculus may have a testing issue on Julia 0.4 (2015-06-23)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.4

  • On 2015-06-22 the testing status was Tests pass.
  • On 2015-06-23 the testing status changed to Tests fail.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

>>> 'Pkg.add("Calculus")' log
INFO: Installing Calculus v0.1.8
INFO: Package database updated

>>> 'Pkg.test("Calculus")' log
INFO: Testing Calculus
Running tests:
 * finite_difference
ERROR: LoadError: LoadError: syntax: local declaration in global scope
 in include at ./boot.jl:254
 in include_from_node1 at ./loading.jl:133
 in anonymous at no file:19
 in include at ./boot.jl:254
 in include_from_node1 at loading.jl:133
 in process_options at ./client.jl:304
 in _start at ./client.jl:404
while loading /home/vagrant/.julia/v0.4/Calculus/test/finite_difference.jl, in expression starting on line 73
while loading /home/vagrant/.julia/v0.4/Calculus/test/runtests.jl, in expression starting on line 17

==============================[ ERROR: Calculus ]===============================

failed process: Process(`/home/vagrant/julia/bin/julia --check-bounds=yes --code-coverage=none --color=no /home/vagrant/.julia/v0.4/Calculus/test/runtests.jl`, ProcessExited(1)) [1]

================================================================================
INFO: No packages to install, update or remove
ERROR: Calculus had test errors
 in error at ./error.jl:21
 in test at pkg/entry.jl:746
 in anonymous at pkg/dir.jl:31
 in cd at file.jl:22
 in cd at pkg/dir.jl:31
 in test at pkg.jl:71
 in process_options at ./client.jl:280
 in _start at ./client.jl:404


>>> End of log

Error with numerical integration

Obviously, these don't add up correctly:

julia> f(x) = x^3*exp(-x^2/2)/2
f (generic function with 1 method)
julia> integrate(f,0,20)
1.0000000000107998
julia> integrate(f,20,30)
4.30545586095874e-84
julia> integrate(f,0,30)
1.3729357524611979e-9
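
Cross-checking with QuadGK (the recommended replacement; assumed installed) gives consistent values:

using QuadGK

f(x) = x^3 * exp(-x^2 / 2) / 2
quadgk(f, 0, 30)[1]   # ≈ 1.0, consistent with quadgk(f, 0, 20)[1] + quadgk(f, 20, 30)[1]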

Tag a new release to include 0.6 depwarn fixes

Many packages depend on this one, and their testing logs get a bunch of depwarns stemming from this package since there hasn't been a release since the depwarns were fixed. If someone could make such a release, that would be awesome!

functionality for the derivative of atan2

I am a JuMP user who wanted to be able to use the atan2 function, and I noticed that adding this functionality was on someone's radar here. I am wondering: is this straightforward to do, or would it be difficult to do and to extend to JuMP, since atan2 is a function of two variables? @mlubin, what do you think?

Symbolic derivatives of sind, cosd, and friends are off by a factor of pi/180

derivative_rules in differentiate.jl contains the following entry:

( :sind, :(  xp * cosd(x) ))

This should be

( :sind, :(  xp * pi / 180 * cosd(x) ))

Here's some numerical evidence:

julia> sind(0.00001)
1.745329251994321e-7

julia> 0.00001*cosd(0.0)
1.0e-5

julia> 0.00001*pi/180*cosd(0.0)
1.7453292519943297e-7

It looks like the a* functions already take this factor into account, but all the other *d functions also need updating (I think).

Simplify(::Expr) failed

In v0.1.5, simplify cannot handle Float64(NaN). Check this code:

using Calculus
simplify(:(1 - 0/0))

thanks

Bump Calculus SHA1; split 0.1 version

The SHA1 for Calculus in METADATA is a couple of months old. The symbolic differentiation tests fail on that codebase with the latest Julia.

$julia run_tests.jl 
Running tests:
 * test/finite_difference.jl
 * test/derivative.jl
 * test/check_derivative.jl
 * test/integrate.jl
 * test/symbolic.jl
ERROR: assertion failed: :(isequal(differentiate(:(+(x,x)),:x),2))
 in error at error.jl:22
 in include_from_node1 at loading.jl:92
 in anonymous at no file:19
 in include_from_node1 at loading.jl:92
 in process_options at client.jl:250
 in _start at client.jl:329
at /Users/aviks/.julia/Calculus/test/symbolic.jl:7
at /Users/aviks/.julia/Calculus/run_tests.jl:20

However, it may be necessary to split off a Julia 0.1 version before bumping to the current head.

Parentheses in 'deparse' output

Parentheses are not correctly placed in the deparsed expression:

julia> deparse(differentiate(:(x/log(x)), :x))
"log(x) - x * 1 / x / log(x) ^ 2"

simplify give wrong result

Hey,
simplify(:(a-0)) gives -a rather than a

due to the reinterpretation of Expr(:call, :-, :a) after the zeros are stripped.

Prime notation not working

Example code copied from README not working (julia-1.3.0, Calculus-0.5.1):

julia> using Calculus

julia> f(x) = sin(x)
f (generic function with 1 method)

julia> f'(1.0) - cos(1.0)
ERROR: MethodError: no method matching adjoint(::typeof(f))
Closest candidates are:
  adjoint(::Missing) at missing.jl:100
  adjoint(::Number) at number.jl:193
  adjoint(::LinearAlgebra.Adjoint) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.3\LinearAlgebra\src\adjtrans.jl:152
  ...
Stacktrace:
 [1] top-level scope at REPL[3]:1

julia> f''(1.0) - (-sin(1.0))
ERROR: MethodError: no method matching adjoint(::typeof(f))
Closest candidates are:
  adjoint(::Missing) at missing.jl:100
  adjoint(::Number) at number.jl:193
  adjoint(::LinearAlgebra.Adjoint) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.3\LinearAlgebra\src\adjtrans.jl:152
  ...
Stacktrace:
 [1] top-level scope at REPL[6]:1
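
The f' syntax relies on Calculus overloading adjoint for Function, which the installed version evidently does not do (see the type-piracy issue below). Calling derivative directly sidesteps the problem:

using Calculus

f(x) = sin(x)
derivative(f)(1.0) - cos(1.0)          # ≈ 0
second_derivative(f)(1.0) + sin(1.0)   # ≈ 0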

Using AbstractArray instead of Array in finite_difference_jacobian!() ?

I attempted to use curve_fit from LsqFit.jl on data stored in a DataFrame column, as in curve_fit(model, df[:x], df[:y], p0). This produced the following error:

LoadError: MethodError: `finite_difference_jacobian!` has no method matching finite_difference_jacobian!(::Function, ::Array{Float64,1}, ::DataArrays.DataArray{Float64,1}, ::Array{Float64,2}, ::Symbol)
Closest candidates are:
  finite_difference_jacobian!{R<:Number,S<:Number,T<:Number}(::Function, ::Array{R<:Number,1}, !Matched::Array{S<:Number,1}, ::Array{T<:Number,N}, ::Symbol)
  finite_difference_jacobian!{R<:Number,S<:Number,T<:Number}(::Function, ::Array{R<:Number,1}, !Matched::Array{S<:Number,1}, ::Array{T<:Number,N})

Of course this is easily resolved by explicitly converting the inputs to Arrays, but it's a bit cumbersome. I'm pretty new to Julia and its type system, but from looking at the code there's no obvious reason to me why finite_difference_jacobian! couldn't work with its signature changed to use AbstractArray and AbstractVector, removing the need to convert DataArrays. Is this reasonable? If there's no reason not to do it, I can try to test it out and make a pull request when I have time. A sketch of the idea follows.
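
A hedged sketch of a central-difference Jacobian written against AbstractVector, showing that nothing in the algorithm requires a plain Array (names are illustrative, not the package's API):

function fd_jacobian(f, x::AbstractVector{T}) where T<:Real
    fx = f(x)
    J = Matrix{float(T)}(undef, length(fx), length(x))
    xt = float.(collect(x))                  # dense working copy
    for j in eachindex(xt)
        h = sqrt(eps(float(T))) * max(one(float(T)), abs(xt[j]))
        xj = xt[j]
        xt[j] = xj + h
        fp = f(xt)
        xt[j] = xj - h
        fm = f(xt)
        J[:, j] = (fp .- fm) ./ (2h)
        xt[j] = xj                           # restore this coordinate
    end
    return J
end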

Disallow second_derivative and hessian functions to be called without g

Currently, derivative.jl has the following two definitions:

second_derivative(f::Function) = second_derivative(f, derivative(f), :scalar, :central)
hessian(f::Function) = second_derivative(f, gradient(f), :vector, :central)

However, we discussed that estimating the first derivative in order to hand it to the second derivative is error-prone and subject to possibly large round-off error.

I think it's simple enough for a user to write second_derivative(f, derivative(f)) explicitly if they want to take this risk.

Thoughts?
@johnmyleswhite

Derivative of `abs`

Is there a reason that abs is not supported by differentiate, or is it just an omission?

norm(matrix) -> opnorm(matrix) in Julia 0.7

It looks like this package may be using norm(matrix). In Julia 0.7, this will compute the Frobenius norm (vecnorm in Julia 0.6), due to JuliaLang/julia#27401. If you want the induced/operator norm as in Julia 0.6, use opnorm(matrix) instead, or Compat.opnorm(matrix) to work in both 0.6 and 0.7 (JuliaLang/Compat.jl#577).

Note that, for testing purposes, rather than @test norm(A - B) ≤ tol, it is usually preferred to do @test A ≈ B or @test A ≈ B rtol=... (which uses isapprox).

[PackageEvaluator.jl] Your package Calculus may have a testing issue.

This issue is being filed by a script, but if you reply, I will see it.

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3).

The results of this script are used to generate a package listing enhanced with testing results.

The status of this package, Calculus, on...

  • Julia 0.2 is 'Tests pass.' PackageEvaluator.jl
  • Julia 0.3 is 'Tests fail, but package loads.' PackageEvaluator.jl

'No tests, but package loads.' can be due to there being no tests (you should write some if you can!) but can also be due to PackageEvaluator not being able to find your tests. Consider adding a test/runtests.jl file.

'Package doesn't load.' is the worst-case scenario. Sometimes this arises because your package doesn't have BinDeps support, or needs something that can't be installed with BinDeps. If this is the case for your package, please file an issue and an exception can be made so your package will not be tested.

This automatically filed issue is a one-off message. Starting soon, issues will only be filed when the testing status of your package changes in a negative direction (gets worse). If you'd like to opt-out of these status-change messages, reply to this message.

Hessian low precision when specific values are small

I have a 1D system x coupled to a harmonic oscillator y with a frequency of omega, so f_yy should be omega^2.

However, when the frequency of the harmonic oscillator is fairly small, like 5e-5, the H[2, 2] returned from Calculus.hessian is 0, though Calculus.second_derivative still gives the right result.

Simple code to reproduce the problem

using Calculus

# parameters
omega = 5e-5
chi = 0.02
a = 1.9657
b = 25.2139
c = 9.04337

# potential
v(x) = 4.254987695360661e-5x^4 - 0.00278189309952x^2
# coupling
inter(x, q) = sqrt(2omega) * chi * (a*x-b*tanh(x/c)) * q
# harmonic oscillator 
vp(q) = 0.5 * omega^2 * q^2
# total potential
u(x, q) = inter(x, q) + v(x) + vp(q)
# hessian
h = Calculus.hessian(x -> u(x[1], x[2]))
# print
println(h([0.0, 0.0]), " hessian")
println(Calculus.second_derivative(vp, 0.0), " 2nd derivative")

The output is

[-0.005563827244039653 -0.0001644434349728066; -0.0001644434349728066 0.0] hessian
2.5e-9 2nd derivative

The output of the first println is [-0.005563827244039653 -0.0001644434349728066; -0.0001644434349728066 0.0] (H[2, 2] is 0), while the second gives the right value, 2.5e-9.

For the time being, I just replace the element h[2, 2] = omega^2 as a work-around.

I understand that the value of vp gets really, really small, so it's kind of difficult to obtain numerical derivatives, but why would second_derivative work where hessian doesn't?

Differentiate polygamma

I would like to add support for differentiating the polygamma function; see JuliaDiff/ForwardDiff.jl#196. Do you see any difficulties? I know that one needs to access the first argument of polygamma, so I can't just add a single line to differentiate.jl. It seems the Bessel functions have the same structure. The comment in https://github.com/johnmyleswhite/Calculus.jl/blob/master/src/differentiate.jl#L264 even mentions the digamma function as a function to add. So is there any stumbling block? I'd be willing to send a PR with a little guidance.

Tests of differentiate() compares expressions too strictly

Correct changes to the implementations of differentiate() and simplify() break tests if the order of arguments is changed.

julia> isequal(:(a + b), :(b + a))
false

The tests should use a less strict comparison function for expressions than isequal, so that it will be easier to write tests and to modify the code without having to modify the tests. It is very easy to make mistakes when updating tests to new behaviour. A sketch of such a comparison follows.
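
A hedged sketch of one such looser check: compare the expressions numerically at random points instead of structurally (the helper name and tolerance are illustrative):

function exprs_equivalent(e1, e2, vars::Vector{Symbol}; trials = 10)
    for _ in 1:trials
        binds = [:($v = $(randn())) for v in vars]
        r1 = Base.eval(Main, Expr(:let, Expr(:block, binds...), e1))
        r2 = Base.eval(Main, Expr(:let, Expr(:block, binds...), e2))
        isapprox(r1, r2; atol = 1e-8) || return false
    end
    return true
end

exprs_equivalent(:(a + b), :(b + a), [:a, :b])   # true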

New deparse.jl

I've written a much more comprehensive deparse function, partly as a means of learning Julia. Before I go to the effort of packaging it for a pull request, have I reinvented the wheel? Perhaps it ought to be in another package?

Adjoint overload for `::Function` is type piracy

if isdefined(Base, :adjoint)
    Base.adjoint(f::Function) = derivative(f)
else
    Base.ctranspose(f::Function) = derivative(f)
end

This is type piracy. Zygote now similarly commits type piracy (FluxML/Zygote.jl#260), leading to warnings like

WARNING: Method definition adjoint(Function) in module Zygote at /home/twan/.julia/packages/Zygote/fuj2C/src/compiler/interface.jl:50 overwritten in module Calculus at /home/twan/.julia/packages/Calculus/2qBLt/src/derivative.jl:33.
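
A piracy-free alternative (a sketch, not the package's current design): put the overload on a package-owned wrapper type instead of on Base's Function.

using Calculus: derivative

struct Differentiable{F} <: Function
    f::F
end
(d::Differentiable)(x) = d.f(x)
Base.adjoint(d::Differentiable) = Differentiable(derivative(d.f))

Differentiable(sin)'(1.0)   # ≈ cos(1.0)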
