SpM-lab / SparseIR.jl

On-the-fly computation of IR basis functions
Home Page: https://spm-lab.github.io/SparseIR.jl
License: MIT License
Related to #12. FiniteTempBasis and the sampling classes are parametric types:

https://github.com/SpM-lab/SparseIR.jl/blob/main/src/basis.jl#L168
https://github.com/SpM-lab/SparseIR.jl/blob/main/src/sampling.jl#L34

I like the type stability of this design, but I feel that we expose too many internal type parameters to the user. As a result, FiniteTempBasis objects for different kernels have different types even though they share the same interface. Is it possible to reduce the number of exposed internal type parameters while keeping the type stability of the interface?
In my opinion, the user would expect basis and sampling classes to depend on only one type parameter T <: AbstractFloat, describing the precision (number of bits) used to represent the basis functions (with T defaulting to Float64). We could make the type of the kernel attribute abstract to prevent the working floating-point type of the SVD, T_work, from being propagated to the user, at the price of some type instability. Any ideas?
struct FiniteTempBasis{T<:AbstractFloat} <: AbstractBasis
    kernel::AbstractKernel   # abstractly typed, so T_work does not leak out
    sve_result::Tuple{
        PiecewiseLegendrePolyVector{T},Vector{T},PiecewiseLegendrePolyVector{T}
    }
    statistics::Statistics
    β::T
    u::PiecewiseLegendrePolyVector{T}   # imaginary-time basis functions
    v::PiecewiseLegendrePolyVector{T}   # real-frequency basis functions
    s::Vector{T}                        # singular values
    uhat::PiecewiseLegendreFTArray{T}   # Fourier transforms of u
end
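One way to contain that instability (a sketch only, with hypothetical stand-in names, not actual SparseIR.jl code): keep the kernel field abstractly typed as above and re-establish a concrete type through a function barrier wherever the kernel is actually used.

abstract type MyAbstractKernel end

struct MyLogisticKernel{Twork<:AbstractFloat} <: MyAbstractKernel
    Λ::Twork                      # T_work stays internal to the kernel
end

struct MyBasis{T<:AbstractFloat}
    kernel::MyAbstractKernel      # abstract on purpose: T_work is hidden
    s::Vector{T}
end

# Function barrier: `apply` pays one dynamic dispatch, after which
# `_apply` runs with a fully concrete kernel type.
apply(b::MyBasis, x) = _apply(b.kernel, x)
_apply(k::MyLogisticKernel, x) = exp(-k.Λ * x)

b = MyBasis{Float64}(MyLogisticKernel(big"100.0"), rand(10))
apply(b, 0.5)   # the BigFloat work happens behind the barrier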
This issue is used to trigger TagBot; feel free to unsubscribe. If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details. If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours; please be patient!
Respecting the segments of a PiecewiseLegendrePoly object when evaluating overlap did NOT improve the performance; it would be better to port the Python code to Julia. This performance problem becomes serious when evaluating the basis functions on a dense frequency mesh.
Julia:

using SparseIR

beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
nvec = 2 .* collect(1:10000) .+ 1   # a dense mesh of odd Matsubara indices
basis.uhat(nvec)                    # warm-up call (compilation)
@time basis.uhat(nvec)
# 22.467327 seconds (72.36 M allocations: 22.350 GiB, 6.70% gc time)
Python:

from sparse_ir import FiniteTempBasis
import numpy as np
import time

beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis("F", beta, wmax, 1e-7)
nvec = 2 * np.arange(10000) + 1   # odd Matsubara indices
t1 = time.time()
basis.uhat(nvec)
print(time.time() - t1)
# 0.49092721939086914
The SVD can be performed lazily, the first time fit is called.
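A minimal sketch of what that could look like (names and fields assumed, not the actual SparseIR.jl internals): cache the factorization in the sampling object and compute it on the first fit.

using LinearAlgebra

mutable struct LazySampling{T}
    matrix::Matrix{T}
    fact::Union{Nothing,SVD}      # filled in lazily
end
LazySampling(m::Matrix) = LazySampling(m, nothing)

function fit(smpl::LazySampling, values::AbstractVector)
    # The first call pays for the SVD; later calls reuse it.
    smpl.fact === nothing && (smpl.fact = svd(smpl.matrix))
    return smpl.fact \ values
end

smpl = LazySampling(randn(20, 10))
fit(smpl, randn(20))   # SVD computed here, cached afterwards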
I am working on the branch julia1.6. I've removed Base.@invoke because it's too new. With Julia 1.6, I still get a weird error:

https://github.com/SpM-lab/SparseIR.jl/runs/6448897131?check_suite_focus=true
If I try to load the package with using SparseIR on Julia v1.11.0-beta1, I get an error that the package does not precompile. Looking at the stacktrace, it comes from:
Stacktrace:
[4] cosh(x::MultiFloats.MultiFloat{Float64, 2})
@ SparseIR src/SparseIR.jl:21
[12] segments_x(hints::SparseIR.SVEHintsLogistic{Float64}, ::Type{MultiFloats.MultiFloat{Float64, 2}})
@ SparseIR src/kernel.jl:204
The problem seems to be in this block.
We (I and R. Sakurai) found this a bit annoying:

using SparseIR

lambda_ = 100.0
beta = 10.0
wmax = lambda_ / beta
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
println(beta(basis))   # MethodError: the local `beta` (a Float64) shadows the accessor
Many Julia users seem to prefer a plain using over qualified module names, so this kind of clash is easy to hit. Should we rename the accessor to something like getbeta? The same applies to wmax...
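A sketch of the proposed renaming (hypothetical; the field name is taken from the struct discussion above):

getbeta(basis::FiniteTempBasis) = basis.β   # no clash with a user's local `beta`
# ...and analogously getwmax(basis) for the frequency cutoff.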
When dim=end, evaluate! still allocates a lot of memory. (A possible allocation-free contraction for this case is sketched after the benchmark below.)
Benchmark result for commit bebe186:

using Revise
using SparseIR
using BenchmarkTools

beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
smpl = MatsubaraSampling(basis)

N = 1000000
x = zeros(ComplexF64, length(basis), N)   # renamed from `in`, which shadows Base.in
out = zeros(ComplexF64, length(smpl.sampling_points), N)
@benchmark evaluate!(out, smpl, x; dim=1)

N = 1000000
x = zeros(ComplexF64, N, length(basis))
out = zeros(ComplexF64, N, length(smpl.sampling_points))
@benchmark evaluate!(out, smpl, x; dim=2)
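For the last-dimension case, one allocation-free possibility is to express the contraction directly as an in-place mul! with the transposed sampling matrix; whether this matches evaluate!'s internal layout is an assumption, and the sketch below only shows the idea.

using LinearAlgebra

# If dim=1 evaluation is out = A * x (A: sampling points × basis size),
# the same contraction along the last dimension is out = x * transpose(A),
# which mul! computes in place, with no permutedims and no temporaries.
function evaluate_lastdim!(out, A::AbstractMatrix, x::AbstractMatrix)
    mul!(out, x, transpose(A))
    return out
end

A = randn(ComplexF64, 52, 40)    # stand-in sampling matrix
x = zeros(ComplexF64, 1000, 40)
out = zeros(ComplexF64, 1000, 52)
evaluate_lastdim!(out, A, x)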
@JuliaRegistrator register
The symbols boson and fermion are now gone. Can we reintroduce them?
The composite/augmented basis support is in a pretty sorry state right now. Problems: it uses Vector{Any} and Union{...} instead of doing this properly. (A rough type-stable alternative is sketched below.)
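One hedged sketch of "doing this properly": store the component bases in a Tuple, so the composite type stays concrete and dispatch remains inferable (names hypothetical).

struct MyCompositeBasis{B<:Tuple}
    bases::B                     # concretely typed, unlike Vector{Any}
end
Base.length(cb::MyCompositeBasis) = sum(length, cb.bases)

cb = MyCompositeBasis(([1.0, 2.0], [3.0]))   # stand-in components
length(cb)                                   # 3, fully inferable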
SparseIR is now super-fast to load. However, it does take quite long to precompile. I don't understand why this should be, since we mostly do bog-standard linear algebra... or is the quad-precision linear algebra just much slower? Quite a small issue, but it is a little annoying :)
Should we export as many symbols as possible, or only a minimal set of symbols?
https://github.com/SpM-lab/SparseIR.jl/blob/main/src/SparseIR.jl#L10-L19
These types depend on the SVD type:

https://github.com/SpM-lab/SparseIR.jl/blob/main/src/sampling.jl#L69

This is not so convenient, because changes in the implementation of the fitting could affect user code. For instance, the code shown below no longer works:

https://github.com/SpM-lab/sparse-ir-tutorial/blob/main/src/ipt_jl.md

Does it make sense to define type aliases?

const TauSampling64 = TauSampling{Float64,Float64,SVD}
const MatsubaraSampling64 = MatsubaraSampling{Int64,ComplexF64,SVD}
Does it make sense to switch from LU to SVD and to allow the user to optionally pass a preallocated work array to fit!? (See the sketch after the benchmark below.)
using Revise
using SparseIR
using BenchmarkTools

beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
smpl = MatsubaraSampling(basis)

N = 100000
x = zeros(ComplexF64, length(basis), N)   # renamed from `in`, which shadows Base.in
out = zeros(ComplexF64, length(smpl.sampling_points), N)
#@benchmark evaluate!(out, smpl, x; dim=1)
@benchmark fit!(x, smpl, out; dim=1)
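A minimal sketch of the SVD-based, preallocated variant (assumed API, not SparseIR.jl's): precompute the thin SVD once and let the caller pass scratch storage, so repeated fits allocate nothing.

using LinearAlgebra

# Solve min ‖A*x − b‖ through a precomputed thin SVD; `tmp` must have
# length min(size(A)...) and is reused across calls.
function svd_fit!(x::AbstractVector, F::SVD, b::AbstractVector, tmp::AbstractVector)
    mul!(tmp, F.U', b)    # tmp = Uᴴ * b
    tmp ./= F.S           # scale by the inverse singular values
    mul!(x, F.Vt', tmp)   # x = V * tmp
    return x
end

A = randn(ComplexF64, 20, 10)
F = svd(A)                      # thin SVD: U is 20×10, S has 10 entries
b = randn(ComplexF64, 20)
x = zeros(ComplexF64, 10); tmp = zeros(ComplexF64, 10)
svd_fit!(x, F, b, tmp)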
Just a memo: to avoid allocating a new array, implementing in-place variants (evaluate!/fit!) may be useful. But if dim != 1 and dim != end, we still need to allocate temporary arrays for permuting dims. As with evaluate, we can improve the performance of fit using LAPACK; a sketch follows.
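For the LAPACK route, something like gels! could work; the call below is the existing LAPACK wrapper in LinearAlgebra, while the buffer-reuse strategy around it is an assumption.

using LinearAlgebra

# LAPACK.gels! solves the least-squares problem in place, overwriting
# both of its array arguments, so the copies below stand in for
# reusable work buffers.
A = randn(ComplexF64, 20, 10)
B = randn(ComplexF64, 20, 5)
Awork = copy(A); Bwork = copy(B)
LinearAlgebra.LAPACK.gels!('N', Awork, Bwork)
X = Bwork[1:10, :]   # the solution sits in the first 10 rows of Bwork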