JuliaApproximation / CompactBases.jl
Julia library for function approximation with compact basis functions
License: MIT License
FEDVR.t is an AbstractVector. Is changing this breaking according to SemVer?
The same set of data points can lead to different estimates of the convergence rate, which sometimes causes tests to fail. This does not seem to depend on architecture or Julia version; e.g. a recent failure on Mac, nightly:
https://travis-ci.org/github/JuliaApproximation/CompactBases.jl/jobs/701688381#L386
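For context, a convergence rate is typically estimated as the slope of a log–log least-squares fit of error against step size; the sketch below (all names hypothetical, not CompactBases.jl API) shows how different subsets of the same data can give different slopes:

```julia
using Statistics

# Hypothetical helper: slope of the least-squares line through
# (log h, log e), i.e. the estimated order p in e ≈ C*h^p.
estimate_rate(h, e) = cov(log.(h), log.(e)) / var(log.(h))

h = [0.1, 0.05, 0.025, 0.0125]
e = 3 .* h.^2 .* [1.0, 1.1, 0.95, 1.02]  # "noisy" second-order errors

estimate_rate(h, e)            # close to 2, but not exactly
estimate_rate(h[1:3], e[1:3])  # a subset of the same data gives another value
```

Which subset of points the test happens to fit against therefore changes the estimated rate, even for identical data.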
For discretization of non-Hermitian operators on non-uniform finite differences, it can be advantageous to work with separate left and right vectors, where the inner product is computed as transpose(left)*right (i.e. the first argument is not conjugated) instead of u'*S*u. The reason is that the metric S is dense, and it is more efficient to work with a biorthogonal set of vectors.
How should this be represented in the ContinuumArrays framework? I'm thinking something like this:
```julia
R = Basis(...)
u = ...

# The current way of computing the norm (inner product) would be
u'*R'*R*u
# or equivalently
dot(u, R'R, u) # R'R is dense

uc = (R'R)*u # Left vector

# The new way of computing norms/inner products
transpose(uc)*transpose(R)*R*u
# or equivalently
dotu(uc, u)
```
I am not entirely satisfied with this; ideally one would always write an inner product as dot(u, S, v), and it would figure out whether u is a left vector that should be transposed in the biorthogonal case, or adjointed in the normal case.
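As a minimal sketch of why the two formulations agree (plain vectors only; dotu here is a hypothetical generic version of the unconjugated product, which LinearAlgebra otherwise only provides as BLAS.dotu for BLAS types):

```julia
using LinearAlgebra

# Hypothetical unconjugated dot product (generic analogue of BLAS.dotu).
dotu(u, v) = transpose(u) * v

S  = [2.0 1.0; 1.0 2.0]          # dense, symmetric metric
u  = [1.0 + 2.0im, 3.0 - 1.0im]  # right vector
uc = S * u                       # left vector

# With S symmetric, transpose(uc)*u equals transpose(u)*S*u,
# so the biorthogonal product reproduces the metric-weighted one.
dotu(uc, u) ≈ transpose(u) * S * u
```

The left vector absorbs the dense metric once, after which every subsequent inner product is a cheap vector–vector operation.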
The algorithm is already implemented, but unused. I think it is not applicable to StaggeredFiniteDifferences, however.
Remove old specialized densities and make broadcast dispatch to the new Density struct.
Should use built-in functionality instead of duplicating CompactBases.jl/docs/bspline_plots.jl, lines 458 to 512 in b022394.
b = A*x is not particularly fast for A isa BlockSkylineMatrix, at least not for the block sizes that commonly arise in FE-DVR. Maybe representing A as a BandedMatrix (thus explicitly storing structural zeros) could improve arithmetic performance due to cache-friendliness? When we get to distributed computing, we could similarly use a BandedBlockBandedMatrix, where each node essentially holds a banded matrix containing some number of finite elements, and the only communication between nodes would be the bridge function between two finite elements, i.e. a single scalar. I assume an incomplete factorization could be formed by factorizing each banded matrix, which could then be used as e.g. a preconditioner.
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours; please be patient!
Scalar operators never supported complex scaling, and complex scaling of function interpolation has been broken since jagot/FEDVRQuasi.jl@7d53702, when support for the new Inclusion machinery was implemented. The nice implementation is probably an extension of Inclusion to the complex plane, where f.(z) means that f is analytic in the region of the complex plane spanned by z. Connected with JuliaApproximation/QuasiArrays.jl#13.
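As a sketch of what a complex-plane Inclusion would need to describe (the contour function below is purely illustrative, not existing QuasiArrays API): exterior complex scaling keeps the coordinate real up to some r₀ and rotates it by an angle θ beyond that, and f.(z) is meaningful only when f is analytic along that contour.

```julia
# Hypothetical exterior-complex-scaling contour: real up to r0,
# rotated by angle θ into the complex plane beyond it.
θ, r0 = 0.3, 5.0
scale(r) = r ≤ r0 ? complex(r) : r0 + (r - r0) * cis(θ)

z = scale.(range(0, 10, length = 101))
f = r -> exp(-r)  # entire function, so f.(z) is well defined on the contour
w = f.(z)
```

An Inclusion over such a set would have to carry the contour (not just an interval endpoint pair) so that broadcasting and quadrature know where the axis has been bent.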
R*c where R is a restricted basis will by default use LazyArrays.FlattenMulStyle, e.g.

```julia
julia> typeof(applied(*, R, cf))
Applied{LazyArrays.FlattenMulStyle,typeof(*),Tuple{QuasiArrays.SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,Interval{:closed,:closed,Float64}},UnitRange{Int64}},false},Array{Float64,1}}}
```

which will result in dropping the restriction and padding the coefficient vector with zeros, which we do not want.
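Schematically, "dropping the restriction and padding with zeros" amounts to the following (plain arrays, hypothetical names):

```julia
n   = 10
sel = 3:7              # columns kept by the restriction
c   = rand(length(sel))  # coefficients in the restricted basis

cpad = zeros(n)        # what the flattened Mul effectively builds:
cpad[sel] = c          # a full-length vector, zero outside the restriction
```

The product then runs over the full, unrestricted basis, defeating the point of restricting R in the first place.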
Projectors are non-trivial in non-orthogonal spaces, cf
It could be useful to implement them in CompactBases.jl.
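A minimal sketch of why projectors are non-trivial here, in plain linear algebra: with a non-orthogonal basis B and dense overlap S = B'B, the projector onto span(B) needs S⁻¹, so it is not simply B*B'.

```julia
using LinearAlgebra

B = [1.0 0.5; 0.0 1.0; 0.5 0.0]  # toy non-orthogonal basis (columns)
S = B' * B                       # dense overlap (metric) matrix

P = B * (S \ B')                 # projector onto span(B)

P * P ≈ P                        # idempotent, as a projector must be
!(B * B' ≈ P)                    # ignoring the metric gives the wrong operator
```

In a quasi-array setting the same structure appears with B a basis and S its Gram matrix, which is dense for non-orthogonal compact bases.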
Add reciprocal space to uniform FiniteDifferences, where the derivative operators are diagonal, along with transforms between the spaces using FFT, i.e. spectral derivatives.
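A minimal sketch of the idea with FFTW.jl on a uniform periodic grid: in reciprocal space the derivative operator is the diagonal i*k.

```julia
using FFTW

n, L = 64, 2π
x = range(0, L, length = n + 1)[1:end-1]  # uniform periodic grid
k = 2π .* fftfreq(n, n / L)               # angular wavenumbers

f  = sin.(x)
df = real(ifft(im .* k .* fft(f)))        # spectral derivative, ≈ cos.(x)
```

For band-limited functions this is exact to round-off, which is what makes diagonal reciprocal-space operators attractive.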