juliaapproximation / GenericFFT.jl
A package for computing the FFT with arbitrary floating point numbers
License: MIT License
The docs badges in the README don’t seem to work.
At the moment GenericFFT.jl also exports fft from FFTW.jl, such that type promotions take place when using it:
julia> using GenericFFT, SoftPosit
julia> a = Posit16.(randn(8))
8-element Vector{Posit16}:
Posit16(-0.88500977)
Posit16(-2.5810547)
Posit16(-0.9567871)
Posit16(1.7719727)
Posit16(-1.2495117)
Posit16(-0.043945312)
Posit16(1.4433594)
Posit16(-0.9428711)
julia> fft(a)
8-element Vector{ComplexF32}:
-3.4438477f0 + 0.0f0im
-3.3491898f0 + 2.2744694f0im
-2.6210938f0 + 3.4541016f0im
4.0781937f0 - 2.5258236f0im
0.14794922f0 + 0.0f0im
4.0781937f0 + 2.5258236f0im
-2.6210938f0 - 3.4541016f0im
-3.3491898f0 - 2.2744694f0im
julia> GenericFFT.generic_fft(a)
8-element Vector{Complex{Posit16}}:
Posit16(-3.444336) + Posit16(0.0)im
Posit16(-3.3486328) + Posit16(2.2734375)im
Posit16(-2.6210938) + Posit16(3.4541016)im
Posit16(4.078125) - Posit16(2.5253906)im
Posit16(0.14746094) + Posit16(0.0)im
Posit16(4.076172) + Posit16(2.5273438)im
Posit16(-2.6210938) - Posit16(3.4541016)im
Posit16(-3.3486328) - Posit16(2.2753906)im
However, generic_fft calls the actual code from this repo. I guess in general the idea is to be able to do either
julia> using FFTW
julia> fft(a)
or
julia> using GenericFFT
julia> fft(a)
such that the interface is the same regardless of which FFT implementation is imported. Once the main functionality of other FFT implementations has been replicated here, the only code change necessary is which FFT implementation to import. Do you agree?
julia> using GenericFFT
julia> P = plan_bfft(zeros(BigFloat, 16))
GenericFFT.DummybFFTPlan{Complex{BigFloat}, false, UnitRange{Int64}}(1:1, #undef)
julia> inv(P)
ERROR: MethodError: no method matching plan_inv(::GenericFFT.DummybFFTPlan{Complex{BigFloat}, false, UnitRange{Int64}})
The likely reason is that the inverse of a bFFTPlan should be a ScaledPlan, which needs the size of the plan in order to compute the scale factor. However, our dummy plans do not currently store the size. Compare with what FFTW does:
julia> P = plan_bfft(zeros(16))
FFTW backward plan for 16-element array of ComplexF64
(dft-direct-16 "n2bv_16_avx2_128")
julia> inv(P)
0.0625 * FFTW forward plan for 16-element array of ComplexF64
(dft-direct-16 "n2fv_16_avx2_128")
julia> typeof(inv(P))
AbstractFFTs.ScaledPlan{ComplexF64, FFTW.cFFTWPlan{ComplexF64, -1, false, 1, UnitRange{Int64}}, Float64}
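The scale factor in question is easy to see in isolation. A self-contained sketch in plain Julia (no package code; DFT matrices stand in for the plans): bfft is the unnormalized inverse DFT, so its inverse is the forward DFT scaled by 1/n, which is exactly the ScaledPlan FFTW returns above, and forming that 1/n requires knowing n.

```julia
using LinearAlgebra

# Forward DFT matrix F and unnormalized backward ("bfft") matrix B.
n = 16
F = [cispi(-2 * k * j / n) for k in 0:n-1, j in 0:n-1]
B = conj(F)                      # bfft multiplies by conj(F)

# F * B == n * I, so inv(B) == F / n: the inverse plan is a forward
# plan scaled by 1/n, and that scale cannot be computed without n.
maximum(abs, inv(B) - F ./ n)    # ≈ 0
```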
FFTW plans are <:AbstractFFTs.Plan{T}, with T complex for complex-to-real plans and T real for real-to-complex plans. So the parametric type of the plan is the type of the input. However, the plans here always seem to be <:AbstractFFTs.Plan{Complex}. Is that on purpose?
julia> generic_plan = GenericFFT.plan_rfft(zeros(Float16,64))
GenericFFT.DummyrFFTPlan{ComplexF16, false, UnitRange{Int64}}(64, 1:1, #undef)
julia> fftw_plan = FFTW.plan_rfft(zeros(64))
FFTW real-to-complex plan for 64-element array of Float64
(rdft2-ct-dit/8
(hc2c-direct-8/28/1 "hc2cfdftv_8_avx2"
(rdft2-r2hc-direct-8 "r2cf_8")
(rdft2-r2hc01-direct-8 "r2cfII_8"))
(dft-direct-8-x4 "n2fv_8_avx2_128"))
julia> supertype(supertype(typeof(fftw_plan)))
AbstractFFTs.Plan{Float64}
julia> supertype(supertype(typeof(generic_plan)))
AbstractFFTs.Plan{ComplexF16} # expected Float16 not ComplexF16
In principle, though fft(x) has become standard terminology, it is about the computation of the DFT; that the FFT algorithm is used is an implementation detail. Perhaps it makes sense to make fallback implementations of dft(x) and idft(x) in terms of complex exponentials, i.e., simply implementing the O(n^2) sums in their definition. For AbstractFloats they could of course invoke fft to be faster.
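To make that concrete, here is a minimal sketch of such a fallback; the names dft and idft are hypothetical, and this is just the textbook O(n^2) sum, not the package's actual code:

```julia
# Hypothetical fallback DFT straight from the definition:
# X[k+1] = sum over j of x[j+1] * exp(-2*pi*im*j*k/n).
function dft(x::AbstractVector)
    n = length(x)
    T = float(real(eltype(x)))     # working precision, e.g. BigFloat
    X = zeros(complex(T), n)
    for k in 0:n-1, j in 0:n-1
        X[k+1] += x[j+1] * cispi(-2 * T(j * k) / n)
    end
    return X
end

# Inverse via conjugation: idft(X) = conj(dft(conj(X))) / n.
idft(X::AbstractVector) = conj.(dft(conj.(X))) ./ length(X)
```

For example, dft([1.0, 2.0, 3.0, 4.0]) ≈ [10, -2+2im, -2, -2-2im], and the same code runs unchanged on BigFloat input.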
Most functions here use ::StridedVector as argument type, but some use ::Vector or ::AbstractVector. It's not fully clear to me whether this actually follows some logic, or whether only parts were gradually abstracted to AbstractVector/StridedVector. As far as I can see, the requirements most functions place on their vector arguments should apply to many more types than just ::Vector. Raising this because I was trying to use views, i.e. SubArray, which currently aren't supported because of the type constraints.
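For reference, the dispatch failure is easy to reproduce with a toy function (not package code):

```julia
# A SubArray is an AbstractVector (and this one is even a StridedVector),
# but it is not a Vector, so ::Vector signatures reject views.
f(v::Vector) = sum(v)
g(v::AbstractVector) = sum(v)

v = view(collect(1.0:8.0), 1:2:7)   # 4-element SubArray: 1.0, 3.0, 5.0, 7.0
g(v)                                # works, returns 16.0
v isa Vector                        # false, so f(v) throws a MethodError
v isa StridedVector                 # true: ::StridedVector would accept it
```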
It currently says:
The code in this package was developed in the FastTransforms.jl package, with contributions by many developers
But as far as I can remember the generic FFT code was entirely due to @MikaelSlevinsky. (Perhaps he can correct me if I'm mistaken.) So I would change "with contributions by many developers" to "by Mikael Slevinsky".
Counting the float operations, we currently aren't as type-stable as I was expecting:
julia> using GenericFFT, GFlops
julia> rfft_plan = plan_rfft(zeros(Float16,1024))
GenericFFT.DummyrFFTPlan{ComplexF16, false, UnitRange{Int64}}(1024, 1:1, #undef)
julia> @count_ops rfft_plan*randn(Float16,1024)
Flop Counter: 61382 flop
┌────────┬─────────┬─────────┬─────────┐
│ │ Float16 │ Float32 │ Float64 │
├────────┼─────────┼─────────┼─────────┤
│ fma │ 0 │ 0 │ 4 │
│ muladd │ 0 │ 0 │ 153 │
│ add │ 18449 │ 0 │ 83 │
│ sub │ 16447 │ 0 │ 39 │
│ mul │ 24622 │ 0 │ 1250 │
│ div │ 40 │ 23 │ 2 │
│ abs │ 40 │ 19 │ 17 │
│ neg │ 15 │ 0 │ 3 │
│ sqrt │ 0 │ 19 │ 0 │
└────────┴─────────┴─────────┴─────────┘
The Float32 operations might be miscounted, similar to triscale-innov/GFlops.jl#40, but I doubt the Float64 operations are. Just raising this because we may want to ensure type stability and explicit conversions rather than relying on promotions (which can easily cascade into Float64s where we don't actually want to use them). The output may still be of eltype T, but users of this package probably want a Fourier transform computed fully in T when they provide an input vector of eltype T.
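The mechanism behind the cascade is just Julia's promotion rules: any Float64 constant touching a Float16 value widens the result, so one untyped literal in a twiddle-factor computation is enough. A toy illustration (not package code):

```julia
# Mixed-precision arithmetic promotes upward.
x = Float16(0.5)
typeof(x * 2.0)            # Float64: the Float64 literal wins the promotion
typeof(x * Float16(2.0))   # Float16: explicit conversion keeps the precision

# Computing constants in the target type from the start avoids the leak.
T = Float16
w = cos(T(pi) / T(8))      # twiddle factor computed entirely in T
typeof(w)                  # Float16
```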
I notice that generic_fft allows me to fft BigFloats, which is great! I found, however, that I couldn't do the usual thing from FFTW, e.g. generic_fft(X,2) for DFTing along the rows only:
FastTransforms.generic_fft(BigFloat.(randn(20,20))+im*BigFloat.(randn(20,20)),2)
ERROR: UndefVarError: ret not defined
Stacktrace:
[1] generic_fft(x::Matrix{Complex{BigFloat}}, region::Int64)
@ FastTransforms ~/.julia/packages/FastTransforms/spjz2/src/fftBigFloat.jl:35
[2] top-level scope
@ REPL[19]:1
Is it possible?
Hi, I now get the following error when trying to execute non-radix-2 FFTs with MicroFloatingPoints.jl. I recall that I managed to run this code before.
My code:
using FastTransforms
using Statistics
using MicroFloatingPoints
TF32 = Floatmu{8,15};
N = 20; # 2^14; #
#x = randn(Float64, N);
x = (randn(Float64, N) + im*randn(Float64, N))/sqrt(2);
X_F64 = fft(x);
#x_TF32 = TF32.(real(x));
x_TF32 = complex.(TF32.(real(x)), TF32.(imag(x)));
@time X_TF32 = fft(x_TF32);
X_R = Float64.(real(X_TF32)) +im*Float64.(imag(X_TF32));
ERROR: MethodError: no method matching _conv!(::Vector{Complex{Floatmu{8, 15}}}, ::Vector{ComplexF64})
Closest candidates are:
_conv!(::AbstractVector{T}, ::AbstractVector{T}) where T<:Union{AbstractFloat, Complex{T} where T<:AbstractFloat}
@ GenericFFT C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:84
Stacktrace:
[1] generic_fft(x::Vector{Complex{Floatmu{8, 15}}})
@ GenericFFT C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:62
[2] generic_fft
@ C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:23 [inlined]
[3] *
@ C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:272 [inlined]
[4] fft
@ C:\Users\ccccc\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:67 [inlined]
[5] fft(x::Vector{Complex{Floatmu{8, 15}}})
@ AbstractFFTs C:\Users\ccccc\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:66
[6] macro expansion
@ .\timing.jl:279 [inlined]
[7] top-level scope
@ g:\Julia\Packages\MicroFloatingPoints.jl\My_Examples\FFT_Example.jl:269
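The error itself is a plain dispatch mismatch: _conv!'s signature requires both vectors to share one element type T, and along the non-radix-2 path an intermediate apparently ends up as ComplexF64 instead of Complex{Floatmu{8, 15}}. A toy signature mirroring the one in the error message shows the mechanism (toyconv is illustrative, not the package's function):

```julia
# Both arguments must share the same eltype T, as in _conv!.
toyconv(u::AbstractVector{T}, v::AbstractVector{T}) where {T<:Complex} = u .* v

toyconv(zeros(ComplexF16, 4), zeros(ComplexF16, 4))  # fine: T matches
# toyconv(zeros(ComplexF16, 4), zeros(ComplexF64, 4)) throws the same
# kind of MethodError, because no single T matches both eltypes.
hasmethod(toyconv, Tuple{Vector{ComplexF16}, Vector{ComplexF64}})  # false
```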
@dlfivefifty @MikaelSlevinsky any opinions on whether or not to proceed like this?
This condition (Line 64 in 93a2460) and the definitions at GenericFFT.jl/src/GenericFFT.jl, Lines 24 to 33 in 2e55016, should ideally be moved to AbstractFFTs to reduce the type-piracy in this package.
Currently, the in-place functions just call the regular functions and copy the output. Examples:
This issue is a reminder to fix that.
It probably won't make much difference when working with BigFloat's because they allocate memory regardless, but it might help significantly with other types (especially in 2D-3D).
Part of the reason I opened the original issue in FastTransforms.jl is that I didn't love pulling in a ton of code and dependencies for something where I was using just a single function and that could stand alone. The single-file radix-2 FFT that I use in some packages (for example) has no dependencies. This package will naturally have a bigger scope than that, but I think some deps in the tree aren't super necessary:
- Drop DSP from the tree and remove the DSP.conv methods that this file defines. That's the only place where it's used, and if those extra methods are valuable I think it would make more sense for DSP to pull in this package and define those methods, not vice versa.
- I'm not sure why DoubleFloats is in the tree at all. It looks like it's only necessary for tests and the README.
- There is also Reexport, but that barely pulls in any new code, and so if you think its use here is idiomatic I have no complaints.
So yeah, primarily DSP and DoubleFloats seem to me like dependencies that don't need to be pulled in. If there are no objections I'm happy to make the PR that changes it. I really do appreciate that you actually took the initiative and did the work to create the package, so I hope this doesn't read as bratty. Just making an issue first in case there are objections or people want to discuss.
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix
on this issue.
I'll open a PR within a few hours, please be patient!
We currently have:
julia> using GenericFFT
julia> x = rand(BigFloat, 16, 16);
julia> fft(x)
ERROR: MethodError: no method matching generic_fft(::Matrix{Complex{BigFloat}}, ::UnitRange{Int64})
The call fft(x,1) returns something, but fft(x,2) throws an error.
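Until a method for other regions exists, the transform can be emulated slice-wise with mapslices. A self-contained sketch, using a naive O(n^2) DFT (naivedft, illustrative only) as a stand-in for the package's 1-D transform:

```julia
# Apply a 1-D DFT along a chosen dimension of a matrix via mapslices.
naivedft(x) = [sum(x[j+1] * cispi(-2 * j * k / length(x)) for j in 0:length(x)-1)
               for k in 0:length(x)-1]

x = [1.0 2.0; 3.0 4.0]
X_rows = mapslices(naivedft, x; dims=2)   # transform each row (region 2)
X_cols = mapslices(naivedft, x; dims=1)   # transform each column (region 1)
```

Here X_rows ≈ [3 -1; 7 -1] and X_cols ≈ [4 6; -2 -2]; the same slice-wise pattern works for any 1-D transform.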