
GenericFFT.jl's People

Contributors

cgeoga, daanhb, dependabot[bot], dlfivefifty, jishnub, mikaelslevinsky, milankl


GenericFFT.jl's Issues

Drop-in replacement for `fft`

At the moment GenericFFT.jl also exports fft from FFTW.jl, so that type promotions take place when using it:

julia> using GenericFFT, SoftPosit

julia> a = Posit16.(randn(8))
8-element Vector{Posit16}:
 Posit16(-0.88500977)
 Posit16(-2.5810547)
 Posit16(-0.9567871)
  Posit16(1.7719727)
 Posit16(-1.2495117)
 Posit16(-0.043945312)
  Posit16(1.4433594)
 Posit16(-0.9428711)

julia> fft(a)
8-element Vector{ComplexF32}:
 -3.4438477f0 + 0.0f0im
 -3.3491898f0 + 2.2744694f0im
 -2.6210938f0 + 3.4541016f0im
  4.0781937f0 - 2.5258236f0im
 0.14794922f0 + 0.0f0im
  4.0781937f0 + 2.5258236f0im
 -2.6210938f0 - 3.4541016f0im
 -3.3491898f0 - 2.2744694f0im

julia> GenericFFT.generic_fft(a)
8-element Vector{Complex{Posit16}}:
  Posit16(-3.444336) + Posit16(0.0)im
 Posit16(-3.3486328) + Posit16(2.2734375)im
 Posit16(-2.6210938) + Posit16(3.4541016)im
   Posit16(4.078125) - Posit16(2.5253906)im
 Posit16(0.14746094) + Posit16(0.0)im
   Posit16(4.076172) + Posit16(2.5273438)im
 Posit16(-2.6210938) - Posit16(3.4541016)im
 Posit16(-3.3486328) - Posit16(2.2753906)im

However, generic_fft calls the actual code from this repo. I guess the general idea is to be able to do either

julia> using FFTW
julia> fft(a)

or

julia> using GenericFFT
julia> fft(a)

such that the interface is the same regardless of which FFT implementation is imported. Once the main functionality of other FFT implementations has been replicated here, the only code change necessary is which FFT implementation to import. Do you agree?

inverse of bfft plan not supported

julia> using GenericFFT

julia> P = plan_bfft(zeros(BigFloat, 16))
GenericFFT.DummybFFTPlan{Complex{BigFloat}, false, UnitRange{Int64}}(1:1, #undef)

julia> inv(P)
ERROR: MethodError: no method matching plan_inv(::GenericFFT.DummybFFTPlan{Complex{BigFloat}, false, UnitRange{Int64}})

The likely reason is that the inverse of a bFFTPlan should be a ScaledPlan, which needs the dimension of the plan in order to compute the scale factor. However, our dummy plans do not currently store the size. For comparison, FFTW's plans do:

julia> P = plan_bfft(zeros(16))
FFTW backward plan for 16-element array of ComplexF64
(dft-direct-16 "n2bv_16_avx2_128")

julia> inv(P)
0.0625 * FFTW forward plan for 16-element array of ComplexF64
(dft-direct-16 "n2fv_16_avx2_128")

julia> typeof(inv(P))
AbstractFFTs.ScaledPlan{ComplexF64, FFTW.cFFTWPlan{ComplexF64, -1, false, 1, UnitRange{Int64}}, Float64}
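A minimal self-contained sketch of the mechanism (all `Sketch*` names are hypothetical, not GenericFFT's actual internals, which would build on AbstractFFTs.Plan and AbstractFFTs.ScaledPlan): once the dummy plan stores its length n, the inverse of the unnormalized backward plan is simply the forward plan scaled by 1/n.

```julia
abstract type SketchPlan{T} end

struct SketchFFTPlan{T} <: SketchPlan{T}   # forward plan, stores its length
    n::Int
end

struct SketchbFFTPlan{T} <: SketchPlan{T}  # unnormalized backward plan
    n::Int
end

struct SketchScaledPlan{T,P<:SketchPlan{T}} <: SketchPlan{T}
    p::P
    scale::Float64
end
SketchScaledPlan(p::SketchPlan{T}, scale::Real) where {T} =
    SketchScaledPlan{T,typeof(p)}(p, Float64(scale))

# Inverse of a backward plan: the forward plan scaled by 1/n.
plan_inv(p::SketchbFFTPlan{T}) where {T} =
    SketchScaledPlan(SketchFFTPlan{T}(p.n), 1 / p.n)

Pi = plan_inv(SketchbFFTPlan{Complex{BigFloat}}(16))
Pi.scale   # 0.0625, matching FFTW's behavior above
```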

Type inconsistencies with FFTW?

FFTW plans are <:AbstractFFTs.Plan{T} with T complex for complex-to-real plans and T real for real-to-complex plans. So the parametric type of the plan is the type of the input. However, the plans here always seem to be <:AbstractFFTs.Plan{Complex}. Is that on purpose?

julia> generic_plan = GenericFFT.plan_rfft(zeros(Float16,64))
GenericFFT.DummyrFFTPlan{ComplexF16, false, UnitRange{Int64}}(64, 1:1, #undef)

julia> fftw_plan = FFTW.plan_rfft(zeros(64))
FFTW real-to-complex plan for 64-element array of Float64
(rdft2-ct-dit/8
  (hc2c-direct-8/28/1 "hc2cfdftv_8_avx2"
    (rdft2-r2hc-direct-8 "r2cf_8")
    (rdft2-r2hc01-direct-8 "r2cfII_8"))
  (dft-direct-8-x4 "n2fv_8_avx2_128"))

julia> supertype(supertype(typeof(fftw_plan)))
AbstractFFTs.Plan{Float64}

julia> supertype(supertype(typeof(generic_plan)))
AbstractFFTs.Plan{ComplexF16}     # expected Float16 not ComplexF16

Should we implement dft(x)?

Though fft(x) has become standard terminology, it is really about computing the DFT, and the fact that an FFT algorithm is used is an implementation detail. Perhaps it makes sense to add a fallback implementation of dft(x) and idft(x) in terms of complex exponentials, i.e., simply implementing the O(n^2) sums in their definition. For AbstractFloats it could of course invoke fft to be faster.
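Such a fallback could look roughly like the following (a sketch; the names `naive_dft`/`naive_idft` are placeholders). It uses only indexing, `+`, `*`, and `cis`, with the twiddle angle computed in the input's own float type, so it works for BigFloat and similar types:

```julia
# O(n^2) DFT straight from the definition.
function naive_dft(x::AbstractVector)
    n = length(x)
    T = complex(float(eltype(x)))     # output element type
    X = zeros(T, n)
    w = -2 * real(T)(pi) / n          # angle step, computed in the working precision
    for k in 0:n-1, j in 0:n-1
        X[k+1] += x[j+1] * cis(w * j * k)
    end
    return X
end

# Inverse via conjugation: idft(X) = conj(dft(conj(X))) / n.
naive_idft(X::AbstractVector) = conj.(naive_dft(conj.(X))) ./ length(X)
```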

Vector vs StridedVector

Most functions here use ::StridedVector as argument types, but some use ::Vector or ::AbstractVector. It's not fully clear to me whether this actually follows some logic, or whether only parts were gradually abstracted to AbstractVector/StridedVector.

As far as I can see, most vectors used here only need to implement

  • getindex, setindex!
  • length
  • real, imag, complex
  • and broadcasting, i.e. .*, .=

which applies to many more vector types than just ::Vector. Raising this as I was trying to use views, i.e. SubArray, which currently aren't supported because of the type constraints.
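A toy illustration of the point (hypothetical `halve_*` helpers, nothing from GenericFFT): a function that only uses the operations above works fine on a SubArray when its signature says AbstractVector, but a ::Vector signature rejects it.

```julia
halve_strict(x::Vector)          = x ./ 2
halve_generic(x::AbstractVector) = x ./ 2

a = collect(1.0:8.0)
v = view(a, 1:4)        # a SubArray, not a Vector

halve_generic(v)        # works: only needs getindex/length/broadcasting
# halve_strict(v)       # MethodError: no method matching halve_strict(::SubArray{...})
```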

Change Readme to mention Mikael Slevinsky?

It currently says:

The code in this package was developed in the FastTransforms.jl package, with contributions by many developers

But as far as I can remember the generic FFT code was entirely due to @MikaelSlevinsky. (Perhaps he can correct me if I'm mistaken.) So I would change "with contributions by many developers" to "by Mikael Slevinsky".

Type stability

Counting the floating-point operations, we are currently not as type-stable as I expected:

julia> using GenericFFT, GFlops
julia> rfft_plan = plan_rfft(zeros(Float16,1024))
GenericFFT.DummyrFFTPlan{ComplexF16, false, UnitRange{Int64}}(1024, 1:1, #undef)

julia> @count_ops rfft_plan*randn(Float16,1024)
Flop Counter: 61382 flop
[per-operation table: fma, muladd, add, sub, mul, div, abs, neg and sqrt counted separately for Float16, Float32 and Float64; the Float32 and Float64 columns show nonzero counts despite the Float16 input]

The Float32 operations might be miscounted similar to triscale-innov/GFlops.jl#40 but I doubt the Float64 operations are. Just raising this as we may want to ensure type stability and explicit conversions rather than relying on promotions (which can easily cascade into Float64s where we don't actually want to use them). The output may still be of eltype T but users of this package probably want a Fourier transform fully in T when they provide an input vector of eltype T?
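As an illustration of the kind of silent promotion involved (a toy example, not the package's code): a twiddle factor written with the Float64 constant pi promotes the whole computation to ComplexF64, whereas converting the angle to T first keeps everything in T.

```julia
T = Float16
k, n = 3, 8

# Written naively, pi is a Float64, so the twiddle promotes:
w_promoted = exp(-2im * pi * k / n)     # ComplexF64

# Converting the angle to T first keeps the computation in T:
w_in_T = cis(T(-2 * pi * k / n))        # Complex{Float16}
```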

generic_fft along only some dimensions? ret undefined.

I notice that generic_fft allows me to fft BigFloat, which is great!

I found however that I couldn't do the usual thing with FFTW, e.g.

generic_fft(X,2)

for only DFTing along the rows:

FastTransforms.generic_fft(BigFloat.(randn(20,20))+im*BigFloat.(randn(20,20)),2)

ERROR: UndefVarError: ret not defined
Stacktrace:
 [1] generic_fft(x::Matrix{Complex{BigFloat}}, region::Int64)
   @ FastTransforms ~/.julia/packages/FastTransforms/spjz2/src/fftBigFloat.jl:35
 [2] top-level scope
   @ REPL[19]:1

Is it possible?
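In the meantime, a transform along a single dimension can be emulated with mapslices (a sketch; `naive_dft` below is a hypothetical O(n^2) stand-in for the package's 1-D transform, shown with ComplexF64 for brevity):

```julia
# Hypothetical 1-D O(n^2) DFT, for illustration only:
naive_dft(x) = [sum(x[j+1] * cis(-2pi * j * k / length(x)) for j in 0:length(x)-1)
                for k in 0:length(x)-1]

X  = ComplexF64.(reshape(1.0:16.0, 4, 4))
X2 = mapslices(naive_dft, X; dims=2)   # DFT along the rows (dimension 2) only
```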

Error using GenericFFT with MicroFloatingPoints.jl

Hi, I now get the following error when trying to execute non-radix-2 (non-power-of-two length) FFTs with MicroFloatingPoints.jl. I recall that I managed to run this code before.

My code:

using FastTransforms
using Statistics
using MicroFloatingPoints

TF32 = Floatmu{8,15};

# Generate input data
N = 20; # 2^14;

#x = randn(Float64, N);
x = (randn(Float64, N) + im*randn(Float64, N))/sqrt(2);

# Evaluate FFT with Float64
X_F64 = fft(x);

# Convert input data to TF32 and evaluate FFT
#x_TF32 = TF32.(real(x));
x_TF32 = complex.(TF32.(real(x)), TF32.(imag(x)));

@time X_TF32 = fft(x_TF32);
X_R = Float64.(real(X_TF32)) + im*Float64.(imag(X_TF32));

ERROR: MethodError: no method matching _conv!(::Vector{Complex{Floatmu{8, 15}}}, ::Vector{ComplexF64})

Closest candidates are:
_conv!(::AbstractVector{T}, ::AbstractVector{T}) where T<:Union{AbstractFloat, Complex{T} where T<:AbstractFloat}
@ GenericFFT C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:84

Stacktrace:
[1] generic_fft(x::Vector{Complex{Floatmu{8, 15}}})
@ GenericFFT C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:62
[2] generic_fft
@ C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:23 [inlined]
[3] *
@ C:\Users\ccccc\.julia\packages\GenericFFT\RfD44\src\fft.jl:272 [inlined]
[4] fft
@ C:\Users\ccccc\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:67 [inlined]
[5] fft(x::Vector{Complex{Floatmu{8, 15}}})
@ AbstractFFTs C:\Users\ccccc\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:66
[6] macro expansion
@ .\timing.jl:279 [inlined]
[7] top-level scope
@ g:\Julia\Packages\MicroFloatingPoints.jl\My_Examples\FFT_Example.jl:269

Reduce type piracy by shifting methods to AbstractFFTs

AbstractFFTs.complexfloat(x::StridedArray{Complex{<:AbstractFloat}}) = x
AbstractFFTs.realfloat(x::StridedArray{<:Real}) = x
# We override this one in order to avoid throwing an error that the type is
# unsupported (as defined in AbstractFFTs)
AbstractFFTs._fftfloat(::Type{T}) where {T <: AbstractFloat} = T
# We also avoid any conversion of types that are already AbstractFloat
# (since AbstractFFTs calls float(x) by default, which might change types)
AbstractFFTs.fftfloat(x::AbstractFloat) = x
# for compatibility with AbstractFFTs
AbstractFFTs.fftfloat(x::Float16) = Float32(x)

These should ideally be moved to AbstractFFTs to reduce the type-piracy in this package.

Make in-place functions actually in-place

Currently, the in-place functions just call the out-of-place functions and copy the output. This issue is a reminder to fix that.

It probably won't make much difference when working with BigFloats, because they allocate memory regardless, but it might help significantly with other types (especially in 2D-3D).
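For the power-of-two case, a genuinely in-place path could look like the following sketch (the name `fft_inplace!` is hypothetical, not the package's API): an iterative radix-2 decimation-in-time FFT with an in-place bit-reversal permutation, mutating its argument and allocating nothing beyond a few scalars.

```julia
# Iterative in-place radix-2 DIT FFT (sketch; power-of-two length assumed).
function fft_inplace!(x::AbstractVector{Complex{T}}) where {T<:AbstractFloat}
    n = length(x)
    ispow2(n) || throw(ArgumentError("length must be a power of two"))
    # In-place bit-reversal permutation.
    j = 1
    for i in 1:n-1
        if i < j
            x[i], x[j] = x[j], x[i]
        end
        m = n >> 1
        while m >= 1 && j > m
            j -= m
            m >>= 1
        end
        j += m
    end
    # Butterfly passes, doubling the sub-transform length each stage.
    len = 2
    while len <= n
        w = cis(-2 * T(pi) / len)          # stage twiddle, computed in T
        half = len >> 1
        for s in 1:len:n
            wk = one(Complex{T})
            for k in 0:half-1
                u = x[s+k]
                t = wk * x[s+k+half]
                x[s+k]      = u + t
                x[s+k+half] = u - t
                wk *= w
            end
        end
        len <<= 1
    end
    return x
end
```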

Are some of the dependencies necessary?

Part of the reason I opened the original issue in FastTransforms.jl is that I didn't love pulling in a ton of code and dependencies for something where I was using just a single function and that could stand alone. The single-file radix-2 FFT that I use in some packages (for example) has no dependencies. This package will naturally have a bigger scope than that, but I think some deps in the tree aren't super necessary:

  • I would vote to remove DSP from the tree and remove the DSP.conv methods that this file defines. That's the only place where it's used, and if those extra methods are valuable I think it would make more sense for DSP to pull in this package and define those methods, not vice versa.
  • I'm not sure why DoubleFloats is in the tree at all. It looks like it's only necessary for tests and the README.
  • I personally would choose to skip Reexport, but that barely pulls in any new code and so if you think its use here is idiomatic I have no complaints.

So yeah, primarily DSP and DoubleFloats seem to me like dependencies that don't need to be pulled in. If there are no objections I'm happy to make the PR that changes this. I really do appreciate that you took the initiative and did the work to create the package, so I hope this doesn't read as bratty; just making an issue first in case there are objections or people want to discuss.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

support multidimensional transforms

We currently have:

julia> using GenericFFT

julia> x = rand(BigFloat, 16, 16);

julia> fft(x)
ERROR: MethodError: no method matching generic_fft(::Matrix{Complex{BigFloat}}, ::UnitRange{Int64})

The call fft(x,1) returns something, but fft(x,2) throws an error.
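A full multidimensional transform can always be composed from 1-D transforms applied along each dimension in turn, which is one possible implementation route. A sketch (the `naive_*` helpers are hypothetical O(n^2) stand-ins, not the package's code):

```julia
# Hypothetical 1-D O(n^2) DFT, for illustration only:
naive_dft(x) = [sum(x[j+1] * cis(-2pi * j * k / length(x)) for j in 0:length(x)-1)
                for k in 0:length(x)-1]

# N-D DFT as successive 1-D transforms along every dimension:
function naive_fftn(X::AbstractArray)
    Y = complex(float(X))
    for d in 1:ndims(Y)
        Y = mapslices(naive_dft, Y; dims=d)
    end
    return Y
end
```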
