Comments (5)
Do you mean the order of loop nesting, or the order of traversal? The latter is (for large enough arrays) done in parallel.
from tullio.jl.
I'm not sure whether the summation order is determined by loop nesting or by traversal. In this example, @tullio sums over three indices i, j, and k. I mean: which index is placed innermost or outermost in the loop by default? Or are all three computed in parallel together?
using Tullio
julia> @tullio A[_]:=exp(i+j+k) (i in 1:3,j in 1:3,k in 1:10)
1-element Vector{Float64}:
3.1763921934292294e7
from tullio.jl.
In this case i is innermost. The info block below shows the order (which it's just taking from the order supplied); verbose=2 prints the actual loops:
julia> @tullio A[_] := exp(i+j+k) (i in 1:3,j in 1:3,k in 1:10) verbose=true
[ Info: no gradient to calculate
┌ Info: threading threshold (from cost = 11)
└ block = 23832
┌ Info: reduction index ranges
│ i = Base.OneTo(3)
│ j = Base.OneTo(3)
└ k = Base.OneTo(10)
1-element Vector{Float64}:
3.1763921934292294e7
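The nesting being described can be written out by hand. This is only a sketch in plain Julia, not Tullio's actual generated code (which also adds blocking and threading); the first-listed index i ends up innermost:

```julia
# Hand-written sketch of the loop nest for
# @tullio A[_] := exp(i+j+k)  (i in 1:3, j in 1:3, k in 1:10)
function sketch()
    acc = 0.0
    for k in 1:10          # outermost: last-listed index
        for j in 1:3
            for i in 1:3   # innermost: first-listed index
                acc += exp(i + j + k)
            end
        end
    end
    [acc]                  # A[_] makes a 1-element vector
end
```

Running `sketch()` reproduces the value above, since the sum itself doesn't depend on the nesting order.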
Maybe worth noting that it hasn't figured out that this is a scalar reduction, and so it will not do this in parallel. (Although it may still break up the iteration space into blocks, rather than iterating all the way in any one index.)
julia> @btime @tullio A[_] := exp(i+j-k) (i in 1:300,j in 1:300,k in 1:1000)
min 649.319 ms, mean 650.541 ms (1 allocation, 64 bytes)
1-element Vector{Float64}:
5.495344381637131e260
julia> @btime @tullio A := exp(i+j-k) (i in 1:300,j in 1:300,k in 1:1000) # 4 threads
min 164.052 ms, mean 218.762 ms (89 allocations, 4.73 KiB)
5.495344381637131e260
from tullio.jl.
Thanks for your response and explanation. So in this case, if I reorder the indices in the declaration, would the loop nesting also change?
julia> @tullio A:= exp(i+j+k) (j in 1:3,i in 1:3,k in 1:10) verbose=true
[ Info: no gradient to calculate
┌ Info: threading threshold (from cost = 11)
└ block = 23832
┌ Info: reduction index ranges
│ j = Base.OneTo(3)
│ i = Base.OneTo(3)
└ k = Base.OneTo(10)
3.1763921934292294e7
Moreover, in another case with Einstein notation, it seems that the default loop order follows the index order from left to right. Can this be changed through some option?
julia> using Tullio
julia> a=rand(10,10,10);b=rand(10,20);
julia> @tullio A[i,j]:=a[i,k,k]*b[k,j];
julia> @tullio A[i,j]:=a[i,k,k]*b[k,j] verbose=true
┌ Info: symbolic gradients
│ inbody =
│ 2-element Vector{Any}:
│ :(𝛥a[i, k, k] = 𝛥a[i, k, k] + 𝛥ℛ[i, j] * conj(b[k, j]))
└ :(𝛥b[k, j] = 𝛥b[k, j] + 𝛥ℛ[i, j] * conj(a[i, k, k]))
┌ Info: threading threshold (from cost = 1)
└ block = 262144
┌ Info: left index ranges
│ i = Base.OneTo(10)
└ j = Base.OneTo(20)
┌ Info: reduction index ranges
└ k = Base.OneTo(10)
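For reference, the left-to-right default described here corresponds to a loop nest like the following hand-written sketch (ignoring Tullio's threading and blocking):

```julia
# Plain-Julia equivalent of @tullio A[i,j] := a[i,k,k] * b[k,j]
# (a sketch only — the generated code adds threading, blocking, bounds checks)
function contract(a, b)
    A = zeros(size(a, 1), size(b, 2))
    for j in axes(b, 2), i in axes(a, 1)  # left (output) indices
        acc = 0.0
        for k in axes(b, 1)               # reduction index
            acc += a[i, k, k] * b[k, j]   # a[i,k,k] reads a diagonal slice of a
        end
        A[i, j] = acc
    end
    A
end
```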
from tullio.jl.
would the loop nesting also change?
Yes.
When there are arrays, you can't really change it; there's no option to specify this. (Sometimes re-ordering an expression a[...] * b[...] to b[...] * a[...] may cause it to pick a different order.)
But the goal is not to care. For large arrays, when cache-friendliness is a reason to care a lot about loop order, most einsum expressions will have some arrays in the wrong order. Tullio runs a fairly crude (cache-oblivious?) blocking strategy, which in many cases makes the outer loop order not matter much:
julia> let n = 1
a=rand(10n,10n,10n); b=rand(10n,20n);
@tullio A[i,j]:=a[i,k,k]*b[k,j] # make A
@btime @tullio $A[i,j] = $a[i,k,k] * $b[k,j] # write into A
B = transpose(permutedims(A))
@btime @tullio $B[i,j] = $a[i,k,k] * $b[k,j] # opposite memory order
end;
min 1.046 μs, mean 1.057 μs (0 allocations) # too small to multi-thread
min 1.046 μs, mean 1.057 μs (0 allocations)
julia> let n = 100
a=rand(10n,10n,10n); b=rand(10n,20n);
@tullio A[i,j]:=a[i,k,k]*b[k,j] # make A
@btime @tullio $A[i,j] = $a[i,k,k] * $b[k,j] # write into A
B = transpose(permutedims(A))
@btime @tullio $B[i,j] = $a[i,k,k] * $b[k,j] # opposite memory order
end;
min 238.708 ms, mean 364.186 ms (50 allocations, 2.56 KiB) # threads + blocks
min 251.204 ms, mean 358.758 ms (51 allocations, 2.59 KiB) # ... so that wrong order hardly matters
Maybe not the best example, since in both of these the major impact of blocking is to mix up the reduction loop over k with the outer ones on i,j. With threads=false these are equally slow:
min 1.939 s, mean 1.939 s (0 allocations)
min 1.937 s, mean 1.948 s (0 allocations)
None of this is very configurable. Making it so seemed like a big project, and a bigger library... something more like Halide.
Maybe also worth noting that if you load LoopVectorization, it will often re-order inner loops. Tullio decides the loop order in advance, looking only at the expression provided, but LV waits to see the actual types and then uses generated code which knows that e.g. Array has stride 1 on its 1st dimension. It has much more detailed cost modelling to choose what to do.
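The stride point can be illustrated in plain Julia (unrelated to Tullio's internals): Array is column-major, so iterating the first index innermost touches memory contiguously.

```julia
# Julia's Array has stride 1 on its 1st dimension (column-major storage),
# so putting the first index i in the innermost loop walks memory contiguously.
function colmajor_sum(M)
    s = 0.0
    for j in axes(M, 2), i in axes(M, 1)  # i innermost: cache-friendly
        s += M[i, j]
    end
    s
end
```

For large matrices, swapping the two loops (j innermost) typically runs noticeably slower, which is exactly the kind of decision LV's cost model makes from the concrete types.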
from tullio.jl.