
UNet.jl


This package provides a generic UNet implemented in Julia.

The package is built on top of Flux.jl and can therefore be extended as needed.

julia> u = Unet()
UNet:
  ConvDown(64, 64)
  ConvDown(128, 128)
  ConvDown(256, 256)
  ConvDown(512, 512)


  UNetConvBlock(1, 3)
  UNetConvBlock(3, 64)
  UNetConvBlock(64, 128)
  UNetConvBlock(128, 256)
  UNetConvBlock(256, 512)
  UNetConvBlock(512, 1024)
  UNetConvBlock(1024, 1024)


  UNetUpBlock(1024, 512)
  UNetUpBlock(1024, 256)
  UNetUpBlock(512, 128)
  UNetUpBlock(256, 64)

The default input channel dimension is 1, i.e. grayscale. To support images with a different number of channels, pass the channel count to Unet:

julia> u = Unet(3) # for RGB images

The spatial input size can be any power of two, with inputs in Flux's WHCN layout, e.g. (256, 256, channels, batch_size).

The default output channel dimension equals the input channel dimension: 1 for Unet(), 3 for Unet(3), and so on. It can be set explicitly by supplying a second argument:

julia> u = Unet(3, 5) # 3 input channels, 5 output channels.
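
As a quick sanity check (a minimal sketch, assuming the WHCN shape convention above), a Unet(3, 5) maps a 3-channel input to a 5-channel output of the same spatial size:

julia> u = Unet(3, 5);

julia> x = rand(Float32, 128, 128, 3, 2);   # 128×128, 3 channels, batch of 2

julia> size(u(x))
(128, 128, 5, 2)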

GPU Support

To run the model on a GPU, it is as simple as calling gpu on the model (and on the input data):

julia> u = Unet();

julia> u = gpu(u);

julia> r = gpu(rand(Float32, 256, 256, 1, 1));

julia> size(u(r))
(256, 256, 1, 1)

Training

Training UNet is a breeze too.

You can define your own loss function, or use Flux's binary cross-entropy implementation:

using UNet, Flux, Base.Iterators
import Flux.Losses: binarycrossentropy

device = gpu  # or cpu

# Clamp predictions away from 0 to keep the cross-entropy finite.
function loss(x, y)
    op = clamp.(u(x), 0.001f0, 1.0f0)
    binarycrossentropy(op, y)
end

u = Unet() |> device
w = rand(Float32, 256, 256, 1, 1) |> device    # dummy input
w′ = rand(Float32, 256, 256, 1, 1) |> device   # dummy target
rep = Iterators.repeated((w, w′), 10)

opt = ADAM()

Flux.train!(loss, Flux.params(u), rep, opt, cb = () -> @show(loss(w, w′)))
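
On newer Flux releases where the implicit Flux.params-style train! is deprecated, a rough equivalent using the explicit-gradient API (a sketch, assuming Flux ≥ 0.14, where ADAM is spelled Adam) looks like:

opt_state = Flux.setup(Adam(), u)

Flux.train!(u, rep, opt_state) do m, x, y
    op = clamp.(m(x), 0.001f0, 1.0f0)   # same clamped BCE loss as above
    binarycrossentropy(op, y)
end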

Further Reading

The package is an implementation of the original U-Net paper (Ronneberger et al., 2015); all credit for the model itself goes to the respective authors.

UNet.jl's Contributors

dhairyalgandhi, evertschippers, github-actions[bot], johnnychen94, jonathanbieler


UNet.jl's Issues

unnecessary wrap on BatchNorm?

BatchNormWrap seems unnecessary, given that Conv already requires 4D data as input:

UNet.jl/src/model.jl

Lines 1 to 5 in a5ac874

function BatchNormWrap(out_ch)
    Chain(x -> expand_dims(x, 2),
          BatchNorm(out_ch),
          x -> squeeze(x))
end

I guess I must be misunderstanding the idea here; can you explain why we need to unsqueeze the 4D input to 6D before feeding it into BatchNorm? I didn't see this operation in other U-Net implementations, for example in pytorch-unet.
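
(For what it's worth, a minimal sketch of the point being made: Flux's BatchNorm normalizes over the second-to-last dimension, so it accepts 4D WHCN input directly without any reshaping.)

using Flux

bn = BatchNorm(64)                  # 64 channels
x = rand(Float32, 32, 32, 64, 8)    # WHCN input
size(bn(x))                         # (32, 32, 64, 8)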

Lighter variants of U-Net?

Thanks for the implementation. It would be great if lighter variants were also easily available, for example starting at 32 channels and only going down to 128 or 256 channels at the bottleneck.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

No MaxPool?

I'm happy to see a Julia implementation of U-Net. I was looking through the model and could not find an instance of MaxPool anywhere. Is this implementation incomplete? Did I miss this behavior somewhere else in the implementation?

Thanks in advance.

Kernel argument not used

Hi,
in UNetUpBlock:

UNet.jl/src/model.jl

Lines 23 to 28 in 0296a5b

UNetUpBlock(in_chs::Int, out_chs::Int; kernel = (3, 3), p = 0.5f0) =
    UNetUpBlock(Chain(x -> leakyrelu.(x, 0.2f0),
                      ConvTranspose((2, 2), in_chs => out_chs,
                                    stride = (2, 2); init = _random_normal),
                      BatchNormWrap(out_chs),
                      Dropout(p)))

the kernel keyword argument is never used. Is this intentional?

Example in README doesn't work on my machine

Julia Version 1.7.3
Commit 742b9abb4d (2022-05-06 12:58 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, tigerlake)
Environment:
  JULIA_EDITOR = code.cmd
  JULIA_NUM_THREADS = 8

I ran this function (which is copied from the UNet Readme page):

function createnet()
    u = Unet()
    w = rand(Float32, 256, 256, 1, 1)
    w′ = rand(Float32, 256, 256, 1, 1)

    function loss(x, y)
        op = clamp.(u(x), 0.001f0, 1.f0)
        mean(bce(op, y))
    end

    rep = Iterators.repeated((w, w′), 10)

    opt = Momentum()
    Momentum(0.01, 0.9, IdDict{Any,Any}())

    Flux.train!(loss, Flux.params(u), rep, opt)
end

and got this error message:

julia> createnet()
ERROR: GPU compilation of kernel #broadcast_kernel#17(CUDA.CuKernelContext, CUDA.CuDeviceArray{Float32, 4, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{4}, NTuple{4, Base.OneTo{Int64}}, typeof(+), Tuple{Base.Broadcast.Extruded{Array{Float32, 4}, NTuple{4, Bool}, NTuple{4, Int64}}, Base.Broadcast.Extruded{CUDA.CuDeviceArray{Float32, 4, 1}, NTuple{4, Bool}, NTuple{4, Int64}}}}, Int64) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{4}, NTuple{4, Base.OneTo{Int64}}, typeof(+), Tuple{Base.Broadcast.Extruded{Array{Float32, 4}, NTuple{4, Bool}, NTuple{4, Int64}}, Base.Broadcast.Extruded{CUDA.CuDeviceArray{Float32, 4, 1}, NTuple{4, Bool}, NTuple{4, Int64}}}}, which is not isbits:
  .args is of type Tuple{Base.Broadcast.Extruded{Array{Float32, 4}, NTuple{4, Bool}, NTuple{4, Int64}}, Base.Broadcast.Extruded{CUDA.CuDeviceArray{Float32, 4, 1}, NTuple{4, Bool}, NTuple{4, Int64}}} which is not isbits.
    .1 is of type Base.Broadcast.Extruded{Array{Float32, 4}, NTuple{4, Bool}, NTuple{4, Int64}} which is not isbits.
      .x is of type Array{Float32, 4} which is not isbits.


Stacktrace:
  [1] check_invocation(job::GPUCompiler.CompilerJob)
    @ GPUCompiler C:\Users\seatt\.julia\packages\GPUCompiler\jVY4I\src\validation.jl:88
  [2] macro expansion
    @ C:\Users\seatt\.julia\packages\GPUCompiler\jVY4I\src\driver.jl:417 [inlined]
  [3] macro expansion
    @ C:\Users\seatt\.julia\packages\TimerOutputs\jgSVI\src\TimerOutput.jl:252 [inlined]
  [4] macro expansion
    @ C:\Users\seatt\.julia\packages\GPUCompiler\jVY4I\src\driver.jl:416 [inlined]
  [5] emit_asm(job::GPUCompiler.CompilerJob, ir::LLVM.Module; strip::Bool, validate::Bool, format::LLVM.API.LLVMCodeGenFileType)
    @ GPUCompiler C:\Users\seatt\.julia\packages\GPUCompiler\jVY4I\src\utils.jl:64
  [6] cufunction_compile(job::GPUCompiler.CompilerJob, ctx::LLVM.Context)
    @ CUDA C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\compiler\execution.jl:354
  [7] #224
    @ C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\compiler\execution.jl:347 [inlined]
  [8] JuliaContext(f::CUDA.var"#224#225"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams, GPUCompiler.FunctionSpec{GPUArrays.var"#broadcast_kernel#17", Tuple{CUDA.CuKernelContext, CUDA.CuDeviceArray{Float32, 4, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{4}, NTuple{4, Base.OneTo{Int64}}, typeof(+), Tuple{Base.Broadcast.Extruded{Array{Float32, 4}, NTuple{4, Bool}, NTuple{4, Int64}}, Base.Broadcast.Extruded{CUDA.CuDeviceArray{Float32, 4, 1}, NTuple{4, Bool}, NTuple{4, Int64}}}}, Int64}}}})
    @ GPUCompiler C:\Users\seatt\.julia\packages\GPUCompiler\jVY4I\src\driver.jl:76
  [9] cufunction_compile(job::GPUCompiler.CompilerJob)
    @ CUDA C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\compiler\execution.jl:346
 [10] cached_compilation(cache::Dict{UInt64, Any}, job::GPUCompiler.CompilerJob, compiler::typeof(CUDA.cufunction_compile), linker::typeof(CUDA.cufunction_link))
    @ GPUCompiler C:\Users\seatt\.julia\packages\GPUCompiler\jVY4I\src\cache.jl:90
 [11] cufunction(f::GPUArrays.var"#broadcast_kernel#17", tt::Type{Tuple{CUDA.CuKernelContext, CUDA.CuDeviceArray{Float32, 4, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{4}, NTuple{4, Base.OneTo{Int64}}, typeof(+), Tuple{Base.Broadcast.Extruded{Array{Float32, 4}, NTuple{4, Bool}, NTuple{4, Int64}}, Base.Broadcast.Extruded{CUDA.CuDeviceArray{Float32, 4, 1}, NTuple{4, Bool}, NTuple{4, Int64}}}}, Int64}}; name::Nothing, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ CUDA C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\compiler\execution.jl:299
 [12] cufunction
    @ C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\compiler\execution.jl:293 [inlined]
 [13] macro expansion
    @ C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\compiler\execution.jl:102 [inlined]
 [14] #launch_heuristic#248
    @ C:\Users\seatt\.julia\packages\CUDA\DfvRa\src\gpuarrays.jl:17 [inlined]
 [15] _copyto!
    @ C:\Users\seatt\.julia\packages\GPUArrays\Hyss4\src\host\broadcast.jl:63 [inlined]
 [16] copyto!
    @ C:\Users\seatt\.julia\packages\GPUArrays\Hyss4\src\host\broadcast.jl:46 [inlined]
 [17] copy
    @ C:\Users\seatt\.julia\packages\GPUArrays\Hyss4\src\host\broadcast.jl:37 [inlined]
 [18] materialize
    @ .\broadcast.jl:860 [inlined]
 [19] broadcast(::typeof(+), ::Array{Float32, 4}, ::CUDA.CuArray{Float32, 4, CUDA.Mem.DeviceBuffer})
    @ Base.Broadcast .\broadcast.jl:798
 [20] adjoint
    @ C:\Users\seatt\.julia\packages\Zygote\D7j8v\src\lib\broadcast.jl:74 [inlined]
 [21] _pullback
    @ Zygote C:\Users\seatt\.julia\packages\Zygote\D7j8v\src\compiler\interface.jl:352
 [34] gradient(f::Function, args::Zygote.Params{Zygote.Buffer{Any, Vector{Any}}})
    @ Zygote C:\Users\seatt\.julia\packages\Zygote\D7j8v\src\compiler\interface.jl:75
 [35] macro expansion
    @ C:\Users\seatt\.julia\packages\Flux\7nTyc\src\optimise\train.jl:109 [inlined]
 [36] macro expansion
    @ C:\Users\seatt\.julia\packages\Juno\n6wyj\src\progress.jl:134 [inlined]
 [37] train!(loss::Function, ps::Zygote.Params{Zygote.Buffer{Any, Vector{Any}}}, data::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{Array{Float32, 4}, Array{Float32, 4}}}}, opt::Flux.Optimise.Momentum; cb::Flux.Optimise.var"#40#46")
    @ Flux.Optimise C:\Users\seatt\.julia\packages\Flux\7nTyc\src\optimise\train.jl:107
 [38] train!
    @ C:\Users\seatt\.julia\packages\Flux\7nTyc\src\optimise\train.jl:105 [inlined]
 [39] createnet()
    @ VSCodeFeatures c:\Users\seatt\Source\VSCodeFeatures\src\VSCodeFeatures.jl:165
 [40] top-level scope
    @ REPL[4]:1

UNet fails to install: "unsatisfiable requirements"

Hi,

I tried installing UNet this morning, and it failed owing to unsatisfiable requirements on Julia v1.6.0. The full error message is

   Resolving package versions...
ERROR: Unsatisfiable requirements detected for package StatsBase [2913bbd2]:
 StatsBase [2913bbd2] log:
 ├─possible versions are: 0.24.0-0.33.5 or uninstalled
 ├─restricted by compatibility requirements with UNet [0d73aaa9] to versions: 0.24.0-0.33.5
 │ └─UNet [0d73aaa9] log:
 │   ├─possible versions are: 0.1.0-0.2.0 or uninstalled
 │   ├─restricted to versions * by an explicit requirement, leaving only versions 0.1.0-0.2.0
 │   └─restricted by compatibility requirements with Flux [587475ba] to versions: 0.2.0 or uninstalled, leaving only versions: 0.2.0
 │     └─Flux [587475ba] log:
 │       ├─possible versions are: 0.4.1-0.12.1 or uninstalled
 │       ├─restricted by compatibility requirements with UNet [0d73aaa9] to versions: 0.10.0-0.11.6
 │       │ └─UNet [0d73aaa9] log: see above
 │       ├─restricted by compatibility requirements with CUDA [052768ef] to versions: [0.4.1-0.10.4, 0.11.2-0.12.1] or uninstalled, leaving only versions: [0.10.0-0.10.4, 0.11.2-0.11.6]
 │       │ └─CUDA [052768ef] log:
 │       │   ├─possible versions are: 0.1.0-3.0.1 or uninstalled
 │       │   ├─restricted by julia compatibility requirements to versions: [2.3.0, 2.5.0-3.0.1] or uninstalled
 │       │   ├─restricted by compatibility requirements with SpecialFunctions [276daf66] to versions: 0.1.0-2.6.3 or uninstalled, leaving only versions: [2.3.0, 2.5.0-2.6.3] or uninstalled
 │       │   │ └─SpecialFunctions [276daf66] log:
 │       │   │   ├─possible versions are: 0.7.0-1.3.0 or uninstalled
 │       │   │   └─restricted by compatibility requirements with Distributions [31c24e10] to versions: 0.7.0-0.10.3
 │       │   │     └─Distributions [31c24e10] log:
 │       │   │       ├─possible versions are: 0.16.0-0.24.15 or uninstalled
 │       │   │       └─restricted by compatibility requirements with UNet [0d73aaa9] to versions: 0.20.0-0.23.12
 │       │   │         └─UNet [0d73aaa9] log: see above
 │       │   └─restricted by compatibility requirements with Flux [587475ba] to versions: 2.1.0-2.6.3, leaving only versions: [2.3.0, 2.5.0-2.6.3]
 │       │     └─Flux [587475ba] log: see above
 │       └─restricted by compatibility requirements with CuArrays [3a865a2d] to versions: [0.4.1-0.8.3, 0.11.0-0.12.1] or uninstalled, leaving only versions: 0.11.2-0.11.6
 │         └─CuArrays [3a865a2d] log:
 │           ├─possible versions are: 0.2.1-2.2.2 or uninstalled
 │           └─restricted by julia compatibility requirements to versions: uninstalled
 ├─restricted by compatibility requirements with Distributions [31c24e10] to versions: 0.30.0-0.33.5
 │ └─Distributions [31c24e10] log: see above
 ├─restricted by compatibility requirements with Flux [587475ba] to versions: 0.33.0-0.33.5
 │ └─Flux [587475ba] log: see above
 └─restricted by compatibility requirements with UNet [0d73aaa9] to versions: 0.30.0 — no versions left
   └─UNet [0d73aaa9] log: see above

Is there something within UNet that needs to get bumped?

Add pixelwise loss weight?

In the original implementation, they used a weighted loss function to up-weight border pixels so that the network learns those preferentially (see Fig 3D below).

[Figure 3D from the original paper: pixel-wise loss weight map]

Do you have any suggestions for how to implement this in UNet.jl? I'm still really new to Flux, so sorry if this is obvious. My guess would be to implement it in loss():

UNet.jl/src/utils.jl

Lines 49 to 52 in 954c89e

function loss(x, y)
    op = clamp.(u(x), 0.001f0, 1.f0)
    mean(bce(op, y))
end

EDIT: Here's an implementation of the pixel-wise weights for Keras: https://jaidevd.github.io/posts/weighted-loss-functions-for-instance-segmentation/
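
One hedged sketch of such a pixel-wise weighted binary cross-entropy in Flux (not part of UNet.jl's API; wmap is a hypothetical precomputed weight map with the same shape as y, built from the border-distance terms in the paper):

using Flux, Statistics
import Flux.Losses: binarycrossentropy

# `u` is the Unet model, as in loss() above.
function weighted_loss(x, y, wmap)
    op = clamp.(u(x), 0.001f0, 1.0f0)
    per_pixel = binarycrossentropy(op, y; agg = identity)  # element-wise loss, no reduction
    mean(wmap .* per_pixel)                                # weight each pixel, then average
end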
