cetmann / iunets
A fully invertible U-Net for memory efficiency in PyTorch.
License: MIT License
Could you please share the artificial dataset of the 3D 'foam phantoms'?
In Tutorial 1: Training iUNets in PyTorch, in the "An invertible toy problem" section, the training loss is computed by comparing the model output output = model(x) against y = torch.flip(x, (3,)) instead of against x. Why should the third dimension be flipped? Thank you!
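For reference: in the NCHW layout used by 2D convolutions, dimension 3 is the width axis, so the toy task asks the network to learn a horizontal mirror of its input rather than the identity map. A minimal sketch of the target construction (plain PyTorch, not the tutorial's model):

```python
import torch

# In NCHW layout, dim 0 = batch, 1 = channels, 2 = height, 3 = width.
x = torch.arange(12.0).reshape(1, 1, 3, 4)

# Flipping along dim 3 mirrors each row horizontally.
y = torch.flip(x, (3,))

print(x[0, 0])  # first row: [0., 1., 2., 3.]
print(y[0, 0])  # first row: [3., 2., 1., 0.]

# Training output = model(x) against y (instead of x) makes the task
# non-trivial: a network that merely copies its input cannot fit it.
```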
Hi @cetmann, thanks for providing such a valuable toolbox for the community!
There was one interesting finding when playing with the code: when we save a model of size (16, 32, 64, 128) | (2, 2, 2, 2) and create a new model of size (16, 32, 64, 128) | (2, 2, 3, 3), the new model can still load the checkpoint of the previous one. I have never seen this phenomenon in non-invertible CNNs. Do you have any insight into this interesting behavior?
I really appreciate any help you can provide.
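One possible explanation, sketched below with plain PyTorch (toy modules, not iUNet's actual layer names): if the checkpoint is loaded with strict=False, or with a loader that tolerates missing keys, the overlapping parameters load and the extra blocks of the deeper model simply keep their random initialization.

```python
import torch
import torch.nn as nn

def make_model(depth):
    # Toy stand-in for one resolution level: a stack of `depth` blocks.
    return nn.Sequential(*[nn.Linear(8, 8) for _ in range(depth)])

shallow = make_model(2)  # analogous to architecture (2, 2, 2, 2)
deep = make_model(3)     # analogous to architecture (2, 2, 3, 3)

ckpt = shallow.state_dict()  # keys: 0.weight, 0.bias, 1.weight, 1.bias
result = deep.load_state_dict(ckpt, strict=False)

# The overlapping keys load; the extra block's parameters stay at init.
print(result.missing_keys)   # ['2.weight', '2.bias']
```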
Please add support for using iUNet as a normalizing flow.
Hi, I hope you are well.
Thanks for this repository and your great work.
I would appreciate some insight into how you obtained these matrices:
https://github.com/cetmann/iunets/blob/master/iunets/layers.py#L590-L608
Regards
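For context: the iUNets paper obtains the orthogonal kernels of its invertible up- and downsampling layers by exponentiating skew-symmetric matrices, so the matrices at those lines may belong to that parametrization. A minimal sketch of the underlying identity (illustrative only, not the repository's actual code):

```python
import torch

# Any skew-symmetric matrix A (A^T = -A) exponentiates to an orthogonal matrix.
n = 4
raw = torch.randn(n, n)
A = raw - raw.T           # skew-symmetric by construction
Q = torch.matrix_exp(A)   # orthogonal: Q.T @ Q == I

print(torch.allclose(Q.T @ Q, torch.eye(n), atol=1e-5))  # True
```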
Hi,
First, I want to thank you for releasing the code early. This is really interesting work and I believe it has very high potential. I worked through your code for MNIST and I am impressed by the memory it saves during training compared to a standard U-Net.
I find that the code for BraTS is missing, and it would be great if you could post your iUNet implementation for the BraTS dataset.
Thank you!
Hello,
I'd like to reproduce your brain tumor segmentation code; how should I proceed?
Hello, thanks for releasing this work.
We were just wondering: what is the point of making the U-Net invertible, in terms of normalizing flows?
(Since the decoder is not the inverse function of the encoder, this design seems to make the iUNet no different from a standard U-Net, apart from the efficient memory usage. However, the paper also devotes many paragraphs to the invertibility conditions for normalizing flows, so I guess this work may have more to contribute in the normalizing-flow setting.)
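For readers unfamiliar with the connection: a normalizing flow needs both an invertible map and a tractable log-determinant of its Jacobian, so that the change-of-variables formula log p(x) = log p(z) + log|det dz/dx| can be evaluated. A minimal sketch with an invertible linear map (illustrative only, not the paper's construction):

```python
import torch

# Invertible linear flow z = W x with standard-normal base density p(z).
torch.manual_seed(0)
n = 3
W = torch.matrix_exp(torch.randn(n, n))  # exp of any matrix is invertible

def log_prob(x):
    z = W @ x
    base = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum()
    # Change of variables: log p(x) = log p(z) + log |det dz/dx|
    log_det = torch.slogdet(W).logabsdet
    return base + log_det

x = torch.randn(n)
print(log_prob(x))
```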
Hi! I tried to run the iUNet tutorial on my machine. However, it failed and returned a TypeError: __init__() missing 1 required positional argument: 'in_channels'. Could you assist me with this?
Thanks!
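A hedged guess at the cause: the error message says the iUNet constructor requires in_channels as a positional argument, so a tutorial written for an older version of the library may be omitting it. A sketch of a constructor call along the lines of the repository's README; the keyword names other than in_channels are assumptions and may differ across versions:

```python
from iunets import iUNet

# `in_channels` is required per the error message; `architecture` and
# `dim` are assumed keyword names based on the README and may differ
# between versions of the library.
model = iUNet(
    in_channels=16,
    architecture=(2, 2, 2, 2),
    dim=2,
)
```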
RuntimeError Traceback (most recent call last)
<ipython-input> in <module>
3 x, y = padding(x.to(device)), y.to(device)
4 optimizer.zero_grad()
----> 5 output = model(x)
6 loss = loss_fn(output, y)
7 loss.backward()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
538 result = self._slow_forward(*input, **kwargs)
539 else:
--> 540 result = self.forward(*input, **kwargs)
541 for hook in self._forward_hooks.values():
542 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
--> 100 input = module(input)
101 return input
102
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
538 result = self._slow_forward(*input, **kwargs)
539 else:
--> 540 result = self.forward(*input, **kwargs)
541 for hook in self._forward_hooks.values():
542 hook_result = hook(self, input, result)
/mnt/dongsoo/RSNA/Models/iUNET/iunets-master/iunets-master/iunets/networks.py in forward(self, x)
186 # RevNet L
187 for j in range(depth):
--> 188 x = self.module_L[i][j](x)
189
190 # Downsampling L
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
538 result = self._slow_forward(*input, **kwargs)
539 else:
--> 540 result = self.forward(*input, **kwargs)
541 for hook in self._forward_hooks.values():
542 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/memcnn/models/revop.py in forward(self, *xin)
162 self.num_bwd_passes,
163 len(xin),
--> 164 *(xin + tuple([p for p in self._fn.parameters() if p.requires_grad])))
165 if not self.keep_input:
166 if not pytorch_version_one_and_above:
/usr/local/lib/python3.6/dist-packages/memcnn/models/revop.py in forward(ctx, fn, fn_inverse, keep_input, num_bwd_passes, num_inputs, *inputs_and_weights)
26 # Makes a detached copy which shares the storage
27 x = [element.detach() for element in inputs]
---> 28 outputs = ctx.fn(*x)
29
30 if not isinstance(outputs, tuple):
RuntimeError: OrderedDict mutated during iteration