Comments (12)
Oh, this is non-obvious behavior of PyTorch; this is why I prefer TensorFlow :D
Thanks for your time.
from funit.
Also, I checked running_mean and running_var of AdaptiveInstanceNorm2d during training:
they are always static; running_mean is always zero and running_var is always one.
So instead of

out = F.batch_norm(
    x_reshaped, running_mean, running_var, self.weight, self.bias,
    True, self.momentum, self.eps)

you are actually training

x * weight + bias

without any normalization at all!
from funit.
I think it is more logical to first normalize the input and then apply the style-coded gamma and beta.
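For reference, here is a minimal sketch of that order of operations (the function name and shapes are illustrative, not FUNIT's actual module):

import torch

def adain(x, gamma, beta, eps=1e-5):
    # x: (B, C, H, W); gamma, beta: (B, C) style-coded affine parameters
    mean = x.mean(dim=(2, 3), keepdim=True)  # per-sample, per-channel mean
    var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
    x_norm = (x - mean) / (var + eps).sqrt()  # normalize first...
    # ...then apply the style-coded affine transform
    return gamma[:, :, None, None] * x_norm + beta[:, :, None, None]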
from funit.
Quote from the paper:
"For each sample, AdaIN first normalizes the activations of a sample in each channel to have a zero mean and unit variance."
But it does not!
from funit.
@iperov
You missed
x_reshaped = x.contiguous().view(1, b * c, *x.size()[2:])
before
out = F.batch_norm(
    x_reshaped, running_mean, running_var, self.weight, self.bias,
    True, self.momentum, self.eps)
from funit.
Thanks for the reply.
Nothing is missed.
-
By concatenating batch and channels together you compute the mean and var across the whole batch; that is called batch normalization. But in instance normalization, mean and var are computed across channels for each sample.
-
Currently the running_mean and running_var that you pass to batch_norm are always zero and one, therefore your AdaIN layer works like x * weight + bias.
-
I made batch norm work with momentum 0.1 in my Keras port. It crashes the model immediately.
from funit.
D acc: 0.5641 G acc: 0.4028
Elapsed time in update: 21.891000
Iteration: 00000006/00100000
Python 3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit
(AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> running_mean
tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0')
>>> running_var
tensor([1., 1., 1., ..., 1., 1., 1.], device='cuda:0')
But
x * weight + bias
also works.
Maybe not as well as with normalization :)
from funit.
batch_norm computes a mean value and a variance value per channel. By reshaping to (1, b*c, h, w), we are treating each channel of each sample as its own channel; there are b*c channels now, so the effect of batch norm is equivalent to instance norm.
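This is easy to check numerically; a quick sketch (random shapes, not FUNIT code):

import torch
import torch.nn.functional as F

b, c, h, w = 4, 8, 16, 16
x = torch.randn(b, c, h, w)

# instance norm computed directly
ref = F.instance_norm(x, eps=1e-5)

# the same result via batch_norm on a (1, b*c, h, w) reshape:
# every (sample, channel) pair becomes its own "channel"
x_reshaped = x.contiguous().view(1, b * c, h, w)
out = F.batch_norm(x_reshaped, torch.zeros(b * c), torch.ones(b * c),
                   training=True, momentum=0.1, eps=1e-5)

print(torch.allclose(ref, out.view(b, c, h, w), atol=1e-5))  # True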
from funit.
Yes, my mistake, you're right.
But what about
>>> running_mean
tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0')
>>> running_var
tensor([1., 1., 1., ..., 1., 1., 1.], device='cuda:0')
?
from funit.
Subtracting a zero mean and then dividing by a unit variance is equal to doing nothing: (x - 0) / sqrt(1 + eps) ≈ x.
from funit.
F.batch_norm should do
(x - running_mean) / sqrt(running_var + eps)
then compute the current mean and var
and fold them into running_mean and running_var with momentum 0.1.
So I guess there is no normalization applied.
Am I wrong?
from funit.
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)
The above is from the PyTorch docs. Since we set training to True, running_mean and running_var are not used for normalization; the batch statistics of the input are used instead:
out = F.batch_norm(
    x_reshaped, running_mean, running_var, self.weight, self.bias,
    True, self.momentum, self.eps)
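A small sanity check of that point (a sketch, not FUNIT code): with training=True the output is normalized with the batch statistics, so whatever you pass as running_mean/running_var does not change the result; those buffers are only updated in place.

import torch
import torch.nn.functional as F

x = torch.randn(2, 6, 8, 8)

out_a = F.batch_norm(x, torch.zeros(6), torch.ones(6),
                     training=True, momentum=0.1, eps=1e-5)
# arbitrary running stats give the same output in training mode
out_b = F.batch_norm(x, torch.full((6,), 5.0), torch.full((6,), 9.0),
                     training=True, momentum=0.1, eps=1e-5)

print(torch.allclose(out_a, out_b))  # True: batch stats are used, not the buffers

And if I read the FUNIT AdaIN code correctly, the buffers it passes in are fresh repeat()ed copies of its registered zero/one buffers, so even the in-place updates are discarded afterwards; that would explain why the console output above always shows zeros and ones.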
from funit.