Comments (7)
from wassersteingan.
The code has finished running. Loss log:
[167/1000][555/782][24987] Loss_D: -0.772816 Loss_G: -0.022373 Loss_D_real: -0.193230 Loss_D_fake 0.579586
[167/1000][560/782][24988] Loss_D: -0.800285 Loss_G: 0.586990 Loss_D_real: -0.633111 Loss_D_fake 0.167174
[167/1000][565/782][24989] Loss_D: -0.585662 Loss_G: 0.091040 Loss_D_real: -0.018860 Loss_D_fake 0.566802
[167/1000][570/782][24990] Loss_D: -0.930666 Loss_G: 0.580418 Loss_D_real: -0.650177 Loss_D_fake 0.280490
[167/1000][575/782][24991] Loss_D: -0.745919 Loss_G: 0.111690 Loss_D_real: -0.156507 Loss_D_fake 0.589412
[167/1000][580/782][24992] Loss_D: -0.981289 Loss_G: 0.589674 Loss_D_real: -0.631602 Loss_D_fake 0.349687
[167/1000][585/782][24993] Loss_D: -0.933379 Loss_G: 0.301309 Loss_D_real: -0.388805 Loss_D_fake 0.544573
[167/1000][590/782][24994] Loss_D: -1.077024 Loss_G: 0.548679 Loss_D_real: -0.589278 Loss_D_fake 0.487745
[167/1000][595/782][24995] Loss_D: -0.914252 Loss_G: 0.511773 Loss_D_real: -0.556782 Loss_D_fake 0.357470
[167/1000][600/782][24996] Loss_D: -1.090694 Loss_G: 0.532181 Loss_D_real: -0.572747 Loss_D_fake 0.517948
[167/1000][605/782][24997] Loss_D: -0.898265 Loss_G: 0.501241 Loss_D_real: -0.532952 Loss_D_fake 0.365313
[167/1000][610/782][24998] Loss_D: -0.943638 Loss_G: 0.485056 Loss_D_real: -0.502485 Loss_D_fake 0.441153
[167/1000][615/782][24999] Loss_D: -0.991872 Loss_G: 0.501545 Loss_D_real: -0.539097 Loss_D_fake 0.452775
[167/1000][620/782][25000] Loss_D: -1.001911 Loss_G: 0.499365 Loss_D_real: -0.527643 Loss_D_fake 0.474269
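For reading logs like these: the numbers satisfy Loss_D = Loss_D_real − Loss_D_fake (this is an assumption based on the reference implementation's convention, where errD = errD_real − errD_fake), and −Loss_D is commonly read as the critic's estimate of the Wasserstein distance, which should trend toward 0 as the generator improves. A quick sanity check on the first line above:

```python
# Parse one log line and check the relation Loss_D = Loss_D_real - Loss_D_fake
# (assumed from the reference implementation's errD = errD_real - errD_fake).
import re

line = ("[167/1000][555/782][24987] Loss_D: -0.772816 Loss_G: -0.022373 "
        "Loss_D_real: -0.193230 Loss_D_fake 0.579586")

# Only the four loss values contain a decimal point, so this grabs exactly them.
vals = [float(x) for x in re.findall(r"-?\d+\.\d+", line)]
loss_d, loss_g, loss_d_real, loss_d_fake = vals

# The critic loss is the difference of the real and fake terms ...
assert abs(loss_d - (loss_d_real - loss_d_fake)) < 1e-5

# ... and -Loss_D is commonly read as the critic's estimate of the
# Wasserstein distance; it should decrease toward 0 as G improves.
print(f"Wasserstein estimate: {-loss_d:.6f}")  # Wasserstein estimate: 0.772816
```

In the log above this estimate hovers around 0.8–1.0 over the last few hundred iterations rather than shrinking, which is consistent with the samples not improving.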
I trained it for 25,000 iterations, but the results still do not look right.
Could you help me find out what is wrong?
I have changed the model to a 256×256 image size (input image size 64 → 256),
then ran the code for 41,600 iterations (800 epochs and 270 iterations at batch size 32).
I used a data set of 9,000 face images,
but the overall result is not good:
/home/mnit/PycharmProjects/ICB2019/WGAN_Pytorch_Clf256/samples/fake_samples_41600.png
Random Seed: 6408
G True
G True
DCGAN_G_nobn(
(main): Sequential(
(initial.100-512.convt): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
(initial.512.relu): ReLU(inplace)
(pyramid.512-256.convt): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.256.relu): ReLU(inplace)
(pyramid.256-128.convt): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.128.relu): ReLU(inplace)
(pyramid.128-64.convt): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.64.relu): ReLU(inplace)
(pyramid.64-32.convt): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.32.relu): ReLU(inplace)
(pyramid.32-16.convt): ConvTranspose2d(32, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.16.relu): ReLU(inplace)
(final.16-3.convt): ConvTranspose2d(16, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(final.3.tanh): Tanh()
)
)
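For what it's worth, the printed generator stack does reach 256×256 from the 1×1 latent input: each ConvTranspose2d output side follows out = (in − 1)·stride − 2·padding + kernel, so the initial layer produces 4×4 and every subsequent stride-2 layer doubles the size. A quick check:

```python
# Output spatial size of ConvTranspose2d: out = (in - 1) * stride - 2 * pad + k
def convt_out(size, k, stride, pad):
    return (size - 1) * stride - 2 * pad + k

size = 1  # the 100-dim latent z enters as a 1x1 spatial map
size = convt_out(size, k=4, stride=1, pad=0)   # initial layer: 1 -> 4
for _ in range(5):                             # five pyramid layers, each doubling
    size = convt_out(size, k=4, stride=2, pad=1)
size = convt_out(size, k=4, stride=2, pad=1)   # final 16->3 layer
print(size)  # 256
```

So the architecture itself is wired correctly for 256×256 output; the problem lies in training, not in the layer shapes.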
D True
('initial WGAN Dis: ndf csize ndf', 16, 128, 16)
('Input Feature', 16, 'output feature', 32)
('WGAN Dis: size csize ndf', 256, 64, 32)
('Input Feature', 32, 'output feature', 64)
('WGAN Dis: size csize ndf', 256, 32, 64)
('Input Feature', 64, 'output feature', 128)
('WGAN Dis: size csize ndf', 256, 16, 128)
('Input Feature', 128, 'output feature', 256)
('WGAN Dis: size csize ndf', 256, 8, 256)
('Input Feature', 256, 'output feature', 512)
('WGAN Dis: size csize ndf', 256, 4, 512)
DCGAN_D(
(main): Sequential(
(initial.conv.3-16): Conv2d(3, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(initial.relu.16): LeakyReLU(0.2, inplace)
(pyramid.16-32.conv): Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.32.batchnorm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
(pyramid.32.relu): LeakyReLU(0.2, inplace)
(pyramid.32-64.conv): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.64.batchnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(pyramid.64.relu): LeakyReLU(0.2, inplace)
(pyramid.64-128.conv): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.128.batchnorm): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(pyramid.128.relu): LeakyReLU(0.2, inplace)
(pyramid.128-256.conv): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.256.batchnorm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(pyramid.256.relu): LeakyReLU(0.2, inplace)
(pyramid.256-512.conv): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.512.batchnorm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(pyramid.512.relu): LeakyReLU(0.2, inplace)
(final.512-1.conv): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
)
)
Loss_D: -1.515402 Loss_G: 0.700609 Loss_D_real: -0.823006 Loss_D_fake 0.692396
Loss_D: -1.515402 Loss_G: 0.700609 Loss_D_real: -0.823006 Loss_D_fake 0.692396
Loss_D: -1.515402 Loss_G: 0.700609 Loss_D_real: -0.823006 Loss_D_fake 0.692396
The loss values have stopped changing by iteration 41,600.
I think you can try smaller images, such as 32×32 or 64×64. In my experiments the method worked well on every dataset with a small image size.
Good luck.
@zyoohv Have you gotten good results on CIFAR-10 with the default parameter settings? How many epochs did you run? Thanks!
I haven't run the code on CIFAR-10. You may want to take a look at https://github.com/igul222/improved_wgan_training, where we provide a very good CIFAR-10 model.
Cheers :)
Martin