
Comments (7)

martinarjovsky avatar martinarjovsky commented on May 30, 2024

from wassersteingan.

zyoohv avatar zyoohv commented on May 30, 2024

The run of the code has finished!

zyoohv avatar zyoohv commented on May 30, 2024

@martinarjovsky

fake_samples_25000.png

Loss log:

[167/1000][555/782][24987] Loss_D: -0.772816 Loss_G: -0.022373 Loss_D_real: -0.193230 Loss_D_fake 0.579586
[167/1000][560/782][24988] Loss_D: -0.800285 Loss_G: 0.586990 Loss_D_real: -0.633111 Loss_D_fake 0.167174
[167/1000][565/782][24989] Loss_D: -0.585662 Loss_G: 0.091040 Loss_D_real: -0.018860 Loss_D_fake 0.566802
[167/1000][570/782][24990] Loss_D: -0.930666 Loss_G: 0.580418 Loss_D_real: -0.650177 Loss_D_fake 0.280490
[167/1000][575/782][24991] Loss_D: -0.745919 Loss_G: 0.111690 Loss_D_real: -0.156507 Loss_D_fake 0.589412
[167/1000][580/782][24992] Loss_D: -0.981289 Loss_G: 0.589674 Loss_D_real: -0.631602 Loss_D_fake 0.349687
[167/1000][585/782][24993] Loss_D: -0.933379 Loss_G: 0.301309 Loss_D_real: -0.388805 Loss_D_fake 0.544573
[167/1000][590/782][24994] Loss_D: -1.077024 Loss_G: 0.548679 Loss_D_real: -0.589278 Loss_D_fake 0.487745
[167/1000][595/782][24995] Loss_D: -0.914252 Loss_G: 0.511773 Loss_D_real: -0.556782 Loss_D_fake 0.357470
[167/1000][600/782][24996] Loss_D: -1.090694 Loss_G: 0.532181 Loss_D_real: -0.572747 Loss_D_fake 0.517948
[167/1000][605/782][24997] Loss_D: -0.898265 Loss_G: 0.501241 Loss_D_real: -0.532952 Loss_D_fake 0.365313
[167/1000][610/782][24998] Loss_D: -0.943638 Loss_G: 0.485056 Loss_D_real: -0.502485 Loss_D_fake 0.441153
[167/1000][615/782][24999] Loss_D: -0.991872 Loss_G: 0.501545 Loss_D_real: -0.539097 Loss_D_fake 0.452775
[167/1000][620/782][25000] Loss_D: -1.001911 Loss_G: 0.499365 Loss_D_real: -0.527643 Loss_D_fake 0.474269

I trained it for 25,000 iterations, but the results still don't look right.
Could you help me find out what's wrong?
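For context, the logged quantities come from the WGAN critic objective. A minimal sketch of how Loss_D, Loss_D_real, and Loss_D_fake typically relate in the weight-clipped formulation (names here are illustrative, not the repo's exact code):

```python
import torch
import torch.nn as nn

def critic_step(netD, real, fake, clip=0.01):
    # WGAN critic objective: maximize E[D(real)] - E[D(fake)],
    # implemented by minimizing the negation.
    loss_d_real = -netD(real).mean()    # the logged Loss_D_real
    loss_d_fake = netD(fake).mean()     # the logged Loss_D_fake
    loss_d = loss_d_real + loss_d_fake  # Loss_D; -Loss_D estimates the Wasserstein distance
    # after the optimizer update, critic weights are clamped to keep D Lipschitz
    for p in netD.parameters():
        p.data.clamp_(-clip, clip)
    return loss_d, loss_d_real, loss_d_fake
```

In a healthy run, -Loss_D should trend downward over training; if it plateaus or oscillates for a long time, the critic/generator balance or the clipping range is usually the first thing to check.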

praveenkumarchandaliya avatar praveenkumarchandaliya commented on May 30, 2024

I changed the model to a 256 image size (input image size from 64 to 256),
then ran the code for 41,600 iterations (800 epochs and 270 iterations at batch size 32).
I used a data set of 9,000 face images,
but the overall results are not good.
/home/mnit/PycharmProjects/ICB2019/WGAN_Pytorch_Clf256/samples/fake_samples_41600.png
Random Seed: 6408
G True
G True
DCGAN_G_nobn(
(main): Sequential(
(initial.100-512.convt): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
(initial.512.relu): ReLU(inplace)
(pyramid.512-256.convt): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.256.relu): ReLU(inplace)
(pyramid.256-128.convt): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.128.relu): ReLU(inplace)
(pyramid.128-64.convt): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.64.relu): ReLU(inplace)
(pyramid.64-32.convt): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.32.relu): ReLU(inplace)
(pyramid.32-16.convt): ConvTranspose2d(32, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.16.relu): ReLU(inplace)
(final.16-3.convt): ConvTranspose2d(16, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(final.3.tanh): Tanh()
)
)
D True
('initial WGAN Dis: ndf csize ndf', 16, 128, 16)
('Input Feature', 16, 'output feature', 32)
('WGAN Dis: size csize ndf', 256, 64, 32)
('Input Feature', 32, 'output feature', 64)
('WGAN Dis: size csize ndf', 256, 32, 64)
('Input Feature', 64, 'output feature', 128)
('WGAN Dis: size csize ndf', 256, 16, 128)
('Input Feature', 128, 'output feature', 256)
('WGAN Dis: size csize ndf', 256, 8, 256)
('Input Feature', 256, 'output feature', 512)
('WGAN Dis: size csize ndf', 256, 4, 512)
DCGAN_D(
(main): Sequential(
(initial.conv.3-16): Conv2d(3, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(initial.relu.16): LeakyReLU(0.2, inplace)
(pyramid.16-32.conv): Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.32.batchnorm): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
(pyramid.32.relu): LeakyReLU(0.2, inplace)
(pyramid.32-64.conv): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.64.batchnorm): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
(pyramid.64.relu): LeakyReLU(0.2, inplace)
(pyramid.64-128.conv): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.128.batchnorm): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
(pyramid.128.relu): LeakyReLU(0.2, inplace)
(pyramid.128-256.conv): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.256.batchnorm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
(pyramid.256.relu): LeakyReLU(0.2, inplace)
(pyramid.256-512.conv): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(pyramid.512.batchnorm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(pyramid.512.relu): LeakyReLU(0.2, inplace)
(final.512-1.conv): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
)
)

Loss_D: -1.515402 Loss_G: 0.700609 Loss_D_real: -0.823006 Loss_D_fake 0.692396
Loss_D: -1.515402 Loss_G: 0.700609 Loss_D_real: -0.823006 Loss_D_fake 0.692396
Loss_D: -1.515402 Loss_G: 0.700609 Loss_D_real: -0.823006 Loss_D_fake 0.692396
The loss is not changing at iteration 41,600.
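For reference, the generator printed above follows a DCGAN-style construction: start from a 4×4 map, then double the spatial size with each ConvTranspose2d while halving the channel count. A small sketch of that sizing logic, reconstructed for illustration (not the actual model code; `generator_plan` is a hypothetical helper):

```python
import math

def generator_plan(isize=256, nz=100, ngf=16, nc=3):
    # Returns the (in_channels, out_channels) pairs of the ConvTranspose2d
    # stack needed to reach an isize x isize output from a 1x1 latent code.
    assert isize % 16 == 0, "isize has to be a multiple of 16"
    n_double = int(math.log2(isize // 4))  # number of 2x upsampling steps after the initial 4x4
    cngf = ngf * isize // 8                # channels at the 4x4 stage (512 for isize=256, ngf=16)
    layers = [(nz, cngf)]                  # initial convt: 1x1 -> 4x4
    for _ in range(n_double - 1):          # pyramid blocks: double spatial size, halve channels
        layers.append((cngf, cngf // 2))
        cngf //= 2
    layers.append((cngf, nc))              # final convt + Tanh, last doubling to isize
    return layers
```

For isize=256 with ngf=16 this reproduces the printed stack (100→512, 512→256, ..., 32→16, 16→3), which suggests the architecture itself was scaled correctly and the problem is more likely the training dynamics at this resolution.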

zyoohv avatar zyoohv commented on May 30, 2024

@praveenkumarchandaliya

I think you can try smaller images, such as 32×32 or 64×64. In my experiments the method works well on all datasets with small image sizes.

Good luck.

Mercurial1101 avatar Mercurial1101 commented on May 30, 2024

@zyoohv Have you got good results on CIFAR-10 with the default parameter settings? How many epochs did you run? Thanks!

martinarjovsky avatar martinarjovsky commented on May 30, 2024

I haven't run the code on CIFAR-10. You may want to take a look at https://github.com/igul222/improved_wgan_training, where we provide a very good CIFAR-10 model.

Cheers :)
Martin
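For readers who follow that pointer: the key change in improved WGAN training is replacing weight clipping with a gradient penalty on random interpolates between real and fake samples. A minimal sketch under standard assumptions (illustrative, not the linked repo's exact code):

```python
import torch
import torch.nn as nn

def gradient_penalty(netD, real, fake, lam=10.0):
    # WGAN-GP: penalize deviations of the critic's gradient norm from 1
    # on points interpolated between real and fake samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)))
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = netD(interp)
    grads = torch.autograd.grad(out.sum(), interp, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```

The penalty is added to the critic loss in place of the `clamp_` step, which tends to train more stably, especially at higher resolutions.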
