
self-conditioned-gan's People

Contributors

stevliu


self-conditioned-gan's Issues

training with custom dataset

Hello,

Thanks for the great idea.
I am now trying to train the model on my own dataset, which has a single class.
Could you briefly explain what to modify to train on a custom dataset?

What I've changed so far:

1. Edited the config by copying the ImageNet configs and changing the number of classes and the dataset name in it.
2. Added a class for loading my own dataset in the inputs.py script (a sketch of such a class follows below).

Is anything else required?

Thank you.
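
For reference, here is a minimal sketch of what a single-class dataset loader added to inputs.py might look like, assuming the training code consumes a standard PyTorch Dataset that yields (image, label) pairs; the class name, directory layout, and transform settings are illustrative assumptions, not the repo's actual API.

import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as transforms

class MySingleClassDataset(Dataset):
    # Hypothetical single-class image dataset; every sample gets label 0.
    def __init__(self, root, image_size=128):
        # Collect all image files from a flat directory (layout assumed).
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root)
            if f.lower().endswith(('.png', '.jpg', '.jpeg'))
        )
        self.transform = transforms.Compose([
            transforms.Resize(image_size),
            transforms.CenterCrop(image_size),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        return self.transform(img), 0  # single class -> always label 0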

Comparing Different models

Hi,
Could you please tell me how you compared the different models? Did you use the same learning rate, number of epochs, number of decay epochs, image size, and optimizer for all models? Also, did you report results from the final saved generator, or the best result across all generators saved at different epochs? (A sketch of the latter protocol is below.)
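
For context, here is a minimal sketch of the "best over saved generators" protocol the question refers to, assuming one checkpoint file per epoch; build_generator and compute_fid are hypothetical stand-ins for the project's actual model constructor and FID routine.

import glob
import torch

def best_fid_over_checkpoints(ckpt_dir, build_generator, compute_fid):
    # Evaluate every saved generator and keep the best (lowest) FID.
    best_fid, best_path = float('inf'), None
    for path in sorted(glob.glob(f'{ckpt_dir}/*.pt')):
        g = build_generator()
        g.load_state_dict(torch.load(path, map_location='cpu'))
        g.eval()
        fid = compute_fid(g)
        if fid < best_fid:
            best_fid, best_path = fid, path
    return best_fid, best_path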

several questions about implementation details

Nice work! I have several questions about your paper:

  1. What are the detailed settings for GAN and cGAN in Table 3 and Figure 4? For cGAN, is the number of classes 1000? What is the backbone of these two methods, and do they use all of the ImageNet images? Do the released pretrained models named "baseline" and "cgan" correspond to "GAN" and "cGAN" in this table?
  2. How many images did you use to calculate FID (5k or 50k)? Why are the cGAN results so much worse than BigGAN's (FID 35.14 versus the 7.4 reported in the BigGAN paper)? The numbers are not even comparable, yet the visual results in your paper look good. How do you explain this? Is the diversity much worse than BigGAN's, or is there some other explanation?
  3. How did you get the Logo-GAN results in Table 3? Did you re-implement it? I could not find those results in their paper. Why do you think your results are slightly worse than theirs?
  4. What do you mean by "random labels" in Table 3?

Thank you so much! I really appreciate your work.

Error: too many values to unpack (expected 2)

Hello,

I ran train.py with python train.py configs/cifar/selfcondgan.yaml. When it performs cluster matching at it = 25000, I hit this error at line 80 of selfcondgan.py: ValueError: too many values to unpack (expected 2). Have you seen this error?
Also, could you give a detailed list of requirements?

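One guess (an assumption, not confirmed against selfcondgan.py): this unpacking error often shows up in cluster-matching code written for scikit-learn's removed linear_assignment helper, which returned an (N, 2) array of index pairs, once it runs against scipy's linear_sum_assignment, which returns a tuple of two index arrays instead. A minimal illustration of the failure and the working usage:

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.random.rand(5, 5)  # e.g. cluster-to-class matching costs

# sklearn-style iteration fails here: linear_sum_assignment returns a
# tuple (row_ind, col_ind) of two length-5 arrays, so the first iterated
# item is a 5-element array that cannot unpack into (i, j).
# for i, j in linear_sum_assignment(cost):
#     ...  # ValueError: too many values to unpack (expected 2)

# Working scipy-style usage:
row_ind, col_ind = linear_sum_assignment(cost)
for i, j in zip(row_ind, col_ind):
    print(i, j)  # one matched (row, column) pair per line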

Got greyscale images while using a 3-channel generator

I wonder if this has happened to anyone else. I was training a GAN (DCGAN, say) on the VGGFace dataset, which contains faces of different people. Training on 32x32 images was nice and smooth, but at 64x64 or above I get a mix of greyscale and RGB images.


Question regarding reproducing some results reported in the paper.

Hi there,

Thanks for the great paper and excellent implementation. My team and I are currently working on a task similar to the one proposed in your paper. I noticed you report an FID of 28.08 for GAN on CIFAR-10, which I am having a hard time reproducing. The results I got are:
GAN: FID = 114 (200 epochs)
GAN: FID = 116 (800 epochs)
DCGAN: FID = 125 (200 epochs)

I have two guesses:

  1. There is an issue with the model I use and it needs tuning. In that case, I wonder if you could share some experience tuning a plain GAN / DCGAN on CIFAR-10, or point me to some of the code you used.

  2. My FID calculation has a bug. The code I use to calculate FID gives self-conditioned-gan an FID of 17, which matches the numbers in your paper. The only difference is that self-conditioned-gan generates samples by producing a '.npz' file, while I generate samples by loading the checkpoint and writing 60k PNG images. Does this seem right to you, or am I making a silly mistake? :<

################ code start ###########
import os
import numpy as np
import torch
from torchvision.utils import save_image

def gen(g, num_samples=60000, latent_size=100, path="images"):
    os.makedirs(path, exist_ok=True)  # make sure the output directory exists
    for i in range(num_samples):
        # Sample noise as generator input
        z = torch.tensor(np.random.normal(0, 1, (1, latent_size)), dtype=torch.float32)
        gen_imgs = g(z)

        save_image(gen_imgs.data[0], os.path.join(path, f"{i}.png"), normalize=True)

        if not i % 1000:
            print(i)
################# code end ###########
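
For what it's worth, one way to compute FID directly from a directory of generated PNGs is the pytorch-fid package; the sketch below assumes pytorch-fid is installed and that calculate_fid_given_paths keeps its usual signature (paths, batch_size, device, dims), and the two directory paths are placeholders.

import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Compare generated PNGs against a directory of real images.
fid = calculate_fid_given_paths(
    ['images', 'cifar10_real_images'],  # placeholder paths
    batch_size=50,
    device='cuda' if torch.cuda.is_available() else 'cpu',
    dims=2048,  # InceptionV3 pool3 features, the standard FID setting
)
print(f'FID: {fid:.2f}')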

How do I code a conditional GAN for the stacked MNIST dataset?

Thank you for sharing the code. Could you please share the code for a conditional GAN on the stacked MNIST dataset? I have some queries about it:

  1. For the class-conditional real branch, which class information do I need to feed into the discriminator? Each real image is associated with three classes, and I am confused about this part. (A sketch of one common label encoding is below.)
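
For what it's worth, a common convention (an assumption here, not taken from this repo) is to treat stacked MNIST as a 1000-class problem: the three per-channel digit labels are packed into a single composite label, and that composite label is what a class-conditional discriminator is fed.

def stacked_mnist_label(d_r, d_g, d_b):
    # Pack three digits (one per RGB channel) into one class index in [0, 999].
    return d_r * 100 + d_g * 10 + d_b

assert stacked_mnist_label(3, 1, 4) == 314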

Question about reproducing the CIFAR-10 experiment

Outstanding work, and thanks for releasing this great implementation!
I'm trying to reproduce the CIFAR-10 experiment. The GAN result in Table 2 achieves an IS of 6.98.

Using python train.py configs/cifar/unconditional.yaml with epoch=400, the best IS I got during the 400 epochs was 5.73, and the final result after 400 epochs was 5.46.
Due to the instability of GAN training, the final result is usually not the best.

I repeated this experiment several times and the best results were around 5.7, which does not reach an IS of 6.98.
Should I train for more epochs, or could you give me some advice?
