stevliu / self-conditioned-gan
Diverse Image Generation via Self-Conditioned GANs
Home Page: http://selfcondgan.csail.mit.edu/
License: MIT License
Hello,
Thanks for the great idea.
I am now trying to train the model on my own dataset, which has 1 class.
Could you briefly guide me on what to modify to train on my own dataset?
What I've changed so far:
1. Edited the config by copying the ImageNet configs and changing the number of classes and the dataset name.
2. Added a class for loading my own dataset in the inputs.py script.
Is anything else required?
Thank you
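For step 2, a minimal sketch of what a custom dataset class in inputs.py might look like, assuming the repo expects (image, label) pairs; the class name, constructor signature, and the fixed label 0 are assumptions for the single-class setup, not taken from the repo:

```python
import torch
from torch.utils.data import Dataset

class MyImageDataset(Dataset):
    """Hypothetical single-class dataset returning (image, label) pairs."""
    def __init__(self, images):
        # images: tensor of shape (N, C, H, W), already preprocessed
        self.images = images

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Only one class, so every sample gets label 0
        return self.images[idx], 0

# Usage with dummy data in place of real preprocessed images
data = torch.randn(8, 3, 32, 32)
ds = MyImageDataset(data)
img, label = ds[0]
```

The actual interface inputs.py expects (transforms, return types) may differ; check how the existing dataset classes there are structured and mirror them.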
Hi,
Could you please tell me how you compared the different models? Did you use the same learning rate, number of epochs, number of decay epochs, image size, and optimizer for all models? Also, did you collect test results using the final saved generator, or did you take the best results after testing all saved generators at different epochs?
Nice work! I have several questions about your paper:
Thank you so much! I really appreciate your work.
I wonder if this has happened to you. I was trying to train a GAN, say DCGAN, on the VGG Face dataset, which includes faces of different people. Training on 32x32 images was nice and smooth, but when I changed to 64x64 or above, I got some grayscale images mixed in with the RGB images.
Hi there,
Thanks for the great paper and excellent implementation. My team and I are currently working on a task similar to the one proposed in your paper. I noticed you report an FID of 28.08 for the vanilla GAN on CIFAR-10, which I have had a hard time reproducing. The results I got are:
GAN: fid = 114 (200 epoch)
GAN: fid = 116 (800 epoch)
DC-GAN fid = 125 (200 epoch)
I have two guesses:
There are some issues with the model I use, and it may need tuning. In that case, I wonder if you can share some experience with tuning a vanilla GAN / DCGAN on CIFAR-10, or perhaps point me to some of the code you used.
My FID calculation has a bug. The code I use to calculate FID says self-conditioned-gan achieves an FID of 17, which matches the numbers in your paper. The only difference is that self-conditioned-gan produces its sample results as a '.npz' file, while I generate samples by loading the checkpoint and writing 60k PNG images. Does this seem right to you, or am I making a silly mistake? :<
################ code start ###########
import os
import numpy as np
import torch
from torchvision.utils import save_image

def gen(g, num_samples=60000, latent_size=100, path="images"):
    os.makedirs(path, exist_ok=True)
    for i in range(num_samples):
        # Sample noise as generator input
        z = torch.from_numpy(np.random.normal(0, 1, (1, latent_size))).float()
        # Generate one image and write it out as a PNG
        gen_imgs = g(z)
        save_image(gen_imgs.data[0], os.path.join(path, f"{i}.png"), normalize=True)
        if i % 1000 == 0:
            print(i)
################# code end ###########
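If the discrepancy comes from the sampling format rather than the model, one thing to try is collecting all generated samples into a single .npz file, the format self-conditioned-gan emits, instead of 60k separate PNGs. A minimal sketch; the array key "samples" and the uint8 (N, H, W, C) layout are assumptions, so check what key and shape the repo's FID code actually reads:

```python
import numpy as np

def save_samples_npz(samples, path="samples.npz"):
    """Collect generated samples into one .npz archive.

    samples: uint8 array of shape (N, H, W, C) with values in [0, 255].
    The key name "samples" is an assumption, not confirmed from the repo.
    """
    np.savez(path, samples=samples)

# Usage with dummy data standing in for generator output
fake = (np.random.rand(16, 32, 32, 3) * 255).astype(np.uint8)
save_samples_npz(fake, "samples.npz")
loaded = np.load("samples.npz")["samples"]
```

Note also that `save_image(..., normalize=True)` rescales each image to span [0, 255] individually, which can shift pixel statistics relative to a raw dump and thus affect FID.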
Thank you for sharing the code. Could you please share the code for the stacked MNIST dataset for the conditional GAN? I also have some queries regarding the conditional GAN on the stacked MNIST dataset.
Great work! But I am curious why you did not use StyleGAN as the generator backbone, since it has stronger capability.
Outstanding work! And thanks for releasing this great implementation!
I'm trying to reproduce the CIFAR-10 experiment. The GAN results in Table 2 report an IS of 6.98.
Using python train.py configs/cifar/unconditional.yaml with epoch=400, the best IS I got over the 400 epochs was 5.73; the final result after 400 epochs was an IS of 5.46.
Due to the instability of GAN training, the final result is usually not the best.
I repeated this experiment several times and got best results around 5.7, which does not reach the reported IS of 6.98.
Should I train for more epochs, or could you give me some advice?