duxingren14 / dualgan
DualGAN-tensorflow: tensorflow implementation of DualGAN
License: Apache License 2.0
My training set images' width and height differ (e.g. width is 80 and height is 200). What value should I choose for the image_size parameter?
Also, my set A images have 3 channels while my set B images have 1. Can I use "--A_channels 3 --B_channels 1" to run the code?
Thanks in advance!
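For non-square inputs, one option is to zero-pad the images to a square before resizing them to the trainer's --image_size. A minimal sketch, assuming numpy arrays; pad_to_square is a hypothetical helper, not part of this repo:

```python
import numpy as np

def pad_to_square(img):
    """Zero-pad an (H, W) or (H, W, C) image to a max(H, W) square,
    centring the original content, so it can then be resized to the
    square size the trainer expects."""
    h, w = img.shape[:2]
    s = max(h, w)
    top = (s - h) // 2
    left = (s - w) // 2
    out = np.zeros((s, s) + img.shape[2:], dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out
```

An 80x200 image would become 200x200, after which the usual square resize applies.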
How can I obtain the classification accuracy mentioned in the paper, i.e. per-pixel accuracy and per-class accuracy?
I see that you add tf.contrib.layers.batch_norm to the networks, but I can't find where you update the moving averages of batch norm.
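For context, the moving averages that batch norm maintains are plain exponential moving averages of the per-feature batch statistics, used at inference time. A numpy illustration of what those update ops compute; update_moving_stats is a hypothetical sketch, not code from this repo:

```python
import numpy as np

def update_moving_stats(moving_mean, moving_var, batch, decay=0.9):
    """One update step of batch norm's moving statistics: exponential
    moving averages of the batch mean and variance per feature."""
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = decay * moving_mean + (1.0 - decay) * batch_mean
    new_var = decay * moving_var + (1.0 - decay) * batch_var
    return new_mean, new_var
```

If these updates are never run, inference-mode batch norm uses stale (usually zero-initialised) statistics, which is the concern this issue raises.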
The link for "maps" dataset seems to be broken. I wonder if you could update it? Thanks!
Hello duxingren,
When I use the training command: python main.py --phase train --dataset_name sketch-photo --image_size 256 --epoch 45 --lambda_A 20.0 --lambda_B 20.0 --A_channels 1 --B_channels 1
I get this error:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: <exception str() failed>
Since it's based on pix2pix (which had a good TensorFlow implementation with TensorBoard support), does DualGAN support TensorBoard? What is the logdir?
I read your code for the loss design and found that your implementation differs from the one proposed in the paper. So you use the traditional GAN loss instead of the WGAN loss? Does that mean the WGAN loss might not be a good choice in practice?
Will this resize the images automatically to the given size or is this to tell the trainer what the size of the image is in the dataset?
Could the author publish a requirements file, so that we can install the dependencies directly?
When I was trying to train a sketch-photo model, it caused a segmentation fault. When I changed to another dataset, the fault happened too.
I used:
python main.py --phase train --dataset_name sketch-photo --image_size 256 --epoch 45 --lambda_A 20.0 --lambda_B 20.0 --A_channels 1 --B_channels 1
Could you please help me?
Could you move to imageio?
The code does not work in a current TensorFlow environment because scipy > 1.2.0 removed the image functions it relies on.
I found that the model.py code searches only for .jpg images. It would be nicer to make it work for .png images as well.
Hello, I'd like to ask if I can use custom datasets to train and test the model?
Currently I have a different dataset with a different image size (126x126), but it still has the same characteristics as your sketch-photo dataset (i.e. having sets A and B, each with train and val splits).
I once tricked the model by filling the val directory of the sketch-photo dataset with my own data, and it worked. I just want to know whether there is a way to do this by simply providing my dataset, without reconfiguring the sketch-photo dataset directories.
In model.py, why is the training step run twice for the generators?
def run_optim(self, batch_A_imgs, batch_B_imgs, counter, start_time):
    # one discriminator step
    _, Adfake, Adreal, Bdfake, Bdreal, Ad, Bd = self.sess.run(
        [self.d_optim, self.Ad_loss_fake, self.Ad_loss_real, self.Bd_loss_fake, self.Bd_loss_real, self.Ad_loss, self.Bd_loss],
        feed_dict={self.real_A: batch_A_imgs, self.real_B: batch_B_imgs})
    # two identical generator steps
    _, Ag, Bg, Aloss, Bloss = self.sess.run(
        [self.g_optim, self.Ag_loss, self.Bg_loss, self.A_loss, self.B_loss],
        feed_dict={self.real_A: batch_A_imgs, self.real_B: batch_B_imgs})
    _, Ag, Bg, Aloss, Bloss = self.sess.run(
        [self.g_optim, self.Ag_loss, self.Bg_loss, self.A_loss, self.B_loss],
        feed_dict={self.real_A: batch_A_imgs, self.real_B: batch_B_imgs})
In the paper it says:
"To optimize the DualGAN networks, we follow the training procedure proposed in WGAN [1]; see Alg. 1. We train the discriminators n_critic steps, then one step on generators."
So, is the code doing the opposite (stepping the generators twice and the discriminator only once)?
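For reference, the WGAN training procedure the paper cites pairs a critic loss without sigmoid or log with weight clipping. A minimal numpy sketch of those two ingredients; the helper names are illustrative, not taken from this repo:

```python
import numpy as np

def wgan_critic_loss(d_real, d_fake):
    """WGAN critic loss to minimise: E[D(fake)] - E[D(real)].
    Unlike the original GAN loss there is no sigmoid cross-entropy."""
    return np.mean(d_fake) - np.mean(d_real)

def clip_weights(weights, c=0.01):
    """Weight clipping from the WGAN paper, keeping the critic's
    parameters in [-c, c] after each update."""
    return [np.clip(w, -c, c) for w in weights]
```

In the WGAN schedule the critic is updated n_critic times (clipping after each step) before every single generator step, which is what makes the ordering in run_optim worth questioning.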
Thanks for providing the code. Just wondering, does the code support resuming the training process? So I can continue training after a certain epoch without starting from scratch again. Thanks.
The DualGAN paper shows Algorithm 1, the DualGAN training procedure. It says to train until convergence. Do you have any idea what exactly that means? Does it mean A_d_loss and B_d_loss both reach 0.5 simultaneously? Many thanks.
I ran with the argument --use_labeled_data 'semi' and got an error: AttributeError: 'DualNet' object has no attribute 'C_d_vars'.
I uncommented some code in model.py, but still have problems running the code.
Please give me some advice. Thanks.
I see you don't specify the running mode (is_training) in batch_norm. Is there any reason for that? Does it have a big influence on the quality of the generated images?
When I read DualGAN, I met a problem.
In formula 3 it's −D_A(G_B(v, z′)) − D_B(G_A(u, z)).
Can one task's discriminator be used on another task's generator's output?
According to your paper, you use the loss format advocated by WGAN rather than the sigmoid cross-entropy loss used in the original GAN.
But in this repo there seems to be no WGAN loss. Why is that?
Firstly, thanks for your sharing! I have a problem: where is the 'center_crop()' in utils.py imported from? PyCharm marks it with a red error line when I open the code, and I can't find anything like 'import center_crop' at the top of the file. May you give me some advice? @duxingren14
The code here:
def transform(image, npx=64, is_crop=True, resize_w=64):
    # npx: # of pixels width/height of image
    if is_crop:
        cropped_image = center_crop(image, npx, resize_w=resize_w)
    else:
        cropped_image = image
    return np.array(cropped_image)/127.5 - 1.
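Functions named center_crop with this call signature are commonly copied from DCGAN-style utils, and one appears to be missing here. A self-contained numpy sketch with the same interface, assuming the usual semantics (crop the centre square, then resize); the nearest-neighbour resize stands in for scipy's removed imresize:

```python
import numpy as np

def center_crop(image, crop_size, resize_w=64):
    """Crop a crop_size x crop_size square from the image centre, then
    resize it to resize_w x resize_w with nearest-neighbour sampling."""
    h, w = image.shape[:2]
    j = (h - crop_size) // 2
    i = (w - crop_size) // 2
    cropped = image[j:j + crop_size, i:i + crop_size]
    # nearest-neighbour resize via index arrays (no SciPy dependency)
    idx = (np.arange(resize_w) * crop_size / resize_w).astype(int)
    return cropped[idx][:, idx]
```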
While the training is running (inside jenkins apparently) it keeps showing errors like:
c_allocator.cc:696] 18 Chunks of size 819200 totalling 14.06MiB
2017-08-27 20:07:53.768974: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 9 Chunks of size 1638400 totalling 14.06MiB
2017-08-27 20:07:53.768996: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 19 Chunks of size 3276800 totalling 59.38MiB
2017-08-27 20:07:53.769019: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 9 Chunks of size 6553600 totalling 56.25MiB
2017-08-27 20:07:53.769041: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 19 Chunks of size 13107200 totalling 237.50MiB
2017-08-27 20:07:53.769064: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 52 Chunks of size 26214400 totalling 1.27GiB
2017-08-27 20:07:53.769157: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 1 Chunks of size 46343680 totalling 44.20MiB
2017-08-27 20:07:53.769181: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:696] 27 Chunks of size 52428800 totalling 1.32GiB
2017-08-27 20:07:53.769202: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:700] Sum Total of in-use chunks: 3.00GiB
2017-08-27 20:07:53.769244: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:702] Stats:
Limit: 3226265190
InUse: 3226265088
MaxInUse: 3226265088
NumAllocs: 808
MaxAllocSize: 52428800
...
Resource exhausted: OOM when allocating tensor with shape[5,5,1,64]
I am not familiar with Jenkins; is this normal? (I am on Windows 10 with 16GB RAM, only 30% of which is in use, and the GPU is a 1050 Ti with 4GB.)
LE: This doesn't seem to occur if I set --fcn_filter_dim 32
add_argument('--niter', dest='niter', type=int, default=30, help='# of iter at starting learning rate')
This argument is added but never seems to be used. Should it be removed or implemented?
I tried running TensorBoard on the logs directory, but didn't get any loss graphs. How can I visualise the training process in TensorBoard?
Hi,
This is great and very well engineered, well done. I do have a question about the identity loss, loss(A2B, A), which you see in a few papers on dual GANs. I can't seem to find it here; is there a reason for that?
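For clarity, the identity term this issue refers to penalises a generator for altering images that are already in its target domain. A minimal numpy sketch under that assumption; identity_loss is an illustrative helper, not part of this repo:

```python
import numpy as np

def identity_loss(generator, batch_b, weight=0.5):
    """L1 identity term loss(A2B, A): a generator mapping into domain B
    should leave real domain-B images (approximately) unchanged."""
    return weight * np.mean(np.abs(generator(batch_b) - batch_b))
```

It is typically added to the generator objective with a small weight to stabilise colour and structure, which is why its absence here is worth asking about.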
Hello, I'd like to know whether it is possible to train the DualGAN without the discriminator, i.e. only minimizing the "Reconstruction Error" during training?
Many thanks!
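The reconstruction error in question is the cycle term of DualGAN: translate with one generator, translate back with the other, and compare against the input. A numpy sketch of what would remain if the discriminators were dropped; the helper names are illustrative:

```python
import numpy as np

def reconstruction_loss(g_ab, g_ba, batch_a):
    """L1 reconstruction error ||G_BA(G_AB(u)) - u||: the cycle term
    that remains when the adversarial losses are removed."""
    return np.mean(np.abs(g_ba(g_ab(batch_a)) - batch_a))
```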
I have no idea why epoch is set to 45 in the test stage. Please give me some advice, thanks!
python main.py --phase test --dataset_name sketch-photo --image_size 256 --epoch 45 --lambda_A 20.0 --lambda_B 20.0 --A_channels 1 --B_channels 1
Where is 'clip_trainable_vars' used?