glcic's People

Contributors

tadax


glcic's Issues

Pretrained model

Hi @tadax,
Could you please share a pretrained model of yours?
If you can share the pretrained weights, it would also be very helpful if you could briefly explain how to use them at inference time.
I could then just feed in a corrupted image and get back the completed image.
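Until tadax responds, a minimal sketch of the numerical conventions visible in the repo's test code may help: inputs are scaled to [-1, 1] and outputs mapped back to uint8. The helper names are mine, and the checkpoint restore itself is only sketched in the comment (the exact checkpoint path is not confirmed here):

```python
import numpy as np

def preprocess(img_uint8):
    # The network consumes float images scaled to [-1, 1].
    return np.array(img_uint8, dtype=np.float32) / 127.5 - 1.0

def postprocess(completion):
    # Map generator output in [-1, 1] back to uint8 pixels
    # (the same conversion train.py uses when it writes sample images).
    return np.array((completion + 1.0) * 127.5, dtype=np.uint8)

# Hypothetical inference, assuming a session restored with tf.train.Saver
# and the placeholders defined in train.py:
#   saver.restore(sess, checkpoint_path)
#   completion = sess.run(model.completion,
#                         feed_dict={x: x_batch, mask: mask_batch,
#                                    is_training: False})
#   out = postprocess(completion[0])
```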

error: UnboundLocalError: local variable 'mask_batch' referenced before assignment

Hello, thank you very much for your work. I am a student with no experience in deep learning or image inpainting. I downloaded your code and tried to run it, but when I train the model with python train.py, I get this error:

Traceback (most recent call last):
  File "train.py", line 138, in
    train()
  File "train.py", line 66, in train
    completion = sess.run(model.completion, feed_dict={x: x_batch, mask: mask_batch, is_training: False})
UnboundLocalError: local variable 'mask_batch' referenced before assignment

I read the source code and tried to fix it myself, but without success. If you know the cause, could you please tell me? Thank you very much.
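The usual cause of this error is that mask_batch is only assigned inside the batch loop, so if the dataset holds fewer images than one batch (16 by default), the loop body never runs and the later sess.run call sees an unbound local variable. Checking that the training set contains at least BATCH_SIZE images is the first thing to try; alternatively, the mask batch can be built explicitly before it is used, roughly like this (the hole-size constants are assumptions for 128-px GLCIC inputs, not the repo's exact values):

```python
import numpy as np

IMAGE_SIZE, HOLE_MIN, HOLE_MAX = 128, 24, 48

def random_mask_batch(batch_size, rng=np.random):
    """Build a batch of binary masks, each with one random rectangular hole."""
    masks = np.zeros((batch_size, IMAGE_SIZE, IMAGE_SIZE, 1), dtype=np.float32)
    for i in range(batch_size):
        # Pick a hole size and a position that keeps it inside the image.
        w, h = rng.randint(HOLE_MIN, HOLE_MAX + 1, 2)
        x1 = rng.randint(0, IMAGE_SIZE - w + 1)
        y1 = rng.randint(0, IMAGE_SIZE - h + 1)
        masks[i, y1:y1 + h, x1:x1 + w, 0] = 1.0  # 1 marks missing pixels
    return masks
```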

Add perceptual loss from pre-trained CNN

Hi @tadax, thank you very much for your project. I am new to the task of image inpainting, and I noticed that most papers on this task adopt an additional loss, called perceptual loss, computed as the difference between the reconstructed image and the original image in the feature space of a pre-trained CNN (e.g., VGG16). Could you please show me how to add such a loss to your model, or recommend another implementation that does? Thank you very much~~
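Not speaking for tadax, but the loss itself is straightforward once feature maps are available. The sketch below takes pre-extracted feature maps as numpy arrays (e.g., from intermediate layers of tf.keras.applications.VGG16 with frozen weights) and combines them as a weighted L2 distance; the function and weighting scheme are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def perceptual_loss(feats_real, feats_fake, weights=None):
    # feats_*: lists of feature maps (numpy arrays), one entry per chosen
    # layer of a fixed, pre-trained extractor such as VGG16.
    if weights is None:
        weights = [1.0] * len(feats_real)
    # Weighted mean-squared distance per layer, summed over layers.
    return sum(w * np.mean((a - b) ** 2)
               for w, a, b in zip(weights, feats_real, feats_fake))
```

In a TF1 graph like this repo's, the same idea would be wired in-graph so gradients flow back to the generator, with the extractor's weights held constant.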

How to use this on a test image using the pretrained model

I thought I could load the model, restore the saved weights, and then execute this step to generate the results:

completion = sess.run(model.completion, feed_dict={x: img, mask: mask_img, is_training: False})
sample = np.array((completion[0] + 1) * 127.5, dtype=np.uint8)
cv2.imwrite('./output.jpg', cv2.cvtColor(sample, cv2.COLOR_RGB2BGR))

I used the images you provided as test inputs.

This is the error I got.

You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [16,128,128,3]
	 [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[16,128,128,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]

Thank you for the implementation.
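The placeholder in this repo is built with a fixed batch dimension of 16, so a single image cannot be fed directly: the feed must match [16, 128, 128, 3] exactly, and the mask and is_training placeholders must be fed as well. One workaround is to tile the test image to a full batch and keep only completion[0] afterwards (the helper name is mine):

```python
import numpy as np

BATCH_SIZE = 16  # matches the fixed batch dimension of the placeholder

def to_batch(img):
    """Repeat one (H, W, C) image to match a [16, H, W, C] placeholder feed."""
    return np.tile(img[np.newaxis], (BATCH_SIZE, 1, 1, 1))

# Then feed all three placeholders and keep only the first output:
#   completion = sess.run(model.completion,
#                         feed_dict={x: to_batch(img),
#                                    mask: to_batch(mask_img),  # (128,128,1)
#                                    is_training: False})
#   result = completion[0]
```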

Updating the generator using the joint loss gradient?

Hi, thank you very much for your work. In the paper, the generator G is updated with the gradient of the joint loss (L2 plus GAN loss), but in your implementation I notice that only the L2 loss gradient is used to update G. Have I missed something here? Thank you.
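For reference, the joint objective from the paper can be sketched with scalar stand-ins rather than tensors. ALPHA is the GAN-term weight, which the paper sets to a small constant of this order, though the exact value should be checked against it:

```python
import numpy as np

ALPHA = 4e-4  # GAN-term weight; verify the exact value against the paper

def joint_generator_loss(mse, d_fake_prob):
    # The generator minimizes reconstruction error plus -log D(completion).
    # The second term only trains G if gradients flow through D into G.
    gan_term = -np.log(np.clip(d_fake_prob, 1e-7, 1.0))
    return mse + ALPHA * gan_term
```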

License?

Could you add a license to this project? Thanks.

What are the conditions for stopping training?

In pre-training there is an epoch parameter equal to 100, so the pre-training termination condition is 100 epochs. But I cannot find the stopping condition for the joint training phase; there is no epoch parameter for it. When I run the training program, it keeps running and never stops until the computer runs out of memory. I hope you can answer this. Thank you! @tadax
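One way to address this is simply to bound the joint phase the same way the pre-training phase is bounded. The sketch below keeps the 100-epoch pre-training cap mentioned above and adds a finite cap for the joint phase; the JOINT_EPOCH constant and the function name are hypothetical, not from the repo:

```python
PRETRAIN_EPOCH = 100  # the existing pre-training cap
JOINT_EPOCH = 400     # hypothetical: any finite cap for the joint phase

def train_phases(joint_epochs=JOINT_EPOCH):
    """Sketch: bound both phases instead of letting the second run forever."""
    schedule = []
    for epoch in range(PRETRAIN_EPOCH):
        schedule.append(('pretrain', epoch))  # generator-only MSE phase
    for epoch in range(joint_epochs):
        schedule.append(('joint', epoch))     # adversarial phase, now bounded
    return schedule
```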

Is this GAN?

Hi, thank you for all your effort!
I think your code has two serious mistakes.
(1. As in issue #2, your generator gets no feedback from the discriminator. Use the joint loss.)
2. In your code there is no path to back-propagate the discriminator term of the joint loss (I mean the log(1 - D) term) to the generator's weights and biases, because your Network class receives completion and local_completion from outside the network.
(You can confirm this with
opt.compute_gradients(model.d_loss, model.g_variables)
which returns None instead of gradient tensors.)
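A framework-agnostic toy illustration of this disconnect: when the discriminator's input is fed from outside the graph rather than wired to the generator's output node, the chain rule yields no gradient for the generator's weight, mirroring compute_gradients returning None. All names here are hypothetical:

```python
def d_loss_grad_wrt_g(g_weight, x, completion_fed_externally):
    # Toy forward pass: generator g(w, x) = w * x, and discriminator loss
    # d_loss = d_in ** 2 for whatever value d_in the discriminator receives.
    g_out = g_weight * x
    d_in = x if completion_fed_externally else g_out
    if completion_fed_externally:
        # d_in does not depend on g_weight: no gradient path exists,
        # just as compute_gradients returns None in the real graph.
        return None
    # Chain rule by hand: d(d_loss)/dw = 2 * d_in * d(d_in)/dw = 2 * g_out * x.
    return 2.0 * d_in * x
```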

training error

Hello, when I try to train on my own set of images this error appears. Do you know what it is?

AttributeError: module 'tensorflow' has no attribute 'placeholder'
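This error means a TensorFlow 2.x installation is being used with code written for the TF1 graph API, from which tf.placeholder was removed. A common workaround (assuming TF >= 2.0; not verified against this repo) is to route through the compat module:

```python
# TF1-style graph code on a TF2 install: use the v1 compatibility module.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# tf.placeholder is available again under the compat namespace.
x = tf.placeholder(tf.float32, [None, 128, 128, 3])
```

The alternative is to pin an actual 1.x TensorFlow release in the environment.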

Batch normalization module

Hello,
Thank you very much for your work; it really helps me a lot. I have noticed that the batch normalization function did not work very well in my models: with batch normalization, training was slower and converged to the wrong point. But when I substituted the truncated_normal_initializer for 'gamma' and 'beta' with ones_initializer and zeros_initializer respectively, your batch normalization module worked very well and converged to the right point. As I am a TensorFlow beginner, I don't fully understand the difference this makes. Could you please explain why you chose these initializers and why they bring such a difference? Thank you very much!

Best wishes,
J. SHI
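For context, a minimal numpy version of batch normalization shows why ones/zeros initialization is the conventional starting point: it makes BN begin as a plain standardization, whereas randomly drawn gamma and beta rescale and shift every activation arbitrarily at step 0. This is an illustration, not the repo's code:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize per feature over the batch, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# With gamma = ones and beta = zeros, the output is simply the standardized
# input; truncated-normal initial gamma/beta instead distort activations
# randomly before anything has been learned, which can slow or derail
# early training.
```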

completion of an incomplete image

Hi,

Thanks for your work. The code is really well written and produces the expected output (as shown in the paper).
I have a question. In network.py you define self.imitation in the __init__() method, and it is used to define self.completion. self.imitation is basically the output of the generator() function defined below, whose argument is an incomplete image generated from the mask and the original image.
My question is: would it be possible to pass an already-incomplete image together with its mask (which could be computed with some image processing) and then generate the completed image?
In the paper, the completion network is given an incomplete image and the mask. You do the same in your code, but when testing the model we still need the original image; we cannot hand the model an already-incomplete image and ask it to complete it. For example, suppose I pick a random image from the training or test set, apply a mask to it, and pretend I no longer have the original: with the current code, completing that image would not be possible, right?
I feel that should be the end goal. Correct me if I'm wrong.
Please let me know; any help in this direction is appreciated.
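In the meantime, here is a sketch of how inference could work without the original image, assuming the mask is known and the input convention matches training, i.e. the hole is erased with a constant fill before being fed to the generator. The helper name is mine:

```python
import numpy as np

def prepare_input(incomplete_img, mask, fill_value=0.0):
    """Build the generator input from an already-incomplete image.

    incomplete_img: (H, W, 3) float image in [-1, 1]; pixels under the
                    mask may hold garbage, since they get overwritten.
    mask:           (H, W, 1) binary mask, 1 where pixels are missing.
    """
    # Erase the hole with a constant so the network sees the same input
    # statistics it saw during training, when the hole is erased the same way.
    return incomplete_img * (1.0 - mask) + fill_value * mask
```

The result, batched up, would then be fed as x along with the mask; at no point is the ground-truth image required for the forward pass.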
