
glean's People

Contributors

ckkelvinchan

glean's Issues

Dataset

Hi,
Thanks for your great work!
I'm new to this area, so I'm confused about the dataset.
I just downloaded the CelebA-HQ dataset from this link [https://drive.google.com/drive/folders/11Vz0fqHS2rXDb5pprgTjpD7S2BAJhi1P?usp=sharing], and I'm afraid my validation set (or the image names) is different from yours. Could you check whether my first 100 images are the same as your validation set?
And is it correct for me to train GLEAN on all 70,000 FFHQ images, while validating and testing on these 100 CelebA-HQ images?
celebA-HQ-val-100

Thanks in advance!
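
For reference, a quick way to check the first point, assuming the repository's meta_info_CelebAHQ_val100_GT.txt lists the 100 validation GT filenames (the paths below are placeholders):

    # Compare a local "first 100" split against the names in the meta-info file.
    # Paths are placeholders; the meta-info line format is assumed to be
    # "<filename> (<H>,<W>,<C>)", as in typical MMEditing meta files.
    import os

    meta_path = 'data/CelebA-HQ/meta_info_CelebAHQ_val100_GT.txt'
    my_gt_dir = 'data/CelebA-HQ/GT'

    with open(meta_path) as f:
        val_names = {line.split()[0] for line in f if line.strip()}

    my_first_100 = set(sorted(os.listdir(my_gt_dir))[:100])

    print('missing from my set:', sorted(val_names - my_first_100))
    print('extra in my set:', sorted(my_first_100 - val_names))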

Hi!

Hi, Kelvin!
I was very impressed by your paper. I want to implement it, but it is quite difficult.
Is there any code for the network structure or the overall training?
Thanks!

About the differences between the StyleGAN2 in MMEditing/MMGeneration and the one in BasicSR

Thanks for your great work!
The paper mentions that you adopted a StyleGAN2 based on BasicSR (https://github.com/XPixelGroup/BasicSR) as the pretrained model. However, your mmediting code actually uses the StyleGAN2 based on MMGeneration (mmedit/models/components/stylegan2/generator_discriminator.py).
So I wonder whether there are any differences between these two StyleGAN2 implementations, in details such as the network architecture, and whether they affect the results.
Thanks!
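
For reference, one way to probe whether the BasicSR and MMGeneration StyleGAN2 checkpoints are interchangeable is to load both and compare parameter names and shapes. A rough sketch, with placeholder paths and a guess at the possible nesting keys:

    # Compare parameter names/shapes of two StyleGAN2 checkpoints (paths are
    # placeholders). Some releases nest the weights under keys such as
    # 'params_ema', 'g_ema', or 'state_dict', so unwrap one level if present.
    import torch

    def flat_state_dict(path):
        ckpt = torch.load(path, map_location='cpu')
        for key in ('params_ema', 'g_ema', 'state_dict'):
            if isinstance(ckpt, dict) and key in ckpt:
                return ckpt[key]
        return ckpt

    a = flat_state_dict('stylegan2_basicsr.pth')
    b = flat_state_dict('stylegan2_mmgen.pth')

    only_a = set(a) - set(b)
    only_b = set(b) - set(a)
    mismatched = [k for k in set(a) & set(b)
                  if getattr(a[k], 'shape', None) != getattr(b[k], 'shape', None)]
    print(len(only_a), 'keys only in A,', len(only_b), 'keys only in B,',
          len(mismatched), 'shape mismatches')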

How to do inference with one image of my own

Nice work! I am trying to test the model on one of my own images. The provided test example runs over a whole dataset and computes metrics. How do I run the model on a single real-world image, without calculating metrics, and just get an output image?
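
For reference, a minimal single-image sketch using the MMEditing 0.x Python API, assuming the glean_ffhq_16x config/checkpoint and a 64x64 LR input (paths are placeholders):

    # Single-image GLEAN inference with MMEditing 0.x (paths are examples).
    import mmcv
    from mmedit.apis import init_model, restoration_inference
    from mmedit.core import tensor2img

    config = 'configs/restorers/glean/glean_ffhq_16x.py'
    checkpoint = 'glean_ffhq_16x_20210527-61a3afad.pth'

    model = init_model(config, checkpoint, device='cuda:0')
    output = restoration_inference(model, 'my_face_64x64.png')  # LR input
    mmcv.imwrite(tensor2img(output), 'my_face_sr.png')          # save the SR result

The bundled demo/restoration_demo.py script does essentially the same thing from the command line (see the command used in a later issue).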

Hi!!

Hello!
I have a question about your model.
What augmentation do you use when training on FFHQ?
For example, do you train on images rotated by 180 degrees?
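
For reference, geometric augmentation in MMEditing is configured as pipeline transforms; the entry below is only an illustration of what such a setting looks like (horizontal flip), not a claim about what the released glean_ffhq_16x.py actually uses:

    # Illustrative MMEditing train-pipeline excerpt; check the actual
    # train_pipeline in glean_ffhq_16x.py for GLEAN's real augmentation.
    train_pipeline_excerpt = [
        dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='horizontal'),
    ]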

training problem

Hi,
I am really interested in your work, and thanks for all the provided materials.
1. Would you please help me solve this problem that occurs during training? I have attached a screenshot of it.

I have checked the installation multiple times.
[Screenshot from 2022-04-13 16-01-48]

2. Would you please share the weights for an input size of 16x16?

3. Is it possible to simply change the input size to 16 and the output to 256, and use your x16 pretrained weights without any other modification?

4. Would you please share the x8 weights for face images (you have only provided x16)?

Bad Test Result

Hi Kelvin,
I have tested the model (released here) with the Cat test dataset, but I obtained very poor results.
[Result image: 00000800_007]
At the same time, the metrics are also very unsatisfactory and far from the results given in the paper.
Could you give me some suggestions? Should I train the model again or try something else?

Poor results or wrong usage of GLEAN on face images

Hi, I have tried GLEAN in mmediting for 64->1024 face SR, but the generated results are very poor.
My command is python restoration_demo.py configs/restorers/glean/glean_ffhq_16x.py workdirs/glean_ffhq_16x_20210527-61a3afad.pth tests/data/1009.png preds/1009.png --device 2

My input is a 64x64 face image:
[input image]
and the output is:
[output image]

Wondering about the dataset used for training

Thanks for your great work.
I'm wondering whether you only used the cat class of LSUN for training 'cats' and the car class for training 'cars'. If so, what is the training data for the Bedroom and Tower categories? Or did you use all the LSUN categories for training?
Thanks in advance!

Where can I find the pretrained weights

Hi Kelvin,
I want to try to reproduce the results of this nice work, but I can't find the link to the pretrained weights. Can you send me the link?
Thanks

About the LPIPS performance

Hi, I tried to test the LPIPS performance on CelebA-HQ 100 with your glean_ffhq_16x weights. However, the result is 0.2864, which is higher than the 0.2681 reported in the paper. I used https://github.com/richzhang/PerceptualSimilarity version 0.1 with AlexNet.
Do you have any idea why?
Besides, I got a PSNR of 26.847, which is very close to the 26.84 reported in the paper.
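
For reference, a minimal sketch of how LPIPS can be measured with the richzhang/PerceptualSimilarity package (v0.1, AlexNet) as described above; the gt/ and sr/ directories are placeholders for aligned ground-truth and restored images:

    # LPIPS over paired images with the lpips package (net='alex', version='0.1').
    import glob
    import lpips
    import torch

    loss_fn = lpips.LPIPS(net='alex', version='0.1')

    scores = []
    for gt_path, sr_path in zip(sorted(glob.glob('gt/*.png')),
                                sorted(glob.glob('sr/*.png'))):
        gt = lpips.im2tensor(lpips.load_image(gt_path))  # HWC uint8 -> NCHW in [-1, 1]
        sr = lpips.im2tensor(lpips.load_image(sr_path))
        with torch.no_grad():
            scores.append(loss_fn(gt, sr).item())

    print('mean LPIPS:', sum(scores) / len(scores))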

Thanks

Test set!

Hi!
I know your test set is CelebA-HQ.
Can I use the images numbered 0 through 99 from the original CelebA-HQ dataset?

training with different input size

Hello,
Thanks for your spectacular paper and materials.
I want to train your network with an input size of 16x16 or 32x32 and an output size of 256x256.
I think I cannot use your provided pretrained weights (ffhq x16) for fine-tuning, because you used a StyleGAN specific to 1024x1024. 1. Is that true?

So I should use the StyleGAN that is specific to 256x256. I have downloaded it from
https://catalog.ngc.nvidia.com/orgs/nvidia/teams/research/models/stylegan2/files,
but I cannot find the pretrained discriminator weights. 2. Would you please help me find them?

And as I checked the file (glean_ffhq_16x.py and the screenshot below), I should change input size = 16, output size = 256, style channels = 256, and the ckpt_path URLs of the generator and discriminator. 3. Is this correct? Is there anything else I should change?

[screenshot of glean_ffhq_16x.py]

Thanks in advance.
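
For reference, regarding point 3, these are the kinds of fields one would expect to edit in a copy of glean_ffhq_16x.py. The key names follow the MMEditing GLEAN config, but the checkpoint paths are placeholders and the style-channel value must match whichever 256x256 StyleGAN2 is used, so treat this as a sketch rather than a verified recipe:

    # Illustrative model section for a 16x16 -> 256x256 GLEAN variant.
    # Verify every key and value against your local glean_ffhq_16x.py.
    model = dict(
        type='GLEAN',
        generator=dict(
            type='GLEANStyleGANv2',
            in_size=16,           # LR input resolution
            out_size=256,         # HR output resolution
            style_channels=512,   # must match the pretrained 256x256 StyleGAN2 (often 512)
            pretrained=dict(
                ckpt_path='path/or/url/to/stylegan2_256_generator.pth',  # placeholder
                prefix='generator_ema')),
        discriminator=dict(
            type='StyleGAN2Discriminator',
            in_size=256,
            pretrained=dict(
                ckpt_path='path/or/url/to/stylegan2_256_discriminator.pth',  # placeholder
                prefix='discriminator')),
    )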

Where can I find the dataset corresponding to your meta-info file

Hi Kelvin,
I want to try to reproduce the results of this nice work, but I can't find the dataset corresponding to data/CelebA-HQ/meta_info_CelebAHQ_val100_GT.txt.
Where can I download it?

Here is the error from the log: FileNotFoundError: [Errno 2] No such file or directory: 'data/CelebA-HQ/BIx16_down/00001.png'

Thanks

Some questions about the resizing method.

Hi, thanks for your kind response! I have another question about the resize method used in Python. You have mentioned that the MATLAB default bicubic resize should be used in GLEAN; however, I think the resize method might actually be cv2.INTER_AREA.
Here are the resized results in Python OpenCV (cv2):
bicubic: [1009_bicubic]
area: [1009_area]
The output result for the bicubic input is:
[image]
And the result for the area input is:
[image]
It can be seen that area is much better than bicubic, but the area result still has some artifacts (hair). I think maybe the resize method still has some problems? The sample is 1009.jpg in CelebA-HQ.
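
For reference, a small sketch of the two cv2 downscaling calls compared above. MATLAB's default bicubic imresize applies an antialiasing prefilter when shrinking, which cv2.INTER_CUBIC does not, and that is likely why cv2.INTER_AREA looks closer to the expected input:

    # Downscale a 1024x1024 CelebA-HQ image to 64x64 with two cv2 interpolations.
    # Plain cv2.INTER_CUBIC skips the antialiasing that MATLAB's bicubic imresize
    # applies when shrinking, so the two produce visibly different LR images.
    import cv2

    img = cv2.imread('1009.jpg')  # 1024x1024 HR image (placeholder path)

    lr_cubic = cv2.resize(img, (64, 64), interpolation=cv2.INTER_CUBIC)
    lr_area = cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)

    cv2.imwrite('1009_bicubic.png', lr_cubic)
    cv2.imwrite('1009_area.png', lr_area)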

Questions about training and testing LSUN dataset

Hi, Kelvin. Thanks for sharing this impressive work. I have a question about LSUN dataset.
For example, for the cat category there are only about 30k images in your meta_info file, while the LSUN cat dataset contains about 1M images. If I want to train and test other LSUN categories, such as tower, car, and so on, how should I select the images and crop/resize them?
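
For reference, one common LSUN preprocessing is a center crop to a square followed by a resize to 256x256; this is only an assumption, not necessarily the procedure used to build GLEAN's meta_info files:

    # Assumed LSUN preprocessing sketch: center crop to a square, resize to 256.
    import cv2

    def center_crop_resize(img, size=256):
        h, w = img.shape[:2]
        s = min(h, w)
        top, left = (h - s) // 2, (w - s) // 2
        crop = img[top:top + s, left:left + s]
        return cv2.resize(crop, (size, size), interpolation=cv2.INTER_AREA)

    img = cv2.imread('lsun_cat_example.jpg')  # placeholder path
    cv2.imwrite('lsun_cat_256.png', center_crop_resize(img))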

GAN network

Hello, I would like to ask: is GLEAN a GAN network? The discriminator is StyleGAN2Discriminator.

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repo branches (OpenMMLab 1.0 branch → OpenMMLab 2.0 branch):

MMEngine: (new) → 0.x
MMCV: 1.x → 2.x
MMDetection: 0.x, 1.x, 2.x → 3.x
MMAction2: 0.x → 1.x
MMClassification: 0.x → 1.x
MMSegmentation: 0.x → 1.x
MMDetection3D: 0.x → 1.x
MMEditing: 0.x → 1.x
MMPose: 0.x → 1.x
MMDeploy: 0.x → 1.x
MMTracking: 0.x → 1.x
MMOCR: 0.x → 1.x
MMRazor: 0.x → 1.x
MMSelfSup: 0.x → 1.x
MMRotate: 1.x → 1.x
MMYOLO: (new) → 0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

About input 32x32 human face

Hi Kelvin,
I want to input 32x32 face images and output 1024x1024 images using GLEAN.
Is the released model (glean_ffhq_16x) trained to take only 64x64 inputs and produce 1024x1024 outputs?
Can I run my test just by modifying the encoder and decoder?

Looking forward to your reply!
Thanks!
