
celebamask-hq's People

Contributors

liuziwei7, steven413d, switchablenorms


celebamask-hq's Issues

How to process image without affecting background?

Hi, first of all, thank you very much for sharing such a nice project! I am trying to use it on a real-time webcam, but editing facial parts also affects the background. Is there a way to separate the background so that it won't be affected?

Only 18 classes can be found.

1: hair,
2: l_brow, 3: r_brow
4: l_eye, 5: r_eye, 6: eye_g
7: l_ear, 8: r_ear, 9: ear_r
10: nose, 11: mouth, 12: skin
13: u_lip, 14: l_lip
15: neck, 16: neck_l
17: cloth, 18: hat
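For quick lookups when post-processing masks, the list above can be turned into dictionaries. This is only a sketch following the numbering in this issue; note that g_mask.py in the repo numbers the classes in a different order (with skin = 1):

```python
# Hedged sketch: id <-> name lookup tables for the 18 annotated classes,
# numbered as in the list above; 0 is reserved for background.
CLASS_NAMES = [
    'hair', 'l_brow', 'r_brow', 'l_eye', 'r_eye', 'eye_g',
    'l_ear', 'r_ear', 'ear_r', 'nose', 'mouth', 'skin',
    'u_lip', 'l_lip', 'neck', 'neck_l', 'cloth', 'hat',
]
ID_TO_NAME = {i + 1: name for i, name in enumerate(CLASS_NAMES)}
NAME_TO_ID = {name: i for i, name in ID_TO_NAME.items()}
```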

Code refactoring for g_mask.py

```python
import os
import os.path as osp

import cv2
import numpy as np
from PIL import Image

face_sep_mask = 'CelebAMask-HQ-mask-anno'
mask_path = 'CelebAMask-HQ/mask'

atts = ['skin', 'l_brow', 'r_brow', 'l_eye', 'r_eye', 'eye_g', 'l_ear', 'r_ear', 'ear_r',
        'nose', 'mouth', 'u_lip', 'l_lip', 'neck', 'neck_l', 'cloth', 'hair', 'hat']

counter = 0
total = 0
for i in range(15):
    for j in range(i * 2000, (i + 1) * 2000):
        mask = np.zeros((512, 512))

        for l, att in enumerate(atts, 1):
            total += 1
            file_name = ''.join([str(j).rjust(5, '0'), '_', att, '.png'])
            path = osp.join(face_sep_mask, str(i), file_name)

            if os.path.exists(path):
                counter += 1
                sep_mask = np.array(Image.open(path).convert('P'))
                mask[sep_mask == 225] = l
        cv2.imwrite('{}/{}.png'.format(mask_path, j), mask)
```

How can I train MaskGAN for another dataset?

Hi,

Thanks for sharing your code! MaskGAN is a very interesting project.
I tried to train MaskGAN on another dataset. I know the overall training pipeline is divided into two stages (Stage-1, Stage-2). However, I don't know how to get the pretrained Ga, Enc_VAE, and Dec_VAE (Stage-1).
If possible, can you tell me how to train MaskGAN on another dataset with this code?

Thank you.

Have trouble running this demo

I added all the datasets to the project folder (3.11 GB after unzipping). I put the HQ images under './Data_preprocessing/train_img' and the label images under './Data_preprocessing/train_label'. But when I train the network, it raises FileNotFoundError: [Errno 2] No such file or directory: './Data_preprocessing/train_label\***.png' (the *** part changes every time). Can you tell me why? Thank you.
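Not an official fix, but the path in the error mixes '/' and '\\', which usually means directory and file name were concatenated by hand. A minimal sketch of the portable alternative (the file name '0.png' is just a placeholder):

```python
import os

# Hedged sketch: build paths with os.path.join so forward and backward
# slashes never get mixed by hand; works the same on Windows and Linux.
# The directory layout matches the issue; '0.png' is a placeholder name.
label_dir = os.path.join('.', 'Data_preprocessing', 'train_label')
label_path = os.path.join(label_dir, '0.png')
```

It is also worth checking that the file names in train_label exactly match those the data loader derives from train_img.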

Download dataset

Hello, google drive link is down. Could you please fix it again? Thanks.

How can I train the model from scratch?

Thanks for your excellent work. I wonder when you will release the training code and pre-trained models, and how I can train the whole model from scratch. Looking forward to your reply.

can't download dataset

Hi,

I tried to download the dataset from both links and they both failed.
Can you please check the links or upload the dataset again?

About the colorization of Segmentation images?

Hi, is it possible to directly colorize the individual segmentation images and concatenate them together?
Also, I tried to run run_test.sh with a smaller batch size and decreased the values of other parameters, but it still reports CUDA out of memory. Can you suggest a solution?
Thanks in Advance :)

use of celebA in a book

Hi

I am writing a book for beginners on making their first GAN with Python/PyTorch.

I would like to use the CelebA dataset as an example dataset.

I understand the dataset is "not for commercial use". So I have some questions:

  • Can I have permission to use the CelebA dataset to show a few example images from it?

  • Can I have permission to share a "HDF5" version of the dataset?

I suspect the answer to the above is no. So...

  • Can I use my own cartoon images to illustrate faces, but then only explain how to download, parse, and then write code to train a GAN using celebA? This way no actual images from the dataset are shown. The dataset itself won't be re-distributed. Users will be pointed to your website.

  • I have had legal advice to say that images that GAN outputs are not covered by the same restrictions as the source dataset because they are not actually in the dataset, and no part of them is. Do you have any thoughts on this? I hope to show the results of poor and then good GAN training in my book.

About testing model's performance

Hi,
Could you please provide the testing code for calculating mAcc? I re-trained a model based on your design but on a different dataset, and I want to evaluate its performance.
Thanks~

How to change the size of output

Hi, thanks for sharing such a great project. I have a question about how to change the size of the output image. We need the labeled output to be 256×256, and there is a parameter called 'imsize' that provides an option to change the size, but I get an input error when I change imsize from the default 512×512 to 256×256. Can you help me with that?

Model does not generalize well

I tested with an external image (not from CelebA-HQ) and saw that the identity is not preserved.

The demo you provide works well with images from your training set, but not with other images. How do you see this problem?

Demo problem

Hi, I tried to run demo.py but found a problem with a missing G model: "./checkpoints/label2face_512p/latest_net_G.pth not exists yet!" Could you please share latest_net_G.pth? Looking forward to your reply!

Question about BN in the decoder

Hi, it seems you didn't use ReLU and BN on the decoder side. Did you implement it this way on purpose?

In the definition of unetUp:
self.conv = unetConv2(in_size, out_size, False), where False means is_batchnorm=False

Thanks.

Hi! How to get the ground-truth masks of some other datasets?

Thank you for your wonderful code and dataset!
How were the ground-truth masks of the HQ dataset produced: manual annotation, or annotation by some segmentation network?
The GAN demo project seems to use the GAN to transform labels into synthesized images.
In the face-parsing project the results seem poor, especially for large-pose faces. Can you tell me how the ground-truth masks were produced, or recommend a GAN that can produce such masks?
Apologies for my poor English!
Looking forward to your response!
Thank you!

Where is the training code?

Dear authors, I have read your paper

Lee, C.-H., Liu, Z., Wu, L., & Luo, P. (2019). MaskGAN: Towards Diverse and Interactive Facial Image Manipulation. http://arxiv.org/abs/1907.11922

and am interested to reimplement it in TensorFlow 2.0.

I'm having some issues with the VAE loss and would like to see how it is implemented but I can't seem to find the training code in this repo.

Most examples of VAEs on the web use toy datasets like MNIST, where the loss consists of the binary cross-entropy loss plus the KL divergence loss. However, since the masks can have 19 classes, I'm not certain whether I can simply replace the binary cross-entropy loss with the categorical cross-entropy loss.

Please help.

Issue while running g_mask.py

Hi, thanks for the code.
I have been trying to preprocess the masks, but I keep getting this error while running g_mask.py:

    Exception has occurred: TypeError: bad operand type for unary +: 'str'
    File "D:\Datasets\CelebAMask-HQ\Data_preprocessing\g_mask.py", line 21, in <module>
      filename = os.path.join(folder_base, str(folder_num), str(k).rjust(5, '0')+ + '_' + label + '.png')
    TypeError: bad operand type for unary +: 'str'

When I change the "+ +" to just "+", the code runs but generates only black PNG files:

    filename = os.path.join(folder_base, str(folder_num), str(k).rjust(5, '0') + '_' + label + '.png')

Am I doing something incorrect?

A question about the "skin" label

Hi, thanks for your contribution in building this dataset. Could you please help me with the following question?

I'm not familiar with face parsing, but it seems the "skin" region also includes "eye", "lip", "nose", etc. Have you considered removing these facial components from the "skin" region in the annotation images? Or is it common to annotate in this way?

Thanks.

Small bug in g_mask.py

In CelebAMask-HQ/face_parsing/Data_preprocessing/g_mask.py line 18, it should be k // 2000. Otherwise, there is a possibility of generating only black images.
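A tiny sketch of the fix described above (each annotation subfolder 0..14 holds 2000 images, so the folder index for image k must use integer division):

```python
# Hedged sketch of the suggested fix: each CelebAMask-HQ-mask-anno
# subfolder (0..14) holds 2000 images, so image k lives in folder
# k // 2000. Using true division here can yield wrong folder paths,
# which makes the script silently write all-black masks.
def folder_for(k):
    return k // 2000
```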

Testing face_parsing doesn't work

Seems like there are major issues with the code in face_parsing. Following instructions on the README doesn't work.

  • mixed tabs and spaces in multiple files
  • missing indentation in multiple files
  • wrong/missing input arguments in parameters.py (what is g_num and where did it go?)
    ...

About replace the region from target mask to source mask

Thank you for sharing the code. I want to replace some regions of the source image's mask with regions from a target mask. Could you give me some advice, or share the code for the Attribute Transfer from the paper? I remember it is said to use HopeNet, but I don't know how to use its output, i.e. the roll, pitch, and yaw angle information.

Unlabeled skin. Necks.

The dataset has some images with unlabeled skin.

Example: 573.jpg
The remaining skin between the neck and shoulder is unlabeled (screenshot: Photoshop_2020-03-10_09-50-10).
Therefore I cannot derive an actual background label.

Also, some neck labels are wrong, e.g. 9852 (screenshots: Photoshop_2020-03-10_10-00-32, Photoshop_2020-03-10_10-10-11).

Hi! some issues about testing result.

Hi! According to your README, the network can achieve about 93% accuracy, but when I run your code it only outputs the segmentation result. How do I get the accuracy? Is there a script? Thank you!

where is the 'CelebAMask-HQ-label' folder ?

Wonderful work, but I couldn't find the 'CelebAMask-HQ-label' folder that g_color.py needs. Is 'CelebAMaskHQ-mask' (generated by g_mask.py) the same as 'CelebAMask-HQ-label'? I just want to confirm.

Looking forward to your reply.

Can I share this dataset?

Hi
Thank you for sharing a nice dataset.

A few days ago I implemented a model called SPADE, and I did an experiment using your CelebAMask-HQ dataset (link).

I've been asked to release your dataset with the completed segmentations.
May I release it?

If possible, I will cite you and release the dataset.

celebA-HQ

Are the images in the folder named 'celebA-HQ' the same as the original images from the celebA-HQ dataset?

Apply colors ?

How can I train it differently so that I can edit or apply hair colors, or change the color of the lips?
Regards.

Have trouble with g_mask.py

I put CelebAMask-HQ-mask-anno under Data_preprocessing and ran g_mask.py, but the pictures generated in CelebAMask-HQ-mask were black. Why? Shouldn't they contain the full labels?

Training task

I've rewritten the training code based on the project modules and the paper, but I couldn't reproduce the same results as the pretrained model; my losses don't compare with the pretrained ones. Are there other training tricks that you didn't mention in the original paper?

Problem with download from google drive

Sorry, you can't view or download this file at this time.

Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.

There may be some problems on the mask image below...

Type of Problem
A: missing facial attribute
B: mask area is too large
C: useless label
D: content does not match the label
E: no content
/////////////////////////////////

CelebAMask-HQ-single/0
00481:
00481_hair.png[A]
01499:
01499_hair.png[B]

CelebAMask-HQ-single/1
02260:
02260_hair.png[B]
02281:
02281_hair.png[B]
02905:
02905_hat.png[C]
03137:
03137_hat.png[C]

CelebAMask-HQ-single/2
04790:
04790_hair.png[E]
04790_l_brow.png[E]
04790_l_ear.png[E]
04790_l_eye.png[E]
04790_l_lip.png[E]
04790_mouth.png[E]
04790_neck.png[E]
04790_nose.png[E]
04790_r_brow.png[E]
04790_r_eye.png[E]
04790_u_lip.png[E]
04995:
04995_skin.png[D]
04995_hair.png[A]
05130:
05130_hair.png[B]
05591:
05591_hair.png[B]
05608:
05608_hair.png[B]

CelebAMask-HQ-single/4
09150:
09150_cloth.png[C]
09895:
09895_hat.png[C]

CelebAMask-HQ-single/5
10184:
10184_hair.png[B]

CelebAMask-HQ-single/6
13008:
13008_hat.png[A]

CelebAMask-HQ-single/7
15587:
15587_hat.png[C]

CelebAMask-HQ-single/8
17586:
17586_hair.png[A]

CelebAMask-HQ-single/9
18279:
18279_skin.png[D]
18279_hair.png[A]
18322:
18322_r_brow.png[D]
18322_hair.png[A]

CelebAMask-HQ-single/10
20043:
20043_hat.png[C]

CelebAMask-HQ-single/11
23088:
23088_hat.png[C]
23888:
23888_hat.png[C]

CelebAMask-HQ-single/13
26534:
26534_hair.png[D]

End

How to split train/test

I downloaded CelebAMask-HQ.zip. After unzipping it, I saw the folders and files below, but I don't know how to split them into train/test. Could you tell me how to do the split, and which folder I should use?
Thanks

CelebA-HQ-img
CelebAMask-HQ-mask-anno
CelebA-HQ-to-CelebA-mapping.txt
CelebAMask-HQ-attribute-anno.txt
CelebAMask-HQ-pose-anno.txt
README.txt
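The archive itself ships no explicit split, so one common convention (not an official answer) is to reuse the original CelebA partition through CelebA-HQ-to-CelebA-mapping.txt. The thresholds below come from CelebA's list_eval_partition.txt; verify them against your copy:

```python
# Hedged sketch: map each HQ image back to its original CelebA file via
# CelebA-HQ-to-CelebA-mapping.txt, then apply CelebA's standard partition:
# 000001-162770 train, 162771-182637 val, 182638-202599 test.
def partition(orig_file):
    n = int(orig_file.split('.')[0])  # e.g. '000013.jpg' -> 13
    if n <= 162770:
        return 'train'
    if n <= 182637:
        return 'val'
    return 'test'
```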

AssertionError: size of input tensor and input format are different.

Hi!
When I run trainer.py, I get this error:
Traceback (most recent call last):
File "/home/data/penny/CelebAMask-HQ-master/face_parsing/main.py", line 27, in
main(config)
File "/home/data/penny/CelebAMask-HQ-master/face_parsing/main.py", line 19, in main
trainer.train()
File "/home/data/penny/CelebAMask-HQ-master/face_parsing/trainer.py", line 120, in train
writer.add_image('imresult/img', (imgs.data + 1) / 2.0, step)
File "/usr/local/lib/python3.5/dist-packages/tensorboardX/writer.py", line 548, in add_image
image(tag, img_tensor, dataformats=dataformats), global_step, walltime)
File "/usr/local/lib/python3.5/dist-packages/tensorboardX/summary.py", line 211, in image
tensor = convert_to_HWC(tensor, dataformats)
File "/usr/local/lib/python3.5/dist-packages/tensorboardX/utils.py", line 103, in convert_to_HWC
tensor shape: {}, input_format: {}".format(tensor.shape, input_format)
AssertionError: size of input tensor and input format are different. tensor shape: (8, 3, 512, 512), input_format: CHW
I tried downgrading tensorboardX, but it didn't work.
Could you tell me how to solve this problem? Thank you very much!
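Not the authors' fix, but the assertion says add_image received a 4-D batch (8, 3, 512, 512) while expecting a single CHW image. One workaround is to tile the batch into one grid before logging; torchvision.utils.make_grid does this properly, and the numpy sketch below only illustrates the idea (no padding or normalization):

```python
import numpy as np

# Hedged sketch: tile a (N, C, H, W) batch into a single (C, H', W')
# image so it satisfies add_image's default 'CHW' format.
def simple_grid(batch, nrow=4):
    n, c, h, w = batch.shape
    ncol = (n + nrow - 1) // nrow          # rows of tiles needed
    grid = np.zeros((c, ncol * h, nrow * w), dtype=batch.dtype)
    for idx in range(n):
        row, col = divmod(idx, nrow)
        grid[:, row * h:(row + 1) * h, col * w:(col + 1) * w] = batch[idx]
    return grid

imgs = np.random.rand(8, 3, 64, 64).astype(np.float32)
grid = simple_grid(imgs)                    # shape (3, 128, 256), valid CHW
```

With such a grid, `writer.add_image('imresult/img', (grid + 1) / 2.0, step)` receives a valid CHW tensor; depending on the version, tensorboardX also provides add_images for batches.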

Stage 2 training

I know that Editing Behavior Simulated Training is divided into two stages.

In stage 2, the dense mapping network and the alpha blending network are trained.

Doesn't the discriminator learn too? If you do train a discriminator, do you use separate discriminators for the two generators (Ga = dense mapping network, Gb = alpha blender)? In other words, are there two discriminators?
Parsing result

I ran the test demo, but the result seems indistinct. It is almost entirely black, with grey hair and mouth; the eyes and nose are invisible. Is this the correct result? Thank you.

Bug needs fixing!

For the script g_partition.py to work, the first line of mapping.txt (the header) needs to be removed!
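Rather than editing the file, a loader can simply skip the header row when parsing. A minimal sketch (read_mapping is a hypothetical helper; the column layout assumes the 'idx orig_idx orig_file' header shipped with the dataset):

```python
# Hedged sketch: parse CelebA-HQ-to-CelebA-mapping.txt while skipping
# its header line, so the file itself never needs to be modified.
def read_mapping(path):
    rows = []
    with open(path) as f:
        next(f)                              # skip the header row
        for line in f:
            idx, orig_idx, orig_file = line.split()
            rows.append((int(idx), int(orig_idx), orig_file))
    return rows
```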
