switchablenorms / celebamask-hq
A large-scale face dataset for face parsing, recognition, generation and editing.
1: hair,
2: l_brow, 3: r_brow
4: l_eye, 5: r_eye, 6: eye_g
7: l_ear, 8: r_ear, 9: ear_r
10: nose, 11: mouth, 12: skin
13: u_lip, 14: l_lip
15: neck, 16: neck_l
17: cloth, 18: hat
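For reference, the mapping above can be written out as a Python dict (assuming, as the listing implies, that index 0 is the unlabeled background; note that some scripts in this thread, e.g. the g_mask.py refactor below, enumerate the attributes in a different order):

```python
# Label indices as given in the listing above; index 0 is assumed to be
# the unlabeled background.
CELEBAMASK_HQ_LABELS = {
    0: 'background',
    1: 'hair',
    2: 'l_brow', 3: 'r_brow',
    4: 'l_eye', 5: 'r_eye', 6: 'eye_g',
    7: 'l_ear', 8: 'r_ear', 9: 'ear_r',
    10: 'nose', 11: 'mouth', 12: 'skin',
    13: 'u_lip', 14: 'l_lip',
    15: 'neck', 16: 'neck_l',
    17: 'cloth', 18: 'hat',
}
```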
Hi, thanks for your contribution in building this dataset. Could you please help me with the following question?
I'm not familiar with face parsing, but it seems the "skin" region also includes the "eye", "lip", "nose", etc. regions. Have you considered removing these facial components from the "skin" region in the annotation image? Or is it common to annotate it that way?
Thanks.
Wonderful work, but I didn't find the 'CelebAMask-HQ-label' folder that 'g_color.py' needs. Is the 'CelebAMaskHQ-mask' folder (generated by g_mask) the same as 'CelebAMask-HQ-label'? I just want to confirm.
Looking forward to your reply.
I previously made modifications on top of StyleGAN, but the results were not as good as yours.
Thank you for sharing the code. Now I want to replace some regions of the source image's mask with the target mask. Could you give me some advice, or share the code for the Attribute Transfer described in the paper? I remember it is said to use HopeNet, but I don't know how to use its output, i.e., the roll, pitch, and yaw angle information.
Hi,
I tried to download the dataset from both links and they both failed.
Can you please check the links or upload the dataset again?
Hi, thanks for the code.
I have been trying to preprocess the masks, but I keep getting this error when I run g_mask.py:
Exception has occurred: TypeError: bad operand type for unary +: 'str'
  File "D:\Datasets\CelebAMask-HQ\Data_preprocessing\g_mask.py", line 21, in <module>
    filename = os.path.join(folder_base, str(folder_num), str(k).rjust(5, '0')+ + '_' + label + '.png')
When I change the "+ +" to a single "+", the code runs but generates only black PNG files:
filename = os.path.join(folder_base, str(folder_num), str(k).rjust(5, '0') + '_' + label + '.png')
Am I doing something incorrect?
Hello,
I see some demo images containing eyes and eye-glasses at the same time.
How do you label them? Do you label the eye-glasses region first and then re-label part of it as eyes?
Thanks~
In CelebAMask-HQ/face_parsing/Data_preprocessing/g_mask.py line 18, it should be k // 2000. Otherwise, there is a possibility of generating only black images.
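A minimal sketch of the suggested fix (variable names follow the snippet quoted elsewhere in this thread; the exact surrounding code may differ). Each of the 15 annotation sub-folders holds the masks for 2000 images, so the folder index must be the integer quotient k // 2000:

```python
import os

folder_base = 'CelebAMask-HQ-mask-anno'  # assumed layout: sub-folders 0..14

def anno_path(k, label):
    # Integer division: image k's masks live in sub-folder k // 2000.
    # In Python 3, k / 2000 yields a float, producing a path that never
    # exists on disk, so every output mask stays black.
    folder_num = k // 2000
    return os.path.join(folder_base, str(folder_num),
                        str(k).rjust(5, '0') + '_' + label + '.png')
```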
According to the script g_partition.py, the first line of mapping.txt needs to be removed!
How to split train/test?
I downloaded the CelebAMask-HQ.zip data. After unzipping the file, I saw the folders below, but I don't know how to split them into train/test sets. Could you tell me how to split train/test, and which folder I should use?
Thanks
CelebA-HQ-img
CelebAMask-HQ-mask-anno
CelebA-HQ-to-CelebA-mapping.txt
CelebAMask-HQ-attribute-anno.txt
CelebAMask-HQ-pose-anno.txt
README.txt
CelebA-HQ-img (folder)
CelebA-HQ-mask-anno (folder)
and three text files.
But the mask folder does not exist for training.py?
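One common convention (a sketch, not an official split): map each CelebA-HQ index back to its original CelebA index via CelebA-HQ-to-CelebA-mapping.txt, then reuse CelebA's official partition boundaries (original indices below 162770 are train, below 182637 are validation, the rest test). The column layout of the mapping file is assumed here; check your copy before relying on it:

```python
def split_celebahq(mapping_file='CelebA-HQ-to-CelebA-mapping.txt'):
    """Assign each CelebA-HQ index to train/val/test using the original
    CelebA partition boundaries. Assumes lines of the form
    '<hq_idx> <orig_idx> <orig_file>' after a header row."""
    splits = {'train': [], 'val': [], 'test': []}
    with open(mapping_file) as f:
        next(f)  # skip the header line (cf. the g_partition.py remark above)
        for line in f:
            hq_idx, orig_idx = line.split()[:2]
            orig_idx = int(orig_idx)
            if orig_idx < 162770:
                splits['train'].append(int(hq_idx))
            elif orig_idx < 182637:
                splits['val'].append(int(hq_idx))
            else:
                splits['test'].append(int(hq_idx))
    return splits
```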
I know that Editing Behavior Simulated Training is divided into two stages.
In stage 2, the dense mapping network and the alpha blending network are trained.
Doesn't the discriminator learn as well? If it does, do you use separate multi-scale discriminators for the two generators (Ga = dense mapping network, Gb = alpha blender)?
In other words, are there two discriminators?
Code refactoring for g_mask.py (missing imports, counters, and indentation restored):

```python
import os
import os.path as osp

import cv2
import numpy as np
from PIL import Image

face_sep_mask = 'CelebAMask-HQ-mask-anno'
mask_path = 'CelebAMask-HQ/mask'
atts = ['skin', 'l_brow', 'r_brow', 'l_eye', 'r_eye', 'eye_g', 'l_ear', 'r_ear',
        'ear_r', 'nose', 'mouth', 'u_lip', 'l_lip', 'neck', 'neck_l', 'cloth',
        'hair', 'hat']

counter = 0
total = 0
for i in range(15):
    for j in range(i * 2000, (i + 1) * 2000):
        mask = np.zeros((512, 512))
        for l, att in enumerate(atts, 1):
            total += 1
            file_name = ''.join([str(j).rjust(5, '0'), '_', att, '.png'])
            path = osp.join(face_sep_mask, str(i), file_name)
            if os.path.exists(path):
                counter += 1
                sep_mask = np.array(Image.open(path).convert('P'))
                mask[sep_mask == 225] = l
        cv2.imwrite('{}/{}.png'.format(mask_path, j), mask)
```
Hi,
Could you please provide the testing code for calculating mAcc? I re-trained a model based on your design but on a different dataset, and I want to evaluate its performance.
Thanks~
Hi, thanks for sharing such a great project. I have a question about how to change the size of the output image. We need the labeled output to be 256 × 256, and there is a parameter called 'imsize' that provides an option to change the size. But I get an input error when I change imsize from the default 512 to 256. Can you help me with that?
I ran the test demo, but the result seems indistinct. It's almost black, with grey hair and mouth, and the eyes and nose are barely visible. Is this the correct result? Thank you.
Hi
Thank you for sharing a nice dataset.
A few days ago, I implemented a model called SPADE, and I did an experiment using your CelebAMask-HQ dataset (link).
I've been asked to release a version of your dataset with the completed segmentation.
May I release it?
If possible, I will cite you when releasing the dataset.
The Google link returns a 404 Not Found error.
Can I get another Google link?
Thanks
Hi
I am writing a book for beginners on making their first GAN with Python/PyTorch.
I would like to use the CelebA dataset as an example dataset.
I understand the dataset is "not for commercial use". So I have some questions:
Can I have permission to use the CelebA dataset to show a few example images from it?
Can I have permission to share a "HDF5" version of the dataset?
I suspect the answer to the above is no. So...
Can I use my own cartoon images to illustrate faces, but then only explain how to download, parse, and then write code to train a GAN using celebA? This way no actual images from the dataset are shown. The dataset itself won't be re-distributed. Users will be pointed to your website.
I have had legal advice to say that images that GAN outputs are not covered by the same restrictions as the source dataset because they are not actually in the dataset, and no part of them is. Do you have any thoughts on this? I hope to show the results of poor and then good GAN training in my book.
Where can I get the label maps of the faces? I need them to train my own models. Can you share them with me?
Sorry, you can't view or download this file at this time.
Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.
Dear authors, I have read your paper
Lee, C.-H., Liu, Z., Wu, L., & Luo, P. (2019). MaskGAN: Towards Diverse and Interactive Facial Image Manipulation. http://arxiv.org/abs/1907.11922
and I am interested in reimplementing it in TensorFlow 2.0.
I'm having some issues with the VAE loss and would like to see how it is implemented, but I can't seem to find the training code in this repo.
Most examples of VAEs on the web use toy datasets like MNIST, where the loss consists of a binary cross-entropy loss plus a KL divergence loss. However, since the masks can have 19 classes, I'm not certain whether I can simply replace the binary cross-entropy with a categorical cross-entropy loss.
Please help.
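For what it's worth, replacing the binary cross-entropy with a per-pixel categorical cross-entropy is the usual move for multi-class masks. A minimal PyTorch sketch of such a loss (this is an assumption about the loss structure, not the authors' implementation; `logits`, `target`, `mu`, `logvar`, and `kl_weight` are hypothetical names):

```python
import torch
import torch.nn.functional as F

def vae_mask_loss(logits, target, mu, logvar, kl_weight=1.0):
    """Hedged sketch of a VAE loss for 19-class masks (not the authors' code).

    logits: (N, 19, H, W) decoder output; target: (N, H, W) integer labels;
    mu/logvar: (N, latent_dim) encoder outputs.
    """
    # Per-pixel categorical cross-entropy replaces the binary CE used for MNIST.
    recon = F.cross_entropy(logits, target, reduction='mean')
    # Standard closed-form KL divergence against a unit Gaussian prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl
```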
Hi! According to your README, the network can achieve about 93% accuracy, but when I run your code it only outputs the segmentation result. How do I get the accuracy? Is there a script? Thank you!
Hi, I tried to run demo.py but hit a problem with a missing G model: "./checkpoints/label2face_512p/latest_net_G.pth not exists yet!" Could you please share latest_net_G.pth? Looking forward to your reply!
I tested with an external image (not from CelebAHQ) and saw that the identity is not preserved.
The demo you provide works well with images from your training set, but not with other images. How do you view this problem?
How can I train it differently so that I can edit facial parts, e.g., apply hair colors or change the color of the lips?
Regards.
I added all the datasets into the folder; the file was 3.11 GB after unzipping. I put the HQ pictures under './Data_preprocessing/train_img' and the label pictures under './Data_preprocessing/train_label'. But when I train the network, it shows FileNotFoundError: [Errno 2] No such file or directory: './Data_preprocessing/train_label\***.png' (the *** part keeps changing). Can you tell me why? Thank you.
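One hedged guess at the cause: the mixed separators in './Data_preprocessing/train_label\***.png' suggest the path is concatenated with a hard-coded '\\' (a Windows separator), which on Linux becomes part of the file name rather than a directory separator. Building the path with os.path.join (the names below are illustrative, not from the repo) avoids the mismatch:

```python
import os

def label_path(root, stem):
    # os.path.join uses the running platform's separator, so the same
    # code produces valid paths on both Windows and Linux.
    return os.path.join(root, 'train_label', stem + '.png')
```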
Hello, google drive link is down. Could you please fix it again? Thanks.
Seems like there are major issues with the code in face_parsing. Following instructions on the README doesn't work.
Hi,
Thanks for sharing your code! MaskGAN is a very interesting project.
I tried to train MaskGAN on another dataset. I know that the overall training pipeline is divided into two stages (Stage 1, Stage 2). However, I don't know how to get the pretrained Ga, Enc_VAE, and Dec_VAE (Stage 1).
If possible, can you tell me how to train MaskGAN on another dataset with this code?
Thank you.
subj
I've rewritten the training code based on the project module and the paper, but I couldn't reproduce the same results as the pretrained model; my losses don't match the pretrained ones. Are there other training tricks that you didn't mention in the original paper?
Hi! Thank you for your dataset and codes!
But if I have a face dataset such as FDDB or 300W, can I get the ground-truth labels for those datasets and train your network on them? Thank you!
End
Thanks for your excellent work. I wonder when you will release the training code and pre-trained models, and how I can train the whole model. Looking forward to your reply.
as said
Hi, it seems you didn't use ReLU and BN on the decoder side. Did you implement it this way on purpose?
In the definition of unetUp:
self.conv = unetConv2(in_size, out_size, False)
where False means is_batchnorm=False.
Thanks.
Hi!
When I run trainer.py, I get this error:
Traceback (most recent call last):
  File "/home/data/penny/CelebAMask-HQ-master/face_parsing/main.py", line 27, in <module>
    main(config)
  File "/home/data/penny/CelebAMask-HQ-master/face_parsing/main.py", line 19, in main
    trainer.train()
  File "/home/data/penny/CelebAMask-HQ-master/face_parsing/trainer.py", line 120, in train
    writer.add_image('imresult/img', (imgs.data + 1) / 2.0, step)
  File "/usr/local/lib/python3.5/dist-packages/tensorboardX/writer.py", line 548, in add_image
    image(tag, img_tensor, dataformats=dataformats), global_step, walltime)
  File "/usr/local/lib/python3.5/dist-packages/tensorboardX/summary.py", line 211, in image
    tensor = convert_to_HWC(tensor, dataformats)
  File "/usr/local/lib/python3.5/dist-packages/tensorboardX/utils.py", line 103, in convert_to_HWC
    "tensor shape: {}, input_format: {}".format(tensor.shape, input_format)
AssertionError: size of input tensor and input format are different. tensor shape: (8, 3, 512, 512), input_format: CHW
I tried downgrading the tensorboardX version, but it didn't work.
Could you tell me how to solve this problem? Thank you very much!
Hi, first of all, thank you very much for sharing such a nice project! I am trying to use it on a real-time webcam, but processing the image also affects the background. Is there a way to separate the background so that it won't be affected by editing the facial parts?
Why do I find only Celeb-HQ-img in the .zip file downloaded from BaiduPan (3.08G), but no mask images?
The .zip file in Google Drive seems the same as the one in BaiduPan (same size). Where can I get the mask image/information?
The directories in .zip file are organized as:
--CelebAMask-HQ
----CelebA-HQ-img
------xxx.jpg
------xxx.jpg
------....
------....
How can I get colorful mask images with several color types, as shown in your README.md?
Are the images in the folder named 'CelebA-HQ' the same as the original images from the CelebA-HQ dataset?
I put CelebAMask-HQ-mask-anno under Data_preprocessing and ran g_mask.py, but the pictures generated in CelebAMask-HQ-mask were black. Why? Shouldn't they contain the full labels?
Hi, is it possible to directly colorize the individual segmentation images and concatenate them together?
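In principle, a merged label map can be colorized with a simple palette lookup. A sketch (the palette below is illustrative and not necessarily the one g_color.py uses):

```python
import numpy as np

# Illustrative 19-color palette, one RGB triple per label index 0..18
# (NOT claimed to be the official g_color.py colors).
PALETTE = np.array([
    [0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255],
    [204, 0, 204], [0, 255, 255], [255, 204, 204], [102, 51, 0], [255, 0, 0],
    [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], [255, 51, 153],
    [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]], dtype=np.uint8)

def colorize(label_map):
    """Turn an (H, W) array of label indices 0..18 into an (H, W, 3) RGB image."""
    return PALETTE[label_map]
```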
I tried to run run_test.sh with a smaller batch size and also decreased the values of other parameters, but it still says CUDA out of memory. Can you give me a solution for that?
Thanks in Advance :)
Thank you for your wonderful code and dataset!
How are the ground-truth masks of the HQ dataset produced? By manual annotation, or annotated by some segmentation network?
The GAN demo project seems to use the GAN to transform labels into synthesized images.
In the face-parsing project, the results seem poor, especially for large-pose faces. Can you tell me how the ground-truth masks were produced, or introduce me to a GAN that can produce such masks?
I am very sorry for my bad English!
Looking forward to your response!
Thank you!
Must each pixel belong to exactly one of the 19 categories?