lizhenwangt / styleavatar
Code of SIGGRAPH 2023 Conference paper: StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
License: BSD 2-Clause "Simplified" License
This seems really cool and I would love to try it. Any update on when the training code and pre-trained models will be released?
How can I find the whole dataset, with its four folders (render, image, ...)?
Dear Author,
I encountered the following error when running
python3 train.py --batch 3 --ckpt pretrained/tdmm_lizhen_full.pt --mode 3 path_to_dataset
I have installed stylegan_ops with logs looking fine.
I would be most grateful if you could look into this. Thanks!
Sincerely,
Picard
Detailed logs:
Traceback (most recent call last):
File "/mnt/swh/git/StyleAvatar/styleunet/train.py", line 340, in
train(args, loader, generator, discriminator, g_ema, g_optim, d_optim, device)
File "/mnt/swh/git/StyleAvatar/styleunet/train.py", line 147, in train
fake_img = generator(cond_img, latent)
File "/home/rtcai3/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/swh/git/StyleAvatar/styleunet/networks/generator.py", line 91, in forward
cond_img = self.dwt(condition_img)
File "/home/rtcai3/anaconda3/envs/geneface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/swh/git/StyleAvatar/styleunet/networks/modules.py", line 397, in forward
ll = upfirdn2d(input, self.ll, down=2)
File "/mnt/swh/git/StyleAvatar/styleunet/networks/stylegan2_ops/upfirdn2d.py", line 165, in upfirdn2d
out = UpFirDn2d.apply(input, kernel, up, down, pad)
File "/mnt/swh/git/StyleAvatar/styleunet/networks/stylegan2_ops/upfirdn2d.py", line 121, in forward
out = upfirdn2d_op.upfirdn2d(
TypeError: upfirdn2d() takes from 2 to 8 positional arguments but 10 were given
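(Editorial note: a frequent cause of this kind of arity mismatch is a stale JIT-compiled copy of the stylegan2 ops: the cached `upfirdn2d` binary was built against an older version of the extension's Python wrapper, so the compiled entry point no longer matches the call site. A hedged sketch of clearing the cached builds so PyTorch recompiles them on the next run; the cache location and extension names are assumptions about a default setup.)

```python
import os
import shutil

def purge_torch_extension_cache(ext_names=("upfirdn2d", "fused")):
    """Delete cached JIT builds of the stylegan2 ops so PyTorch recompiles them.

    The cache root and extension names are assumptions about a default setup;
    set TORCH_EXTENSIONS_DIR if your cache lives elsewhere.
    """
    cache_root = os.path.expanduser(
        os.environ.get("TORCH_EXTENSIONS_DIR", "~/.cache/torch_extensions"))
    removed = []
    for root, dirs, _files in os.walk(cache_root):
        for d in [d for d in dirs if any(n in d for n in ext_names)]:
            shutil.rmtree(os.path.join(root, d), ignore_errors=True)
            dirs.remove(d)  # pruned: do not descend into the deleted tree
            removed.append(d)
    return removed
```

Rerunning `train.py` after this forces a fresh compile against the installed torch headers; if the error persists, the extension source and its Python wrapper are likely from mismatched repo versions.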
Thanks for the great repo.
I'm training the full StyleAvatar, specifically with the command `python train.py --batch 3 path-to-dataset`. I'm training from scratch, as the checkpoints have not been shared yet.
On an A10 GPU, it takes about a week with the default training parameters. Is that normal? I ask because the paper mentions:
The proposed network can converge within two hours while ensuring high image quality and a forward rendering time of only 20 milliseconds.
So maybe I'm missing something, can you help? :)
Thanks again for the great work. Unfortunately, the output is still quite glitchy, as mentioned in the paper. Do you have any recommendations on how to improve that?
For example:
I'm running the full StyleAvatar, but it seems to resize images the wrong way: it zooms out a lot on the face in each image (even though I used a 1536 crop size from FaceVerse and there is plenty of space around the face). This throws off most of the input images.
Is there a mistake, or is this normal?
The exe version of the preprocessing requires some cuDNN libraries that are missing. Please include them in the distributable zip, or note this in the README, so future users do not run into the same problem.
Hi,
This is really nice work! I converted my new video into raw data (render, image, uv, exp.txt, id.txt, ...) using the Python method. Can I use lizhen_full_python.pt to train this new subject, or is it only applicable to Lizhen? If so, how do I train new characters?
I use `python train.py --batch 3 --ckpt pretrained/lizhen_full_python.pt my_new_dataset`.
Hello author,
In this line, should you add the id coefficients to the existing id coefficients, or just replace them? https://github.com/LizhenWangT/FaceVerse/blob/24b86858b99035b54bce8514521df53af7100f9d/faceversev3_jittor/tracking_offline_cuda.py#L132C33-L132C33
INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 8.9 got compute 8.6, please rebuild
Any suggestions on how to rebuild, or a release of the compute 8.6 version, would be greatly appreciated!
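(Editorial note: a serialized TensorRT engine plan only runs on the compute capability it was built for, so an 8.9 plan cannot execute on an 8.6 GPU; it has to be rebuilt locally from the ONNX model. A rough sketch of assembling the `trtexec` rebuild command; the file names are placeholders, and `--fp16` is an assumption about the original precision.)

```python
import shutil
import subprocess

def build_trt_engine(onnx_path, engine_path, fp16=True, dry_run=False):
    """Assemble (and optionally run) a trtexec command that rebuilds the
    engine on the local GPU. File names here are placeholders."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")  # assumption: the original engine used FP16
    if dry_run:
        return cmd
    if shutil.which("trtexec") is None:
        raise FileNotFoundError("trtexec not on PATH; install TensorRT first")
    subprocess.run(cmd, check=True)
    return cmd
```

The rebuilt `.engine` file would then replace the shipped one before launching the exe.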
I am trying to train StyleAvatar with the newly released code. I used: `python train.py --batch 3 pretrain_dataset/0/`
However I am getting the following error:
load dataset: 0
0%| | 0/800000 [00:00<?, ?it/s]
/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/functional.py:3737: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
0%| | 0/800000 [00:03<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 387, in <module>
train(args, loader, back_generator, face_generator, image_generator, discriminator, g_ema, b_g_optim, f_g_optim, i_g_optim, d_optim, device)
File "train.py", line 189, in train
l1_loss = torch.mean(torch.abs(fake_img - image)) * 5
RuntimeError: The size of tensor a (1024) must match the size of tensor b (784) at non-singleton dimension 3
How can I fix this?
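(Editorial note: one plausible reading of this error is that the generator produced a 1024-pixel output while the ground-truth `image` tensor was only 784 pixels wide, i.e. the dataset frames were not cropped or resized to the resolution the training code expects. A small stdlib-only sketch for auditing frame sizes before training; the folder names `image`/`render`/`uv` and the PNG format are assumptions about the dataset layout.)

```python
import pathlib
import struct

def png_size(path):
    """Read (width, height) from a PNG's IHDR chunk, stdlib only."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError(f"{path} is not a PNG")
    return struct.unpack(">II", header[16:24])

def check_dataset(root, expected):
    """List frames whose resolution differs from the expected square crop."""
    bad = []
    for sub in ("image", "render", "uv"):  # assumed dataset layout
        for p in sorted(pathlib.Path(root, sub).glob("*.png")):
            size = png_size(p)
            if size != (expected, expected):
                bad.append((str(p), size))
    return bad
```

Running `check_dataset("pretrain_dataset/0", 1024)` (path and size are illustrative) would flag any frames that will break the L1 loss shape check.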
Great work; I have been following this for a long time. Today is May 9th, and I would like to ask when you will open-source the code.
What is the recommended training time, the minimum recommended number of frames, and the minimum GPU requirements to get the results in your demo?
I see that the face shapes of the driving and target actors are relatively similar in many of your demos. What's your intuition on how the quality will be affected when they are different? In theory, should the algorithm perform just as well?
Thanks a lot for this - great work! You mention "We will crop the video, then render the tracked FaceVerse model with texture and uv vertex colors"; where is this part of the code?
I am trying to run the pre-trained model with the output from FaceVerse, but the results are poor, and I believe it is because the texture and UV vertex colors are not being rendered.
Hello, when using your pretrained checkpoints I get the following error:
load model: pretrained\tdmm_lizhen_full.pt Traceback (most recent call last): File "test.py", line 57, in <module> g_ema.load_state_dict(ckpt["g_ema"], strict=True) File "C:\Users\user\anaconda3\envs\styleavatar\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for DoubleStyleUnet: Missing key(s) in state_dict: "iwt_1.ll", "iwt_1.lh", "iwt_1.hl", "iwt_1.hh", "iwt_4.ll", "iwt_4.lh", "iwt_4.hl", "iwt_4.hh", "from_rgbs_0.0.iwt.ll", "from_rgbs_0.0.iwt.lh", "from_rgbs_0.0.iwt.hl", "from_rgbs_0.0.iwt.hh", "from_rgbs_0.0.downsample.kernel", "from_rgbs_0.0.dwt.ll", "from_rgbs_0.0.dwt.lh", "from_rgbs_0.0.dwt.hl", "from_rgbs_0.0.dwt.hh", "from_rgbs_0.0.conv.0.weight", "from_rgbs_0.0.conv.0.bias", "from_rgbs_0.1.iwt.ll", "from_rgbs_0.1.iwt.lh", "from_rgbs_0.1.iwt.hl", "from_rgbs_0.1.iwt.hh", "from_rgbs_0.1.downsample.kernel", "from_rgbs_0.1.dwt.ll", "from_rgbs_0.1.dwt.lh", "from_rgbs_0.1.dwt.hl", "from_rgbs_0.1.dwt.hh", "from_rgbs_0.1.conv.0.weight", "from_rgbs_0.1.conv.0.bias", "from_rgbs_0.2.iwt.ll", "from_rgbs_0.2.iwt.lh", "from_rgbs_0.2.iwt.hl", "from_rgbs_0.2.iwt.hh", "from_rgbs_0.2.downsample.kernel", "from_rgbs_0.2.dwt.ll", "from_rgbs_0.2.dwt.lh", "from_rgbs_0.2.dwt.hl", "from_rgbs_0.2.dwt.hh", "from_rgbs_0.2.conv.0.weight", "from_rgbs_0.2.conv.0.bias", "cond_convs_0.0.conv1.0.weight", "cond_convs_0.0.conv1.0.bias", "cond_convs_0.0.conv2.0.kernel", "cond_convs_0.0.conv2.1.weight", "cond_convs_0.0.conv2.1.bias", "cond_convs_0.1.conv1.0.weight", "cond_convs_0.1.conv1.0.bias", "cond_convs_0.1.conv2.0.kernel", "cond_convs_0.1.conv2.1.weight", "cond_convs_0.1.conv2.1.bias", "cond_convs_0.2.conv1.0.weight", "cond_convs_0.2.conv1.0.bias", "cond_convs_0.2.conv2.0.kernel", "cond_convs_0.2.conv2.1.weight", "cond_convs_0.2.conv2.1.bias", "comb_convs_0.0.0.weight", "comb_convs_0.0.0.bias", "comb_convs_0.1.0.weight", 
"comb_convs_0.1.0.bias", "convs_0.0.bias", "convs_0.0.conv.weight", "convs_0.0.conv.blur.kernel", "convs_0.0.conv.modulation.weight", "convs_0.0.conv.modulation.bias", "convs_0.0.noise.weight", "convs_0.1.bias", "convs_0.1.conv.weight", "convs_0.1.conv.modulation.weight", "convs_0.1.conv.modulation.bias", "convs_0.1.noise.weight", "convs_0.2.bias", "convs_0.2.conv.weight", "convs_0.2.conv.blur.kernel", "convs_0.2.conv.modulation.weight", "convs_0.2.conv.modulation.bias", "convs_0.2.noise.weight", "convs_0.3.bias", "convs_0.3.conv.weight", "convs_0.3.conv.modulation.weight", "convs_0.3.conv.modulation.bias", "convs_0.3.noise.weight", "convs_0.4.bias", "convs_0.4.conv.weight", "convs_0.4.conv.blur.kernel", "convs_0.4.conv.modulation.weight", "convs_0.4.conv.modulation.bias", "convs_0.4.noise.weight", "convs_0.5.bias", "convs_0.5.conv.weight", "convs_0.5.conv.modulation.weight", "convs_0.5.conv.modulation.bias", "convs_0.5.noise.weight", "to_rgbs_0.0.bias", "to_rgbs_0.0.iwt.ll", "to_rgbs_0.0.iwt.lh", "to_rgbs_0.0.iwt.hl", "to_rgbs_0.0.iwt.hh", "to_rgbs_0.0.upsample.kernel", "to_rgbs_0.0.dwt.ll", "to_rgbs_0.0.dwt.lh", "to_rgbs_0.0.dwt.hl", "to_rgbs_0.0.dwt.hh", "to_rgbs_0.0.conv.weight", "to_rgbs_0.0.conv.modulation.weight", "to_rgbs_0.0.conv.modulation.bias", "to_rgbs_0.1.bias", "to_rgbs_0.1.iwt.ll", "to_rgbs_0.1.iwt.lh", "to_rgbs_0.1.iwt.hl", "to_rgbs_0.1.iwt.hh", "to_rgbs_0.1.upsample.kernel", "to_rgbs_0.1.dwt.ll", "to_rgbs_0.1.dwt.lh", "to_rgbs_0.1.dwt.hl", "to_rgbs_0.1.dwt.hh", "to_rgbs_0.1.conv.weight", "to_rgbs_0.1.conv.modulation.weight", "to_rgbs_0.1.conv.modulation.bias", "to_rgbs_0.2.bias", "to_rgbs_0.2.iwt.ll", "to_rgbs_0.2.iwt.lh", "to_rgbs_0.2.iwt.hl", "to_rgbs_0.2.iwt.hh", "to_rgbs_0.2.upsample.kernel", "to_rgbs_0.2.dwt.ll", "to_rgbs_0.2.dwt.lh", "to_rgbs_0.2.dwt.hl", "to_rgbs_0.2.dwt.hh", "to_rgbs_0.2.conv.weight", "to_rgbs_0.2.conv.modulation.weight", "to_rgbs_0.2.conv.modulation.bias", "tex_up.0.bias", "tex_up.0.conv.weight", 
"tex_up.0.conv.blur.kernel", "tex_up.0.conv.modulation.weight", "tex_up.0.conv.modulation.bias", "tex_up.0.noise.weight", "tex_up.1.bias", "tex_up.1.conv.weight", "tex_up.1.conv.modulation.weight", "tex_up.1.conv.modulation.bias", "tex_up.1.noise.weight", "tex_up.2.bias", "tex_up.2.iwt.ll", "tex_up.2.iwt.lh", "tex_up.2.iwt.hl", "tex_up.2.iwt.hh", "tex_up.2.upsample.kernel", "tex_up.2.dwt.ll", "tex_up.2.dwt.lh", "tex_up.2.dwt.hl", "tex_up.2.dwt.hh", "tex_up.2.conv.weight", "tex_up.2.conv.modulation.weight", "tex_up.2.conv.modulation.bias", "get_mask.0.weight", "get_mask.0.bias", "cond_addition.conv1.0.weight", "cond_addition.conv1.0.bias", "cond_addition.conv2.0.kernel", "cond_addition.conv2.1.weight", "cond_addition.conv2.1.bias", "from_rgbs_1.0.iwt.ll", "from_rgbs_1.0.iwt.lh", "from_rgbs_1.0.iwt.hl", "from_rgbs_1.0.iwt.hh", "from_rgbs_1.0.downsample.kernel", "from_rgbs_1.0.dwt.ll", "from_rgbs_1.0.dwt.lh", "from_rgbs_1.0.dwt.hl", "from_rgbs_1.0.dwt.hh", "from_rgbs_1.0.conv.0.weight", "from_rgbs_1.0.conv.0.bias", "from_rgbs_1.1.iwt.ll", "from_rgbs_1.1.iwt.lh", "from_rgbs_1.1.iwt.hl", "from_rgbs_1.1.iwt.hh", "from_rgbs_1.1.downsample.kernel", "from_rgbs_1.1.dwt.ll", "from_rgbs_1.1.dwt.lh", "from_rgbs_1.1.dwt.hl", "from_rgbs_1.1.dwt.hh", "from_rgbs_1.1.conv.0.weight", "from_rgbs_1.1.conv.0.bias", "from_rgbs_1.2.iwt.ll", "from_rgbs_1.2.iwt.lh", "from_rgbs_1.2.iwt.hl", "from_rgbs_1.2.iwt.hh", "from_rgbs_1.2.downsample.kernel", "from_rgbs_1.2.dwt.ll", "from_rgbs_1.2.dwt.lh", "from_rgbs_1.2.dwt.hl", "from_rgbs_1.2.dwt.hh", "from_rgbs_1.2.conv.0.weight", "from_rgbs_1.2.conv.0.bias", "cond_convs_1.0.conv1.0.weight", "cond_convs_1.0.conv1.0.bias", "cond_convs_1.0.conv2.0.kernel", "cond_convs_1.0.conv2.1.weight", "cond_convs_1.0.conv2.1.bias", "cond_convs_1.1.conv1.0.weight", "cond_convs_1.1.conv1.0.bias", "cond_convs_1.1.conv2.0.kernel", "cond_convs_1.1.conv2.1.weight", "cond_convs_1.1.conv2.1.bias", "cond_convs_1.2.conv1.0.weight", "cond_convs_1.2.conv1.0.bias", 
"cond_convs_1.2.conv2.0.kernel", "cond_convs_1.2.conv2.1.weight", "cond_convs_1.2.conv2.1.bias", "comb_convs_1.0.0.weight", "comb_convs_1.0.0.bias", "comb_convs_1.1.0.weight", "comb_convs_1.1.0.bias", "comb_convs_1.2.0.weight", "comb_convs_1.2.0.bias", "convs_1.0.bias", "convs_1.0.conv.weight", "convs_1.0.conv.blur.kernel", "convs_1.0.conv.modulation.weight", "convs_1.0.conv.modulation.bias", "convs_1.0.noise.weight", "convs_1.1.bias", "convs_1.1.conv.weight", "convs_1.1.conv.modulation.weight", "convs_1.1.conv.modulation.bias", "convs_1.1.noise.weight", "convs_1.2.bias", "convs_1.2.conv.weight", "convs_1.2.conv.blur.kernel", "convs_1.2.conv.modulation.weight", "convs_1.2.conv.modulation.bias", "convs_1.2.noise.weight", "convs_1.3.bias", "convs_1.3.conv.weight", "convs_1.3.conv.modulation.weight", "convs_1.3.conv.modulation.bias", "convs_1.3.noise.weight", "convs_1.4.bias", "convs_1.4.conv.weight", "convs_1.4.conv.blur.kernel", "convs_1.4.conv.modulation.weight", "convs_1.4.conv.modulation.bias", "convs_1.4.noise.weight", "convs_1.5.bias", "convs_1.5.conv.weight", "convs_1.5.conv.modulation.weight", "convs_1.5.conv.modulation.bias", "convs_1.5.noise.weight", "convs_1.6.bias", "convs_1.6.conv.weight", "convs_1.6.conv.blur.kernel", "convs_1.6.conv.modulation.weight", "convs_1.6.conv.modulation.bias", "convs_1.6.noise.weight", "convs_1.7.bias", "convs_1.7.conv.weight", "convs_1.7.conv.modulation.weight", "convs_1.7.conv.modulation.bias", "convs_1.7.noise.weight", "convs_1.8.bias", "convs_1.8.conv.weight", "convs_1.8.conv.blur.kernel", "convs_1.8.conv.modulation.weight", "convs_1.8.conv.modulation.bias", "convs_1.8.noise.weight", "convs_1.9.bias", "convs_1.9.conv.weight", "convs_1.9.conv.modulation.weight", "convs_1.9.conv.modulation.bias", "convs_1.9.noise.weight", "convs_1.10.bias", "convs_1.10.conv.weight", "convs_1.10.conv.blur.kernel", "convs_1.10.conv.modulation.weight", "convs_1.10.conv.modulation.bias", "convs_1.10.noise.weight", "convs_1.11.bias", 
"convs_1.11.conv.weight", "convs_1.11.conv.modulation.weight", "convs_1.11.conv.modulation.bias", "convs_1.11.noise.weight", "to_rgbs_1.0.bias", "to_rgbs_1.0.iwt.ll", "to_rgbs_1.0.iwt.lh", "to_rgbs_1.0.iwt.hl", "to_rgbs_1.0.iwt.hh", "to_rgbs_1.0.upsample.kernel", "to_rgbs_1.0.dwt.ll", "to_rgbs_1.0.dwt.lh", "to_rgbs_1.0.dwt.hl", "to_rgbs_1.0.dwt.hh", "to_rgbs_1.0.conv.weight", "to_rgbs_1.0.conv.modulation.weight", "to_rgbs_1.0.conv.modulation.bias", "to_rgbs_1.1.bias", "to_rgbs_1.1.iwt.ll", "to_rgbs_1.1.iwt.lh", "to_rgbs_1.1.iwt.hl", "to_rgbs_1.1.iwt.hh", "to_rgbs_1.1.upsample.kernel", "to_rgbs_1.1.dwt.ll", "to_rgbs_1.1.dwt.lh", "to_rgbs_1.1.dwt.hl", "to_rgbs_1.1.dwt.hh", "to_rgbs_1.1.conv.weight", "to_rgbs_1.1.conv.modulation.weight", "to_rgbs_1.1.conv.modulation.bias", "to_rgbs_1.2.bias", "to_rgbs_1.2.iwt.ll", "to_rgbs_1.2.iwt.lh", "to_rgbs_1.2.iwt.hl", "to_rgbs_1.2.iwt.hh", "to_rgbs_1.2.upsample.kernel", "to_rgbs_1.2.dwt.ll", "to_rgbs_1.2.dwt.lh", "to_rgbs_1.2.dwt.hl", "to_rgbs_1.2.dwt.hh", "to_rgbs_1.2.conv.weight", "to_rgbs_1.2.conv.modulation.weight", "to_rgbs_1.2.conv.modulation.bias", "to_rgbs_1.3.bias", "to_rgbs_1.3.iwt.ll", "to_rgbs_1.3.iwt.lh", "to_rgbs_1.3.iwt.hl", "to_rgbs_1.3.iwt.hh", "to_rgbs_1.3.upsample.kernel", "to_rgbs_1.3.dwt.ll", "to_rgbs_1.3.dwt.lh", "to_rgbs_1.3.dwt.hl", "to_rgbs_1.3.dwt.hh", "to_rgbs_1.3.conv.weight", "to_rgbs_1.3.conv.modulation.weight", "to_rgbs_1.3.conv.modulation.bias", "to_rgbs_1.4.bias", "to_rgbs_1.4.iwt.ll", "to_rgbs_1.4.iwt.lh", "to_rgbs_1.4.iwt.hl", "to_rgbs_1.4.iwt.hh", "to_rgbs_1.4.upsample.kernel", "to_rgbs_1.4.dwt.ll", "to_rgbs_1.4.dwt.lh", "to_rgbs_1.4.dwt.hl", "to_rgbs_1.4.dwt.hh", "to_rgbs_1.4.conv.weight", "to_rgbs_1.4.conv.modulation.weight", "to_rgbs_1.4.conv.modulation.bias", "to_rgbs_1.5.bias", "to_rgbs_1.5.iwt.ll", "to_rgbs_1.5.iwt.lh", "to_rgbs_1.5.iwt.hl", "to_rgbs_1.5.iwt.hh", "to_rgbs_1.5.upsample.kernel", "to_rgbs_1.5.dwt.ll", "to_rgbs_1.5.dwt.lh", "to_rgbs_1.5.dwt.hl", "to_rgbs_1.5.dwt.hh", 
"to_rgbs_1.5.conv.weight", "to_rgbs_1.5.conv.modulation.weight", "to_rgbs_1.5.conv.modulation.bias". Unexpected key(s) in state_dict: "from_rgbs.0.iwt.ll", "from_rgbs.0.iwt.lh", "from_rgbs.0.iwt.hl", "from_rgbs.0.iwt.hh", "from_rgbs.0.downsample.kernel", "from_rgbs.0.dwt.ll", "from_rgbs.0.dwt.lh", "from_rgbs.0.dwt.hl", "from_rgbs.0.dwt.hh", "from_rgbs.0.conv.0.weight", "from_rgbs.0.conv.0.bias", "from_rgbs.1.iwt.ll", "from_rgbs.1.iwt.lh", "from_rgbs.1.iwt.hl", "from_rgbs.1.iwt.hh", "from_rgbs.1.downsample.kernel", "from_rgbs.1.dwt.ll", "from_rgbs.1.dwt.lh", "from_rgbs.1.dwt.hl", "from_rgbs.1.dwt.hh", "from_rgbs.1.conv.0.weight", "from_rgbs.1.conv.0.bias", "from_rgbs.2.iwt.ll", "from_rgbs.2.iwt.lh", "from_rgbs.2.iwt.hl", "from_rgbs.2.iwt.hh", "from_rgbs.2.downsample.kernel", "from_rgbs.2.dwt.ll", "from_rgbs.2.dwt.lh", "from_rgbs.2.dwt.hl", "from_rgbs.2.dwt.hh", "from_rgbs.2.conv.0.weight", "from_rgbs.2.conv.0.bias", "cond_convs.0.conv1.0.weight", "cond_convs.0.conv1.0.bias", "cond_convs.0.conv2.0.kernel", "cond_convs.0.conv2.1.weight", "cond_convs.0.conv2.1.bias", "cond_convs.1.conv1.0.weight", "cond_convs.1.conv1.0.bias", "cond_convs.1.conv2.0.kernel", "cond_convs.1.conv2.1.weight", "cond_convs.1.conv2.1.bias", "cond_convs.2.conv1.0.weight", "cond_convs.2.conv1.0.bias", "cond_convs.2.conv2.0.kernel", "cond_convs.2.conv2.1.weight", "cond_convs.2.conv2.1.bias", "comb_convs.0.0.weight", "comb_convs.0.0.bias", "comb_convs.1.0.weight", "comb_convs.1.0.bias", "comb_convs.2.0.weight", "comb_convs.2.0.bias", "convs.0.bias", "convs.0.conv.weight", "convs.0.conv.blur.kernel", "convs.0.conv.modulation.weight", "convs.0.conv.modulation.bias", "convs.0.noise.weight", "convs.1.bias", "convs.1.conv.weight", "convs.1.conv.modulation.weight", "convs.1.conv.modulation.bias", "convs.1.noise.weight", "convs.2.bias", "convs.2.conv.weight", "convs.2.conv.blur.kernel", "convs.2.conv.modulation.weight", "convs.2.conv.modulation.bias", "convs.2.noise.weight", "convs.3.bias", 
"convs.3.conv.weight", "convs.3.conv.modulation.weight", "convs.3.conv.modulation.bias", "convs.3.noise.weight", "convs.4.bias", "convs.4.conv.weight", "convs.4.conv.blur.kernel", "convs.4.conv.modulation.weight", "convs.4.conv.modulation.bias", "convs.4.noise.weight", "convs.5.bias", "convs.5.conv.weight", "convs.5.conv.modulation.weight", "convs.5.conv.modulation.bias", "convs.5.noise.weight", "convs.6.bias", "convs.6.conv.weight", "convs.6.conv.blur.kernel", "convs.6.conv.modulation.weight", "convs.6.conv.modulation.bias", "convs.6.noise.weight", "convs.7.bias", "convs.7.conv.weight", "convs.7.conv.modulation.weight", "convs.7.conv.modulation.bias", "convs.7.noise.weight", "convs.8.bias", "convs.8.conv.weight", "convs.8.conv.blur.kernel", "convs.8.conv.modulation.weight", "convs.8.conv.modulation.bias", "convs.8.noise.weight", "convs.9.bias", "convs.9.conv.weight", "convs.9.conv.modulation.weight", "convs.9.conv.modulation.bias", "convs.9.noise.weight", "convs.10.bias", "convs.10.conv.weight", "convs.10.conv.blur.kernel", "convs.10.conv.modulation.weight", "convs.10.conv.modulation.bias", "convs.10.noise.weight", "convs.11.bias", "convs.11.conv.weight", "convs.11.conv.modulation.weight", "convs.11.conv.modulation.bias", "convs.11.noise.weight", "to_rgbs.0.bias", "to_rgbs.0.iwt.ll", "to_rgbs.0.iwt.lh", "to_rgbs.0.iwt.hl", "to_rgbs.0.iwt.hh", "to_rgbs.0.upsample.kernel", "to_rgbs.0.dwt.ll", "to_rgbs.0.dwt.lh", "to_rgbs.0.dwt.hl", "to_rgbs.0.dwt.hh", "to_rgbs.0.conv.weight", "to_rgbs.0.conv.modulation.weight", "to_rgbs.0.conv.modulation.bias", "to_rgbs.1.bias", "to_rgbs.1.iwt.ll", "to_rgbs.1.iwt.lh", "to_rgbs.1.iwt.hl", "to_rgbs.1.iwt.hh", "to_rgbs.1.upsample.kernel", "to_rgbs.1.dwt.ll", "to_rgbs.1.dwt.lh", "to_rgbs.1.dwt.hl", "to_rgbs.1.dwt.hh", "to_rgbs.1.conv.weight", "to_rgbs.1.conv.modulation.weight", "to_rgbs.1.conv.modulation.bias", "to_rgbs.2.bias", "to_rgbs.2.iwt.ll", "to_rgbs.2.iwt.lh", "to_rgbs.2.iwt.hl", "to_rgbs.2.iwt.hh", "to_rgbs.2.upsample.kernel", 
"to_rgbs.2.dwt.ll", "to_rgbs.2.dwt.lh", "to_rgbs.2.dwt.hl", "to_rgbs.2.dwt.hh", "to_rgbs.2.conv.weight", "to_rgbs.2.conv.modulation.weight", "to_rgbs.2.conv.modulation.bias", "to_rgbs.3.bias", "to_rgbs.3.iwt.ll", "to_rgbs.3.iwt.lh", "to_rgbs.3.iwt.hl", "to_rgbs.3.iwt.hh", "to_rgbs.3.upsample.kernel", "to_rgbs.3.dwt.ll", "to_rgbs.3.dwt.lh", "to_rgbs.3.dwt.hl", "to_rgbs.3.dwt.hh", "to_rgbs.3.conv.weight", "to_rgbs.3.conv.modulation.weight", "to_rgbs.3.conv.modulation.bias", "to_rgbs.4.bias", "to_rgbs.4.iwt.ll", "to_rgbs.4.iwt.lh", "to_rgbs.4.iwt.hl", "to_rgbs.4.iwt.hh", "to_rgbs.4.upsample.kernel", "to_rgbs.4.dwt.ll", "to_rgbs.4.dwt.lh", "to_rgbs.4.dwt.hl", "to_rgbs.4.dwt.hh", "to_rgbs.4.conv.weight", "to_rgbs.4.conv.modulation.weight", "to_rgbs.4.conv.modulation.bias", "to_rgbs.5.bias", "to_rgbs.5.iwt.ll", "to_rgbs.5.iwt.lh", "to_rgbs.5.iwt.hl", "to_rgbs.5.iwt.hh", "to_rgbs.5.upsample.kernel", "to_rgbs.5.dwt.ll", "to_rgbs.5.dwt.lh", "to_rgbs.5.dwt.hl", "to_rgbs.5.dwt.hh", "to_rgbs.5.conv.weight", "to_rgbs.5.conv.modulation.weight", "to_rgbs.5.conv.modulation.bias".
Any clue about what is happening? The command I used was `python test.py --render_dir D:\TRASPASOS\styleavatar\code\results\lizhen_20210318_1536\render --uv_dir D:\TRASPASOS\styleavatar\code\results\lizhen_20210318_1536\uv --ckpt pretrained\tdmm_lizhen_full.pt --save_dir output\lizhen`. It gives a similar error with the tdmm_lizhen.pt checkpoint.
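(Editorial note: reading the two key lists side by side is suggestive: the keys the model expects carry `_0`/`_1` suffixes (`convs_0.…`, `to_rgbs_1.…`, i.e. the DoubleStyleUnet used by the full model), while the checkpoint holds unsuffixed names (`convs.…`, `to_rgbs.…`, i.e. a single StyleUNet). That points to a checkpoint/mode mismatch rather than a corrupt file. A small pure-Python helper sketch for making such mismatches obvious before calling `load_state_dict`; in practice the inputs would be `model.state_dict().keys()` and `ckpt["g_ema"].keys()`.)

```python
def diff_state_dict_keys(model_keys, ckpt_keys):
    """Return (missing_from_ckpt, unexpected_in_ckpt) as sorted lists."""
    m, c = set(model_keys), set(ckpt_keys)
    return sorted(m - c), sorted(c - m)

def top_level_prefixes(keys):
    """Group keys by their first dotted component, e.g. 'convs_0' vs 'convs',
    which exposes architecture-level mismatches at a glance."""
    return sorted({k.split(".", 1)[0] for k in keys})
```

Comparing `top_level_prefixes` of both sides shows immediately whether the checkpoint was saved from a different network class than the one `test.py` is constructing.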
When testing the Python version of the full StyleAvatar pretrained model, I got the error below:
python test.py --render_dir test/render --uv_dir test/uv --ckpt pretrained/lizhen_full_python.pt --save_dir output/test
load model: pretrained/lizhen_full_python.pt
Traceback (most recent call last):
File "test.py", line 56, in
ckpt = torch.load(args.ckpt)
File "/root/miniconda3/envs/styleavatar/lib/python3.8/site-packages/torch/serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/root/miniconda3/envs/styleavatar/lib/python3.8/site-packages/torch/serialization.py", line 1172, in _load
result = unpickler.load()
File "/root/miniconda3/envs/styleavatar/lib/python3.8/site-packages/torch/serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "/root/miniconda3/envs/styleavatar/lib/python3.8/site-packages/torch/serialization.py", line 1112, in load_tensor
storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage)._typed_storage()._untyped_storage
RuntimeError: PytorchStreamReader failed reading file data/104: invalid header or archive is corrupted
It seems the model file is corrupted.
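(Editorial note: `PytorchStreamReader failed reading file ...: invalid header or archive is corrupted` almost always indicates a truncated or mangled download rather than a code problem. If a reference checksum is available (an assumption; the repo may not publish one), a quick stdlib hash check rules this out before re-downloading.)

```python
import hashlib

def sha256sum(path, chunk=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks; compare the digest
    against a known-good value (assumed published) to detect a bad download."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

Even without a reference hash, simply comparing the local file size against the size shown by the download host is usually enough to confirm truncation.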
Hello - I read the previous issue about interactive editing (#19); I was wondering if you could walk us through the corresponding coefficients for:
self.labellist.append(QLabel(' pupil x'))
self.labellist.append(QLabel(' pupil y'))
self.labellist.append(QLabel(' eyes'))
self.labellist.append(QLabel(' mouth 1'))
self.labellist.append(QLabel(' mouth 2'))
self.labellist.append(QLabel(' mouth 3'))
self.labellist.append(QLabel(' mouth 4'))
self.labellist.append(QLabel(' mouth 5'))
I want to recreate this on my own, but I'm running into a bit of an issue: how do I replicate the results in the video (without interactive editing), say if I drag 'mouth 3' to the max?
Description:
When I attempt to run faceverse.exe, I encounter the following error related to the CUDA library:
procedure entry point cublasLt_for_cublas_TST could not be located in the dynamic link library C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cublas64_11.dll
Steps to Reproduce:
Installed required .dll files in the application directory.
Ran faceverse.exe.
Expected Behavior:
faceverse.exe runs without any errors.
Actual Behavior:
Received the aforementioned error related to cublas64_11.dll.
Environment:
OS: Windows 10
GPU: Nvidia A4500
CUDA Version: v11.8
Additional Context:
I encountered this issue after I added the missing .dll files to the application's folder as suggested in the documentation.
Any assistance or guidance on resolving this issue would be appreciated.
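(Editorial note: an "entry point ... could not be located" error for `cublas64_11.dll` usually means Windows resolved a different, older copy of the DLL than the CUDA 11.8 one, e.g. a copy shipped by another application earlier on `PATH` or dropped next to the exe. A small cross-platform probe sketch; the library name passed in is whatever you want to test, and on Windows loading goes through `WinDLL`.)

```python
import ctypes
import ctypes.util

def can_load(name):
    """Try to load a shared library; False if it, a dependency, or an
    expected entry point fails to resolve at load time."""
    loader = getattr(ctypes, "WinDLL", ctypes.CDLL)  # WinDLL on Windows
    try:
        loader(name)
        return True
    except OSError:
        return False

def which_copy(name):
    """Ask the platform loader which copy of the library it would pick up
    (None if it cannot find one)."""
    return ctypes.util.find_library(name)
```

If `which_copy("cublas64_11")` points somewhere other than the CUDA v11.8 `bin` directory, the shadowing copy is the likely culprit.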
Thank you for your excellent work and open-source code. I am trying to train a model from scratch using my own dataset. During the training process, I noticed that the results in the "sample" folder were very good. However, when I used your default "test.py" file for inference with the trained model, I obtained poor results. Therefore, I would like to seek your advice on where the problem might be. Thank you very much, and best wishes.
When I launch distributed training for the full StyleAvatar, I get the following error:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Everything is fine when training on a single machine. Any clue on what is happening?
I managed to fix it by passing `find_unused_parameters=True` to every `DistributedDataParallel` constructor, but it gives me several warnings, so I prefer to open an issue rather than a pull request.
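(Editorial note: `find_unused_parameters=True` makes DDP traverse the autograd graph after every backward pass to mark parameters that received no gradient, which is a real per-iteration cost and consistent with the slowdown others report. A sketch of centralizing the constructor arguments so the flag is applied, or dropped, in one place; the argument values are assumptions about a typical single-node setup.)

```python
def ddp_kwargs(local_rank, find_unused=True):
    """Keyword arguments for torch.nn.parallel.DistributedDataParallel.

    find_unused_parameters=True adds a graph traversal per iteration, which
    accounts for part (though normally not all) of the observed slowdown.
    The other values are assumptions about a typical single-node setup.
    """
    return dict(
        device_ids=[local_rank],
        output_device=local_rank,
        broadcast_buffers=False,
        find_unused_parameters=find_unused,
    )
```

Usage would look like `generator = DistributedDataParallel(generator, **ddp_kwargs(args.local_rank))`, making it easy to toggle the flag off once all forward outputs participate in the loss.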
Hello, when training the StyleUNet in mode 3 on an in-the-wild dataset, I used the pretrained mode-1 model with discriminator (face super-resolution) as you suggested ("the training of styleunet can start with the pretrained model 1 with discriminator above"). But I got the following error:
Error(s) in loading state_dict for StyleUNet: size mismatch for from_rgbs.0.conv.0.weight: copying a param with shape torch.Size([32, 12, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 12, 1, 1]). size mismatch for from_rgbs.0.conv.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for from_rgbs.1.conv.0.weight: copying a param with shape torch.Size([256, 12, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 12, 1, 1]). size mismatch for from_rgbs.1.conv.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for cond_convs.0.conv1.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]). size mismatch for cond_convs.0.conv1.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for cond_convs.0.conv2.1.weight: copying a param with shape torch.Size([256, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]). size mismatch for cond_convs.0.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for cond_convs.1.conv1.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for cond_convs.1.conv1.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for cond_convs.1.conv2.1.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). 
size mismatch for comb_convs.0.0.weight: copying a param with shape torch.Size([256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 1024, 3, 3]). size mismatch for comb_convs.0.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for comb_convs.2.0.weight: copying a param with shape torch.Size([512, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). File "/data4/y00028864/code_project/StyleAvatar/styleunet/train.py", line 325, in <module> generator.load_state_dict(ckpt["g"], strict=False) RuntimeError: Error(s) in loading state_dict for StyleUNet: size mismatch for from_rgbs.0.conv.0.weight: copying a param with shape torch.Size([32, 12, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 12, 1, 1]). size mismatch for from_rgbs.0.conv.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for from_rgbs.1.conv.0.weight: copying a param with shape torch.Size([256, 12, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 12, 1, 1]). size mismatch for from_rgbs.1.conv.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for cond_convs.0.conv1.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]). size mismatch for cond_convs.0.conv1.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([128]). size mismatch for cond_convs.0.conv2.1.weight: copying a param with shape torch.Size([256, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]). 
size mismatch for cond_convs.0.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for cond_convs.1.conv1.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for cond_convs.1.conv1.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for cond_convs.1.conv2.1.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for comb_convs.0.0.weight: copying a param with shape torch.Size([256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 1024, 3, 3]). size mismatch for comb_convs.0.0.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for comb_convs.2.0.weight: copying a param with shape torch.Size([512, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
I get this mismatch error. Did I misunderstand your instructions? Thanks!
Out of curiosity: why are you using FaceVerse v3 instead of v1? v1 has a more detailed render, which should let StyleAvatar learn better and track better. What is the rationale for using v3 instead?
In your StyleAvatar paper, the losses include a VGG loss, but there is no VGG loss in the backward pass of the training code. Does omitting the VGG loss actually give better results?
Hello author, the results of your StyleAvatar are very impressive. How long does the training video need to be to achieve good results?
Hi Lizhen,
First, thank you for your awesome work and the public code!
In your paper, you mentioned that you have a pre-training on the 6 videos cropped from 4K videos. I am very interested in your pre-trained model. Could you please provide it? Also, I wish to know more about the details of your pre-training, e.g. the training epochs, the video length, etc.
Thanks,
Zhuowen
When I open the .exe it gives an error telling me to change the directory path. I then edit info.json so I have a custom video input path and an output path (I am using the provided demo video).
But when I open the .exe again, it auto-closes without any messages or errors.
I am running Windows with a 4 GB GPU, which could be the cause. I am also doing no other steps, as it is my understanding that the exe does the pre-processing.
After training on a custom dataset, torch2onnx works perfectly fine, but after converting to TensorRT with the exe and running it, it shows a gray screen and then halts without producing any output. When using your TensorRT model everything works fine. Any clue what is happening?
When I'm doing multi-GPU training, specifically `CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port='1234' train.py --batch 3 path-to-dataset`, I'm told I need to set `find_unused_parameters=True` when initializing `torch.nn.parallel.DistributedDataParallel`. Things work when I make that change, but it feels like training is 2x slower, and I wanted to make sure it doesn't break anything else.
So my questions are:
Thank you!
Full error below
@surya-v100-spot:~/code/StyleAvatar/styleavatar$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port='1234' train_backup.py --batch 3 ~/code/FaceVerse/faceversev3_jittor/output/video
/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
load dataset: 0 video
load dataset: 0 video
0%| | 0/800000 [00:00<?, ?it/s]
/opt/conda/envs/py38/lib/python3.8/site-packages/torch/nn/functional.py:3737: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/opt/conda/envs/py38/lib/python3.8/site-packages/torch/nn/functional.py:3737: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/opt/conda/envs/py38/lib/python3.8/site-packages/torch/autograd/__init__.py:200: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [1, 512], strides() = [1, 1]
bucket_view.sizes() = [1, 512], strides() = [512, 1] (Triggered internally at ../torch/csrc/distributed/c10d/reducer.cpp:323.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/opt/conda/envs/py38/lib/python3.8/site-packages/torch/autograd/__init__.py:200: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [1, 512], strides() = [1, 1]
bucket_view.sizes() = [1, 512], strides() = [512, 1] (Triggered internally at ../torch/csrc/distributed/c10d/reducer.cpp:323.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
0%| | 1/800000 [00:33<7434:14:11, 33.45s/it]
Traceback (most recent call last):
File "train_backup.py", line 387, in <module>
train(args, loader, back_generator, face_generator, image_generator, discriminator, g_ema, b_g_optim, f_g_optim, i_g_optim, d_optim, device)
File "train_backup.py", line 177, in train
feature_back, skip_back = back_generator(video_latent)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1139, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 12 17 22 27 32 37 42 47 52 57
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
Traceback (most recent call last):
File "train_backup.py", line 387, in <module>
train(args, loader, back_generator, face_generator, image_generator, discriminator, g_ema, b_g_optim, f_g_optim, i_g_optim, d_optim, device)
File "train_backup.py", line 177, in train
feature_back, skip_back = back_generator(video_latent)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1139, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 1: 12 17 22 27 32 37 42 47 52 57
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 14910) of binary: /opt/conda/envs/py38/bin/python
Traceback (most recent call last):
File "/opt/conda/envs/py38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/envs/py38/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in <module>
main()
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
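The error above is exactly the case `find_unused_parameters=True` is documented for: some parameters (indices 12, 17, 22, ... in the trace) receive no gradient in a given step, so DDP's reducer waits forever unless told to detect them. The flag adds an extra graph traversal per iteration, which explains some slowdown but should not affect correctness. A minimal single-process sketch of the flag in action, using the gloo backend on CPU and a hypothetical tiny model (not the repo's generator):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process "distributed" setup for illustration only.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

class TwoHead(torch.nn.Module):
    # `unused` never contributes to the loss, which is exactly the
    # situation the DDP error message describes.
    def __init__(self):
        super().__init__()
        self.used = torch.nn.Linear(4, 4)
        self.unused = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.used(x)

model = DDP(TwoHead(), find_unused_parameters=True)
loss = model(torch.randn(2, 4)).sum()
loss.backward()  # with the flag, the reducer marks `unused` as ready
dist.destroy_process_group()
```

If the 2x slowdown matters, an alternative is to restructure the forward pass so every parameter participates in the loss each step, but that requires changing the training code.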
The Windows exe for StyleAvatar cannot be downloaded; it requires access permission:
"Exe: Windows exe version can be downloaded in https://drive.google.com/file/d/1BNflreit1RmJFJCTvFzEZUy78PxzU8qa/view?usp=drive_link, which is much faster."
Sorry for bothering you, but I saw in the demo video that you manage to reenact other faces, like Obama's. How did you do this? As specified in the paper, the generated model can only produce expressions close to those available in the dataset you train StyleAvatar with. Did you find such images of Obama, or is there a way to achieve the same result by training differently?
StyleUNet is great. When will you release the full StyleAvatar, and what are the main differences from StyleUNet?
First of all, thank you for your great work. I have just started working in this area and have a few questions I would like to ask:
Hey, I am able to reproduce the results from the paper. Great work!
I am currently looking into generating a full video with the original background. FaceVerse crops the video to the face to train StyleAvatar, and now I am looking to paste the generated face images back onto the original video. Any ideas or suggestions would be greatly appreciated.
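One straightforward approach, assuming you keep the crop box FaceVerse used for each frame, is to paste the generated crop back into the original frame and feather the seam so the boundary is less visible. A numpy-only sketch; `paste_crop` and the `(x, y, w, h)` box convention are my assumptions, not part of the repo:

```python
import numpy as np

def paste_crop(frame, crop, box, feather=16):
    """Paste `crop` back into `frame` at `box` = (x, y, w, h).

    Assumes `crop` has already been resized to (h, w). A linear ramp
    of `feather` pixels blends the crop edges into the background.
    """
    x, y, w, h = box
    mask = np.ones((h, w, 1), np.float32)
    ramp = np.linspace(0.0, 1.0, feather, dtype=np.float32)
    # Fade the mask toward 0 at all four edges of the crop.
    mask[:feather] *= ramp[:, None, None]
    mask[-feather:] *= ramp[::-1][:, None, None]
    mask[:, :feather] *= ramp[None, :, None]
    mask[:, -feather:] *= ramp[::-1][None, :, None]

    out = frame.astype(np.float32)
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = mask * crop + (1.0 - mask) * region
    return out.astype(frame.dtype)
```

For moving heads, a per-frame box (and possibly Poisson blending or a face-parsing mask instead of the rectangular feather) would give cleaner seams, but this rectangle-plus-feather version is often a good baseline.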