psyai-net / emotalk_release
This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation"
License: Other
Hello, thank you very much for your work. While reproducing it, I found that the results do not converge. Could you provide the training code? Many thanks.
I can do lip sync for any character. I reduced the quality to 10 MB in order to upload the video. If you are interested, write to me on Telegram: The_best_result
(zyt-nerf) amax@amax:~/zyt/audio2face/EmoTalk_release$ python demo.py --wav_path "./audio/disgust.wav"
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at jonatasgrosman/wav2vec2-large-xlsr-53-english and are newly initialized: ['wav2vec2.lm_head.bias', 'wav2vec2.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of Wav2Vec2ForSpeechClassification were not initialized from the model checkpoint at r-f/wav2vec-english-speech-emotion-recognition and are newly initialized: ['wav2vec2.lm_head.bias', 'wav2vec2.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/activation.py:1144: UserWarning: Converting mask without torch.bool dtype to bool; this will negatively affect performance. Prefer to use a boolean mask directly. (Triggered internally at ../aten/src/ATen/native/transformers/attention.cpp:150.)
return torch._native_multi_head_attention(
Traceback (most recent call last):
File "/home/amax/zyt/audio2face/EmoTalk_release/demo.py", line 111, in <module>
main()
File "/home/amax/zyt/audio2face/EmoTalk_release/demo.py", line 106, in main
test(args)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/amax/zyt/audio2face/EmoTalk_release/demo.py", line 30, in test
prediction = model.predict(audio, level, person)
File "/home/amax/zyt/audio2face/EmoTalk_release/model.py", line 140, in predict
bs_out11 = self.transformer_decoder(hidden_states11, hidden_states_emo11_832, tgt_mask=tgt_mask11,
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 360, in forward
output = mod(output, memory, tgt_mask=tgt_mask,
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 698, in forward
x = self.norm1(x + self._sa_block(x, tgt_mask, tgt_key_padding_mask, tgt_is_causal))
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 707, in _sa_block
x = self.self_attn(x, x, x,
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1144, in forward
return torch._native_multi_head_attention(
RuntimeError: Mask shape should match input. mask: [4, 104, 104] input: [1, 4, 104, 104]
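The mismatch above comes from PyTorch 2.x's fused multi-head attention comparing a 3D mask against a 4D input. A minimal sketch of one possible workaround (an assumption about the fix, not the authors' patch): add the missing leading batch dimension to the mask before the decoder call.

```python
import torch

# Sketch of the shape mismatch reported above, assuming the decoder builds
# a per-head causal mask of shape (num_heads, T, T). PyTorch 2.x's fused
# attention path compares it against a 4D input (batch, num_heads, T, T),
# so adding the missing batch dimension makes the shapes line up.
num_heads, T = 4, 104
causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
tgt_mask = causal.unsqueeze(0).repeat(num_heads, 1, 1)  # (4, 104, 104): rejected
tgt_mask_4d = tgt_mask.unsqueeze(0)                     # (1, 4, 104, 104): matches input
print(tuple(tgt_mask_4d.shape))  # (1, 4, 104, 104)
```

Pinning to a PyTorch 1.x release, where the non-fused attention path accepts the 3D mask, may also sidestep the error.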
Could anyone give me an idea of how to apply this animation to generate a realistic talking/lip-syncing head?
Thank you so much.
Hello,
I'm very interested in the blendshape capturing method used for reconstructing the 3D-ETF.
Could you tell me how you capture the facial blendshapes from 2D videos?
Hope you reply, thanks.
Please post instructions for how to use Emo after it is installed. Thanks!
What's the problem?
I set up my environment following the README, but I always get this error: "EGL_NOT_INITIALIZED: EGL is not initialized, or could not be initialized, for the specified EGL display connection."
System: CentOS 7 server
CUDA: 11.7
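For the EGL error above, a common headless-rendering workaround — sketched here under the assumption that the rendering path goes through PyOpenGL (e.g. via pyrender) — is to select an offscreen backend before any OpenGL-related import:

```python
import os

# Assumption: the rendering path uses PyOpenGL (e.g. pyrender), whose
# backend is chosen from this variable at import time. On a headless
# CentOS 7 server, "osmesa" selects software rendering; "egl" works when
# the GPU's EGL drivers are installed. This must run before importing
# any module that touches OpenGL.
os.environ["PYOPENGL_PLATFORM"] = "osmesa"  # or "egl"
print(os.environ["PYOPENGL_PLATFORM"])  # osmesa
```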
How do you render the blendshape coefficients using linear blend skinning? Could you release the code for the blendshape-to-FLAME model?
Where can I download the bpy package that needs to be imported in rend.py?
Hello, I have already sent a request to download the 3D-ETF dataset to [email protected]. Please check your inbox. Thank you very much for taking time out of your busy schedule to read the email; I look forward to your reply! [email protected]
Is the output in Apple FACS format?
Thank you for this great work. Will you release the training script later?
Are the RAVDESS dataset and HDTF dataset mixed for training together?
The repository does not provide the code of the Emotional Vertex Error (EVE) proposed in the paper. I wanted to know if it would be possible to provide the same.
Hello,
I recently came across a section in the supplementary material of a paper where the authors mention a process of performing linear weighting on the corresponding parameters of 52 FLAME head templates to obtain the vertex parameters in a 5023*3 dimensional space. This process appears to be crucial for my research, and I'm very interested in understanding how it was implemented.
Could you provide more details or any available resources on how this transformation was accomplished? Specifically, I am looking for information on how the 52 blendshape coefficients are applied to the FLAME head templates to achieve the vertex transformation.
Thank you for your assistance.
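The linear weighting described above can be sketched as follows. This is a guess at the computation using random stand-ins: `neutral` (the FLAME neutral head, 5023×3) and `templates` (the 52 per-blendshape head templates) are hypothetical arrays, not assets from this repository.

```python
import numpy as np

# Hypothetical stand-ins for the FLAME neutral head and the 52 blendshape
# head templates; the real meshes are not bundled with this sketch.
rng = np.random.default_rng(0)
neutral = rng.standard_normal((5023, 3))
templates = neutral + 0.01 * rng.standard_normal((52, 5023, 3))
weights = rng.uniform(0.0, 1.0, size=52)  # the 52 blendshape coefficients

# Linear weighting: the neutral mesh plus the coefficient-weighted sum of
# each template's offset from neutral, yielding vertices in 5023*3 space.
offsets = templates - neutral                       # (52, 5023, 3)
vertices = neutral + np.einsum("b,bij->ij", weights, offsets)
print(vertices.shape)  # (5023, 3)
```

With all weights at zero this reduces to the neutral head, which is a quick sanity check that the offsets are computed relative to the right base mesh.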
Hello, I am reproducing the code. Could you tell me roughly what value the loss converged to during your training?
Hello! Line 151 of model.py raises an error:
RuntimeError: The shape of the 3D attn_mask is torch.Size([4, 104, 104]), but should be (416, 1, 1).
Running demo.py keeps hitting this dimension error. What should I modify?
How to generate animation with different emotions? It seems that there is no emotion label in demo.py.
GPU: A100-80G
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.17 GiB (GPU 0; 79.19 GiB total capacity; 62.80 GiB already allocated; 1.50 GiB free; 64.06 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
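A sketch of the mitigation the OOM message itself suggests; the value 128 is an illustrative guess, not a tuned recommendation, and it must take effect before the first CUDA allocation. Trimming very long input audio also shrinks the activation footprint.

```python
import os

# Cap the CUDA caching allocator's split size to reduce fragmentation, as
# the OOM message suggests. 128 MB is an illustrative value; set this
# before torch makes its first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # max_split_size_mb:128
```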
Hello, I have been reading your paper and there is one detail I do not understand. As I understand it, your dataset is made up of HDTF and RAVDESS. The paper mentions that the identity one-hot encoding is 24-dimensional. Do these 24 identities correspond to the actors in RAVDESS? If so, how are the HDTF identities encoded? Also, how does the cross-reconstruction loss work with the HDTF dataset, given that its sequences have no emotion labels and no matched content for that loss term to be applied to?
I really hope you could support Windows 11.
Hello, during training I found that many blendshape values in the dataset are negative, e.g. 'mouthDimpleLeft', 'mouthFrownLeft', 'mouthFrownRight', 'mouthLeft', 'mouthLowerDownLeft'. May I ask how you handled this? In theory, ARKit blendshapes all lie in [0, 1] — did you simply leave the negatives as they are?
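Since ARKit blendshape coefficients are nominally in [0, 1], one plausible preprocessing choice — an assumption on my part, not the authors' documented method — is to clip out-of-range capture values before training:

```python
import numpy as np

# Illustrative captured coefficients with out-of-range values, e.g. for
# mouthFrownLeft. ARKit blendshapes are nominally in [0, 1], so one
# simple option is to clip before training (an assumption, not the
# authors' documented preprocessing).
bs = np.array([[-0.03, 0.42, 1.07],
               [ 0.15, -0.20, 0.88]])
bs_clipped = np.clip(bs, 0.0, 1.0)
print(bs_clipped.min(), bs_clipped.max())  # 0.0 1.0
```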