
emotalk_release's People

Contributors

fanzhaoxin666, noirmist, ziqiaopeng


emotalk_release's Issues

Train code

Hello, thank you very much for your work. When trying to reproduce it, I found that the results do not converge. Could you provide the training code? Many thanks.

Who needs high-quality lip sync - contact me!

I can do lip sync for any character. I reduced the quality so each file fits under 10 MB for uploading. If you are interested, write to me on Telegram: The_best_result

Attached videos: git1.mp4, git2.mp4, git3.mp4

RuntimeError: Mask shape should match input. mask: [4, 104, 104] input: [1, 4, 104, 104]


(zyt-nerf) amax@amax:~/zyt/audio2face/EmoTalk_release$ python demo.py --wav_path "./audio/disgust.wav"
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at jonatasgrosman/wav2vec2-large-xlsr-53-english and are newly initialized: ['wav2vec2.lm_head.bias', 'wav2vec2.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of Wav2Vec2ForSpeechClassification were not initialized from the model checkpoint at r-f/wav2vec-english-speech-emotion-recognition and are newly initialized: ['wav2vec2.lm_head.bias', 'wav2vec2.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/activation.py:1144: UserWarning: Converting mask without torch.bool dtype to bool; this will negatively affect performance. Prefer to use a boolean mask directly. (Triggered internally at ../aten/src/ATen/native/transformers/attention.cpp:150.)
return torch._native_multi_head_attention(
Traceback (most recent call last):
File "/home/amax/zyt/audio2face/EmoTalk_release/demo.py", line 111, in
main()
File "/home/amax/zyt/audio2face/EmoTalk_release/demo.py", line 106, in main
test(args)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/amax/zyt/audio2face/EmoTalk_release/demo.py", line 30, in test
prediction = model.predict(audio, level, person)
File "/home/amax/zyt/audio2face/EmoTalk_release/model.py", line 140, in predict
bs_out11 = self.transformer_decoder(hidden_states11, hidden_states_emo11_832, tgt_mask=tgt_mask11,
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 360, in forward
output = mod(output, memory, tgt_mask=tgt_mask,
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 698, in forward
x = self.norm1(x + self._sa_block(x, tgt_mask, tgt_key_padding_mask, tgt_is_causal))
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/transformer.py", line 707, in _sa_block
x = self.self_attn(x, x, x,
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/amax/miniconda3/envs/zyt-nerf/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1144, in forward
return torch._native_multi_head_attention(
RuntimeError: Mask shape should match input. mask: [4, 104, 104] input: [1, 4, 104, 104]
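This mask-shape error appears to come from the fused multi-head-attention fast path in newer PyTorch releases, which is stricter about attention-mask layout than the implementation the repository was written against. A minimal workaround sketch, under the assumption that your PyTorch build exposes torch.backends.mha.set_fastpath_enabled (otherwise, installing the PyTorch version pinned by the repository should avoid the fast path altogether):

    # Place near the top of demo.py, before the model is built.
    import torch

    try:
        # Available in recent PyTorch releases; disables the fused
        # multi-head-attention fast path so nn.TransformerDecoder falls back
        # to the reference implementation, which accepts the per-head float
        # bias mask that model.py constructs.
        torch.backends.mha.set_fastpath_enabled(False)
    except AttributeError:
        # Older builds do not expose this toggle; use the PyTorch version
        # listed in the repository's requirements instead.
        pass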

3D Animation to Reality

Could anyone suggest how to apply this animation to drive a realistic talking / lip-syncing head?
Thank you so much

Details about blendshape capturing

Hello,
I'm very interested in the blendshape capturing method used for reconstructing the 3D-ETF.
Could you tell me how you capture the facial blendshapes from 2D videos?
Hope to hear from you, thanks.

train script

Thank you for this excellent work. Will you release the training script later?

Code of EVE Metric

The repository does not provide the code for the Emotional Vertex Error (EVE) metric proposed in the paper. Would it be possible to provide it?

Method for Transforming 52 Blendshapes into 5023×3 Vertices

Hello,

I recently came across a section in the supplementary material of a paper where the authors mention performing linear weighting on the corresponding parameters of 52 FLAME head templates to obtain the vertex parameters in a 5023×3-dimensional space. This process appears to be crucial for my research, and I'm very interested in understanding how it was implemented.

Could you provide more details or any available resources on how this transformation was accomplished? Specifically, I am looking for information on how the 52 blendshape coefficients are applied to the FLAME head templates to achieve the vertex transformation.

Thank you for your assistance.

(Screenshots of the relevant supplementary-material passage attached.)
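For context, the generic linear blendshape model that such a mapping usually follows is sketched below. This is only an illustration under the assumption of a neutral FLAME mesh plus 52 per-blendshape target meshes, not necessarily the authors' exact implementation.

    import numpy as np

    def blendshapes_to_vertices(neutral, targets, weights):
        # neutral: (5023, 3) neutral FLAME mesh
        # targets: (52, 5023, 3) one sculpted FLAME mesh per ARKit blendshape
        # weights: (52,) per-frame blendshape coefficients
        offsets = targets - neutral[None, :, :]                    # per-blendshape displacements
        return neutral + np.tensordot(weights, offsets, axes=1)    # (5023, 3) deformed mesh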

CUDA out of memory with 5-minute audio input

GPU: A100 (80 GB)

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.17 GiB (GPU 0; 79.19 GiB total capacity; 62.80 GiB already allocated; 1.50 GiB free; 64.06 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
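One possible workaround, as a sketch only and not an official fix: attention memory grows quadratically with sequence length, so splitting a long waveform into shorter chunks and running model.predict per chunk keeps peak memory bounded. This assumes the 16 kHz mono audio tensor and the predict signature seen in demo.py, and that the prediction can be concatenated along its time axis; boundary frames may need a short overlap or cross-fade to hide seams.

    import torch

    def predict_in_chunks(model, audio, level, person, sr=16000, chunk_sec=20):
        # audio: 1-D 16 kHz waveform tensor, as loaded in demo.py
        outputs = []
        step = sr * chunk_sec
        for start in range(0, audio.shape[-1], step):
            chunk = audio[..., start:start + step]
            outputs.append(model.predict(chunk, level, person))
        # Concatenate the per-chunk blendshape sequences along the time axis
        # (adjust dim if predict returns a differently shaped tensor).
        return torch.cat(outputs, dim=1)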

Paper details

Hello, I have been reading your paper and there is one detail that I do not understand. From my understanding, your dataset is made up of HDTF and RAVDESS. The paper mentions that the identity one-hot encoding is 24-dimensional. Do these 24 identities correspond to the actors in RAVDESS? If so, how are the HDTF identities encoded? Also, how does the cross-reconstruction loss work with the HDTF data, since those sequences contain neither emotional variants nor matched content over which to apply this loss term?
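For anyone following this thread, the 24-dimensional identity one-hot encoding being asked about would, in the simplest reading, look like the sketch below; this is purely illustrative, and the index-to-speaker mapping is an assumption rather than something taken from the released code.

    import torch

    def identity_one_hot(speaker_index, num_speakers=24):
        # speaker_index: integer id of a training speaker, e.g. a RAVDESS actor
        one_hot = torch.zeros(num_speakers)
        one_hot[speaker_index] = 1.0
        return one_hot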

Dataset BlendShape

Hello, during training I noticed that many of the dataset's blendshape values are negative, for example 'mouthDimpleLeft', 'mouthFrownLeft', 'mouthFrownRight', 'mouthLeft', 'mouthLowerDownLeft'. Could you explain how you handle this? In principle ARKit blendshape coefficients lie between 0 and 1, so do you simply leave the negative values as they are?
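One pragmatic option while waiting for an answer, offered only as an assumption on my part and not something the authors have confirmed doing, is to clamp the captured coefficients back into the ARKit range before training:

    import numpy as np

    def clamp_blendshapes(bs):
        # bs: (T, 52) captured blendshape coefficients; force them into [0, 1]
        return np.clip(bs, 0.0, 1.0)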
