
styletalk's Introduction

StyleTalk

The official repository of the AAAI2023 paper StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles

Paper | Supp. Materials | Video

The proposed StyleTalk can generate talking head videos with speaking styles specified by arbitrary style reference videos.

News

  • April 14th, 2023. The code is available.

Get Started

Installation

Clone this repo, install conda and run:

conda create -n styletalk python=3.7.0
conda activate styletalk
pip install -r requirements.txt
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
conda update ffmpeg

The code has been tested on CUDA 11.1 with an RTX 3090 GPU.
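You can quickly confirm that the installed PyTorch build sees your GPU (run inside the styletalk environment):

import torch
print(torch.__version__)           # expected 1.8.0
print(torch.version.cuda)          # expected 11.1
print(torch.cuda.is_available())   # True if the GPU is visible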

Data Preprocessing

Our method takes 3DMM parameters (*.mat) and phoneme labels (*_seq.json) as input. Follow PIRenderer to extract 3DMM parameters. Follow AVCT to extract phoneme labels. Some preprocessed data can be found in the samples folder.
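To sanity-check the preprocessed inputs, you can load one sample of each type and print its contents. This is a minimal sketch (not part of the repo); the paths come from the demo command below, and no particular key layout inside the .mat file is assumed:

# Inspect one 3DMM parameter file and one phoneme label file.
import json
import scipy.io as sio

mat = sio.loadmat("samples/style_clips/3DMM/happyenglish_clip1.mat")
for key, value in mat.items():
    if not key.startswith("__"):                    # skip MATLAB metadata entries
        print(key, getattr(value, "shape", type(value)))

with open("samples/source_video/phoneme/reagan_clip1_seq.json") as f:
    phoneme_seq = json.load(f)
print("phoneme entries:", len(phoneme_seq))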

Inference

Download checkpoints for StyleTalk and Renderer and put them into ./checkpoints.

Run the demo:

python inference_for_demo.py \
--audio_path samples/source_video/phoneme/reagan_clip1_seq.json \
--style_clip_path samples/style_clips/3DMM/happyenglish_clip1.mat \
--pose_path samples/source_video/3DMM/reagan_clip1.mat \
--src_img_path samples/source_video/image/andrew_clip_1.png \
--wav_path samples/source_video/wav/reagan_clip1.wav \
--output_path demo.mp4

Change audio_path, style_clip_path, pose_path, src_img_path, wav_path, and output_path to generate more results.
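If you want to batch over several style clips, a hypothetical driver script (not part of the repo) could simply re-invoke the demo once per clip; paths below reuse the sample files from the command above:

# Re-run the demo once for each style clip found under samples/style_clips/3DMM.
import subprocess
from pathlib import Path

for style_clip in Path("samples/style_clips/3DMM").glob("*.mat"):
    subprocess.run(
        [
            "python", "inference_for_demo.py",
            "--audio_path", "samples/source_video/phoneme/reagan_clip1_seq.json",
            "--style_clip_path", str(style_clip),
            "--pose_path", "samples/source_video/3DMM/reagan_clip1.mat",
            "--src_img_path", "samples/source_video/image/andrew_clip_1.png",
            "--wav_path", "samples/source_video/wav/reagan_clip1.wav",
            "--output_path", f"demo_{style_clip.stem}.mp4",
        ],
        check=True,
    )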

Acknowledgement

Some code is borrowed from the following projects:

Thanks for their contributions!

styletalk's People

Contributors

fuxivirtualhuman, yifengma9


styletalk's Issues

How to extract the phonemes?

Unfortunately, your reference concerning phonemes does not provide anything other than a link to CMU Sphinx.

I did a bit of research and ended up with the following code:

# Imports and the frame-rate constant added for completeness.
import json
import wave

import pocketsphinx as ps
from pocketsphinx import Decoder

ASSUMED_FRAME_RATE = 30  # target video frame rate; 30 seems to match the samples

def create_phoneme(audio_wave_file):
    with wave.open(audio_wave_file, "rb") as audio:
        decoder = Decoder(samprate=audio.getframerate(), allphone=ps.get_model_path("en-us/en-us-phone.lm.bin"))
        decoder.start_utt()
        decoder.process_raw(audio.getfp().read(), full_utt=True)
        decoder.end_utt()

    input_phoneme_list = []
    if decoder.hyp():
        segments = decoder.seg()
        for seg in segments:
            input_phoneme_list.append({'phone': seg.word, 'phone_end_frame': seg.end_frame})
    else:
        raise Exception('Phoneme recognition failed')

    # pocketsphinx timestamps are 10 ms frames (100 per second); convert to video frames
    total_number_of_frames_in_audio = int(input_phoneme_list[-1]['phone_end_frame'] / 100 * ASSUMED_FRAME_RATE)
    print(total_number_of_frames_in_audio)

    frame_index = 0
    phone_list = []
    phone_index = 0

    while frame_index < total_number_of_frames_in_audio:
        if (frame_index * 100 / ASSUMED_FRAME_RATE) < input_phoneme_list[phone_index]['phone_end_frame']:
            phone_list.append(input_phoneme_list[phone_index]['phone'])
            frame_index += 1
        else:
            phone_index += 1

    with open(str("phindex.json")) as f:
        ph2index = json.load(f)
    phonemes = []
    for p in phone_list:
        if p in ph2index:
            phonemes.append(ph2index[p])
        else:
            print(f"Weird Phoneme found: {p}. Ignoring...")
            phonemes.append(31) # Silence

    phone_list = phonemes

    print("Phoneme generation done")

    return phone_list

I'm using the phindex.json file from https://github.com/FuxiVirtualHuman/AAAI22-one-shot-talking-face/blob/main/phindex.json and an ASSUMED_FRAME_RATE of 30 (this seems to match the number of phonemes in your samples, rather than the 25 referenced in the papers).

However, for the sample wave files my phonemes look very different from the ones in your samples. What am I doing wrong?
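One way to narrow this down is to compare the output of create_phoneme against a provided sample sequence. The sketch below assumes (but does not verify) that *_seq.json stores a flat list of integer phoneme indices, one per video frame:

# Compare a generated phoneme sequence against a provided sample sequence.
import json

generated = create_phoneme("samples/source_video/wav/reagan_clip1.wav")
with open("samples/source_video/phoneme/reagan_clip1_seq.json") as f:
    reference = json.load(f)

print("generated frames:", len(generated), "reference frames:", len(reference))
n = min(len(generated), len(reference))
mismatches = sum(g != r for g, r in zip(generated[:n], reference[:n]))
print(f"mismatching frames in overlap: {mismatches}/{n}")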

Code license

Thank you for your great research.
What is the license of the code?

TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

Hi, I have added my own wav file (16 kHz PCM) and get this error.

Traceback (most recent call last):
  File "/content/styletalk/inference_for_demo.py", line 168, in <module>
    generate_expression_params(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/styletalk/inference_for_demo.py", line 107, in generate_expression_params
    audio_win = torch.tensor(audio_win).cuda()
TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
video created!

OK, so it wasn't the wav file; it was the json not being in the correct format.
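For anyone hitting the same message, here is a minimal illustration (not necessarily the exact cause in this case) of how a ragged list of audio windows becomes a dtype=object array that torch.tensor() rejects:

# Two audio windows of unequal length cannot form a regular 2-D array,
# so NumPy falls back to dtype=object, which torch.tensor() cannot convert.
import numpy as np
import torch

ragged = [np.zeros(640), np.zeros(512)]
arr = np.array(ragged, dtype=object)
print(arr.dtype)                    # object

try:
    torch.tensor(arr)
except TypeError as err:
    print("TypeError:", err)

# Windows of equal length stack into a regular float64 array and convert fine.
regular = np.stack([np.zeros(640), np.zeros(640)])
print(torch.tensor(regular).shape)  # torch.Size([2, 640])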

When to publish the code?

Amazing work!
When do you plan to publish the code? At the very least, the inference code and weights should be made public for comparison. Thanks!

What is your plan for releasing the code?

Hi,

As mentioned above, you said the code would be released in February 2023, but March has now passed as well, so please upload the code as soon as possible.

Thanks. People are waiting for your code; please release it as soon as possible.

About face group index

@YifengMa9
In the paper, the expression parameters were divided into groups of 13 and 51.

We select 13 out of 64 expression parameters that are highly related to mouth movements as the lower
face group, and the other parameters as the upper face group.

But in the code (from core.networks.disentangle_decoder import DisentangleDecoder):

upper_face3d_indices=tuple(list(range(19)) + list(range(46, 51))),
lower_face3d_indices=tuple(range(19, 46)),

The upper and lower groups have 24 and 27 elements respectively, 51 in total, so 13 of the 64 are missing.
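For reference, a quick check of the tuples quoted above reproduces those counts:

# Count the elements in the upper and lower face index groups.
upper_face3d_indices = tuple(list(range(19)) + list(range(46, 51)))
lower_face3d_indices = tuple(range(19, 46))

print(len(upper_face3d_indices))                               # 24
print(len(lower_face3d_indices))                               # 27
print(len(upper_face3d_indices) + len(lower_face3d_indices))   # 51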

Can you help me with this question?
Many thanks

Discriminator architectures

Hi there, can you share any more details about the discriminators you used? E.g., how many layers do they have? I can't seem to find these details in the paper or the supplementary materials. Thanks for the great work.

Do input images require some special pre-processing?

I'm getting very strange results with the face being distorted, and I'm unsure whether it's a matter of the face not being aligned properly, or whether there's some additional per-identity step that needs to happen first.
All I did here, compared to the Reagan and Andrew Ng samples, was to replace the input image.
