Comments (4)
What you mentioned could have happened during training, for example when the training and validation filelists contain different numbers of speakers. We circumvent this by first building a Mellotron speaker-id dictionary from the training data and reusing it for the validation data.
https://github.com/NVIDIA/mellotron/blob/master/train.py#L44
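The approach described above can be sketched without the model: build the speaker-id lookup once from the training filelist and reuse it everywhere, so an id never depends on which filelist happens to be loaded. This is a minimal sketch; the "path|text|speaker" filelist format is an assumption based on the usual Mellotron filelists, and the helper name is hypothetical.

```python
def create_speaker_lookup(filelist_lines):
    # map each unique speaker label to a contiguous, deterministic id
    speakers = sorted({line.strip().split('|')[2] for line in filelist_lines})
    return {spk: i for i, spk in enumerate(speakers)}

# toy filelists in the assumed "path|text|speaker" format
train_lines = ["a.wav|hello|spk1", "b.wav|world|spk0", "c.wav|again|spk1"]

lookup = create_speaker_lookup(train_lines)  # built from training data only
val_id = lookup["spk1"]                      # validation reuses the same mapping
```

Because the lookup is built from the training filelist alone, validation (or inference) filelists with fewer speakers still resolve to the same ids.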
from mellotron.
Thanks for your reply. I noticed that part of the code in training.
However, my concern is the inference stage: when we try to get the rhythm from reference audio, we need to load the reference filelist with TextMelLoader, and I found that no speaker-id dictionary is passed to TextMelLoader.
# imports as in mellotron's inference.ipynb (hparams, load_mel and the
# loaded mellotron model are defined earlier in that notebook)
import torch
from text import cmudict, text_to_sequence
from data_utils import TextMelLoader, TextMelCollate

arpabet_dict = cmudict.CMUDict('data/cmu_dictionary')
audio_paths = 'data/examples_filelist.txt'
dataloader = TextMelLoader(audio_paths, hparams)
datacollate = TextMelCollate(1)

file_idx = 0
print(dataloader.audiopaths_and_text)
audio_path, text, sid = dataloader.audiopaths_and_text[file_idx]

# get audio path, encoded text, pitch contour and mel for gst
text_encoded = torch.LongTensor(text_to_sequence(text, hparams.text_cleaners, arpabet_dict))[None, :].cuda()
pitch_contour = dataloader[file_idx][3][None].cuda()
mel = load_mel(audio_path)
print(audio_path, text)

# load source data to obtain rhythm using tacotron 2 as a forced aligner
x, y = mellotron.parse_batch(datacollate([dataloader[file_idx]]))
In this case, the Mellotron speaker ids depend on the number of speakers in the reference filelist. We then call mellotron.forward to get the reference rhythm as below:
with torch.no_grad():
    # get rhythm (alignment map) using tacotron 2
    mel_outputs, mel_outputs_postnet, gate_outputs, rhythm = mellotron.forward(x)
    rhythm = rhythm.permute(1, 0, 2)
where x contains ref_text, ref_mel, ref_f0 and the reference Mellotron speaker ids. For the same reference audio, the generated rhythm will change if the number of speakers in the reference filelist changes.
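The concern can be reproduced without running the model: if the speaker-id lookup is rebuilt from the reference filelist instead of the training one, the same speaker can map to a different id. A toy illustration, assuming the "path|text|speaker" filelist format (the helper name is hypothetical):

```python
def ids_from_filelist(lines):
    # rebuild the speaker-id mapping from whatever filelist is loaded
    speakers = sorted({line.split('|')[2] for line in lines})
    return {spk: i for i, spk in enumerate(speakers)}

train = ["a.wav|x|spk0", "b.wav|y|spk1", "c.wav|z|spk2"]
ref = ["c.wav|z|spk2"]  # reference filelist containing a single speaker

print(ids_from_filelist(train)["spk2"])  # 2 during training
print(ids_from_filelist(ref)["spk2"])    # 0 at inference: same speaker, different id
```

So the id fed to mellotron.forward for a given reference audio shifts whenever the reference filelist's speaker set changes, which is exactly the mismatch described above.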
During experiments, we noticed that the rhythm (alignment map) we get from Tacotron seems to be independent of the speaker id provided. You can try, for example, providing different speaker ids while using Tacotron as a forced aligner and observing whether there is a significant difference.
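One way to quantify that check, sketched with toy arrays (the real rhythm tensors come from mellotron.forward with different speaker ids; the helper name and threshold are assumptions):

```python
import numpy as np

def rhythm_difference(rhythm_a, rhythm_b):
    # mean absolute difference between two alignment maps of equal shape
    return float(np.abs(rhythm_a - rhythm_b).mean())

# toy (batch, text_len, mel_len) alignment maps standing in for rhythms
# extracted with two different speaker ids
r_a = np.eye(5)[None, :, :]
r_b = np.eye(5)[None, :, :]

diff = rhythm_difference(r_a, r_b)
# a value near zero would support rhythm being insensitive to the speaker id
```

With real model outputs, run mellotron.forward twice, changing only the speaker id in x, and compare the two rhythm tensors this way.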
Closing due to inactivity.