Comments (8)
First off, sorry if any of this sounds like a rant; I have been fighting with this project for many weeks and I am tired.
Are the model links correct? None of the indicated WaveGlow models produce any sound, and the provided scripts throw errors that I can only trace back to the pretrained models.
Does the script check whether a speaker ID is valid by iterating through the training file list? I keep seeing this "speaker id" term tossed around, but I cannot figure it out. Not only can I not find a list of valid IDs for any of the pretrained models, but half of the time the scripts complain that the number of speakers is incorrect. Incorrect with reference to what? And how can I find the correct number?
Every model I try to train throws dictionary key errors for `iterations`, `model`, `state_dict`, `epochs`, just about everything (though not all at the same time). I have tried all of the fixes I could find across the issue reports for this project, along with FastSpeech, FastPitch, Tacotron2, and WaveGlow, and every fix I try creates two or more new issues to track down.
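One common cause of this family of KeyErrors is that different pretrained checkpoints nest their weights under different keys (`model`, `state_dict`, or flat at the top level) and store the step counter under different names (`iteration` vs. `epoch`). The helper below is a minimal, hypothetical sketch of normalizing those layouts before loading; it is not Flowtron's actual loading code, and the key names are assumptions taken from the error messages above.

```python
def normalize_checkpoint(ckpt):
    """Return (state_dict, iteration) regardless of which layout ckpt uses.

    Sketch only: key names ('model', 'state_dict', 'iteration', 'epoch')
    are assumptions based on common PyTorch checkpoint conventions.
    """
    iteration = ckpt.get("iteration", ckpt.get("epoch", 0))
    for key in ("state_dict", "model"):
        if key in ckpt:
            inner = ckpt[key]
            # Some checkpoints store the weights dict directly; others store
            # the whole model object, which exposes .state_dict().
            state = inner if isinstance(inner, dict) else inner.state_dict()
            return state, iteration
    # Fall back to treating the checkpoint itself as a flat state dict,
    # dropping the usual bookkeeping keys.
    state = {k: v for k, v in ckpt.items()
             if k not in ("iteration", "epoch", "optimizer", "learning_rate")}
    return state, iteration
```

With a normalizer like this in front of `model.load_state_dict(...)`, a checkpoint from any of the layouts above loads the same way instead of raising a KeyError.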
Assuming I have only intermediate knowledge in this field, a file list with the corresponding WAV files in the correct format and sample rate, otherwise working hardware that more than meets the requirements, and a stubborn determination to keep fighting this until my eyes bleed, where should I start?
If anyone is willing to help me out, I need to know which models are known to work with the script, which mode to use (fine-tuning or warmstart) with a small dataset of maybe 15 minutes to a couple of hours if necessary, and which model (LJS vs. LibriTTS) would work best. If LibriTTS works better, where do I find which speaker IDs are valid without brute-forcing it over a few days?
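On the speaker-ID question: one way to avoid brute-forcing is to read the IDs straight out of the training filelist that shipped with the model. A minimal sketch, assuming the pipe-delimited `audio_path|text|speaker_id` layout used by the example filelists in this repo (the column order is an assumption; check your own filelist's delimiter and columns):

```python
from collections import Counter

def speaker_ids(filelist_lines):
    """Count speaker IDs in a pipe-delimited filelist.

    Assumes the 'audio_path|text|speaker_id' layout; adjust the split
    and column index if your filelist differs.
    """
    counts = Counter()
    for line in filelist_lines:
        line = line.strip()
        if not line:
            continue
        parts = line.split("|")
        counts[parts[-1]] += 1  # speaker ID is the last column here
    return counts

# Usage: the keys are the valid speaker IDs, and len(counts) is the
# speaker count the config's n_speakers value has to match.
lines = [
    "wavs/a.wav|hello there|0",
    "wavs/b.wav|general kenobi|0",
    "wavs/c.wav|some text|3",
]
counts = speaker_ids(lines)
```

This also answers the "number of speakers is incorrect" error: the count baked into the config must agree with the number of distinct IDs actually present in the filelist.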
Thanks in advance if anyone can help me out here! I would certainly appreciate it!
To solve the problem with no sound: delete `.half()` in these two lines
Line 82 in 7017801
Line 47 in 7017801
so that they read:

```python
waveglow.cuda()
audio = waveglow.infer(mels, sigma=0.8).float()
```

It worked for me.
from flowtron.
@Bahm9919 It worked!! Now we’re in business. Thanks!
To answer your other questions, I need to know more about your data. Is it single-speaker data or not? What language?
from flowtron.
@Bahm9919 I'll try to edit my first comment to add more info when I get a free minute today. The data is a single speaker (me), and it is in English. Since I'm doing the recording myself, I can record for longer or adjust the data or text in any way that's necessary.
from flowtron.
@Bahm9919 The only reason I thought I might need the LibriTTS version was to take advantage of the style or emotion transfer properties, so I would have a bit more control over the output audio.
from flowtron.
In this case, warmstart training with the LJS pretrained model will be suitable.
from flowtron.
Try to get results with the LJS model first, and then maybe try LibriTTS.
from flowtron.
@Bahm9919 You got it! Sounds like a plan.
from flowtron.
Related Issues (20)
- Attention weights with partial flat line (non-english) HOT 6
- What is difference of Flowtron and Mellotron HOT 1
- Inference starting repeat itself. HOT 5
- List index out of range
- Custom model resumed from pre-trained model has a stuttering problem.
- How would one keep the model loaded for immediate synthesis? HOT 17
- Inference on pre-trained model (flowtron_ljs) speaking nonsense. HOT 4
- Inference Demo "Hitting gate limit" HOT 2
- inference speed on CPU
- Accelerated inference with TensorRT HOT 2
- Single word input leads to ValueError: Expected more than 1 spatial element when training, got input size torch.Size([1, 512, 1]) HOT 1
- Error on loading training model "_pickle.UnpicklingError: invalid load key, '<'"
- Custom trained model and dataset problem
- Index out of range for custom dataset.
- value error while training custom dataset
- TypeError: guvectorize() missing 1 required positional argument 'signature' HOT 1
- _pickle.UnpicklingError: invalid load key, '<'. in inference.py in colab HOT 3
- What's the filelist used to train LibriTTS2k pretrained embedding?