Comments (5)
@AndroYD84 try changing these hyperparameters before resuming from the checkpoint: ignore_layers=[] and use_saved_learning_rate=True
from mellotron.
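A minimal sketch of how those overrides could be passed on the command line, assuming the tacotron2-style train.py flags (--checkpoint_path, --hparams) that mellotron inherits; the output directories and checkpoint name are placeholders:

```shell
# Resume from a saved checkpoint, keeping every layer's weights
# (ignore_layers=[]) and restoring the learning rate stored in the
# checkpoint (use_saved_learning_rate=True) instead of the default hparam.
python train.py \
    --output_directory=outdir \
    --log_directory=logdir \
    --checkpoint_path=outdir/checkpoint_300000 \
    --hparams=ignore_layers=[],use_saved_learning_rate=True
```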
By default a checkpoint is saved every 500 iterations.
from mellotron.
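In the tacotron2-style hparams that mellotron builds on, this interval is controlled by a single hyperparameter; a hypothetical excerpt (the exact file layout may differ):

```python
# hparams excerpt (hypothetical; name follows the tacotron2-style hparams
# that mellotron inherits)
iters_per_checkpoint = 500  # save a checkpoint every 500 training iterations
```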
Yes, but that's not the issue. Imagine I trained for weeks, up to 300,400 iterations, and a blackout happened: I'd lose only 400 iterations of progress and would still have a checkpoint_300000 file. Is it possible to resume training from this checkpoint? Every attempt I made to resume from a checkpoint produced a model that sounded much worse than its predecessor (the checkpoint_300000 file). I know that resuming training sometimes requires some warm-up before the model returns to its original state, but that isn't happening even after a week; the results are not even close to the predecessor's. If I had a time machine and could have prevented the blackout, the next checkpoint (i.e. checkpoint_400000) would have sounded better, not worse, than before. Do I have to start over from scratch and lose weeks of training, or did I do something wrong? Thanks for your patience.
from mellotron.
Note that --warm_start does not restore the optimizer state.
When resuming from your own model, you should not use --warm_start.
from mellotron.
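The difference between the two modes can be sketched in plain Python, using dicts as stand-ins for torch state_dicts. The helper name load_checkpoint and the checkpoint layout below are assumptions modeled on tacotron2-style checkpointing, not mellotron's actual code: a warm start copies only model weights and discards the optimizer state and iteration count, while a plain resume restores everything, which is why resuming without --warm_start preserves training progress.

```python
def load_checkpoint(checkpoint, model, optimizer,
                    warm_start=False, ignore_layers=()):
    """Sketch of the two resume modes.

    `checkpoint` mimics what torch.save would produce:
    {'state_dict': ..., 'optimizer': ..., 'iteration': ...}.
    `model` and `optimizer` are plain dicts standing in for
    load_state_dict targets.
    """
    # Copy model weights, skipping any layers listed in ignore_layers.
    weights = {k: v for k, v in checkpoint["state_dict"].items()
               if k not in ignore_layers}
    model.update(weights)

    if warm_start:
        # Warm start: weights only. Optimizer state and the iteration
        # counter start fresh, so the optimizer "forgets" its momentum
        # and learning-rate schedule position.
        return 0

    # Plain resume: also restore the optimizer state and continue
    # counting iterations from where training stopped.
    optimizer.update(checkpoint["optimizer"])
    return checkpoint["iteration"]
```

For example, a plain resume from a 300,000-iteration checkpoint returns iteration 300000 with the optimizer state restored, whereas warm_start=True returns iteration 0 and leaves the optimizer empty.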
Thank you for your help. I resumed training without the --warm_start option and I confirm that so far I haven't noticed any quality loss. I haven't tried texpomru13's solution, as the results were already improving without changing anything.
However, if at some point the model stops improving, I plan to test that solution as well; for now I didn't want to jinx it.
from mellotron.
Related Issues (20)
- 'NoneType' object is not iterable
- Mismatch model volume
- Training on a different language
- Inference without rhythm and pitch
- parse_output error with Blizzard2013 data
- Training on EmovDB
- Voice synthesis by model is not the same as the voice with speaker ID
- Try to train some new words
- inference speed on CPU
- Training time
- Two key points of training multispeaker mellotron
- how to train?
- colab demo for inference
- How to generate .musicxml files like the examples in `/data`?
- Synthesize own text without style transfer gives poor audio results
- Here's some code to start mellotron inference by calling a .py file from CLI [Docs]
- What is the reason of filtering "_" and "~" symbols?
- Something wrong with text padding
- Can I use TensorRT to speed up model inference?
- colab error