Comments (3)
@Bryce1010 Thanks. I will update this repo for torch 1.4.
from pytorch-gradual-warmup-lr.
@Bryce1010 I checked the warning, but it seems to be unavoidable.
Unlike most schedulers, which start from the given initial learning rate and adapt it step by step, this gradual warmup scheduler has to set the initial learning rate to zero before any backward update of the weights.
If you still want to avoid the warning message, there is a work-around; see the latest code. I call 'optimizer.step()' with zero gradients right after I create the schedulers (see the sketch below).
from pytorch-gradual-warmup-lr.
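For reference, a minimal sketch of that work-around, assuming a README-style setup (the warmup_scheduler import and the GradualWarmupScheduler arguments follow the repo's README; the model and optimizer are placeholders):

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR
from warmup_scheduler import GradualWarmupScheduler  # this repo

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

cosine = CosineAnnealingLR(optimizer, T_max=100)
scheduler = GradualWarmupScheduler(
    optimizer, multiplier=1, total_epoch=5, after_scheduler=cosine)

# The work-around: one dummy optimizer.step() with zero gradients right
# after the schedulers are created. The weights do not move, but PyTorch
# now considers the optimizer stepped, so the first scheduler.step() no
# longer triggers the "lr_scheduler.step() before optimizer.step()"
# warning.
optimizer.zero_grad()
optimizer.step()
```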
I think this work-around does the job (meaning nothing, except suppressing the warning), but I also think the design of the scheduler was perhaps based on a wrong assumption: that the .step() method is only ever called directly by the user. On the contrary, PyTorch makes an implicit initial call to .step() from the scheduler's constructor, via the ._initial_step() method (at least in recent versions, and the constructor has called .step() perhaps ever since version 1.1). Before this implicit initial step, self.last_epoch == -1, and after it, self.last_epoch == 0. Consequently, while I do not fully understand why .get_lr() alone could not compute the correct learning rates (without any preceding .step() calls), .step() will in any case already have been called once by the scheduler constructor.
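This implicit initial step is easy to observe with any stock scheduler (StepLR here is just a convenient example):

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The constructor ends with _initial_step(), which calls .step() once,
# so last_epoch is already 0 before the user's first scheduler.step().
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
print(scheduler.last_epoch)  # prints 0, not -1
```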
Perhaps by simply replacing most occurrences of self.last_epoch with self.last_epoch + 1 in the current implementation, one could preserve the functionality even if .step() is only called at the end of each epoch by the user?
I do, however, think the whole scheduler would be easier and less error-prone to implement using the built-in PyTorch scheduler LinearLR for the warmup part, optionally chained with one or more other schedulers (the equivalent of "after_scheduler") using SequentialLR; a sketch follows below.
from pytorch-gradual-warmup-lr.
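For completeness, a sketch of that built-in alternative, assuming PyTorch >= 1.10 (where LinearLR and SequentialLR were introduced); the warmup length and the cosine decay standing in for "after_scheduler" are illustrative:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# 5 epochs of linear warmup from ~0 up to the base lr, then cosine decay.
warmup = LinearLR(optimizer, start_factor=1e-3, end_factor=1.0, total_iters=5)
decay = CosineAnnealingLR(optimizer, T_max=95)
scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[5])

for epoch in range(100):
    ...  # forward / backward / optimizer.step() for one epoch
    scheduler.step()
```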
Related Issues (18)
- why i got this error, when the warmup epoches ends. HOT 3
- when i use your readme code ,it has a bug???? HOT 1
- LR not work when pytorch version under 1.2.0
- StepLR and Cos has no funcion like ".get_last_lr()" HOT 1
- What is the meaning of base_lrs? HOT 1
- Math is wrong for multiplier=1 HOT 1
- `warmup_lr` is computed incorrectly in `step_ReduceLROnPlateau` HOT 8
- When to call scheduler.step? HOT 1
- Set Starting learning rate HOT 1
- multiplier works weird HOT 2
- Target optimizer not set properly when loading from state dict
- WARNING: Did not find branch or tag '08f7d5e', assuming revision or ref
- Question of run.py
- Usage mandatory metric HOT 5
- why multiplier must be greater than 1.0? HOT 1
- It seems you got one learning rate per epoch. HOT 2
- the initial lr value is higher than target lr value HOT 2