Comments (6)
@pablogps @lessw2020 PyCharm works wonders for code styling. It'll reformat all of your code to conform to PEP 8 with one click under Code > Reformat Code.
from ranger-deep-learning-optimizer.
@lessw2020 Could you use black or yapf to make the code into a standard format? This would also reduce many PEP8 issues.
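For reference, a typical invocation might look like the following (a sketch, assuming black is installed from PyPI and run against the repo's ranger.py; yapf works analogously):

```shell
pip install black                 # or: pip install yapf
black ranger.py                   # rewrite the file in place in a PEP 8-compatible style
black --check --diff ranger.py    # preview mode: report what would change without writing
```

The `--check` form is handy in CI, since it exits nonzero when a file would be reformatted.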
@hiyyg - please try it now.
I ran ranger.py through pylint in VS Code and made a number of tweaks that should resolve this.
Let me know if not!
Thanks
Hi @pablogps,
Excellent recommendations and feedback.
I will update the code per your suggestions and check it back in.
Thanks again!
Not at all, thank you for uploading your work. I will take a look when it's done to see if I can be of any help :)
Hi, and thanks for the update! I see many improvements, that's for sure! There may be a few lingering issues, though. For example, is the "required" import ever used? But most problems have been solved, so if you feel comfortable with the current version feel free to close the issue :)
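The lingering issue mentioned above is exactly what pylint's unused-import check (W0611) reports. A minimal stdlib sketch of that idea, using `ast` to compare imported names against names actually referenced (the helper name and the sample source are illustrative, not the repo's code):

```python
import ast

def unused_imports(source: str) -> list[str]:
    """Return imported names that are never referenced in the module body."""
    tree = ast.parse(source)
    imported = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the top-level name "a" unless aliased
                imported.append(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.append(alias.asname or alias.name)
    # Any bare name referenced anywhere counts as a use.
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return [name for name in imported if name not in used]

src = (
    "import math\n"
    "from torch.optim.optimizer import Optimizer, required\n"
    "print(math.pi)\n"
    "class Ranger(Optimizer):\n"
    "    pass\n"
)
print(unused_imports(src))  # prints ['required']
```

Note this parses the source without executing it, so the `torch` import in the sample string does not need to be installed.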