Comments (3)
> Maybe we should designate autofixes as safe or unsafe, like ruff started to do recently
Maybe we could offer safe and unsafe versions of each fix when possible? If both modes existed, I would imagine users would first try running with unsafe fixes, and if anything breaks, fall back to safe mode. I feel like this would be a good default for users who don't have the time to actually dive in and debug anything, but still want to make sure their code is mostly compliant.
> And maybe it's possible to detect some cases when use_reentrant=False certainly will not work statically from the code?
Yeah this will be hard to do statically, unfortunately
from torchfix.
I think it's fine that the autofix is sometimes not correct, as it's not possible to make it work correctly in every case.
From the README:

"Please keep in mind that autofix is a best-effort mechanism. Given the dynamic nature of Python, and especially the beta version status of TorchFix, it's very difficult to have certainty when making changes to code, even for the seemingly trivial fixes."

The idea is to provide the recommended value (as the doc for checkpoint says) and let users test whether it works for them. Otherwise users will just continue to use use_reentrant=True when it's not needed.
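For context, the fix under discussion makes the use_reentrant argument to torch.utils.checkpoint explicit, with False as the documented recommendation. A minimal before/after sketch (the module and shapes are made up for illustration):

```python
import torch
from torch.utils.checkpoint import checkpoint

# A small module whose forward pass we wrap in activation checkpointing.
layer = torch.nn.Linear(8, 8)
x = torch.randn(2, 8, requires_grad=True)

# Before the fix: use_reentrant is omitted, so the legacy reentrant
# implementation is used (recent PyTorch versions warn about this).
# out = checkpoint(layer, x)

# After the fix: the recommended non-reentrant variant is requested explicitly.
out = checkpoint(layer, x, use_reentrant=False)
out.sum().backward()  # gradients flow through the checkpointed region
```

The point of the thread is exactly that this rewrite is usually, but not always, behavior-preserving, so users still need to run their tests after applying it.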
There is another rule with a similar risk of unsafe behavior: "TOR102 torch.load without weights_only parameter is unsafe".
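The TOR102 case is analogous: the fix presumably adds weights_only=True, which restricts torch.load to unpickling tensors and plain containers instead of arbitrary objects. A sketch, round-tripping a state dict through a temporary file:

```python
import os
import tempfile
import torch

state = {"weight": torch.zeros(3, 3), "bias": torch.zeros(3)}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.pt")
    torch.save(state, path)

    # Flagged by TOR102: plain torch.load can execute arbitrary code
    # embedded in a malicious pickle.
    # loaded = torch.load(path)

    # After the fix: only tensors and allow-listed types are deserialized.
    loaded = torch.load(path, weights_only=True)
```

The risk mirrors the checkpoint case: checkpoints that pickle custom classes will stop loading under weights_only=True, so the fix can break working code even though it is the recommended setting.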
Maybe we should designate autofixes as safe or unsafe, like ruff started to do recently: https://docs.astral.sh/ruff/linter/#fix-safety.
Or maybe there should be a separate configuration for fixes: the rule may run by default, but the autofix would not.
And maybe it's possible to detect some cases where use_reentrant=False certainly will not work statically from the code?
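As a rough illustration of what a static check can and cannot see: the sketch below uses the stdlib ast module (TorchFix itself is built differently, and the function name is hypothetical) to find checkpoint(...) calls that omit use_reentrant. It can locate the calls, but it cannot prove whether False would actually work for them, which is the hard part noted above.

```python
import ast

def checkpoint_calls_missing_use_reentrant(source: str) -> list[int]:
    """Return line numbers of checkpoint(...) calls without use_reentrant."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Handle both `checkpoint(...)` and `torch.utils.checkpoint.checkpoint(...)`.
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
        if name != "checkpoint":
            continue
        if not any(kw.arg == "use_reentrant" for kw in node.keywords):
            flagged.append(node.lineno)
    return flagged

code = """
from torch.utils.checkpoint import checkpoint
y = checkpoint(fn, x)                       # missing -> flagged
z = checkpoint(fn, x, use_reentrant=False)  # explicit -> ok
"""
print(checkpoint_calls_missing_use_reentrant(code))  # [3]
```

Deciding whether a given flagged call is safe to rewrite would require knowing runtime properties (e.g. what the checkpointed function does with gradients), which is why this is hard to do statically.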
Related Issues (10)
- Add `torchfix --version` to print the version
- Feature needed: configuration for which PyTorch version to target
- Add functionality to remove no longer needed imports
- [New Rule Request] TorchFix should issue a warning when a torch.nn.Module stores its layers in a Python list instead of a torch.nn.ModuleList
- Feature needed: TorchFix should understand static types
- Add linter for pretrained parameters for more TorchVision models
- Update `import torchvision.models as models`
- Add doc listing all available error codes and test it's complete
- Try to refactor common functionality for checking default args