Comments (2)
With using_pjrt, the user doesn't need to call _maybe_select_default_device to initialize the device themselves. To get rid of the using_pjrt decorator, we would either need to:
- force the user to call a function like _maybe_select_default_device at the beginning of their script; or
- place _maybe_select_default_device inside some other function that must be called at the beginning of their script.
Both ways look annoying. Can we do something in the torch_xla/__init__.py file to automatically find the device in the first place?
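To make the decorator pattern under discussion concrete, here is a minimal sketch of how using_pjrt could wrap runtime entry points so device selection happens lazily. The helper bodies are stand-ins (real detection lives in torch_xla's runtime and probes for TPU/GPU); only the names _maybe_select_default_device and using_pjrt come from the thread, and device_count here is a hypothetical decorated function for illustration.

```python
import functools
import os

def _maybe_select_default_device():
    # Stand-in for the real helper: only pick a default when the
    # user has not set PJRT_DEVICE explicitly. Actual detection
    # would probe for TPU/GPU before falling back to CPU.
    if not os.environ.get("PJRT_DEVICE"):
        os.environ["PJRT_DEVICE"] = "CPU"

def using_pjrt(fn):
    # The decorator pattern: every decorated runtime function
    # ensures device selection has happened before it runs.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        _maybe_select_default_device()
        return fn(*args, **kwargs)
    return wrapper

@using_pjrt
def device_count():
    # Hypothetical runtime query; by the time this body executes,
    # the decorator guarantees PJRT_DEVICE is set.
    return 1 if os.environ["PJRT_DEVICE"] == "CPU" else 8

n = device_count()  # device was selected automatically, no user call needed
```

The downside raised below is exactly what this sketch shows: every entry point that can touch the runtime must remember to carry the decorator.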
Doing it in __init__.py sounds fine. That's what I meant by "trigger auto detection upon import".
Broadly, there are two options that I see:
- Decorate any function that could potentially init the runtime with something equivalent to requires_pjrt. This is arguably cleaner/more precise, but I'm concerned we'll miss some functions and not fully root out this "$PJRT_DEVICE is not set" error.
- Call _maybe_select_default_device in __init__.py so it runs automatically when the user imports torch_xla.
I'm open to other solutions too. The core constraint is that runtime init happens in C++, while most of the logic we need to detect devices is in Python.
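The import-time option can be sketched as a module-level call in torch_xla/__init__.py. This is an illustrative stand-in, not the actual package code: the helper body is simplified, and it relies only on the fact that Python caches modules in sys.modules, so module-level statements run once on the first import.

```python
# Sketch of what torch_xla/__init__.py could do (simplified stand-in).
import os

def _maybe_select_default_device():
    # Respect an explicit user choice; otherwise auto-select.
    if os.environ.get("PJRT_DEVICE"):
        return
    # Real detection would probe for TPU/GPU hardware here;
    # CPU is used as the illustrative fallback.
    os.environ["PJRT_DEVICE"] = "CPU"

# Module-level call: executes exactly once, on the first
# `import torch_xla`, because Python caches imported modules.
_maybe_select_default_device()
```

The trade-off versus the decorator approach is coverage for cost: a single import-time call cannot miss an entry point, but it runs even for users who import the package without touching the runtime.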