Comments (3)
torch_xla.core has a hidden module, xla_builder, which provides computation_from_module_proto and get_computation_hlo:
import torch_xla.core.xla_builder as xb

fname = "model.hlo"
# Read a serialized HloModuleProto and wrap it in a computation.
with open(fname, mode="rb") as f:
    comp = xb.computation_from_module_proto("foo", f.read())
# Print the computation back as human-readable HLO text.
print(xb.get_computation_hlo(comp))
By the way, it looks like xla_builder is hidden: at least dir(core) and tab completion do not show it, but it can still be imported:
>>> import torch_xla.core as core
>>> dir(core)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'dynamo_bridge', 'xla_env_vars', 'xla_model']
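The dir() behavior above is standard Python packaging, not anything torch_xla-specific: dir() on a package only lists submodules that have already been imported, yet any submodule can still be imported explicitly. A minimal sketch using a stdlib package (email is just a stand-in here):

```python
import importlib
import email

# A submodule that has not been imported yet is usually absent
# from dir() on its parent package.
print("mime" in dir(email))  # often False in a fresh interpreter

# Importing it explicitly attaches it to the parent package...
importlib.import_module("email.mime")

# ...so it now shows up in dir().
print("mime" in dir(email))
```

This is why import torch_xla.core.xla_builder works even though dir(torch_xla.core) does not list it.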
Looking at the JAX implementation, it seems we just need to expose a pybind (in https://github.com/pytorch/xla/blob/master/torch_xla/csrc/init_python_bindings.cpp) for XlaComputation; its constructor should be able to convert the proto to HLO.
I don't know if we currently have bandwidth for that, but we are happy to take contributions.
Oh, nice finding. We added that Python XLA builder a couple of years ago but never really used it that much.