Comments (8)
Unfortunately, the XRT API version you are hitting on the TPU VM is (TF) 1.13, which does not have that API :|
1.14 should be out soon, which has that API.
Also, we are working on letting Cloud TPU customers access nightly builds (under a not-supported agreement).
That will let you always run against the latest bits.
Sorry about that, but we are just booting up the PyTorch/TPU project and there are a few things we need to sort out from a release POV.
from xla.
oh noooo.... so close haha.
any idea if we are talking about weeks or days for that release? NeurIPS deadline is coming up 🙈
Should be a week or so out.
Here is my error:
#659 (comment)
I think it's also caused by data transferring, but it happened in a different place in the source code.
Hi, @dlibenzi
Is it the same reason?
If it is, maybe I'll also need a week's patience?
I've checked the output and confirmed it is the same error.
Yes, same reason.
Good news!
You should now be able to select nightly builds when you create a Cloud TPU node.
This brings the TPU VM up to the latest on the TPU service side.
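For anyone landing here later, a minimal sketch of creating a node pinned to a nightly software version with the `gcloud` CLI. The node name, zone, accelerator type, and the exact `--version` string below are all assumptions — list the versions actually available to your project before running anything:

```shell
# Sketch: create a Cloud TPU node on a nightly software version.
# First check which versions your project can see (command name per
# the gcloud docs of this era; verify against "gcloud compute tpus --help"):
#   gcloud compute tpus versions list --zone=us-central1-b
TPU_NAME="my-tpu"          # hypothetical node name
ZONE="us-central1-b"       # pick the zone your quota lives in
VERSION="pytorch-nightly"  # nightly build; the exact string is an assumption

# Build the command and print it for review instead of running it here:
CMD="gcloud compute tpus create ${TPU_NAME} --zone=${ZONE} --version=${VERSION} --accelerator-type=v3-8"
echo "${CMD}"
```

The idea is that a node created on a nightly version exposes the newer XRT API that the (TF) 1.13 release was missing, so the client-side PyTorch/XLA nightly wheel and the TPU service side stay in sync.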
Closing issue as the nightly is available to users. Please feel free to reopen if you have followup questions. Thanks!