Comments (3)
I would perhaps suggest this video giving an overview of TorchInductor: https://www.youtube.com/watch?v=p13HpZv2S3Q
Another thing you can check out is TORCH_LOGS="output_code", which will show you the actual Triton kernels that are generated.
Other than that, there is somewhat of a lack of publicly available educational resources on Inductor; hopefully we'll be able to release some at some point.
from gpt-fast.
@Chillee Thanks! I've been using TORCH_LOGS=all to dump the entire compilation process, though this is probably overkill.
Would be instructive to have a tutorial that steps through the compilation pipeline for a simple module with a focus on the backend lowering / codegen. Lmk if something like this exists already or would be useful to the community.
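In the meantime, one way to step through part of the pipeline yourself (a hedged sketch; inspect_backend is a name invented here for illustration) is to pass a custom backend to torch.compile. The backend callable receives the FX graph that Dynamo captured, before Inductor lowers it:

```python
# Sketch: use a custom torch.compile backend to inspect the captured FX graph.
import torch

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    print(gm.graph)   # textual form of the graph Dynamo traced
    return gm.forward  # run the graph eagerly, skipping Inductor entirely

@torch.compile(backend=inspect_backend)
def f(x):
    return torch.sin(x) + torch.cos(x)

f(torch.randn(4))
```

Because the backend returns gm.forward, the function still runs eagerly, so this isolates the Dynamo capture stage from backend lowering and codegen.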
Will do some more digging around the inductor tests to gather digestible bits.
Btw, enjoy your blogposts / tweets on gpu performance :) Hope to see more of these.
FYI: I'm developing a walk-through example of torch.compile, although the focus is more on the Dynamo and AOTAutograd side. The detailed inner workings of Inductor are harder to describe; I hope to figure them out someday.
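For focusing on exactly those two stages, the built-in "aot_eager" backend can help (a small sketch, assuming a recent PyTorch): it runs Dynamo capture and AOTAutograd's forward/backward graph partitioning, but executes the resulting graphs eagerly, leaving Inductor out of the picture:

```python
# Sketch: "aot_eager" exercises Dynamo + AOTAutograd without Inductor codegen.
import torch

@torch.compile(backend="aot_eager")
def g(x):
    return (x * x).sum()

x = torch.randn(4, requires_grad=True)
g(x).backward()
print(x.grad)  # AOTAutograd-produced backward graph computes 2 * x
```

Since the partitioned graphs run eagerly, any numerical difference versus the default backend points at Inductor codegen rather than the frontend.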