Comments (6)
This is DXGI_ERROR_DEVICE_HUNG during inference/evaluation, which typically happens when some GPU work takes excessively long. The recent AMD driver optimizations for Stable Diffusion / multi-head attention target the RDNA 3 architecture (e.g., the 7000 series, like the Radeon RX 7900 XTX) but not RDNA 2 (the 6000 series). Still, we can try to reproduce this on an RDNA 2 card to see if anything jumps out.
from olive.
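To reproduce on a specific adapter, one option is the DirectML EP's `device_id` provider option. A minimal sketch, assuming `onnxruntime-directml` is installed and using a hypothetical `model.onnx` path as a placeholder:

```python
# Sketch: run inference on a specific GPU adapter via the DirectML EP.
# Assumptions: onnxruntime-directml is installed; "model.onnx" is a placeholder path.
import os

def dml_providers(device_id=0):
    """Provider list preferring DirectML on the given adapter, with CPU fallback."""
    return [
        ("DmlExecutionProvider", {"device_id": device_id}),  # device_id selects the adapter
        "CPUExecutionProvider",
    ]

if os.path.exists("model.onnx"):
    import onnxruntime as ort
    # device_id=1 would target a second GPU, e.g. an RDNA 2 card in a dual-GPU box
    session = ort.InferenceSession("model.onnx", providers=dml_providers(device_id=0))
```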
The 6800 XT has the same error.
Same error on my 6900 XT as well, on 0.4.0.
The following error message seems to be related to the DirectML EP:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\ExecutionProvider.cpp(896)\onnxruntime_pybind11_state.pyd!00007FFE31C80201: (caller: 00007FFE31C80C2F) Exception(2) tid(3c14) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.
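One way to check whether the hang is specific to the DirectML EP is to re-run the same model on the CPU EP with verbose node-level logging, so the last logged node points at the failing op. A diagnostic sketch, assuming `onnxruntime` is installed and using a hypothetical `model.onnx` path as a placeholder:

```python
# Sketch: isolate whether the hang is specific to the DirectML EP by re-running
# the same model on the CPU EP with verbose node-level logging.
# Assumptions: onnxruntime is installed; "model.onnx" is a placeholder path.
import os

def pick_providers(force_cpu=False):
    """CPU-only when isolating the failure; otherwise DML with CPU fallback."""
    if force_cpu:
        return ["CPUExecutionProvider"]
    return ["DmlExecutionProvider", "CPUExecutionProvider"]

if os.path.exists("model.onnx"):
    import onnxruntime as ort
    so = ort.SessionOptions()
    so.log_severity_level = 0  # 0 = VERBOSE: logs each node as it executes
    # If this run succeeds, the failure is likely in the DML EP's GPU work
    sess = ort.InferenceSession("model.onnx", sess_options=so,
                                providers=pick_providers(force_cpu=True))
```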
@jstoecker, do you have any insight?
Seems similar to #510.