Comments (4)
FYI, here are the dependencies installed:
onnx==1.14.1
onnxruntime-extensions==0.9.0
importlib-resources==6.1.0
from olive.
Since we install ort-nightly-gpu using --index-url, pip has trouble finding a suitable version for the package if you don't specify the version, or if the requirements for onnxruntime are not installed beforehand. The index URL for ort-nightly-gpu doesn't have the requirements available for download and install.
You can resolve this by installing the requirements for onnxruntime manually: https://github.com/microsoft/onnxruntime/blob/main/requirements.txt.in.
An easier, more hacky method is to run:
python -m pip install onnxruntime-gpu
python -m pip uninstall -y onnxruntime-gpu
before the ort-nightly-gpu installation, so the dependencies get pre-installed.
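Putting the whole workaround in one place, the sequence might look like the sketch below. The nightly index URL shown is an assumption on my part; check the Olive example's README for the current feed:

```shell
# Hacky pre-install: pull in onnxruntime's dependencies via the stable
# GPU wheel, then drop the wheel itself so only the deps remain.
python -m pip install onnxruntime-gpu
python -m pip uninstall -y onnxruntime-gpu

# Now pip can resolve the nightly wheel from its dedicated index.
# (Index URL below is an assumption; verify against the current docs.)
python -m pip install ort-nightly-gpu \
    --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
```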
from olive.
BTW, ort-nightly is no longer required for the multilingual whisper example since ort version 1.16.0 was released recently.
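Not from the thread, but a quick way to check whether the installed onnxruntime already satisfies that threshold; the helper name version_ge is made up for illustration, and it relies on GNU sort's -V version ordering:

```shell
# version_ge VERSION MINIMUM -> succeeds if VERSION >= MINIMUM,
# compared as version strings via sort -V (GNU coreutils).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# ort 1.16.0 is the cutoff mentioned above for dropping ort-nightly.
if version_ge "1.16.0" "1.16.0"; then
    echo "stable onnxruntime is new enough; skip ort-nightly"
fi
```

Feed it the Version reported by python -m pip show onnxruntime; if the check passes against 1.16.0, the stable package can replace ort-nightly-gpu.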
from olive.
that works, thanks
from olive.