Comments (7)
Yeah, I've updated all my drivers. I wish there was an easier way to debug, haha.
from mlc-llm.
I just bought an AMD GPU online. It's arriving later this week. Will play with it and see how it works with the Vulkan driver.
Yes, I think there are people successfully running it on AMD GPUs: https://www.reddit.com/r/LocalLLaMA/comments/132igcy/project_mlc_llm_universal_llm_deployment_with_gpu/
Please try upgrading to the latest driver and see if it works.
Still the same error:
~# mlc_chat_cli
terminate called after throwing an instance of 'tvm::runtime::InternalError'
what(): [21:35:46] /home/runner/work/utils/utils/tvm/src/runtime/vulkan/vulkan_instance.cc:111:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
Check failed: (__e == VK_SUCCESS) is false: Vulkan Error, code=-9: VK_ERROR_INCOMPATIBLE_DRIVER
Stack trace:
[bt] (0) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(tvm::runtime::Backtrace[abi:cxx11]()+0x27) [0x7f4190835a37]
[bt] (1) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(+0x3f375) [0x7f41907d3375]
[bt] (2) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(tvm::runtime::vulkan::VulkanInstance::VulkanInstance()+0x1a47) [0x7f4190922857]
[bt] (3) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(tvm::runtime::vulkan::VulkanDeviceAPI::VulkanDeviceAPI()+0x40) [0x7f419091ea60]
[bt] (4) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(tvm::runtime::vulkan::VulkanDeviceAPI::Global()+0x4c) [0x7f419091edfc]
[bt] (5) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(+0x18ae3d) [0x7f419091ee3d]
[bt] (6) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(+0x6bdc4) [0x7f41907ffdc4]
[bt] (7) /root/miniconda3/envs/mlc-chat/bin/../lib/libtvm_runtime.so(+0x6c367) [0x7f4190800367]
[bt] (8) mlc_chat_cli(+0xe010) [0x5654d12ee010]
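The `code=-9` in the check failure is the raw `VkResult` value returned by the Vulkan loader; the Vulkan specification defines `VK_ERROR_INCOMPATIBLE_DRIVER` as -9, which means no installed driver satisfied the instance creation request. A minimal sketch for decoding these codes (the table below is transcribed from the Vulkan spec, not from mlc-llm or TVM):

```python
# A few VkResult error codes from the Vulkan specification, useful for
# decoding the "code=N" value in TVM's Vulkan error messages.
VK_RESULT_NAMES = {
    -1: "VK_ERROR_OUT_OF_HOST_MEMORY",
    -2: "VK_ERROR_OUT_OF_DEVICE_MEMORY",
    -3: "VK_ERROR_INITIALIZATION_FAILED",
    -4: "VK_ERROR_DEVICE_LOST",
    -9: "VK_ERROR_INCOMPATIBLE_DRIVER",
}

def decode_vk_result(code: int) -> str:
    """Map a raw VkResult integer to its spec name, if known."""
    return VK_RESULT_NAMES.get(code, f"unknown VkResult {code}")

print(decode_vk_result(-9))  # VK_ERROR_INCOMPATIBLE_DRIVER
```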
gpu info
@logan-markewich does it work now with your AMD GPU? If so, would you mind sharing your tok/sec info here?
@junrushao I should have clarified, I updated everything but couldn't get it to work haha
Not sure how to debug it either
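One hedged starting point for debugging `VK_ERROR_INCOMPATIBLE_DRIVER` on Linux: the Vulkan loader discovers drivers through ICD (installable client driver) manifest files, so checking whether any manifests exist is a quick sanity test. This is a sketch of that check (the search paths are the loader's standard Linux locations, not anything specific to mlc-llm):

```python
import glob
import os

# Standard Linux search directories for Vulkan ICD manifests. If none of
# these contain a *.json manifest, the loader has no driver to load, and
# instance creation typically fails with VK_ERROR_INCOMPATIBLE_DRIVER.
ICD_SEARCH_DIRS = [
    "/usr/share/vulkan/icd.d",
    "/etc/vulkan/icd.d",
    os.path.expanduser("~/.local/share/vulkan/icd.d"),
]

def find_vulkan_icds() -> list:
    """Return the list of Vulkan ICD manifest files found on this machine."""
    manifests = []
    for d in ICD_SEARCH_DIRS:
        manifests.extend(sorted(glob.glob(os.path.join(d, "*.json"))))
    return manifests

if __name__ == "__main__":
    icds = find_vulkan_icds()
    if not icds:
        print("No Vulkan ICD manifests found - install or upgrade the GPU driver")
    else:
        for m in icds:
            print(m)
```

If manifests are present but the error persists, `vulkaninfo` (from the vulkan-tools package) gives a fuller report of what the loader actually sees.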
This should be resolved by the latest instructions: https://mlc.ai/mlc-llm/docs/