Comments (11)
I uninstalled it first, then used this command to install again:
sudo bash ./l_BaseKit_p_2024.0.1.46_offline.sh -a --silent --eula accept --install-dir=/opt/intel/oneapi
After that the header file could be found and compilation succeeded, but I don't know why the previous oneAPI install command didn't work.
from bigdl.
Hi, I am working on reproducing this issue and will update this thread.
My installation following https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/vLLM-Serving#1-install-1 succeeded without error.
Can you check your oneAPI version with:
source /opt/intel/oneapi/setvars.sh
dpcpp --version
Also, can you try searching for dpct.hpp like this:
find /opt/intel/oneapi/ -name "dpct.hpp"
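As a hedged sketch (assuming the default /opt/intel/oneapi install root used elsewhere in this thread), the check above can be wrapped in a small helper that also hints at the package that ships the header:

```shell
# Sketch: locate dpct.hpp under a oneAPI install tree. The default root
# /opt/intel/oneapi and the package-name hint are assumptions from this thread.
find_dpct() {
    root="${1:-/opt/intel/oneapi}"
    # First match only; errors (e.g. missing root dir) are suppressed.
    hit=$(find "$root" -name dpct.hpp 2>/dev/null | head -n 1)
    if [ -n "$hit" ]; then
        echo "dpct.hpp found: $hit"
    else
        echo "dpct.hpp not found under $root (the DPC++ Compatibility Tool may be missing)"
    fi
}

find_dpct /opt/intel/oneapi
```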
(ipex-vllm) intel@intel-0:~/vllm$ source /opt/intel/oneapi/setvars.sh --force
:: initializing oneAPI environment ...
-bash: BASH_VERSION = 5.1.16(1)-release
args: Using "$@" for setvars.sh arguments: --force
:: ccl -- latest
:: compiler -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dnnl -- latest
:: dpl -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: oneAPI environment initialized ::
(ipex-vllm) intel@intel-0:~/vllm$ dpcpp --version
icpx: warning: use of 'dpcpp' is deprecated and will be removed in a future release. Use 'icpx -fsycl' [-Wdeprecated]
Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 (2024.0.2.20231213)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /opt/intel/oneapi/compiler/2024.0/bin/compiler
Configuration file: /opt/intel/oneapi/compiler/2024.0/bin/compiler/../icpx.cfg
Hi, can you search for the file "dpct.hpp"?
In my environment, it is located under the oneAPI install dir.
In my environment, I can't find it:
(ipex-vllm) intel@intel-0:/opt/intel/oneapi$ ls
ccl common compiler debugger dev-utilities diagnostics dnnl dpl licensing mkl modulefiles-setup.sh mpi setvars.sh support.txt tbb tcm
I used the command below to install:
sudo apt install intel-oneapi-common-vars=2024.0.0-49406 \
  intel-oneapi-compiler-cpp-eclipse-cfg=2024.0.2-49895 \
  intel-oneapi-compiler-dpcpp-eclipse-cfg=2024.0.2-49895 \
  intel-oneapi-diagnostics-utility=2024.0.0-49093 \
  intel-oneapi-compiler-dpcpp-cpp=2024.0.2-49895 \
  intel-oneapi-mkl=2024.0.0-49656 \
  intel-oneapi-mkl-devel=2024.0.0-49656 \
  intel-oneapi-mpi=2021.11.0-49493 \
  intel-oneapi-mpi-devel=2021.11.0-49493 \
  intel-oneapi-tbb=2021.11.0-49513 \
  intel-oneapi-tbb-devel=2021.11.0-49513 \
  intel-oneapi-ccl=2021.11.2-5 \
  intel-oneapi-ccl-devel=2021.11.2-5 \
  intel-oneapi-dnnl-devel=2024.0.0-49521 \
  intel-oneapi-dnnl=2024.0.0-49521 \
  intel-oneapi-tcm-1.0=1.0.0-435
I will try to reproduce this issue according to your config.
When I run "pip install interegular cloudpickle diskcache joblib lark nest-asyncio numba scipy", I hit an error.
Can I ignore it?
Yes, you can ignore it.
This problem was caused by not installing the intel-oneapi-dpcpp-ct-2024.0 package through apt-get.
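A minimal sketch of the fix, assuming the package naming used in the apt list earlier in this thread (the exact version suffix may differ on your system; confirm with apt-cache search intel-oneapi-dpcpp-ct first):

```shell
# The DPC++ Compatibility Tool package ships dpct.hpp; it was missing from
# the earlier apt install list. The 2024.0 suffix is an assumption based on
# the other 2024.0.x packages pinned in this thread.
sudo apt install intel-oneapi-dpcpp-ct-2024.0

# Verify the header is now present:
find /opt/intel/oneapi/ -name "dpct.hpp"
```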
Closed.
If you still have problems, feel free to reopen this.