Comments (4)
Hi @JJJohnathan,
Would you mind checking your glibc version? We recommend glibc >= 2.28 :)
It would also be helpful to share more information about your environment by running check_env.sh.
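A quick way to check the installed glibc version on most Linux systems (either command works; `getconf` is the more script-friendly of the two):

```shell
# Print the glibc version via the dynamic linker
ldd --version | head -n 1
# Or query it directly from the C library configuration
getconf GNU_LIBC_VERSION
```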
@Oscilloscope98 OK, the output of check_env.sh is as follows:
PYTHON_VERSION=3.11.9
transformers=4.36.2
torch=2.1.0a0+cxx11.abi
ipex-llm Version: 2.1.0b20240613
ipex=2.1.10+xpu
CPU Information:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
Stepping: 13
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Total CPU Memory: 31.1971 GB
Memory Type: DDR4
Operating System:
Ubuntu 22.04.4 LTS
Linux openvinotest01-Z390-AORUS-ULTRA 6.5.0-35-generic #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
CLI:
Version: 1.2.35.20240425
Build ID: 00000000
Service:
Version: 1.2.35.20240425
Build ID: 00000000
Level Zero Version: 1.16.0
Driver Version 2023.16.12.0.12_195853.xmain-hotfix
Driver related package version:
ii intel-fw-gpu 2024.17.5-32922.04 all Firmware package for Intel integrated and discrete GPUs
ii intel-i915-dkms 1.24.2.17.240301.20+i29-1 all Out of tree i915 driver.
ii intel-level-zero-gpu 1.3.29138.29-88122.04 amd64 Intel(R) Graphics Compute Runtime for oneAPI Level Zero.
igpu not detected
xpu-smi is properly installed.
No device discovered
GPU0 Memory size=16M
00:02.0 VGA compatible controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] (rev 02) (prog-if 00 [VGA controller])
DeviceName: Onboard - Video
Subsystem: Gigabyte Technology Co., Ltd CoffeeLake-S GT2 [UHD Graphics 630]
Flags: bus master, fast devsel, latency 0, IRQ 11
Memory at 50000000 (64-bit, non-prefetchable) [size=16M]
Memory at 40000000 (64-bit, prefetchable) [size=256M]
I/O ports at 3000 [size=64]
Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
Capabilities:
ldd (Ubuntu GLIBC 2.35-0ubuntu3.8) 2.35
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
Hi @JJJohnathan,
We haven't verified IPEX-LLM on the iGPU of 9th-gen Intel Core yet; for Intel iGPUs, our verification has mainly covered 11th-gen Intel Core and later.
For the Llama2 text generation example on the i7-9700K, you could try running on CPU first through this example: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama2 . Please note that this example requires creating a separate environment for CPU :)
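A minimal sketch of the CPU environment setup, assuming the standard conda workflow from the ipex-llm quickstart (the Python version and the `[all]` extra reflect the docs at the time; exact package extras may differ by release):

```shell
# Create and activate a fresh CPU-only environment (names are illustrative)
conda create -n llm-cpu python=3.11 -y
conda activate llm-cpu
# Install ipex-llm with CPU dependencies; --pre picks up the nightly builds
pip install --pre --upgrade ipex-llm[all]
```

With the environment active, the linked example's `generate.py` can then be run against a local or Hugging Face Llama2 checkpoint.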