Comments (8)
Environment variables are passed through, so this already works today - as we can tell from the error message when I try the install on a system that does not have CUDA:
$ CMAKE_ARGS="-DLLAMA_CUDA=on" poetry add llama-cpp-python
Using version ^0.2.62 for llama-cpp-python
Updating dependencies
Resolving dependencies... (1.1s)
Package operations: 6 installs, 0 updates, 0 removals
- Installing markupsafe (2.1.5)
- Installing diskcache (5.6.3)
- Installing jinja2 (3.1.3)
- Installing numpy (1.26.4)
- Installing typing-extensions (4.11.0)
- Installing llama-cpp-python (0.2.62): Failed
ChefBuildError
Backend subprocess exited when trying to invoke build_wheel
*** scikit-build-core 0.9.0 using CMake 3.22.1 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpa0_n9frt/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Could not find nvcc, please set CUDAToolkit_ROOT.
CMake Warning at vendor/llama.cpp/CMakeLists.txt:464 (message):
  CUDA not found
-- CUDA host compiler is GNU
CMake Error at vendor/llama.cpp/CMakeLists.txt:909 (get_flags):
  get_flags Function invoked with incorrect arguments for function named:
  get_flags
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
-- Configuring incomplete, errors occurred!
See also "/tmp/tmpa0_n9frt/build/CMakeFiles/CMakeOutput.log".
*** CMake configuration failed
from poetry.
I have run the above exact command on a machine with CUDA: no error during installation, but when I load the model it loads on CPU, telling me the installation went wrong. When I use a regular pip install, I am able to install it correctly with CUDA and the model is loaded on GPU.
Perhaps you have a cached wheel that was built before you set the environment variables; I suggest clearing your cache.
Per my previous message, this definitely works as expected already.
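A minimal sketch of that cache-clearing step, assuming a recent poetry (the `cache clear` subcommand and the `--no-cache` flag mentioned later in this thread):

```shell
# Drop any wheel poetry cached from an earlier CPU-only build,
# then force a fresh source build with the CUDA flag exported.
poetry cache clear pypi --all
CMAKE_ARGS="-DLLAMA_CUDA=on" poetry add llama-cpp-python --no-cache
```

The second line relies on poetry passing `CMAKE_ARGS` through to the scikit-build-core backend, which is what the build log above demonstrates.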
@dimbleby Hmm yeah, it's now working after I specify --no-cache. I wonder if it's because of 0.2.63, as last week it didn't work when I was using 0.2.62.
On another side note, that message can occur even if you have an Nvidia GPU. It was complaining that nvcc isn't available, which can happen if someone is using the base CUDA docker image and not the devel one.
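A quick way to check for this before building - a sketch assuming a POSIX shell; nvcc ships in the `devel` CUDA images but not in the `base` or `runtime` ones:

```shell
# If nvcc is missing, the CMake configure step warns "CUDA not found";
# switch to an nvidia/cuda:*-devel image (or install the toolkit) first.
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
else
    echo "nvcc not found - use an nvidia/cuda:*-devel image"
fi
```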
Another question, if you don't mind. I checked the pyproject.toml file and it just shows llama-cpp-python = "^0.2.63". If I'm using the pyproject.toml or the lock file to install packages in another environment, how would llama-cpp-python know to install with the CUDA flag, as it's not specified anywhere? Thank you!
Poetry does not record environment variables; you will have to be careful of that yourself.
I guess that's probably the main feature I'm requesting here; maybe I only described the error/wrong behavior. For example, when something like the above is run with a flag, that flag could be recorded in pyproject.toml and the lock file.
Otherwise, if I installed with CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python, updated pyproject.toml and the lock file, and I (or someone else) installed the dependencies in a new environment, it would get the wrong installation. That person would have to uninstall and re-install with the correct flag specified, which makes package dependency management a mess.
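As a concrete illustration of that manual step, assuming pip's standard flags, the person in the fresh environment would have to do something like:

```shell
# The lock file alone reinstalls the CPU build; the CUDA flag must be
# re-supplied by hand in every new environment.
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_CUDA=on" pip install --no-cache-dir llama-cpp-python
```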
Python source distributions can do whatever they want at build time: to give a reproducible build, poetry would have to store not only all environment variables, but also the result of any network calls, the state of the system random number generator, the phase of the moon, etc.
(this is not going to happen)
If you want people to get the same binaries, you should build binary distributions, put them somewhere (e.g. a private repository), and point at those.
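One way to sketch that workflow, assuming a recent poetry (`source add --priority`); the index URL is a placeholder:

```shell
# Build the CUDA wheel once, on a machine that has the toolkit installed.
CMAKE_ARGS="-DLLAMA_CUDA=on" pip wheel llama-cpp-python -w ./wheelhouse
# Publish the wheel to a private index, then point poetry at that index
# (https://pypi.example.com/simple/ is illustrative only).
poetry source add --priority=explicit private https://pypi.example.com/simple/
```

Every environment that installs from the private index then gets the identical pre-built binary, and no build-time environment variables are needed.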
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.