Comments (7)
Hey there, #68 will address this issue. I will do a patch release promptly.
from openllm.
What models are you using?
from openllm.
All models. When I run the 'openllm start opt' command, the error occurs.
from openllm.
Can you send the whole trace of openllm start opt --debug?
from openllm.
Enabling debug mode for current BentoML session
Error caught while starting LLM Server:
environment can only contain strings
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\b\anaconda3\envs\ol\Scripts\openllm.exe\__main__.py", line 7, in <module>
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\openllm\cli.py", line 381, in wrapper
return func(*args, **attrs)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\openllm\cli.py", line 354, in wrapper
return_value = func(*args, **attrs)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\openllm\cli.py", line 329, in wrapper
return f(*args, **attrs)
^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\openllm\cli.py", line 837, in model_start
server.start(env=start_env, text=True, blocking=True)
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\bentoml\server.py", line 190, in start
return _Manager()
^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\site-packages\bentoml\server.py", line 163, in __init__
self.process = subprocess.Popen(
^^^^^^^^^^^^^^^^^
File "C:\Users\b\anaconda3\envs\ol\Lib\subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\b\anaconda3\envs\ol\Lib\subprocess.py", line 1509, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: environment can only contain strings
from openllm.
Can you try OPENLLMDEVDEBUG=3 openllm start opt?
from openllm.
This has been patched in 0.1.14.
from openllm.
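For readers hitting this on older versions: the TypeError comes from subprocess.Popen, which on Windows requires every key and value in the env mapping to be a str (non-string values such as ints surface as "environment can only contain strings" from _winapi.CreateProcess). Below is a minimal sketch of the failure mode and the kind of coercion a fix would apply; the names start_env and safe_env are illustrative, not OpenLLM's actual code, and the real patch in 0.1.14 may differ.

```python
# Sketch: subprocess.Popen env mappings must be str -> str.
# An int value here raises TypeError on Windows at CreateProcess time.
import subprocess
import sys

start_env = {"OPENLLM_MODEL": "opt", "DEBUG": 3}  # hypothetical env; 3 is an int

# Coercing every key and value to str avoids the error:
safe_env = {str(k): str(v) for k, v in start_env.items()}

proc = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.environ['DEBUG'])"],
    env=safe_env,
    text=True,
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
print(out.strip())  # → 3
```

Note that passing env= replaces the child's entire environment, so a real fix would typically merge the coerced values into a copy of os.environ rather than starting from an empty mapping.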