Comments (2)
Any details?
import random

import fire
from lmdeploy import turbomind as tm
from lmdeploy.model import MODELS
from transformers import AutoTokenizer


def input_prompt():
    """Read a multi-line prompt; an empty line (double enter) ends input."""
    print('\ndouble enter to end input >>> ', end='')
    sentinel = ''  # ends when this string is seen
    return '\n'.join(iter(input, sentinel))


def main(model_name, model_path, tokenizer_model_path, session_id: int = 1):
    tm_model = tm.TurboMind(model_path)
    generator = tm_model.create_instance()
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_model_path)
    model = MODELS.get(model_name)()
    nth_round = 1
    step = 0  # tokens already processed in this session's KV cache
    seed = random.getrandbits(64)
    while True:
        prompt = input_prompt()
        if prompt == 'exit':
            exit(0)
        elif prompt == 'end':
            pass
        else:
            # Wrap the raw text in the model's chat template
            prompt = model.get_prompt(prompt, nth_round == 1)
            input_ids = tokenizer.encode(prompt)
            for status, res, tokens in generator.stream_infer(
                    session_id=session_id,
                    input_ids=[input_ids],
                    request_output_len=512,
                    sequence_start=(nth_round == 1),
                    sequence_end=False,
                    step=step,
                    stop=False,
                    top_k=40,
                    top_p=0.8,
                    temperature=0.8,
                    repetition_penalty=1.05,
                    ignore_eos=False,
                    random_seed=seed if nth_round == 1 else None):
                print(f'session {session_id}, {status}, {tokens}, {res}')
            # Advance the session offset past this round's prompt and output
            step += len(input_ids) + tokens
            nth_round += 1


if __name__ == '__main__':
    fire.Fire(main)
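The multi-line input trick in `input_prompt()` uses the two-argument form of `iter()`, which calls a function repeatedly until it returns the sentinel value. A minimal standalone sketch (the `read_multiline` helper and the simulated lines are hypothetical, for illustration only):

```python
def read_multiline(readline=input):
    # iter(fn, sentinel) yields fn() results until one equals the sentinel ''
    sentinel = ''
    return '\n'.join(iter(readline, sentinel))

# Simulate a user typing two lines, then pressing enter on an empty line.
typed = iter(['hello', 'world', ''])
print(read_multiline(lambda: next(typed)))  # the two lines joined by '\n'
```

An immediate empty line yields an empty string, which is why the script treats a bare `'end'` or `'exit'` line as a command rather than prompt text.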
Related Issues (20)
- [Bug] Multimodal model deployment: abnormal output in multi-turn conversations HOT 11
- [Bug] When deploying a multimodal LLM, local image input cannot be read correctly HOT 1
- [Feature] Grammar/structured output support HOT 1
- [MISC] Ask questions about Turbomind's scheduling strategy HOT 1
- [Bug] hang when many requests
- [Feature] Implement COG-VLM2 HOT 1
- Can the GPTQ and AWQ inference kernels be used interchangeably? HOT 7
- [Feature] specify gpus in pipeline
- [Feature] Layer Wise Calibration and Quantization of Models (To quantize model on Low VRAM GPU) HOT 4
- After quantizing internvl-v1.5's KV cache (int8 or int4), GPU memory usage actually increased HOT 7
- [Feature] Support for CogVLM2 HOT 1
- [Bug] qwen1.5-14b-chat with turbomind inference produces repetitive output HOT 9
- Does a service deployed with lmdeploy support controlling model output by passing stop_words? HOT 4
- Are there any plans to support CUDA 11.7? HOT 4
- [Bug] got error when pip install. docker img works though, python ver3.11 HOT 9
- [Bug] internlm2's memory seems poor when the input prompt is long text. HOT 8
- With engine_config = TurbomindEngineConfig(tp=2, quant_policy=0, cache_max_entry_count=0.2, session_len=4096) and self.pipe = pipeline("InternVL-Chat-V1-5", backend_config=engine_config), and all other parameters unchanged, why does switching quant_policy among 8, 0, and 4 make no difference to GPU memory usage or inference speed?
- [Feature]- Support for the microsoft/Phi-3-vision-128k-instruct Vision Model HOT 1
- [Bug] When serving InternLM-XComposer2, a failing request hangs the whole server: no 500 is returned and no other requests get through HOT 2
- LMDeploy-0.4.1 running qwen1.5 110B: inference returns no result for a long time HOT 1
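The quant_policy issue above crams its engine configuration onto one line; a cleaned-up sketch of the same setup, assuming lmdeploy's `pipeline` and `TurbomindEngineConfig` API as quoted in that report (the exact meaning of `quant_policy` values varies by lmdeploy version):

```python
from lmdeploy import pipeline, TurbomindEngineConfig

engine_config = TurbomindEngineConfig(
    tp=2,                       # tensor parallelism across 2 GPUs
    quant_policy=8,             # KV-cache quantization: 0 = off; 4/8 select quantized modes
    cache_max_entry_count=0.2,  # fraction of free GPU memory reserved for KV cache
    session_len=4096,
)
pipe = pipeline('InternVL-Chat-V1-5', backend_config=engine_config)
```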