Comments (3)
How have you implemented this locally? I'm looking to do the same and would appreciate some guidance :) @LawrenceGrigoryan
from text-generation-inference.
@avacaondata
So basically you need to add a MinPLogitsWarper to /server/text_generation_server/utils/logits_process.py and a min_p parameter to the NextTokenChooser class in /server/text_generation_server/utils/tokens.py.
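As a rough illustration of that first step, here is a minimal, framework-free sketch of the min-p filtering rule. This is not TGI's actual code: the real MinPLogitsWarper operates on torch tensors and follows the transformers warper interface, and the function name and signature below are illustrative only.

```python
import numpy as np

def min_p_filter(logits, min_p=0.1, filter_value=-np.inf):
    """Mask tokens whose probability falls below min_p times the top
    token's probability.

    Illustrative sketch only; the real warper works on torch tensors
    and plugs into the sampling pipeline.
    """
    # numerically stable softmax over the vocabulary axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    # the cutoff scales with the most likely token's probability
    threshold = min_p * probs.max(axis=-1, keepdims=True)
    return np.where(probs < threshold, filter_value, logits)
```

The point of min-p is that the threshold adapts to the model's confidence: when the top token dominates, unlikely tokens are cut aggressively; when the distribution is flat, more tokens survive.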
Then go through a number of files and add the min_p parameter wherever it's needed. Here I would suggest searching for one of the existing generation params, like top_p, and adding the corresponding lines of code for the new min_p parameter.
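To make that "thread the parameter through" step concrete, here is a hypothetical sketch of the wiring pattern (none of these function names are TGI's real API): every layer that already forwards top_p gains a min_p argument with a neutral default, so existing callers keep working unchanged.

```python
def apply_min_p(probs, min_p):
    # zero out tokens below min_p * max prob, then renormalize
    cutoff = min_p * max(probs)
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

def choose_next_token(probs, top_p=1.0, min_p=0.0):
    # (existing top_p logic omitted here; min_p mirrors its handling)
    if min_p > 0.0:  # new branch, added alongside the top_p branch
        probs = apply_min_p(probs, min_p)
    # greedy pick for illustration; TGI samples instead
    return max(range(len(probs)), key=probs.__getitem__)

def handle_request(probs, **gen_params):
    # call sites like this one must forward the new parameter too
    return choose_next_token(
        probs,
        top_p=gen_params.get("top_p", 1.0),
        min_p=gen_params.get("min_p", 0.0),
    )
```

The default of 0.0 disables min-p entirely, which is why none of the existing call paths break while you wire it in.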
After you've done that, just build the image with the default Dockerfile and use it as you usually do.
If you have any further questions, feel free to ask :)
This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.
Related Issues (20)
- ROCm: Server error: transport error when running batch size >=2 (Falcon-11B) HOT 1
- ROCm: Support models with head_dim>128
- ROCm: mismatch in generation for gpt2
- Poor/inconsistent results from Phi-3-mini-128k HOT 1
- idefics2: Sizes of tensors must match except in dimension 0. Expected size 448 but got size 447 for tensor number 2 in the list. HOT 2
- Some typo error in the picture of flash_attention.md HOT 1
- Unable to run TGI following the instructions on the readme HOT 5
- docker-compose throws `flash attention is not installed` error HOT 3
- Phi-3-mini-128k crashes on simple query HOT 2
- Add Environment Variable for OTLP Service Name HOT 2
- ImportError: libcuda.so.1: cannot open shared object file: No such file or directory HOT 10
- Tree-attention for medusa HOT 2
- get stucked when run text-generation-benchmark on AMD gpu HOT 2
- Unable to load Qwen2-72B-Instruct-exl2 model HOT 2
- `mistralai/Mixtral-8x22B-Instruct-v0.1`: Getting `RuntimeError: 'ptxas' failed with error code 127` while warming up on 8 GPUs HOT 2
- `mistralai/Mixtral-8x22B-Instruct-v0.1`: Successful warmup, crashes on inference HOT 1
- Long install report HOT 1
- P40 with USE_FLASH_ATTENTION=False HOT 2
- Sparse Marlin HOT 3
- protobuf version not compatible HOT 1