Comments (4)
Hi @MiNeves00, I'm the maintainer of LiteLLM. We let you maximize throughput and throttle requests by load balancing across multiple LLM endpoints.
I thought it might be helpful for your use case; I'd love feedback if not.
Here's the quick start for using the LiteLLM load balancer (works with 100+ LLMs):
doc: https://docs.litellm.ai/docs/simple_proxy#model-alias
Step 1: Create a config.yaml
model_list:
  - model_name: openhermes
    litellm_params:
      model: openhermes
      temperature: 0.6
      max_tokens: 400
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8000/v1
  - model_name: openhermes
    litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8001/v1
  - model_name: openhermes
    litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      frequency_penalty: 0.6
      api_base: http://192.168.1.23:8010/v1
Step 2: Start the litellm proxy:
litellm --config /path/to/config.yaml
Step 3: Make a request to the LiteLLM proxy:
curl --location 'http://0.0.0.0:8000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "openhermes",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
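Since the proxy exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the openai v1 SDK and the proxy running on http://0.0.0.0:8000 as above (the api_key is a placeholder the client requires even if the proxy does not check it):

from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy instead of api.openai.com.
client = OpenAI(base_url="http://0.0.0.0:8000", api_key="placeholder")

# The proxy load balances this request across the openhermes deployments in config.yaml.
response = client.chat.completions.create(
    model="openhermes",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)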
Hey @ishaan-jaff, thanks for the info! Load balancing between different endpoints might end up becoming an issue of its own; if it does, we will be sure to take a look at LiteLLM, which seems pretty simple to try out.
For now, though, the focus of this issue is the use case where the user wants one specific provider: they do not want failures due to rate limiting, but they also want to maximize the rate they actually use.
@MiNeves00 our router should allow you to maximize your throughput from your rate limits
https://docs.litellm.ai/docs/routing
happy to make a PR on this
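For reference, a minimal sketch of what using the router in-process might look like, based on the routing docs linked above and reusing two of the endpoints from the earlier config; the exact parameters (e.g. the placeholder api_key) are illustrative assumptions rather than a definitive setup:

from litellm import Router

# Two deployments of the same "openhermes" model group behind different api_base URLs.
model_list = [
    {
        "model_name": "openhermes",
        "litellm_params": {
            "model": "openai/openhermes",
            "api_base": "http://192.168.1.23:8000/v1",
            "api_key": "placeholder",
        },
    },
    {
        "model_name": "openhermes",
        "litellm_params": {
            "model": "openai/openhermes",
            "api_base": "http://192.168.1.23:8001/v1",
            "api_key": "placeholder",
        },
    },
]

router = Router(model_list=model_list)

# The router picks a deployment per call and retries on failures/rate limits.
response = router.completion(
    model="openhermes",
    messages=[{"role": "user", "content": "what llm are you"}],
)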
@ishaan-jaff I appreciate your offer to make a PR. However, I just read the documentation again, and my understanding is that it maximizes throughput by routing between several models.
For the case of a single model and a single provider, the only LiteLLM feature I found useful for this scenario is Cooldowns.
It seems to behave naively, though: once a model hits the failure limit within a minute, it is cooled down for a whole minute, even if it did not need to back off for that long. Am I interpreting it right?
Cooldowns - Set the limit for how many calls a model is allowed to fail in a minute, before being cooled down for a minute.
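To illustrate the distinction being raised: a fixed one-minute cooldown versus backing off only as long as the provider asks. A minimal, hypothetical sketch of the latter; call_provider and the Retry-After handling are placeholders, not LiteLLM APIs:

import time

def call_with_adaptive_backoff(call_provider, request, max_attempts=5):
    """Retry on rate-limit errors, sleeping only as long as the provider asks."""
    for attempt in range(max_attempts):
        response = call_provider(request)       # hypothetical provider call
        if response.status_code != 429:         # not rate limited: done
            return response
        # Prefer the provider's Retry-After hint; fall back to exponential backoff.
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else min(2 ** attempt, 60)
        time.sleep(wait)
    raise RuntimeError("still rate limited after retries")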