
🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages:

  • Translates inputs to the provider's completion, embedding, and image_generation endpoints
  • Guarantees consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Provides retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch below)
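
The Router is the piece that load-balances and retries across deployments. A minimal sketch, assuming two placeholder deployments (the Azure deployment name, keys, and endpoint below are hypothetical):

from litellm import Router

# each entry maps a public model name to one underlying deployment
model_list = [
    {
        "model_name": "gpt-3.5-turbo",  # what callers ask for
        "litellm_params": {             # what the router actually calls
            "model": "azure/my-azure-deployment",  # hypothetical deployment
            "api_key": "your-azure-key",
            "api_base": "https://my-endpoint.openai.azure.com",
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "your-openai-key"},
    },
]

router = Router(model_list=model_list)

# the router picks a deployment and retries/falls back across the list
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)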

Jump to OpenAI Proxy Docs
Jump to Supported LLM Providers

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
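
Because every response follows the OpenAI format, the text can be read the same way regardless of provider:

# works identically for the OpenAI and Cohere calls above
print(response["choices"][0]["message"]["content"])  # dict-style access
print(response.choices[0].message.content)           # attribute-style access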

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back: pass stream=True to get a streaming iterator as the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2
response = completion('claude-2', messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")
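
Streaming also composes with acompletion. A minimal sketch, assuming OPENAI_API_KEY is set as in the earlier examples:

from litellm import acompletion
import asyncio

async def stream_response():
    response = await acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        stream=True,
    )
    # with stream=True, acompletion yields chunks as an async iterator
    async for part in response:
        print(part.choices[0].delta.content or "", end="")

asyncio.run(stream_response())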

Logging Observability (Docs)

LiteLLM exposes predefined callbacks to send data to Langfuse, DynamoDB, S3 buckets, LLMonitor, Helicone, PromptLayer, Traceloop, and Slack.

from litellm import completion
import litellm
import os

## set env variables for logging tools
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LLMONITOR_APP_ID"] = "your-llmonitor-app-id"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["langfuse", "llmonitor"] # log input/output to langfuse, llmonitor

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

OpenAI Proxy (Docs)

Track spend across multiple projects/people

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:8000

Step 2: Make ChatCompletions Request to Proxy

import openai  # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # point the client at the proxy
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)

Proxy Key Management (Docs)

Track spend, set budgets, and create virtual keys for the proxy with POST /key/generate

Request

curl 'http://0.0.0.0:8000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "[email protected]", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
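
The returned key can then be used like any OpenAI API key against the proxy. A minimal sketch, assuming the proxy from the quick start is still running on port 8000:

import openai

# authenticate with the virtual key returned by /key/generate
client = openai.OpenAI(api_key="sk-kdEXbIqZRwEeEiHwdg7sFA", base_url="http://0.0.0.0:8000")

# allowed only for the models listed at key creation, until the key expires
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)
print(response)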

[Beta] Proxy UI

A simple UI to add new models and let your users create keys.

Live here: https://dashboard.litellm.ai/

Code: https://github.com/BerriAI/litellm/tree/main/ui


Supported Providers (Docs)

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
|---|---|---|---|---|---|---|
| openai | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| azure | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| aws - sagemaker | ✅ | ✅ | ✅ | ✅ | ✅ | |
| aws - bedrock | ✅ | ✅ | ✅ | ✅ | ✅ | |
| google - vertex_ai [Gemini] | ✅ | ✅ | ✅ | ✅ | | |
| google - palm | ✅ | ✅ | ✅ | ✅ | | |
| google AI Studio - gemini | ✅ | ✅ | | | | |
| mistral ai api | ✅ | ✅ | ✅ | ✅ | ✅ | |
| cloudflare AI Workers | ✅ | ✅ | ✅ | ✅ | | |
| cohere | ✅ | ✅ | ✅ | ✅ | ✅ | |
| anthropic | ✅ | ✅ | ✅ | ✅ | | |
| huggingface | ✅ | ✅ | ✅ | ✅ | ✅ | |
| replicate | ✅ | ✅ | ✅ | ✅ | | |
| together_ai | ✅ | ✅ | ✅ | ✅ | | |
| openrouter | ✅ | ✅ | ✅ | ✅ | | |
| ai21 | ✅ | ✅ | ✅ | ✅ | | |
| baseten | ✅ | ✅ | ✅ | ✅ | | |
| vllm | ✅ | ✅ | ✅ | ✅ | | |
| nlp_cloud | ✅ | ✅ | ✅ | ✅ | | |
| aleph alpha | ✅ | ✅ | ✅ | ✅ | | |
| petals | ✅ | ✅ | ✅ | ✅ | | |
| ollama | ✅ | ✅ | ✅ | ✅ | | |
| deepinfra | ✅ | ✅ | ✅ | ✅ | | |
| perplexity-ai | ✅ | ✅ | ✅ | ✅ | | |
| anyscale | ✅ | ✅ | ✅ | ✅ | | |
| voyage ai | | | | | ✅ | |
| xinference [Xorbits Inference] | | | | | ✅ | |
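
Embedding calls follow the same unified pattern as completion. A minimal sketch, assuming OPENAI_API_KEY is set as in the earlier examples:

from litellm import embedding

# same OpenAI-format response shape across embedding providers
response = embedding(model="text-embedding-ada-002", input=["Hello, how are you?"])
print(response)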

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! 🚀

  • push your fork to your GitHub repo
  • submit a PR from there

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors

ishaan-jaff, krrishdholakia, coconut49, manouchehri, williamespegren, zakhar-kogan, maxdeichmann, toniengelhardt, mateocamara, canada4663, nirga, dependabot[bot], bufferoverflow, geekyayush, psu3d0, yujonglee, mc-marcocheng, wallies, vincelwt, kylehh, fcakyon, estill01, neubig, dleen, ashgreh, duc-phamh, ichernev, kumaranvpl, nandini-star, zhooda
