RouteLLM

RouteLLM is a framework for serving and evaluating LLM routers.
[ Blog ] [ Paper ]

Our core features include:

  • Launch an OpenAI-compatible API with a single command that takes in user requests and routes each one to the best model for that request.
  • Trained routers are provided out of the box, which we have shown to reduce costs by up to 85% on widely-used benchmarks such as MT Bench while maintaining 95% of GPT-4 performance.
  • Easily extend the framework to include new routers and benchmarks, and compare the performance of all routers with a single command.

Installation

From PyPI

# Modify extras depending on your use case.
pip install "routellm[serve,eval]"

From source

git clone https://github.com/lm-sys/RouteLLM.git
cd RouteLLM
# Modify extras depending on your use case.
pip install -e ".[serve,eval]"
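
If you prefer to use the routers directly from Python rather than through the server, recent versions of the repository expose a Controller class. The following is a minimal sketch, assuming routellm.controller.Controller exists in your installed version and accepts the router list and model pair as keyword arguments (check routellm/controller.py before relying on this):

import os
from routellm.controller import Controller

# Assumption: the strong model is an OpenAI model, so an OpenAI key is needed.
os.environ["OPENAI_API_KEY"] = "sk-..."

client = Controller(
    routers=["mf"],                           # routers to make available
    strong_model="gpt-4-1106-preview",        # expensive, more capable model
    weak_model="mixtral-8x7b-instruct-v0.1",  # cheaper, weaker model
)

# The model field uses the same router-[ROUTER NAME]-[THRESHOLD] format
# described in the Server section below.
response = client.chat.completions.create(
    model="router-mf-0.11593",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)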

Motivation

Different LLMs vary widely in their costs and capabilities, which leads to a dilemma when deploying them: routing all queries to the most capable model leads to the highest-quality responses but can be very expensive, while routing queries to smaller models can save costs but may result in lower-quality responses.

LLM routing offers a solution. We deploy a router that takes in each user's query and decides which LLM to route it to. We focus on routing between two models: a stronger, more expensive model and a cheaper but weaker model. Each request is also associated with a cost threshold that determines the cost-quality tradeoff for that request - a higher cost threshold leads to lower costs but may also reduce the quality of responses.

Server

RouteLLM offers a lightweight OpenAI-compatible server for routing requests based on different routing strategies:

python -m routellm.openai_server --routers mf --config config.example.yaml 
  • --routers specifies the list of routers available to the server. For instance, here, the server is started with one available router: mf (see below for the list of routers).
  • --config specifies the path to the configuration file for the routers (see the Configuration section).

For most use-cases, we recommend the mf router as we have evaluated it to be very strong and lightweight.

When making a request to the server, clients specify which router and cost threshold to use for each request via the model field, using the format router-[ROUTER NAME]-[THRESHOLD]. For instance, a model of router-mf-0.5 specifies that the request should be routed using the mf router with a cost threshold of 0.5.

See here for a minimal walkthrough of using the RouteLLM server.
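
Because the server is OpenAI-compatible, any OpenAI client can be pointed at it. Below is a minimal sketch using the official Python client, assuming the server is running locally (adjust base_url to the host and port your server actually listens on; the API key is a placeholder):

from openai import OpenAI

# Assumption: the RouteLLM server launched above is listening on localhost:6060.
client = OpenAI(
    base_url="http://localhost:6060/v1",
    api_key="placeholder-key",  # substitute a real key if your deployment requires one
)

# Route with the mf router at a cost threshold of 0.5.
response = client.chat.completions.create(
    model="router-mf-0.5",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)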

Threshold Calibration

The range of meaningful thresholds can vary depending on the type of router and the queries received. Therefore, we recommend calibrating thresholds using a sample of your query distribution, together with the percentage of queries you'd like routed to the stronger model.

Out of the box, we support calibrating thresholds based on the publicly-available Chatbot Arena dataset. For example, to calibrate the threshold for the matrix factorization router such that 50% of calls are routed to the stronger model:

> python -m routellm.calibrate_threshold --task calibrate --routers mf --strong-model-pct 0.5 --config config.example.yaml
For 50.0% strong model calls, calibrated threshold for mf: 0.11592505872249603

This means that the threshold should be set to approximately 0.1159 for the mf router so that roughly 50% of calls are routed to the strong model, i.e. using a model field of router-mf-0.1159. Note that because we are calibrating the threshold based on an existing dataset, the number of calls actually routed to the stronger or weaker model will differ in practice depending on the queries received by the server.
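
To make the calibration semantics concrete, here is an illustrative sketch (not the repository's implementation) of deriving a threshold from a sample of win-rate scores: pick the threshold so that the desired fraction of queries scores above it.

import numpy as np

# Hypothetical win-rate scores produced by a router over a sample of queries.
win_rates = np.array([0.05, 0.09, 0.12, 0.31, 0.44, 0.58, 0.71, 0.83])

strong_model_pct = 0.5  # desired fraction of calls routed to the strong model

# The (1 - pct) quantile leaves roughly pct of the scores above the threshold,
# so roughly pct of queries would be routed to the strong model.
threshold = np.quantile(win_rates, 1 - strong_model_pct)
print(f"calibrated threshold: {threshold:.4f}")

# Decision rule (see "Adding a new router" below): route to the strong model
# when the predicted win rate exceeds the threshold.
print(f"fraction routed strong: {(win_rates > threshold).mean():.2f}")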

Model Support

By default, GPT-4 and Mixtral are used as the model pair for serving. To modify the model pair used, use the --strong-model and --weak-model flags, e.g.

python -m routellm.openai_server --routers mf --strong-model gpt-4o --weak-model meta-llama/Meta-Llama-3-8B-Instruct

The server routes all OpenAI models through the official OpenAI client, so you will need to set the OPENAI_API_KEY environment variable for authentication before launching the server if you're using an OpenAI model.

For other models, RouteLLM supports any provider that has an OpenAI-compatible interface, which includes a wide range of both closed and open-source models running locally or in the cloud. Once you have an OpenAI-compatible endpoint, set the --alt-base-url and --alt-api-key flags to point to your endpoint, e.g. for Anyscale Endpoints:

python -m routellm.openai_server --routers mf --config config.example.yaml --alt-base-url https://api.endpoints.anyscale.com/v1 --alt-api-key esecret_ANYSCALE_API_KEY

Instructions for setting up an OpenAI-compatible server for popular providers are included in the repository.

Evaluation

RouteLLM also includes an evaluation framework to measure the performance of different routing strategies on benchmarks.

To evaluate a router on a benchmark, you can use the following command:

python -m routellm.evals.evaluate --routers random sw_ranking bert --benchmark gsm8k --config config.example.yaml 
  • --routers specifies the list of routers to evaluate: random, sw_ranking, and bert in this case.
  • --benchmark specifies the specific benchmark to evaluate the routers on. We currently support: mmlu, gsm8k, and mt-bench.

Evaluation results will be printed to the console. A plot of router performance will also be generated in the current directory (override the path using --output). To avoid recomputing results, the results for a router on a given benchmark are cached by default. This behavior can be overridden using the --overwrite-cache flag, which takes in a list of routers for which the cache should be overwritten.

The results for all our benchmarks have been cached. For MT Bench, we use the precomputed judgements for the desired model pair. For MMLU and GSM8K, we utilized SGLang to compute the results for the desired model pair - the full code for this can be found in the benchmark directories if you would like to evaluate a different model pair.

By default, GPT-4 and Mixtral are used as the model pair for evaluation. To modify the model pair used, use the --strong-model and --weak-model flags, e.g.

python -m routellm.evals.evaluate --routers bert --benchmark gsm8k --strong-model gpt-4o --weak-model meta-llama/Meta-Llama-3-8B-Instruct

Routers

Out of the box, RouteLLM supports 4 routers trained on the gpt-4-1106-preview and mixtral-8x7b-instruct-v0.1 model pair, plus a random baseline.

The full list of routers:

  1. mf: Uses a matrix factorization model trained on the preference data. (recommended)
  2. sw_ranking: Uses a weighted Elo calculation for routing, where each vote is weighted according to how similar it is to the user's prompt.
  3. bert: Uses a BERT classifier trained on the preference data.
  4. causal_llm: Uses an LLM-based classifier tuned on the preference data.
  5. random: Randomly routes to either model.

While these routers have been trained on the gpt-4-1106-preview and mixtral-8x7b-instruct-v0.1 model pair, we have found that they generalize well to other strong and weak model pairs. For the full details, refer to our paper.

Configuration

The configuration for all routers is specified in a single YAML file, which is a top-level mapping from router name to the keyword arguments used for router initialization. An example configuration is provided in the config.example.yaml file - it provides the configurations for routers trained on Arena data augmented using GPT-4 as a judge. The models and datasets used are all hosted on Hugging Face under the RouteLLM and LMSYS organizations.

sw_ranking:
    arena_battle_datasets:
      - lmsys/lmsys-arena-human-preference-55k
      - routellm/gpt4_judge_battles
    arena_embedding_datasets:
      - routellm/arena_battles_embeddings
      - routellm/gpt4_judge_battles_embeddings
    strong_model: gpt-4-1106-preview
    weak_model: mixtral-8x7b-instruct-v0.1
causal_llm:
    checkpoint_path: routellm/causal_llm_gpt4_augmented
bert:
    checkpoint_path: routellm/bert_gpt4_augmented
mf:
    checkpoint_path: routellm/mf_gpt4_augmented
    strong_model: gpt-4-1106-preview
    weak_model: mixtral-8x7b-instruct-v0.1
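
Conceptually, each top-level entry supplies the keyword arguments for the corresponding router class, along the lines of this sketch (illustrative only, not the repository's actual loading code):

import yaml

# Load the example configuration and read the kwargs for the mf router.
with open("config.example.yaml") as f:
    config = yaml.safe_load(f)

mf_kwargs = config["mf"]
print(mf_kwargs["checkpoint_path"])  # routellm/mf_gpt4_augmented

# A router would then be initialized roughly as ROUTER_CLS["mf"](**mf_kwargs).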

Contribution

We welcome contributions! Please feel free to open an issue or a pull request if you have any suggestions or improvements.

Adding a new router

To add a new router to RouteLLM, implement the abstract Router class in routers.py and add the new router to the ROUTER_CLS dictionary. You can then immediately use the new router in the server or evaluation framework.

There is only a single method to implement: calculate_strong_win_rate, which takes in the user prompt and returns the win rate for the strong model conditioned on that prompt - if this win rate is greater than the user-specified cost threshold, the request is routed to the strong model. Otherwise, it is routed to the weak model.
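
For illustration, a minimal toy router that always prefers the strong model might look like the sketch below (the import path is an assumption - Router and ROUTER_CLS live in routers.py, so adjust the import to match the repository layout):

# Hypothetical example router; not part of the repository.
from routellm.routers.routers import ROUTER_CLS, Router

class AlwaysStrongRouter(Router):
    """Toy router that always reports a 100% win rate for the strong model."""

    def calculate_strong_win_rate(self, prompt: str) -> float:
        # 1.0 exceeds any cost threshold below 1, so every request
        # is routed to the strong model.
        return 1.0

# Register the router so it can be selected by name, e.g.
# --routers always_strong on the server or evaluation command line.
ROUTER_CLS["always_strong"] = AlwaysStrongRouter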

Adding a new benchmark

To add a new benchmark to RouteLLM, implement the abstract Benchmark class in benchmarks.py and update the evaluate.py module to properly initialize the new benchmark class. Ideally, the results for the benchmark should be precomputed to avoid having to regenerate them for each evaluation run - see the existing benchmarks for examples of how to do this.

Citation

The code in this repository is based on the research from the paper below. Please cite it if you find the repository helpful.

@misc{ong2024routellmlearningroutellms,
      title={RouteLLM: Learning to Route LLMs with Preference Data},
      author={Isaac Ong and Amjad Almahairi and Vincent Wu and Wei-Lin Chiang and Tianhao Wu and Joseph E. Gonzalez and M Waleed Kadous and Ion Stoica},
      year={2024},
      eprint={2406.18665},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2406.18665},
}
