
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.

Home Page: https://inference.readthedocs.io

License: Apache License 2.0


Xorbits Inference: Model Serving Made Easy πŸ€–


English | 中文介绍 (Chinese) | 日本語 (Japanese)


Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.

πŸ”₯ Hot Topics

Framework Enhancements

  • Custom model support: #325
  • LoRA support: #271
  • Multi-GPU support for PyTorch models: #226
  • Xinference dashboard: #93

New Models

Tools

  • LlamaIndex plugin: #7151

Key Features

🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.

⚑️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!

πŸ–₯ Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.

βš™οΈ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting RPC, RESTful API(compatible with OpenAI API), CLI and WebUI for seamless management and monitoring.

🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.

πŸ”Œ Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries like LangChain and LlamaIndex.

Getting Started

Xinference can be installed via pip from PyPI. It is highly recommended to create a new virtual environment to avoid conflicts.

Installation

$ pip install "xinference"

xinference installs basic packages for serving models.

Installation with GGML

To serve ggml models, you need to install the following extra dependencies:

$ pip install "xinference[ggml]"

If you want to achieve acceleration on different hardware, refer to the installation documentation of the corresponding package.

Installation with PyTorch

To serve PyTorch models, you need to install the following extra dependencies:

$ pip install "xinference[pytorch]"

Installation with all dependencies

If you want to serve all the supported models, install all the dependencies:

$ pip install "xinference[all]"
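To verify that an installation succeeded, a quick sanity check is to import the package and print its version (this assumes the package exposes a standard __version__ attribute):

import xinference

# If the import succeeds and a version string prints, the base package
# is installed correctly.
print(xinference.__version__)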

Deployment

You can deploy Xinference locally with a single command or deploy it in a distributed cluster.

Local

To start a local instance of Xinference, run the following command:

$ xinference

Distributed

To deploy Xinference in a cluster, you need to start a Xinference supervisor on one server and Xinference workers on the other servers. Follow the steps below:

Starting the Supervisor: On the server where you want to run the Xinference supervisor, run the following command:

$ xinference-supervisor -H "${supervisor_host}"

Replace ${supervisor_host} with the actual host of your supervisor server.

Starting the Workers: On each of the other servers where you want to run Xinference workers, run the following command:

$ xinference-worker -e "http://${supervisor_host}:9997"

Once Xinference is running, an endpoint will be accessible for model management via the CLI or the Xinference client; a quick connectivity check follows the list below.

  • For local deployment, the endpoint will be http://localhost:9997.
  • For cluster deployment, the endpoint will be http://${supervisor_host}:9997, where ${supervisor_host} is the hostname or IP address of the server where the supervisor is running.
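A quick way to confirm that the endpoint is reachable is to query it over HTTP. The sketch below lists models via a /v1/models route, which is assumed here based on the OpenAI-compatible API described earlier:

import requests

# Point this at your local or cluster endpoint.
endpoint = "http://localhost:9997"

# List models via the (assumed) OpenAI-compatible /v1/models route.
resp = requests.get(f"{endpoint}/v1/models", timeout=10)
resp.raise_for_status()
print(resp.json())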

You can also open the web UI at the Xinference endpoint to chat with all the built-in models. You can even chat with two cutting-edge AI models side by side to compare their performance!


Xinference CLI

Xinference provides a command line interface (CLI) for model management. Here are some useful commands:

  • Launch a model (a model UID will be returned): xinference launch
  • List running models: xinference list
  • List all the built-in models: xinference list --all
  • Terminate a model: xinference terminate --model-uid ${model_uid}

Xinference Client

Xinference also provides a client for managing and accessing models programmatically:

from xinference.client import Client

# Connect to a running Xinference endpoint.
client = Client("http://localhost:9997")

# Launch a built-in model; a model UID is returned.
model_uid = client.launch_model(model_name="chatglm2")
model = client.get_model(model_uid)

chat_history = []
prompt = "What is the largest animal?"
response = model.chat(
    prompt,
    chat_history,
    generate_config={"max_tokens": 1024},
)
print(response)

Result:

{
  "id": "chatcmpl-8d76b65a-bad0-42ef-912d-4a0533d90d61",
  "model": "56f69622-1e73-11ee-a3bd-9af9f16816c6",
  "object": "chat.completion",
  "created": 1688919187,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The largest animal that has been scientifically measured is the blue whale, which has a maximum length of around 23 meters (75 feet) for adult animals and can weigh up to 150,000 pounds (68,000 kg). However, it is important to note that this is just an estimate and that the largest animal known to science may be larger still. Some scientists believe that the largest animals may not have a clear \"size\" in the same way that humans do, as their size can vary depending on the environment and the stage of their life."
      },
      "finish_reason": "None"
    }
  ],
  "usage": {
    "prompt_tokens": -1,
    "completion_tokens": -1,
    "total_tokens": -1
  }
}
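Because chat() takes the prior chat_history, a follow-up turn can reuse it. The sketch below continues the client example above; it assumes history entries are OpenAI-style role/content dicts, matching the message format in the result shown, and should be read as illustrative bookkeeping rather than the library's documented contract:

# Record the first exchange in the history (OpenAI-style role/content
# dicts, matching the result above), then ask a follow-up question.
chat_history.append({"role": "user", "content": prompt})
chat_history.append(
    {"role": "assistant",
     "content": response["choices"][0]["message"]["content"]}
)

follow_up = model.chat(
    "How much does it weigh?",
    chat_history,
    generate_config={"max_tokens": 1024},
)
print(follow_up["choices"][0]["message"]["content"])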

For more examples, see the examples directory of the repository.

Built-in models

To view the built-in models, run the following command:

$ xinference list --all
Name              Language      Ability
baichuan          ['en', 'zh']  ['embed', 'generate']
baichuan-chat     ['en', 'zh']  ['embed', 'chat']
chatglm           ['en', 'zh']  ['embed', 'chat']
chatglm2          ['en', 'zh']  ['embed', 'chat']
chatglm2-32k      ['en', 'zh']  ['embed', 'chat']
falcon            ['en']        ['embed', 'generate']
falcon-instruct   ['en']        ['embed', 'chat']
gpt-2             ['en']        ['generate']
internlm          ['en', 'zh']  ['embed', 'generate']
internlm-chat     ['en', 'zh']  ['embed', 'chat']
internlm-chat-8k  ['en', 'zh']  ['embed', 'chat']
llama-2           ['en']        ['embed', 'generate']
llama-2-chat      ['en']        ['embed', 'chat']
opt               ['en']        ['embed', 'generate']
orca              ['en']        ['embed', 'chat']
qwen-chat         ['en', 'zh']  ['embed', 'chat']
starchat-beta     ['en']        ['embed', 'chat']
starcoder         ['en']        ['generate']
starcoderplus     ['en']        ['embed', 'generate']
vicuna-v1.3       ['en']        ['embed', 'chat']
vicuna-v1.5       ['en']        ['embed', 'chat']
vicuna-v1.5-16k   ['en']        ['embed', 'chat']
wizardlm-v1.0     ['en']        ['embed', 'chat']
wizardmath-v1.0   ['en']        ['embed', 'chat']

For in-depth details on the built-in models, please refer to built-in models.

NOTE:

  • Xinference will download models automatically for you, and by default the models will be saved under ${HOME}/.xinference/cache.

Custom models

Please refer to custom models.
