
Comments (7)

Chillee commented on June 10, 2024

@SunMarc I think there might still be some gaps in how the kv-cache is handled during inference. Specifically, the link you sent is about vision models, not text generation.

We should chat more about this; I'd love to see the techniques here integrated.

from gpt-fast.
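For context on the comment above, here is a minimal sketch of the static, pre-allocated KV cache approach gpt-fast relies on (class and argument names are illustrative, not the repo's exact code): the cache buffers keep a fixed shape for the whole generation, so torch.compile never sees dynamic shapes during decoding.

```python
import torch
import torch.nn as nn

class StaticKVCache(nn.Module):
    """Fixed-shape KV cache: buffers are allocated once for max_seq_len and
    updated in place, so compiled decode graphs never see dynamic shapes."""

    def __init__(self, max_batch, n_heads, max_seq_len, head_dim, dtype=torch.bfloat16):
        super().__init__()
        shape = (max_batch, n_heads, max_seq_len, head_dim)
        self.register_buffer("k_cache", torch.zeros(shape, dtype=dtype))
        self.register_buffer("v_cache", torch.zeros(shape, dtype=dtype))

    def update(self, input_pos, k_val, v_val):
        # input_pos: [seq_len] token positions; k_val/v_val: [batch, heads, seq_len, head_dim].
        # Scatter new keys/values into the fixed buffers and return the full buffers;
        # attention then masks out positions that have not been written yet.
        self.k_cache[:, :, input_pos] = k_val
        self.v_cache[:, :, input_pos] = v_val
        return self.k_cache, self.v_cache
```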

SunMarc commented on June 10, 2024

Thanks for the interest! We already support most of the optimizations described here.

from gpt-fast.
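As one hedged example of that overlap (not SunMarc's exact list): weight-only quantized loading is exposed in transformers through the bitsandbytes integration. The checkpoint name below is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint

# 4-bit weight-only quantization via bitsandbytes; compute happens in bf16.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```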

SunMarc commented on June 10, 2024

Yes, absolutely! cc @younesbelkada for visibility

from gpt-fast.

aniketmaurya commented on June 10, 2024

Most of these features are already supported in Lit-GPT (if you're looking to finetune LLMs), and more will be supported soon. You can use LLMs from the HF model hub.

from gpt-fast.

yhyu13 commented on June 10, 2024

These optimizations should already be in HF. Moreover, some hardware-specific optimizations, like writing your own CUDA kernels for GPTQ and paged attention (e.g. flash_attn2), would make inference even faster.

https://github.com/turboderp/exllamav2 has benchmarked llama-7b at 190+ t/s on a single 3090 Ti, which matches this repo on 8xA100, but a 3090 Ti has only about 1/3 the FLOPS of a single A100. So hardware-specific optimization is another driver.

from gpt-fast.
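For the flash_attn2 point above, a hedged sketch of what enabling the FlashAttention-2 kernels looks like through transformers (requires a recent transformers version, the flash-attn package, and a supported GPU; the checkpoint is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

# attn_implementation="flash_attention_2" routes attention through the fused
# flash-attn kernels instead of the eager PyTorch implementation.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```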

lucasjinreal commented on June 10, 2024

Hi, does torch.compile work with AWQ?

(It seems HF already supports AWQ, but the quantization scheme might not be the same as in this repo.)

How do you enable speculative decoding in HF?

from gpt-fast.
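On both questions above, a hedged sketch (checkpoints are illustrative, and whether torch.compile composes with a given AWQ backend is not verified here): transformers exposes speculative decoding as assisted generation via the assistant_model argument to generate, and the main model's forward pass can be wrapped in torch.compile.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_id = "meta-llama/Llama-2-7b-hf"            # illustrative target model
draft_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # illustrative smaller draft model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16).to(device)

# Compile the forward pass; generate() itself still runs its Python loop in eager mode.
model.forward = torch.compile(model.forward, mode="reduce-overhead")

inputs = tokenizer("The key optimizations in gpt-fast are", return_tensors="pt").to(device)

# assistant_model enables assisted generation: the draft model proposes tokens,
# and the target model verifies them in a single forward pass.
out = model.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```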

Chillee commented on June 10, 2024

@yhyu13

https://github.com/turboderp/exllamav2 has benchmarked llama-7b at 190+ t/s on a single 3090 Ti, which matches this repo on 8xA100, but a 3090 Ti has only about 1/3 the FLOPS of a single A100.

To be clear, the benchmark in this repo is 197 t/s on a single A100 with a group size of 32, while exllamav2 is running on a single 4090 with a group size of 128.

Still certainly very good results from exllamav2 :)

from gpt-fast.
