open-llms's Introduction

Open LLMs

These LLMs (Large Language Models) are all licensed for commercial use (e.g., Apache 2.0, MIT, OpenRAIL-M). Contributions welcome!

| Language Model | Release Date | Checkpoints | Paper/Blog | Params (B) | Context Length | Licence | Try it |
| --- | --- | --- | --- | --- | --- | --- | --- |
| T5 | 2019/10 | T5 & Flan-T5, Flan-T5-xxl (HF) | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | 0.06 - 11 | 512 | Apache 2.0 | T5-Large |
| UL2 | 2022/10 | UL2 & Flan-UL2, Flan-UL2 (HF) | UL2 20B: An Open Source Unified Language Learner | 20 | 512, 2048 | Apache 2.0 | |
| Cerebras-GPT | 2023/03 | Cerebras-GPT | Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models (Paper) | 0.111 - 13 | 2048 | Apache 2.0 | Cerebras-GPT-1.3B |
| Open Assistant (Pythia family) | 2023/03 | OA-Pythia-12B-SFT-8, OA-Pythia-12B-SFT-4, OA-Pythia-12B-SFT-1 | Democratizing Large Language Model Alignment | 12 | 2048 | Apache 2.0 | Pythia-2.8B |
| Pythia | 2023/04 | pythia 70M - 12B | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling | 0.07 - 12 | 2048 | Apache 2.0 | |
| Dolly | 2023/04 | dolly-v2-12b | Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM | 3, 7, 12 | 2048 | MIT | |
| DLite | 2023/05 | dlite-v2-1_5b | Announcing DLite V2: Lightweight, Open LLMs That Can Run Anywhere | 0.124 - 1.5 | 1024 | Apache 2.0 | DLite-v2-1.5B |
| RWKV | 2021/08 | RWKV, ChatRWKV | The RWKV Language Model (and my LM tricks) | 0.1 - 14 | infinity (RNN) | Apache 2.0 | |
| GPT-J-6B | 2021/06 | GPT-J-6B, GPT4All-J | GPT-J-6B: 6B JAX-Based Transformer | 6 | 2048 | Apache 2.0 | |
| GPT-NeoX-20B | 2022/04 | GPT-NEOX-20B | GPT-NeoX-20B: An Open-Source Autoregressive Language Model | 20 | 2048 | Apache 2.0 | |
| Bloom | 2022/11 | Bloom | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model | 176 | 2048 | OpenRAIL-M v1 | |
| StableLM-Alpha | 2023/04 | StableLM-Alpha | Stability AI Launches the First of its StableLM Suite of Language Models | 3 - 65 | 4096 | CC BY-SA-4.0 | |
| FastChat-T5 | 2023/04 | fastchat-t5-3b-v1.0 | We are excited to release FastChat-T5: our compact and commercial-friendly chatbot! | 3 | 512 | Apache 2.0 | |
| h2oGPT | 2023/05 | h2oGPT | Building the World's Best Open-Source Large Language Model: H2O.ai's Journey | 12 - 20 | 256 - 2048 | Apache 2.0 | |
| MPT-7B | 2023/05 | MPT-7B, MPT-7B-Instruct | Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs | 7 | 84k (ALiBi) | Apache 2.0, CC BY-SA-3.0 | |
| RedPajama-INCITE | 2023/05 | RedPajama-INCITE | Releasing 3B and 7B RedPajama-INCITE family of models including base, instruction-tuned & chat models | 3 - 7 | 2048 | Apache 2.0 | RedPajama-INCITE-Instruct-3B-v1 |
| OpenLLaMA | 2023/05 | open_llama_3b, open_llama_7b, open_llama_13b | OpenLLaMA: An Open Reproduction of LLaMA | 3, 7 | 2048 | Apache 2.0 | OpenLLaMA-7B-Preview_200bt |
| Falcon | 2023/05 | Falcon-180B, Falcon-40B, Falcon-7B | The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only | 180, 40, 7 | 2048 | Apache 2.0 | |
| MPT-30B | 2023/06 | MPT-30B, MPT-30B-instruct | MPT-30B: Raising the bar for open-source foundation models | 30 | 8192 | Apache 2.0, CC BY-SA-3.0 | MPT 30B inference code using CPU |
| LLaMA 2 | 2023/07 | LLaMA 2 Weights | Llama 2: Open Foundation and Fine-Tuned Chat Models | 7 - 70 | 4096 | Custom (free if you have under 700M users; LLaMA outputs may not be used to train other LLMs besides LLaMA and its derivatives) | HuggingChat |
| OpenLM | 2023/09 | OpenLM 1B, OpenLM 7B | Open LM: a minimal but performative language modeling (LM) repository | 1, 7 | 2048 | MIT | |
| Mistral 7B | 2023/09 | Mistral-7B-v0.1, Mistral-7B-Instruct-v0.1 | Mistral 7B | 7 | 4096 - 16K (sliding window) | Apache 2.0 | Mistral Transformer |
| OpenHermes | 2023/09 | OpenHermes-7B, OpenHermes-13B | Nous Research | 7, 13 | 4096 | MIT | OpenHermes-V2 finetuned on Mistral 7B |
| SOLAR | 2023/12 | Solar-10.7B | Upstage | 10.7 | 4096 | Apache 2.0 | |
| phi-2 | 2023/12 | phi-2 2.7B | Microsoft | 2.7 | 2048 | MIT | |
| OLMo | 2024/02 | OLMo 1B, OLMo 7B, OLMo 7B Twin 2T | AI2 | 1, 7 | 2048 | Apache 2.0 | |
| Gemma | 2024/02 | Gemma 7B, Gemma 7B it, Gemma 2B, Gemma 2B it | Technical report | 2 - 7 | 8192 | Gemma Terms of Use | |
| Zephyr | 2023/11 | Zephyr 7B | Website | 7 | 8192 | Apache 2.0 | |
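
Most of the checkpoints above are hosted on the Hugging Face Hub and can be loaded with a few lines of transformers code. Below is a minimal sketch (not taken from any of the linked posts); the google/flan-t5-large ID is an assumed Hub name for one of the smaller listed models, so swap in whichever checkpoint you want to try.

```python
# Minimal sketch: load one of the listed checkpoints from the Hugging Face Hub.
# The model ID is an assumption; Flan-T5 is seq2seq, so it uses AutoModelForSeq2SeqLM.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Translate to German: How are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Decoder-only models in the table (e.g., Pythia, Dolly, Falcon) load the same way via AutoModelForCausalLM; some checkpoints ship custom modeling code and may additionally require trust_remote_code=True.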

Open LLMs for code

| Language Model | Release Date | Checkpoints | Paper/Blog | Params (B) | Context Length | Licence | Try it |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SantaCoder | 2023/01 | santacoder | SantaCoder: don't reach for the stars! | 1.1 | 2048 | OpenRAIL-M v1 | SantaCoder |
| StarCoder | 2023/05 | starcoder | StarCoder: A State-of-the-Art LLM for Code, StarCoder: May the source be with you! | 1.1 - 15 | 8192 | OpenRAIL-M v1 | |
| StarChat Alpha | 2023/05 | starchat-alpha | Creating a Coding Assistant with StarCoder | 16 | 8192 | OpenRAIL-M v1 | |
| Replit Code | 2023/05 | replit-code-v1-3b | Training a SOTA Code LLM in 1 week and Quantifying the Vibes — with Reza Shabani of Replit | 2.7 | infinity? (ALiBi) | CC BY-SA-4.0 | Replit-Code-v1-3B |
| CodeGen2 | 2023/04 | codegen2 1B - 16B | CodeGen2: Lessons for Training LLMs on Programming and Natural Languages | 1 - 16 | 2048 | Apache 2.0 | |
| CodeT5+ | 2023/05 | CodeT5+ | CodeT5+: Open Code Large Language Models for Code Understanding and Generation | 0.22 - 16 | 512 | BSD-3-Clause | CodeT5+-6B |
| XGen-7B | 2023/06 | XGen-7B-8K-Base | Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length | 7 | 8192 | Apache 2.0 | |
| CodeGen2.5 | 2023/07 | CodeGen2.5-7B-multi | CodeGen2.5: Small, but mighty | 7 | 2048 | Apache 2.0 | |
| DeciCoder-1B | 2023/08 | DeciCoder-1B | Introducing DeciCoder: The New Gold Standard in Efficient and Accurate Code Generation | 1.1 | 2048 | Apache 2.0 | DeciCoder Demo |
| Code Llama | 2023 | Inference code for CodeLlama models | Code Llama: Open Foundation Models for Code | 7 - 34 | 4096 | Custom (free if you have under 700M users; LLaMA outputs may not be used to train other LLMs besides LLaMA and its derivatives) | HuggingChat |
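
The code models above can be tried the same way as the general-purpose ones. A minimal completion sketch, assuming the bigcode/santacoder checkpoint ID; SantaCoder ships custom modeling code, which is why trust_remote_code=True is passed.

```python
# Minimal sketch: code completion with one of the models listed above.
# The model ID is an assumption; adjust max_new_tokens to taste.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```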

Open LLM datasets for pre-training

| Name | Release Date | Paper/Blog | Dataset | Tokens (T) | License |
| --- | --- | --- | --- | --- | --- |
| starcoderdata | 2023/05 | StarCoder: A State-of-the-Art LLM for Code | starcoderdata | 0.25 | Apache 2.0 |
| RedPajama | 2023/04 | RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens | RedPajama-Data | 1.2 | Apache 2.0 |
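
These corpora are large, so streaming them from the Hub is usually more practical than downloading them in full. A minimal sketch, assuming the togethercomputer/RedPajama-Data-1T-Sample subset ID and a "text" field as described on the dataset card:

```python
# Minimal sketch: stream a pre-training corpus instead of materializing it on disk.
# The dataset ID and field name are assumptions based on the Hub listing.
from datasets import load_dataset

redpajama = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",
    split="train",
    streaming=True,  # iterate lazily over the corpus
)
for example in redpajama.take(3):
    print(example["text"][:200])
```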

Open LLM datasets for instruction-tuning

| Name | Release Date | Paper/Blog | Dataset | Samples (K) | License |
| --- | --- | --- | --- | --- | --- |
| MPT-7B-Instruct | 2023/05 | Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs | dolly_hhrlhf | 59 | CC BY-SA-3.0 |
| databricks-dolly-15k | 2023/04 | Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM | databricks-dolly-15k | 15 | CC BY-SA-3.0 |
| OIG (Open Instruction Generalist) | 2023/03 | THE OIG DATASET | OIG | 44,000 | Apache 2.0 |
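
The datasets library makes it easy to inspect these sets before fine-tuning. A minimal sketch, assuming the databricks/databricks-dolly-15k Hub ID and its instruction/context/response fields:

```python
# Minimal sketch: inspect an instruction-tuning dataset from the table.
# The Hub ID and field names are assumptions based on the dataset card.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
print(len(dolly))  # roughly 15k samples, matching the table

example = dolly[0]
print(example["instruction"])
print(example["response"])
```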

Open LLM datasets for alignment-tuning

| Name | Release Date | Paper/Blog | Dataset | Samples (K) | License |
| --- | --- | --- | --- | --- | --- |
| OpenAssistant Conversations Dataset | 2023/04 | OpenAssistant Conversations - Democratizing Large Language Model Alignment | oasst1 | 161 | Apache 2.0 |
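
oasst1 is stored as message trees rather than flat pairs, so prompt/reply pairs have to be reconstructed from parent links. A minimal sketch, assuming the OpenAssistant/oasst1 Hub ID and the message_id/parent_id/role/text fields described on its dataset card:

```python
# Minimal sketch: rebuild (prompt, reply) pairs from the oasst1 message tree.
# The Hub ID and field names are assumptions based on the dataset card.
from datasets import load_dataset

oasst1 = load_dataset("OpenAssistant/oasst1", split="train")
by_id = {row["message_id"]: row for row in oasst1}

pairs = [
    (by_id[row["parent_id"]]["text"], row["text"])
    for row in oasst1
    if row["role"] == "assistant" and row["parent_id"] in by_id
]
print(f"{len(pairs)} prompt/reply pairs")
```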

Evals on open LLMs


What do the licences mean?

  • Apache 2.0: Allows users to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software under the terms of the license, without concern for royalties.
  • MIT: Similar to Apache 2.0 but shorter and simpler. Unlike Apache 2.0, it also does not require you to state significant changes made to the original code.
  • CC BY-SA-4.0: Allows (i) copying and redistributing the material and (ii) remixing, transforming, and building upon the material for any purpose, even commercially. But if you do the latter, you must distribute your contributions under the same license as the original. (Thus, may not be viable for internal teams.)
  • OpenRAIL-M v1: Allows royalty-free access and flexible downstream use and sharing of the model and modifications of it, and comes with a set of use restrictions (see Attachment A)
  • BSD-3-Clause: Allows unlimited redistribution for any purpose as long as the copyright notices and the license's disclaimers of warranty are maintained.

Disclaimer: The information provided in this repo does not, and is not intended to, constitute legal advice. Maintainers of this repo are not responsible for the actions of third parties who use the models. Please consult an attorney before using models for commercial purposes.


Improvements

  • Complete entries for context length, and check entries with ?
  • Add number of tokens trained? (see considerations)
  • Add (links to) training code?
  • Add (links to) eval benchmarks?

open-llms's People

Contributors

adekunleoajayi, amitness, artur-galstyan, bazhang87, cieske, david-macleod, dbalabka, eric8810, eugeneyan, fabiogra, fkromer, infro, jacksonlark, jacobrenn, joedf, johnmarcampbell, kaiserwholearns, lawwu, loleg, ludwigstumpp, muhtasham, nolantrem, oferbaratz, olliestanley, orkutmuratyilmaz, rakeshgohel01, reenal, tekumara


open-llms's Issues

Remove StableLM as model weights only include deltas to LLaMA model

From the repository of StableLM (https://github.com/stability-AI/stableLM/):

Due to the original non-commercial license of LlaMA, we can only release the weights of our model as deltas over the original model's weights. StableVicuna's delta weights are released under (CC BY-NC-SA-4.0).

This means the merged model cannot be used under the CC BY-SA-4.0 license specified here; effectively, the original non-commercial LLaMA license applies to the merged model.

As this repository's intention is to provide an overview of open and commercially usable models, I would propose either removing the StableLM model (unfortunately) or adding a note to the table about the issue described above.

What do you think?

Proposal: Remove / Rename "Name" column in Dataset section

At the moment, the first column in the Dataset for finetuning section is Name and includes the name of the corresponding model.

I find this very confusing as I would expect the first column to be the name of the dataset.

Therefore I would propose to either:

  • Remove the current first column (Name) entirely and move the Dataset column into its place.
  • Rename the current first column to Trained Model and swap its position with the Dataset column.

What do you think?

Update a few models

1. RedPajama-INCITE now has fully trained 7B weights released.
2. StarCoder Plus: https://huggingface.co/bigcode/starcoderplus
It was further trained on 600B additional tokens, further improving its performance. I also noticed it can now be used as a chatbot (thanks to the general English data from RefinedWeb), performing similarly to LLaMA-13B-based models, and with an 8192 context length.
3. TigerBot released its giant 180B model: https://huggingface.co/TigerResearch/tigerbot-180b-research
It seems to be under the Apache 2.0 license (no other license or restriction is mentioned in their repos, and every repo shows the Apache 2.0 license, but the model license is not stated explicitly). I hope it is truly under Apache 2.0.
7B base and SFT models are also available.

performance benchmark

Hi,

is there any possibility of adding performance benchmarks for the open-source LLMs?
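
For reference, one commonly used option (not something this repo currently links) is EleutherAI's lm-evaluation-harness. A minimal sketch, assuming a recent lm-eval release that exposes simple_evaluate; the model ID, task names, and batch size are purely illustrative:

```python
# Minimal sketch: score one of the open models on a couple of standard tasks
# with EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Model ID, task names, and batch size are illustrative assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-2.8b",
    tasks=["lambada_openai", "hellaswag"],
    batch_size=8,
)
print(results["results"])
```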

Add column with language information

It would be very valuable if the table included information about the language(s) of each model/dataset.
While many people are obviously interested in English, some of the models support, or were trained on, languages other than English.

Please add summaries

I'm an advocate for minimalism and self-documenting code. I understand that the links are "just enough info" and that there shouldn't be a need for summaries.

However, I would appreciate it if each model had a 1-line summary (or, if possible, a few tiny lines explaining "the what and the why").

Not everyone has time to click every link and read the pages they point to.

I just came here and I see all of the models as "the same thing". I genuinely have no idea which ones are appropriate for my use-cases.

I'm willing to visit all the URLs and open a PR for this, but I don't have time this month.

BTW, thanks to all contributors for their time! And I'm sorry for my ignorance (if there's already a summary that I missed)

Thoughts on going forward with adding additional model specifications to the table

Summary

Currently there is the idea to add the following to the table:

  • length of context window
  • number of tokens trained

This issue is to discuss if we want to do so and what the implications are. I believe this is an important decision to make moving forward, so I would like to bring this to our attention here.

Implications

If we want to add these, we could have one separate row per published model version. "Model version" here means a standalone model variant published by the authors, whether due to different model sizes (see LLaMA-7B, 13B, 33B, 65B) or different training procedures (MPT-7B-base vs. -instruct, -chat, -storywriter). This will affect the properties we assign in our table (model size, number of tokens trained, context window, ...).

In short, including more information inside the table would lead to:

  • more columns, for more properties
  • more rows, as we need to differentiate between each model version (alternatively, one could indicate a span for all models in a single row, e.g. 1T - 10T for number of tokens or 1024 - 4096 for context length)

with the following consequences to the audience:

  • more complete information, greater level of detail
  • harder to get a quick overview, which might hurt the clarity of the table

What are your thoughts on this?
