🐣 Please follow me for new updates https://twitter.com/camenduru
🔥 Please join our discord server https://discord.gg/k5BwmmvJJU
🥳 Please join my patreon community https://patreon.com/camenduru
According to the Facebook Research LLaMA license (a non-commercial bespoke license), we may not be able to use this model even with a Colab Pro account. But Yann LeCun said "GPL v3" (https://twitter.com/ylecun/status/1629189925089296386), so I am a little confused. Is it possible to use this with a paid Colab Pro account?
https://www.youtube.com/watch?v=kgA7eKU1XuA
⚠️ If you encounter an `IndexError: list index out of range` error, please set the model's instruction template.
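In text-generation-webui, instruction templates are YAML files (the exact key names have changed between versions of the webui, so treat this Alpaca-style file as a sketch, not the definitive format):

```yaml
# Sketch of an Alpaca-style instruction template for text-generation-webui.
# Key names vary by webui version; check the repo's bundled templates for yours.
user: "### Instruction:"
bot: "### Response:"
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n"
context: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
```

Selecting the template that matches how the model was fine-tuned is what prevents the prompt-parsing mismatch behind the `IndexError`.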
https://github.com/oobabooga/text-generation-webui (Thanks to @oobabooga ❤)
| Model | License |
|---|---|
| vicuna-13b-GPTQ-4bit-128g | From https://vicuna.lmsys.org: The online demo is a research preview intended for non-commercial use only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation. The code is released under the Apache License 2.0. |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/alpaca-native -> https://huggingface.co/chavinlo/alpaca-13b -> https://huggingface.co/chavinlo/gpt4-x-alpaca |
| llama-2 | https://ai.meta.com/llama/ Llama 2 is available for free for research and commercial use. 🥳 |
Thanks to facebookresearch ❤ for https://github.com/facebookresearch/llama
Thanks to lmsys ❤ for https://huggingface.co/lmsys/vicuna-13b-delta-v0
Thanks to anon8231489123 ❤ for https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/lmsys/vicuna-13b-delta-v0)
Thanks to tatsu-lab ❤ for https://github.com/tatsu-lab/stanford_alpaca
Thanks to chavinlo ❤ for https://huggingface.co/chavinlo/gpt4-x-alpaca
Thanks to qwopqwop200 ❤ for https://github.com/qwopqwop200/GPTQ-for-LLaMa
Thanks to tsumeone ❤ for https://huggingface.co/tsumeone/gpt4-x-alpaca-13b-native-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca)
Thanks to transformers ❤ for https://github.com/huggingface/transformers
Thanks to gradio-app ❤ for https://github.com/gradio-app/gradio
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ
Thanks to Neko-Institute-of-Science ❤ for https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b
Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/pygmalion-7b-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b)
Thanks to young-geng ❤ for https://huggingface.co/young-geng/koala
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/young-geng/koala)
Thanks to dvruette ❤ for https://huggingface.co/dvruette/oasst-llama-13b-2-epochs
Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/oasst-llama13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/dvruette/oasst-llama-13b-2-epochs)
Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-7B-Uncensored
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-7B-Uncensored)
Thanks to mosaicml ❤ for https://huggingface.co/mosaicml/mpt-7b-storywriter
Thanks to OccamRazor ❤ for https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/mosaicml/mpt-7b-storywriter)
Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-13B-Uncensored
Thanks to ausboss ❤ for https://huggingface.co/ausboss/WizardLM-13B-Uncensored-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
Thanks to PygmalionAI ❤ for https://huggingface.co/PygmalionAI/pygmalion-13b
Thanks to notstoic ❤ for https://huggingface.co/notstoic/pygmalion-13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/PygmalionAI/pygmalion-13b)
Thanks to WizardLM ❤ for https://huggingface.co/WizardLM/WizardLM-13B-V1.1
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
Thanks to meta-llama ❤ for https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
Thanks to meta-llama ❤ for https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
Thanks to localmodels ❤ for https://huggingface.co/localmodels/Llama-2-13B-Chat-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
Thanks to NousResearch ❤ for https://huggingface.co/NousResearch/Redmond-Puffin-13B
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/NousResearch/Redmond-Puffin-13B)
Thanks to llSourcell ❤ for https://huggingface.co/llSourcell/medllama2_7b
Thanks to MetaAI ❤ for https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/CodeLlama-7B-fp16
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-fp16
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/CodeLlama-7B-Python-fp16
Thanks to MistralAI ❤ for https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
Thanks to Gryphe ❤ for https://huggingface.co/Gryphe/MythoMax-L2-13b
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/Gryphe/MythoMax-L2-13b)
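Many of the checkpoints above are "GPTQ 4bit, 128g" quantizations. GPTQ itself uses second-order (Hessian) information to minimize quantization error, but the naming convention is easy to illustrate with the naive round-to-nearest baseline: weights are mapped to 16 integer levels, with one scale per group of 128 weights. This is a minimal sketch of that idea, not GPTQ proper:

```python
import numpy as np

def quantize_4bit_groupwise(w, group_size=128):
    """Naive round-to-nearest 4-bit quantization with per-group scales.

    GPTQ proper additionally uses Hessian information to pick better
    rounding; this baseline only shows what "4bit-128g" means.
    """
    groups = w.reshape(-1, group_size)
    # Symmetric scale: map the largest magnitude in each group to +/-7.
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int4 codes and scales."""
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=(4096,)).astype(np.float32)
q, scale = quantize_4bit_groupwise(w)
w_hat = dequantize(q, scale)
print(f"max abs reconstruction error: {np.abs(w - w_hat).max():.4f}")
```

The payoff is storage: each weight needs 4 bits plus a shared per-group scale, roughly a 4x reduction versus fp16, which is why a 13B model fits in a free Colab GPU.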
DISCLAIMER: THIS WEBSITE DOES NOT PROVIDE MEDICAL ADVICE. The information, including but not limited to text, graphics, images, and other material contained on this website, is for informational purposes only. No material on this site is intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health care provider with any questions you may have regarding a medical condition or treatment and before undertaking a new health care regimen, and never disregard professional medical advice or delay in seeking it because of something you have read on this website.