This is a 7-billion-parameter language model based on the Llama 2 architecture, fine-tuned with LoRA on Berkshire Hathaway's annual letters to shareholders from 2004 to 2021. I have also released an MLX version for optimized inference on Mac computers.
BerkshireGPT Model Card (MLX Version)
The model was trained on the annual letters to shareholders from 2004 to 2021, obtained from the Berkshire Hathaway website. The letters were preprocessed to remove any non-ASCII characters and then tokenized with the Hugging Face tokenizers library. Fine-tuning used the LoRA method and was run on Kaggle notebooks with their available GPUs; training took about an hour and a half for 300 steps.
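The preprocessing step described above (stripping non-ASCII characters before tokenization) can be sketched as follows. The function name and the whitespace cleanup are illustrative assumptions, not the actual script used for training.

```python
import re

def preprocess_letter(text: str) -> str:
    """Strip non-ASCII characters and collapse repeated spaces.

    Illustrative sketch of the preprocessing described above; the
    original script may differ in its details.
    """
    # Drop anything outside the ASCII range (curly quotes, ellipses, etc.)
    ascii_only = text.encode("ascii", errors="ignore").decode("ascii")
    # Collapse runs of spaces/tabs left behind by the removal
    return re.sub(r"[ \t]+", " ", ascii_only).strip()

print(preprocess_letter("Berkshire’s  float grew…"))
```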
The model can be used for a variety of tasks such as text generation, summarization, and question answering, either with the Hugging Face Transformers library or with MLX if you have an Apple computer.
The inference.ipynb notebook can run on any device but is optimized for CUDA GPUs.
- Added MLX support for the Gradio engine. It still has some bugs, but it works.
- The MLX LlamaIndex RAG code is from this repository.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TextStreamer

name = "dwightf/BerkshireGPT"

# 4-bit NF4 quantization so the model fits on smaller GPUs
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(name, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(name)
```
Scripts to help with using the model are available in the scripts folder, including inference and RAG scripts.
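To illustrate what the RAG scripts do (retrieve relevant letter passages and supply them to the model as context), here is a minimal keyword-overlap retriever. This is a conceptual sketch only; the actual scripts use the MLX LlamaIndex code referenced above, with embedding-based retrieval rather than word overlap.

```python
def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Return the k passages sharing the most words with the query.

    Minimal sketch of RAG's retrieval step; real retrievers rank by
    vector-embedding similarity, not raw word overlap.
    """
    q = set(query.lower().split())
    # Rank passages by how many query words they contain
    scored = sorted(
        passages,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

passages = [
    "Our insurance float grew again this year.",
    "We repurchased shares when prices were attractive.",
]
print(retrieve("how did the insurance float change", passages))
```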
With BitsAndBytes 4-bit quantization, you can run inference on a GPU with only 8 GB of memory.
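A rough back-of-the-envelope check of why 8 GB suffices for a 7B model in 4-bit precision. Actual usage also depends on activations, the KV cache, and quantization overhead, so the headroom figure is only indicative.

```python
# Rough weight-memory estimate for a 7B-parameter model at 4 bits/parameter.
params = 7e9
bytes_per_param = 0.5  # 4 bits = half a byte
weights_gib = params * bytes_per_param / 2**30
print(f"weights: ~{weights_gib:.2f} GiB")  # well under 8 GiB, leaving headroom
```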
You can run the Gradio demo with the following command:

```shell
python gradio_scripts/run.py
```
- Benchmark the model.
- Create RAG and Gradio versions optimized for Apple MLX.
I will also be releasing Colab notebooks soon, and hopefully the dataset I used.
I would like to thank the creators of this repository for helping me fine-tune the model.
I would also like to thank the creators of this repository for the RAG code.
I would also like to thank the creators of the LLM I used as a base model, found here.
If you have any questions or would like to collaborate, please feel free to reach out to me at [email protected]
This model is licensed under the MIT License and may be used for both commercial and non-commercial purposes.
This model is intended for testing and educational purposes only. It should not be used for making financial decisions and does not constitute financial advice.