Run your LLMs Locally with Ollama
This is a quick demo of getting a fully local model up and running with Mistral 7B.
Ollama is available for macOS, Linux, and Windows; install it from the official website: https://ollama.com/download
Once it is installed, open your terminal and run the command below (the first run downloads the Mistral 7B weights):
$ ollama run mistral
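
Once the model is pulled, you can also query it programmatically. The snippet below is a minimal sanity check against Ollama's local REST API; it assumes the default port 11434 and the `requests` package, which is not necessarily listed in this repository's requirements.

```python
# Sanity check: ask the locally running Ollama server for a completion.
# Assumes Ollama's default endpoint (http://localhost:11434) and `requests`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "In one sentence, what is Ollama?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```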
Clone this repository, then run the following command from the repository root to install the dependencies:
$ pip install -r requirements.txt
The repository contains a very simple Streamlit app that demonstrates LLM chat in a web app. Start it with:
$ streamlit run chatbot_app.py
The terminal will print a localhost link; open it in your browser to use the app.
Example: http://localhost:8501
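
For reference, a minimal chat app of this kind might look like the sketch below. It assumes the `ollama` Python package and Streamlit's chat widgets; the actual chatbot_app.py in this repository may be structured differently.

```python
# Minimal sketch of a Streamlit chat UI backed by a local Ollama model.
# Illustrative only -- not necessarily the code shipped in this repository.
import ollama          # official Ollama Python client
import streamlit as st

st.title("Local Mistral Chat")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history so the page shows the full conversation.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Read new user input and query the local model.
if prompt := st.chat_input("Ask the local model anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    response = ollama.chat(model="mistral", messages=st.session_state.messages)
    reply = response["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```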