Upload a PDF and ask questions about the information inside it.
NOTE: The models used are from Meta's Llama family and run on your own machine. Everything is completely free.
Docker-based installation is simple. The setup bundles Ollama (the Llama model runner) with a Streamlit-based application that you interact with through the browser.
foo@bar:~$ docker compose build app
foo@bar:~$ docker compose up
NOTE: Open http://localhost:8501 in the browser. On the first run it will take a long time to download the models.
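For reference, a compose file for such a two-service setup might look like the minimal sketch below. The service names, image tag, volume name, and port mappings here are assumptions for illustration, not the repository's actual file:

```yaml
services:
  ollama:
    image: ollama/ollama           # official Ollama image
    ports:
      - "11434:11434"              # Ollama's default API port
    volumes:
      - ollama_data:/root/.ollama  # persist downloaded models across restarts
  app:
    build: .                       # the Streamlit application
    ports:
      - "8501:8501"                # Streamlit's default port
    depends_on:
      - ollama
volumes:
  ollama_data:
```

Persisting `/root/.ollama` in a named volume is what keeps the model download from repeating on every `docker compose up`.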
The components used in this repository are:
- Python
  - The programming language used to implement the application.
- LangChain
  - A Python library that provides abstractions and tools for interacting with large language models.
- Chroma
  - A vector database that stores the documents and embeddings generated from the PDF.
- Streamlit
  - The UI for uploading PDFs and chatting with a large language model such as Llama 3.
- Ollama
  - The tool that runs the LLM on the local machine.
- Docker
  - Runs Streamlit and Ollama inside containers.
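Together these components form a retrieval-augmented generation (RAG) flow: the PDF text is split into chunks, each chunk is embedded and stored, and at question time the most similar chunk is retrieved and placed into the LLM prompt. The toy sketch below illustrates that flow end to end with a hash-based stand-in for the real embedding model (the actual app uses Ollama for embeddings and Chroma as the vector store); every name and chunk string here is illustrative only:

```python
import hashlib
import math

def embed(text: str, dims: int = 32) -> list[float]:
    """Deterministic bag-of-words stand-in for a real embedding model."""
    vec = [0.0] * dims
    for word in text.lower().split():
        word = word.strip(".,?!")
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(question: str, store: list[tuple[str, list[float]]]) -> str:
    """Return the stored chunk most similar to the question."""
    q = embed(question)
    return max(store, key=lambda item: cosine(q, item[1]))[0]

# Chunks as they might be extracted from an uploaded PDF and embedded
# (in the real app this index lives in Chroma).
chunks = [
    "Llama 3 is a family of open-weight language models from Meta.",
    "Streamlit renders the chat interface in the browser.",
    "Ollama downloads and runs the models locally.",
]
store = [(c, embed(c)) for c in chunks]

best = retrieve("Which tool runs the models locally?", store)
# The retrieved chunk becomes the context for the LLM prompt.
prompt = f"Answer using this context:\n{best}\n\nQuestion: Which tool runs the models locally?"
print(best)
```

Swapping the stand-in `embed` for a real embedding model and the in-memory list for Chroma gives the shape of the pipeline the components above implement.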