This is a LlamaIndex project bootstrapped with create-llama. It acts as a full-stack UI to accompany the Retrieval-Augmented Generation (RAG) Bootstrap Application, which can be found in its own repository at https://github.com/tyrell/llm-ollama-llamaindex-bootstrap
My blog post provides more context on the motivation and thinking behind these projects.
The backend code of this application has been modified as follows:
- Loading the Vector Store Index previously created by the Retrieval-Augmented Generation (RAG) Bootstrap Application, in response to user queries submitted through the frontend UI.
  - Refer to backend/app/utils/index.py and its code comments to understand the modifications.
- Querying the index with streaming enabled.
  - Refer to backend/app/api/routers/chat.py and its code comments to understand the modifications.
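The two modifications above can be sketched roughly as shown below. This is a hypothetical illustration, assuming a llama_index 0.9-style API, an Ollama model named `mistral`, and a `./storage` directory persisted by the RAG Bootstrap Application; the actual implementation lives in backend/app/utils/index.py and backend/app/api/routers/chat.py.

```python
# Sketch only: assumes llama_index 0.9.x, a running Ollama server, and a
# ./storage directory produced by the RAG Bootstrap Application.
from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.llms import Ollama

# Use Ollama as the LLM and a local embedding model (no OpenAI key needed).
service_context = ServiceContext.from_defaults(
    llm=Ollama(model="mistral"),  # model name is an assumption
    embed_model="local",
)

# Load the previously persisted Vector Store Index instead of rebuilding it.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)

# Query the index with streaming enabled, emitting tokens as they arrive.
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("What is this document about?")
for token in response.response_gen:
    print(token, end="", flush=True)
```

Loading from storage on startup is what lets the frontend answer queries immediately, without re-indexing the source documents on every request.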
First, start up the backend as described in the backend README.
Second, run the development server of the frontend as described in the frontend README.
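Assuming the standard create-llama layout (a `backend` Python service and a `frontend` Next.js app), the startup typically looks like the sketch below; the two READMEs remain the authoritative source for the exact commands.

```shell
# Terminal 1: backend API (create-llama defaults; verify against the backend README)
cd backend
pip install -r requirements.txt
python main.py        # typically serves the API on http://localhost:8000

# Terminal 2: frontend development server (Next.js)
cd frontend
npm install
npm run dev           # serves the UI on http://localhost:3000
```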
Open http://localhost:3000 with your browser to see the result.
Licensed under the Apache License 2.0.
~ Tyrell Perera