> **Note**
> Work in progress 🚧
This demo showcases Typesense's conversational search features.
The dataset contains essays authored by Paul Graham, indexed in Typesense.
Questions are sent directly to Typesense, whose built-in RAG pipeline generates a conversational response using the indexed dataset as context.
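To make the flow concrete, below is a minimal sketch of what such a request could look like with the `typesense` JavaScript client. The collection name (`pg-essays`), the `embedding` field, the connection details, and the `TYPESENSE_API_KEY` variable are illustrative assumptions, not necessarily what this repo's code uses; `TYPESENSE_CONVERSATION_MODEL_ID` is covered in the Local Setup section below.

```ts
// Minimal sketch of asking Typesense's conversational (RAG) search a question.
// The collection name, field names, and connection details are illustrative
// assumptions; they are not necessarily what this repo's code uses.
import Typesense from "typesense";

const client = new Typesense.Client({
  nodes: [{ host: "localhost", port: 8108, protocol: "http" }],
  apiKey: process.env.TYPESENSE_API_KEY ?? "xyz", // assumed env var name
});

async function ask(question: string) {
  const result = await client.multiSearch.perform(
    {
      searches: [
        {
          collection: "pg-essays",     // assumed collection of indexed essays
          query_by: "embedding",       // semantic search over the essay embeddings
          exclude_fields: "embedding", // keep the response payload small
        },
      ],
    },
    {
      q: question,
      conversation: true, // turns on the built-in RAG pipeline
      conversation_model_id: process.env.TYPESENSE_CONVERSATION_MODEL_ID,
    } as any // cast in case the installed client's typings predate these params
  );

  // The generated answer comes back alongside the regular search hits.
  console.log((result as any).conversation?.answer);
}

ask("What does Paul Graham say about doing things that don't scale?");
```

Typesense runs a semantic search over the collection, feeds the top hits to the configured LLM as context, and returns the generated answer alongside the regular search results.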
## Prerequisites

- Node.js 20.x and npm
- A Typesense server. It can be hosted locally using the `docker-compose.yml` file in the repository; see the Local Setup section for instructions.
## Local Setup

- Clone the project.
- Install dependencies at the root of the project using npm.

  ```bash
  npm install
  ```
- (Optional) To run a local instance of the Typesense server using the `docker-compose.yml` config in this repository, run the following command. Note: this requires Docker to be installed on the system.

  ```bash
  docker compose up -d
  ```
-
Copy
.env.example
file and create a.env
file at the root of the project. -
- Set the values of the required environment variables in the `.env` file that was created. (Skip the `TYPESENSE_CONVERSATION_MODEL_ID` env variable for now; we'll come back to it.)
- Run the following command to create the dataset by fetching Paul Graham's essays:

  ```bash
  npm run fetchData
  ```
- Import the data into Typesense by running the following command. This command may take a while depending on the size of the data. (See the sketch after this list for roughly what this step does under the hood.)

  ```bash
  npm run indexInTypesense
  ```
- The script will output a conversation model ID. Set it as the value of `TYPESENSE_CONVERSATION_MODEL_ID` in your `.env` file.
- Now start the Next.js application.
  - For production:

    ```bash
    npm run build
    npm start
    ```

  - For development:

    ```bash
    npm run dev
    ```
- Access the application at `localhost:3000`.
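To give a rough idea of what the `npm run indexInTypesense` step and the conversation model ID are about, here is a hedged sketch of the two pieces involved: creating a collection with an auto-embedding field and registering a conversation model. Collection and field names, the embedding/LLM model choices, the `OPENAI_API_KEY` variable, and the shape of the `conversations().models().create()` call are assumptions for illustration; the repo's own scripts are the source of truth.

```ts
// Rough sketch of what indexing with auto-embedding and registering a
// conversation model can look like. All names and model choices below are
// illustrative assumptions, not the repo's actual configuration.
import Typesense from "typesense";

const client = new Typesense.Client({
  nodes: [{ host: "localhost", port: 8108, protocol: "http" }],
  apiKey: process.env.TYPESENSE_API_KEY ?? "xyz", // assumed env var name
});

async function indexEssays(essays: Array<{ title: string; text: string }>) {
  // 1. Create a collection whose "embedding" field Typesense fills in
  //    automatically from the essay text (auto-embedding).
  await client.collections().create({
    name: "pg-essays",
    fields: [
      { name: "title", type: "string" },
      { name: "text", type: "string" },
      {
        name: "embedding",
        type: "float[]",
        embed: {
          from: ["text"],
          model_config: {
            model_name: "openai/text-embedding-3-small", // assumed embedding model
            api_key: process.env.OPENAI_API_KEY ?? "",   // assumed env var
          },
        },
      },
    ],
  });

  // 2. Import the essay documents in bulk.
  await client
    .collections("pg-essays")
    .documents()
    .import(essays, { action: "upsert" });

  // 3. Register a conversation model; the ID it returns is the value to put
  //    into TYPESENSE_CONVERSATION_MODEL_ID. (Call shape and required params
  //    are assumptions; they vary across Typesense and typesense-js versions.)
  const model = await (client as any).conversations().models().create({
    model_name: "openai/gpt-4o-mini",    // assumed LLM
    api_key: process.env.OPENAI_API_KEY, // assumed env var
    system_prompt: "Answer questions using only the provided essays as context.",
    max_bytes: 16384,
  });
  console.log("Conversation model ID:", model.id);
}
```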
## License

This project is licensed under the Apache License 2.0. The dataset used consists of essays authored by Paul Graham.