
cognix's Introduction

🤖 CogniX AI Chatbot 🤖


Hey there! We're Gian and Michael, and we're excited to have you here!

CogniX is an AI-powered chatbot solution that leverages Retrieval-Augmented Generation (RAG) technology, letting you deploy intelligent chatbots for sales, support, and customer engagement with minimal setup.

Features

A versatile chatbot that can be integrated into any website, capable of:

  • Talking about the site's products or services
  • Handling customer support
  • Talking about custom topics as instructed

Able to:

  • Book an appointment
  • Collect leads
  • Create a ticket
  • Connect to a real agent
  • Look up an order's status
  • Send an email

For detailed documentation and setup guides, visit CogniX Docs. If you're not familiar with RAG (retrieval-augmented generation), watch this video.

Contributing

We need help in various areas, including web development (React), AI improvement, and Kubernetes/CICD pipelines.

We are specifically looking for developers skilled in:

  • React
  • Go
  • Python
  • DevOps

If you want to dedicate part of your time to help us create an innovative solution, you will be compensated with virtual checks that you can redeem once we secure early-stage funding.
We are actively applying to all the major early-stage funds, such as YC, Sequoia ARC, and several more.

If you think we're a good fit for each other, we're eager to discuss a co-founder position.
This is a great opportunity to join us on this exciting journey. Join the Cognix Group on Slack.


Where We Are At

Watch the video to quickly get a glimpse of where we are and what we are doing next.

Watch the video

What we need to do next

  • Finalize the UI and fix all UI-related bugs
  • Create a generic chatbot plugin that will connect to the CogniX API (task here)
  • Create a chatbot plug-in for WordPress
  • All UI work here
  • All back-end work here
  • All DevOps work here
  • Fix all bugs

Documentation

Funding

In our pitch deck video, we mentioned that we are seeking $1 million in funding for a 9% equity stake in the company. This investment will be crucial, with roughly 80% allocated for project development and 20% for marketing and sales. We are looking for venture capital and also considering crowdfunding. If you're interested, you can support us by purchasing a portion of this 9% stake.

Future Vision

Gian and Michael, the co-founders, are conducting interviews with CXOs and decision-makers to identify market needs and refine our offerings. As we progress, CogniX will evolve to address more complex use cases and expand its impact across various industries.

For more detailed information, visit the CogniX Documentation.

cognix's People

Contributors

apaladiychuk, apcognixch, gsantopaolo, noelhermans, nowelh, olehlitvin, olehlitvinpecode, vadym-maslovskyi, vadymmaslovskyipecode


cognix's Issues

stream chat specs

The user can interact with CogniX and start a new chat in two different ways (this will be expanded in the future):

  1. Chat with your documents (requires_llm = true)
  2. Search only (requires_llm = false)

The user asks a question (query): "How do I get attention in Collaboard?"

The server receives the chat.

The business logic methods are as follows:

method name: stream_response
argument: query (string)
returns: the response stream

stream_response calls vector_search.

method name: vector_search
argument: query (string)
returns: a list of documents

vector_search performs a query against Milvus, filtered by userid and tenantid.
The query result will contain all the matching documents inside Milvus.

If requires_llm == true:
It retrieves the information regarding the Persona, prompt, and LLM.

The top X documents retrieved from the Milvus query are sent to the LLM (LLM endpoint + eventually api_key), embedded into the prompt.

The LLM streams the answer, which is streamed back to the client.

If requires_llm == false:
Only the list of documents is sent back.

The chat and response are saved into the chat history.
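To make the flow concrete, here is a minimal Python sketch of how stream_response and vector_search could fit together. It assumes a pymilvus collection and an OpenAI-compatible endpoint; the collection name, field names, embedding model, and inline prompt are placeholders, not the actual CogniX implementation.

```python
# Hedged sketch of the stream chat flow described above; names are illustrative only.
from pymilvus import connections, Collection
from openai import OpenAI

connections.connect(host="localhost", port="19530")
collection = Collection("documents")       # assumed collection name
llm = OpenAI()                             # LLM endpoint + eventually api_key

def vector_search(query: str, user_id: str, tenant_id: str, top_k: int = 5) -> list[str]:
    """Query Milvus, filtered by user id and tenant id, and return matching documents."""
    embedding = llm.embeddings.create(model="text-embedding-3-small", input=query).data[0].embedding
    hits = collection.search(
        data=[embedding],
        anns_field="embedding",
        param={"metric_type": "COSINE"},
        limit=top_k,
        expr=f'user_id == "{user_id}" && tenant_id == "{tenant_id}"',
        output_fields=["content"],
    )
    return [hit.entity.get("content") for hit in hits[0]]

def stream_response(query: str, user_id: str, tenant_id: str, requires_llm: bool):
    """Yield either the raw document list (search only) or a streamed LLM answer."""
    docs = vector_search(query, user_id, tenant_id)
    if not requires_llm:
        yield docs                          # requires_llm == false: only the document list
        return
    # Persona/prompt/LLM settings would be looked up here; a plain prompt stands in for them.
    prompt = "Answer using the context below.\n\nContext:\n" + "\n".join(docs) + f"\n\nQuestion: {query}"
    stream = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:                    # stream the answer back to the client
        yield chunk.choices[0].delta.content or ""
```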

Access Milvus from outside

Hi Noel,
For development purposes only (this will not be allowed in prod) we need to access Milvus as we do on our Hetzner server.
From our local PCs we need to be able to connect to Milvus and perform insert, select, etc.


can you help, please?
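For reference, a quick connectivity check from a developer's PC might look like the sketch below, assuming Milvus has been exposed locally (for example via `kubectl port-forward svc/milvus 19530:19530`); the alias and service name are assumptions, and this is dev only, never prod.

```python
# Minimal dev-only connectivity check against a locally forwarded Milvus instance.
from pymilvus import connections, utility

connections.connect(alias="dev", host="127.0.0.1", port="19530")
print(utility.list_collections(using="dev"))   # once connected, insert/select work as usual
```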

Define the queue type for NATS

We need to define:

  • each message shall be taken by only one of the listeners
  • the number of retries before a message is sent to the dead letter queue
  • dead letter persistence (set a max time?)
  • persistence with JetStream
  • serialization: Protobuf if possible
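As a starting point, here is a hedged sketch of one possible JetStream setup covering those points, using the nats-py client; the stream name, subjects, and retry values are placeholders, not decisions.

```python
# Sketch of a JetStream queue-group consumer with bounded redeliveries (nats-py).
import asyncio
import nats
from nats.js.api import ConsumerConfig

async def main():
    nc = await nats.connect("nats://localhost:4222")
    js = nc.jetstream()

    # Persistence with JetStream: messages are stored in a stream.
    await js.add_stream(name="COGNIX", subjects=["cognix.>"])

    async def handler(msg):
        await msg.ack()                     # explicit ack so redeliveries can be counted

    # queue="workers" -> each message is taken by only one of the listeners;
    # max_deliver=3   -> retries before JetStream stops redelivering
    #                    (a MAX_DELIVERIES advisory can then feed a dead-letter flow).
    await js.subscribe(
        "cognix.tasks",
        queue="workers",
        durable="workers",
        cb=handler,
        config=ConsumerConfig(max_deliver=3, ack_wait=30),
    )
    await asyncio.sleep(3600)               # keep the subscriber alive

asyncio.run(main())
```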

ConfigMap auto update

In our containers we will have different config map files, and also some globals.

From the back office we will list all the config maps (via the K8s API) and then select which one to edit.
The UI will be able to save config map changes back.

Once the changes are applied, the config map reloader operator will perform rolling upgrades of all the pods that carry the annotation to listen for the config map file change.

Talk with @noelhermans about this, pls
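A rough sketch of the back-office side, using the official Kubernetes Python client: list the ConfigMaps via the K8s API, then save an edited one back. The namespace, ConfigMap name, and data key are placeholders; the chosen reloader operator then handles the rolling restart of annotated pods.

```python
# List ConfigMaps for selection in the UI, then patch one with the edited values.
from kubernetes import client, config

config.load_kube_config()                       # or load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# List all ConfigMaps so the UI can show them for selection.
for cm in v1.list_namespaced_config_map(namespace="cognix").items:
    print(cm.metadata.name)

# Save an edited value back; pods annotated for the reloader operator pick up the change.
v1.patch_namespaced_config_map(
    name="connector-config",
    namespace="cognix",
    body={"data": {"CHUNKING_STRATEGY": "STATIC"}},
)
```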

chat api

finalize all chat apis as discussed

Pluggable chunking

Create two classes that inherit from the same interface.
Create a config map param:
Key: CHUNKING_STRATEGY
Values: STATIC, LLM

Implement the static one first:
It chunks the text into pieces of a fixed length (param coming from the config map).

Key: CHUNKING_STATIC_CHARS
Value: (int)

Key: CHUNKING_STATIC_CHARS_OVERLAP
Value: (int)

LLM:
It takes a big chunk (compatible with the LLM context length).
Define the max text size in the LLM table.

Then query the LLM with a prompt (ask Gian for the right prompt).
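A minimal Python sketch of the pluggable chunking described above, assuming the config map values arrive as environment variables; class and helper names are illustrative, not the final design.

```python
# Strategy selection from the config map, with the static chunker implemented first.
import os
from abc import ABC, abstractmethod

class Chunker(ABC):
    """Common interface that both chunking strategies inherit from."""
    @abstractmethod
    def chunk(self, text: str) -> list[str]: ...

class StaticChunker(Chunker):
    """Chunks text into fixed-length pieces with a fixed overlap."""
    def __init__(self, size: int, overlap: int):
        self.size, self.overlap = size, overlap

    def chunk(self, text: str) -> list[str]:
        step = max(self.size - self.overlap, 1)
        return [text[i : i + self.size] for i in range(0, len(text), step)]

def make_chunker() -> Chunker:
    """Pick the strategy from the config map (STATIC or LLM)."""
    if os.environ.get("CHUNKING_STRATEGY", "STATIC") == "STATIC":
        return StaticChunker(
            size=int(os.environ.get("CHUNKING_STATIC_CHARS", "1000")),
            overlap=int(os.environ.get("CHUNKING_STATIC_CHARS_OVERLAP", "100")),
        )
    raise NotImplementedError("LLM chunking: send a large chunk to the LLM with a splitting prompt")
```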

Tenant registration user story

The user should register using the endpoint https://github.com/gen-mind/ibv-docs/blob/main/IBV.Auth/Authenticating-using-local-accounts.md#user-registration

After registration is complete, the UI should send a request to api/tenant/registration. The user and tenant data will be sent in the authentication token received from the Auth service.
The backend should check whether the tenant already exists.
The backend should prepare the database tables for work:
add tenant
add user
add predefined persona for the tenant
add embedding models for the tenant
configure the API key (at least add the OpenAI API key)

Questions:
We need to clarify the use case when the tenant is already registered.
Will the frontend communicate with the Auth service directly?
Should the admin invite new users,
or must the backend configure the user at first login?
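Purely as an illustration of the steps above (the actual backend is likely Go), a Python sketch of the registration flow could look like this; every helper name here is a hypothetical placeholder.

```python
# Hypothetical handler for api/tenant/registration, driven by the auth-token claims.
def register_tenant(token_claims: dict) -> None:
    tenant_name = token_claims["tenant"]
    user_email = token_claims["email"]

    if tenant_exists(tenant_name):                # backend checks if the tenant already exists
        attach_user(tenant_name, user_email)      # open question: or should an admin invite instead?
        return

    tenant = add_tenant(tenant_name)              # prepare the database tables for work
    add_user(tenant, user_email)
    add_predefined_persona(tenant)
    add_embedding_models(tenant)
    configure_api_key(tenant, provider="openai")  # at least add the OpenAI API key
```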

trace amount of gb used by the user

We need to track the amount of GB/TB each user consumes across the relational database, the vector database, and object storage.

We need to store the maximum storage allowed in the user profile and in the tenant profile.

Beyond that limit, users start paying an amount per GB.
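A small Python sketch of the metering idea: sum usage across the three stores and compare it with the maximum allowed on the profile. The per-store helpers are placeholders for whatever queries end up providing those numbers.

```python
# Storage metering sketch: usage across stores vs. the profile's allowed maximum.
GB = 1024 ** 3

def storage_used_bytes(user_id: str) -> int:
    """Total bytes the user consumes across relational DB, vector DB, and object storage."""
    return relational_db_bytes(user_id) + vector_db_bytes(user_id) + object_storage_bytes(user_id)

def billable_gb(user_id: str, max_storage_gb: int) -> float:
    """GB above the included quota, which the user pays per GB."""
    over = storage_used_bytes(user_id) - max_storage_gb * GB
    return max(over, 0) / GB
```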

Slim

Introduce SlimToolkit in our pipeline for Go and Python.

Chat AI response is not displayed

The AI response is not displayed after a send-message request.
The backend sends several events:
message
error
documents

documents – still fake data
error – sent if there is an error requesting data from Milvus
message – the response from the AI

write design specs for connectors

Have a look at the competitors' connectors to get an idea.

Write design specs for the connectors.
My requirements:

  • all config params read from the config map
  • use OpenTelemetry for distributed tracing (we need to pass the context to NATS) and metrics
  • event-driven design
  • a base connector class from which every connector inherits
  • there are personal connectors and tenant connectors

After the design is ready, work on #3. A rough shape of the base class is sketched below.
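This is only a possible shape for the base connector, showing event-driven publishing over NATS with OpenTelemetry trace-context propagation via message headers; the config key, subject name, and method names are assumptions, not the final spec.

```python
# Base connector sketch: config from env (config map), OTel context injected into NATS headers.
import os
from abc import ABC, abstractmethod
from opentelemetry import trace, propagate
import nats

tracer = trace.get_tracer("cognix.connector")

class BaseConnector(ABC):
    """Base class every connector (personal or tenant-level) inherits from."""

    def __init__(self, tenant_id: str, user_id: str | None = None):
        self.tenant_id = tenant_id
        self.user_id = user_id                     # set only for personal connectors
        self.nats_url = os.environ["NATS_URL"]     # all config params read from the config map

    @abstractmethod
    async def fetch_documents(self):
        """Yield raw document payloads (bytes) from the external source."""

    async def run(self):
        nc = await nats.connect(self.nats_url)
        js = nc.jetstream()
        async for doc in self.fetch_documents():
            with tracer.start_as_current_span("connector.publish"):
                headers = {}
                propagate.inject(headers)          # pass the trace context to NATS
                await js.publish("cognix.documents", doc, headers=headers)
        await nc.drain()
```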

incorrect token format

If a user browses to
rag.cognix.ch
this is the first error they see as soon as they arrive :)
