chainlit's Issues

Error when using graphsignal with chainlit

code

import logging
import os
import sys

import chainlit as cl
import graphsignal
from dotenv import load_dotenv
from graphsignal.callbacks.llama_index import GraphsignalCallbackHandler
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext, GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.callbacks import CallbackManager
from llama_index.prompts.base import Prompt

load_dotenv()


logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

@cl.on_message
def main(message: str):
    graphsignal.configure(deployment='auto-fortune-telling-v1')
    documents = SimpleDirectoryReader("source_documents/zhou_yi").load_data()
    index = GPTVectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine()
    response = query_engine.query(message)
    cl.Message(content=str(response)).send()

error:

Unknown max input size for gpt-3.5-turbo, using defaults.
Traceback (most recent call last):
  File "src/gevent/_abstract_linkable.py", line 287, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
  File "src/gevent/_abstract_linkable.py", line 333, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
AssertionError: (None, <callback at 0x7ff711ee0540 args=([],)>)
2023-06-02T04:54:12Z <callback at 0x7ff711ee0540 args=([],)> failed with AssertionError

Integrate multiple workflows as tabs or subpaths


Idea:

  • have multiple chains / workflows running side-by-side

One tab could then be:

  • Q&A
  • Google Query
  • Image Generation with DALL·E

This would also enable rapid prototyping and side-by-side testing of two versions, without running them in two different containers.

Please add tag:

  • enhancement

Azure OpenAI support

hi again and thanks again for chainlit! 🥳

Besides OpenAI, a lot of people like me use the Azure OpenAI API (please see the langchain docs). But currently it can't easily be used in chainlit.
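
For reference, a hedged sketch of the langchain-side Azure setup (environment variable names from the openai v0.x SDK; the resource URL, key, and deployment name are placeholders):

import os

from langchain.llms import AzureOpenAI

# Azure-specific settings for the openai v0.x SDK (placeholder values)
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_KEY"] = "<your-azure-openai-key>"

# deployment_name is whatever you named the model deployment in Azure
llm = AzureOpenAI(deployment_name="my-deployment", model_name="text-davinci-003")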

Add cl.replace_message()

Oftentimes we need to update the response to a user's query, for example to show the progress (steps) of the generation before the final result arrives. It would be convenient to be able to call cl.replace_message() to replace the message(s) that have already been sent in the current response.
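
A hypothetical sketch of how the proposed API could be used; cl.replace_message does not exist today, so the name and signature below are illustrative only:

import chainlit as cl

@cl.on_message
def main(message: str):
    progress = cl.Message(content="Step 1/2: retrieving documents...")
    progress.send()
    # ... long-running generation steps ...
    # Proposed API: swap an already-sent message for new content in place
    cl.replace_message(progress, content="Done! Here is the final answer.")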

Update to real Async and Streaming

This is amazing work! Props to you! A lot of the ideas are really forward-looking, such as asking the user for an input action!

I was looking into the examples, and it seems like the current implementation is not really using asynchronous endpoints. For instance:

  1. OpenAI python SDK offers openai.ChatCompletion.acreate which is an async generator
  2. LangChain offers AsyncCallbackHandler

This is especially helpful for agents that can take a long time to run and might otherwise clog the backend.
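
For illustration, a minimal sketch of the async streaming pattern referred to above, using the openai v0.x SDK (the model name is arbitrary):

import openai

async def stream_chat(prompt: str) -> str:
    answer = ""
    # With stream=True, acreate yields chunks as an async generator
    async for chunk in await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    ):
        delta = chunk["choices"][0]["delta"]
        answer += delta.get("content", "")
    return answer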

Cheers

Issue when running with --port 8080

When running chainlit run demo_app/main.py --port 8080, I get the error below.

  File "/mnt/c/Users/User/Documents/python_projects/langchain-chainlit-docker-deployment-template/.venv/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/mnt/c/Users/User/Documents/python_projects/langchain-chainlit-docker-deployment-template/.venv/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/mnt/c/Users/User/Documents/python_projects/langchain-chainlit-docker-deployment-template/.venv/lib/python3.10/site-packages/chainlit/cli/__init__.py", line 74, in chainlit_run
    os.environ["CHAINLIT_PORT"] = port
UnboundLocalError: local variable 'os' referenced before assignment

After going through the code at https://github.com/Chainlit/chainlit/blob/main/src/chainlit/cli/__init__.py, I think the import os at line number 80 should be moved to the top of the function. Hopefully that fixes the issue. I am using chainlit = "0.2.111".
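
For context, a minimal reproduction of the underlying Python gotcha: an import statement anywhere inside a function makes that module name local to the entire function body, so any use of the name before the import raises UnboundLocalError.

import os  # module-level import; the fix is to rely on this one

def chainlit_run(port):
    if port:
        os.environ["CHAINLIT_PORT"] = port  # raises UnboundLocalError here...
    import os  # ...because this local import shadows the module-level 'os'
               # for the whole function body

chainlit_run("8080")  # demonstrates the crash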

NotImplementedError: Async generation not implemented for this LLM.

Issue Description

Problem: When attempting to use the RetrievalQA module with a custom fine-tuned Llama model and streaming enabled, the following error occurs:

NotImplementedError: Async generation not implemented for this LLM.

Steps to Reproduce

  1. Enable streaming using the provided code:
# imports assumed from transformers; model and tokenizer are defined elsewhere
from transformers import TextStreamer, pipeline

streamer = TextStreamer(tokenizer)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=2048,
    temperature=0.8,
    top_p=0.95,
    repetition_penalty=1.15,
    streamer=streamer
)
  2. Instantiate a RetrievalQA object using a custom Llama model with the from_chain_type method, specifying the necessary parameters.
from langchain.chains import RetrievalQA  # import assumed from langchain

qa_chain = RetrievalQA.from_chain_type(
    llm=llm_model,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
    verbose=True,
)
  3. Attempt to stream in chainlit using the following code:
@cl.langchain_factory(use_async=True)
def main():
    return qa_chain

Expected Behavior

I expected the streaming functionality to work with custom language models and not to encounter the NotImplementedError.

Additional Information

  • I have tested streaming with langchain in the command line, and it prints tokens correctly.
  • The error occurs specifically when using the code provided above.

Please suggest the appropriate way to achieve streaming with custom language models.
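
One hedged workaround, since the custom pipeline LLM only implements synchronous generation: opt out of the async code path by passing use_async=False to the factory. This is a sketch; whether token streaming then works depends on the LLM's callback support.

import chainlit as cl

# qa_chain is the RetrievalQA chain built in the snippet above
@cl.langchain_factory(use_async=False)
def main():
    return qa_chain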

It consumes a significant amount of computer resources.

When I start Chainlit, the CPU usage reaches 50% and memory gradually increases by 5 GB. My computer has an i7-12700 processor and 32 GB of RAM. When I close the program, approximately 6 GB of memory is released.

Message scope doesn't work with the 'page' display of Text element

When a text element with the 'page' display was sent, the message scope didn't work. Here are the repro steps:

Step 1: Run this sample code snippet

#code example
@cl.on_message
def main(message: str):
    elements = [
        cl.Text(message, name='side_text', display='side'),
        cl.Text(message, name='page_text', display='page'),
        cl.Text(message, name='inline_text', display='inline'),
    ]

    cl.send_message(content=f"Received: {message}", elements=elements)

Step 2: Send two messages

Step 3: Click the page_text link in the 2nd message.

Expected to get the output message "2nd msg: side_text, page_text, inline_text", but still got the 1st message's output.


How to remove the button under a chat message?

Hi, I would like to remove the button that shows up under a user message; in the screenshot, it's the 'Took 1 step' button I'd like to remove.

Also, when I set stream=True, the text inside the debugging/intermediate/chain-of-thought boxes does get streamed, but the final reply shown to the user doesn't. Can you please let me know how to stream the reply to the user in real time?

Permission Denied Error on Windows

When using the QA example code from the cookbook repo, uploading a file results in a Permission Denied error.

Also, there is a file size upload limit of 2 MB. Is there a way to disable this limit?

Full traceback:

$ chainlit run app.py -w 
2023-05-26 16:42:56 - Loaded .env file
2023-05-26 16:43:06 - Your app is available at http://localhost:8000
    yield from self.parser.parse(blob)
  File "c:\users\ifeanyi pc\desktop\chat-with-github-repo\env\lib\site-packages\langchain\document_loaders\base.py", line 87, in parse
    return list(self.lazy_parse(blob))
  File "c:\users\ifeanyi pc\desktop\chat-with-github-repo\env\lib\site-packages\langchain\document_loaders\parsers\pdf.py", line 16, in lazy_parse
    with blob.as_bytes_io() as pdf_file_obj:
  File "C:\Users\IFEANYI PC\.pyenv\pyenv-win\versions\3.8.10\lib\contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "c:\users\ifeanyi pc\desktop\chat-with-github-repo\env\lib\site-packages\langchain\document_loaders\blob_loaders\schema.py", line 86, in as_bytes_io   
    with open(str(self.path), "rb") as f:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\IFEANY~1\\AppData\\Local\\Temp\\tmp1_q_q2sk'

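A hedged sketch of the likely cause (illustrative, not chainlit's actual code): on Windows, a tempfile.NamedTemporaryFile cannot be reopened by path while the original handle is still open, which is exactly what a document loader receiving the temp file's path will try to do. Writing with delete=False and closing before handing the path over avoids the PermissionError:

import tempfile

def save_upload(file_bytes: bytes) -> str:
    # delete=False so the file survives close() and can be reopened by path
    tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
    try:
        tmp.write(file_bytes)
    finally:
        tmp.close()  # close first; Windows blocks reopening an open temp file
    return tmp.name  # caller should os.remove() the path when done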

Langchain v0.0.198 breaks chain of thought UI

Langchain 0.0.198 seems to break the chain-of-thought functionality when using the chainlit langchain integration: everything is flattened in the UI.
Downgrading to langchain 0.0.197 fixes this.

Running on a server emits ERROR: "auth0-spa-js must run on a secure origin."

My code is shown below:

import uuid
from dataclasses import asdict, dataclass

import requests
import chainlit as cl


@dataclass
class ChatInfo:
    owner: str
    msg: str
    unique_id: str


@cl.on_chat_start
def start():
    unique_id = str(uuid.uuid1())
    cl.user_session.set('key', unique_id)


@cl.on_message
def main(msg: str):
    # Your custom logic goes here...

    unique_id = cl.user_session.get('key')

    owner = 'seeker'
    # build the dataclass instance and serialize it for the request body
    seeker_chat_info = ChatInfo(owner=owner, msg=msg, unique_id=unique_id)
    try:
        res = requests.post(url='http://xxx/v1/chat',  # my URL
                            json=asdict(seeker_chat_info))
        print(res.status_code)
        res = res.json()
        print('res', res)
        response = res['item']['msg']
        # Send a response back to the user
        cl.send_message(content=response)
    except Exception as e:
        print(f'ERROR: {e}')
        # Send a response back to the user
        cl.send_message(content='Server ERROR')

I run it from the terminal with the command chainlit run app.py --port 8081 --host 0.0.0.0 --headless.
Up to this point everything is OK.

But when I enter my server address in the browser, nothing shows on the web page.


Feature request: Bind Actions / Ask Users to the TextArea input menu

What: Allow devs to place actions in the chat bar, like ChatGPT.

  1. Action buttons can be displayed in the chat window.

  2. Action buttons cannot currently be pinned.

Solution

One possibility is to pin them to the left of the input box

import chainlit as cl

@cl.action_callback("action_button")
def on_action(action):
    cl.Message(content=f"Executed {action.name}").send()
    # Optionally remove the action button from the chatbot user interface
    action.remove()

@cl.on_chat_start
def start():
    # Sending an action button within a chatbot message
    actions = [
        cl.Action(
          name="action_button",
          value="example_value", 
          description="Click me!",
          icon="icon-name"
        )
    ]

    cl.TextArea(actions=actions).build()

    cl.Message(content="Interact with this action button:", actions=actions).send()

You could set an icon on an action and show the action's name as alt text when hovering.

Requirements

  1. Be able to ask the user to upload a file at any point for the LLM from the text area (like Code Interpreter).
  2. Be able to trigger action at any point from the text area.

Some suggestions

This is a very excellent project that has solved real problems in the field and made writing AI applications much more convenient. I hope my suggestions will make it even better.

[ ] 1. I hope the following tags and URLs can be customized
[ ] 2. I hope to have a login page and the ability to authenticate against my own server
[ ] 3. I hope chat can have multiple sessions
[ ] 4. Internationalized language support
[x] 5. Adapt to mobile UI

Cannot run on macOS

I am using a MacBook Pro (M2) with Ventura 13.3.1.
After installing chainlit, when I run chainlit hello, this is the error I get:

dhirajkhanna@Dhirajs-MacBook-Pro chainlit % chainlit hello
2023-06-04 13:07:49 - Created default config file at /Users/dhirajkhanna/Documents/chainlit/.chainlit/config.toml
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/bin/chainlit", line 5, in <module>
    from chainlit.cli import cli
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chainlit/__init__.py", line 8, in <module>
    monkey.patch()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chainlit/lc/monkey.py", line 9, in patch
    import langchain
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/agents/__init__.py", line 2, in <module>
    from langchain.agents.agent import (
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 16, in <module>
    from langchain.agents.tools import InvalidTool
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/agents/tools.py", line 8, in <module>
    from langchain.tools.base import BaseTool, Tool, tool
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/tools/__init__.py", line 36, in <module>
    from langchain.tools.playwright import (
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/tools/playwright/__init__.py", line 3, in <module>
    from langchain.tools.playwright.click import ClickTool
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/tools/playwright/click.py", line 11, in <module>
    from langchain.tools.playwright.base import BaseBrowserTool
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/tools/playwright/base.py", line 15, in <module>
    from playwright.async_api import Browser as AsyncBrowser
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/playwright/async_api/__init__.py", line 25, in <module>
    import playwright.async_api._generated
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/playwright/async_api/_generated.py", line 25, in <module>
    from playwright._impl._accessibility import Accessibility as AccessibilityImpl
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/playwright/_impl/_accessibility.py", line 17, in <module>
    from playwright._impl._connection import Channel
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/playwright/_impl/_connection.py", line 35, in <module>
    from pyee import EventEmitter
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyee/__init__.py", line 120, in <module>
    from pyee.trio import TrioEventEmitter as _TrioEventEmitter  # noqa
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pyee/trio.py", line 7, in <module>
    import trio
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/trio/__init__.py", line 18, in <module>
    from ._core import (
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/trio/_core/__init__.py", line 27, in <module>
    from ._run import (
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/trio/_core/_run.py", line 2458, in <module>
    raise NotImplementedError("unsupported platform")
NotImplementedError: unsupported platform

SSL error using openai

Hi,

I am executing the example code below in a poetry environment with Python 3.11.3, but I am getting an SSL error. When I run the openai script directly, without Chainlit, I don't see the error; it appears only when the user sends a message and Chainlit tries to connect to openai. I tried with and without a VPN. I have also set cert.pem, which usually resolves SSL errors with other libraries, but here the error appears only when I use Chainlit. Please help me out!

code:
import chainlit as cl
import openai
import os
from dotenv import load_dotenv
import certifi

load_dotenv()

os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()

openai.api_key = os.getenv("OPENAI_API_KEY")
openai.verify_ssl_certs = False

prompt = """SQL tables (and columns):

  • Customers(customer_id, signup_date)
  • Streaming(customer_id, video_id, watch_date, watch_minutes)

A well-written SQL query that {input}:


model_name = "text-davinci-003"

settings = {
    "temperature": 0,
    "max_tokens": 500,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": ["```"]
}

@cl.on_message
async def main(message: str):
    formatted_prompt = prompt.format(input=message)

    # Prepare the message for streaming
    msg = cl.Message(
        content="",
        language="sql",
        prompt=formatted_prompt,
        llm_settings=cl.LLMSettings(model_name=model_name, **settings),
    )

    async for stream_resp in await openai.Completion.acreate(
        model=model_name, prompt=formatted_prompt, stream=True, **settings
    ):
        token = stream_resp.get("choices")[0].get("text")
        await msg.stream_token(token)

    await msg.send()

Error:
2023-06-15 09:11:03 - Loaded .env file
2023-06-15 09:11:04 - Your app is available at http://localhost:8000
2023-06-15 09:11:09 - Error communicating with OpenAI
Traceback (most recent call last):
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1112, in create_connection
    transport, protocol = await self._create_connection_transport(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1145, in _create_connection_transport
    await waiter
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
    future.result()
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 574, in _on_handshake_complete
    raise handshake_exc
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 556, in _do_handshake
    self._sslobj.do_handshake()
  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 979, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 668, in arequest_raw
    result = await session.request(**request_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
    conn = await self._connector.connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
    raise last_exc
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
    raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)')]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/chainlit/__init__.py", line 60, in wrapper
    return await user_function(**params_values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "app2.py", line 44, in main
    async for stream_resp in await openai.Completion.acreate(
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/openai/api_resources/completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 372, in arequest
    result = await self.arequest_raw(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ub/Library/CloudStorage/OneDrive/mac/Projects/04Pycodes/langchain/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 685, in arequest_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
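
A hedged observation: REQUESTS_CA_BUNDLE is honored only by the requests library, while the traceback shows the async openai client going through aiohttp, which builds its SSL context from Python's ssl defaults. Pointing SSL_CERT_FILE at certifi's bundle before any connection is opened may therefore help:

import os

import certifi

# SSL_CERT_FILE is read by Python's ssl module when creating default contexts,
# which is the path aiohttp (used by openai's async client) goes through
os.environ["SSL_CERT_FILE"] = certifi.where()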

Really bad memory leak

Version: 0.2.109

When I start chainlit with a completely empty file, there is constant CPU load and RAM usage climbs very quickly; after a few minutes the Python process fills up all RAM.
This only seems to happen in watch mode (-w).

Steps to reproduce:

  • Create an empty file app.py
  • Start chainlit with the command below.
chainlit run app.py -w

Speech to text

I'm experimenting with Chainlit and it is awesome. Kudos to you all.

Future feature request: I would like to incorporate speech to text in the user interface for my application.

Please label as enhancement.

Answers are always returned in English when doing document QA

Hello everyone! Although my document is in Spanish, I have changed the system_template variable in the document_qa.py demo, and my question is also in Spanish, the answers are still returned in English. Thanks for your help and for such a great library!
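
A hedged idea to try: retrieval QA prompts tend to bias models toward English, so making the language requirement explicit and emphatic in the system template sometimes helps. The template body below is illustrative, not the demo's actual text:

system_template = """Use the following pieces of context to answer the user's question.
Answer ALWAYS in Spanish, even if the context or the question is in another language.
----------------
{context}"""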

"Waiting for thread pool to idle before working"?

Hey folks, I'm on macOS Ventura 13.4 and Chainlit 0.2.109. Whenever I run the following command:

chainlit hello

I get a stream of the following error repeatedly (I can't even Ctrl-C out of it):

E0603 21:07:29.252949000 8303107584 thread_pool.cc:230]                Waiting for thread pool to idle before forking

Any chance anyone knows what this means?

[Bug] Getting spammed with SSL handshake errors when running

The web application runs, but on the terminal I am getting spammed with SSL Certificate Verify Failed errors.

0608 23:35:42.397180000 6108459008 ssl_transport_security.cc:1420] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.

Enable using Chainlit as a Streamlit component

I wonder if we could enable using the Chainlit chatbot UI as a Streamlit component, in addition to using it as a standalone app, so users could leverage their existing Streamlit knowledge to integrate Chainlit into their Streamlit apps.

OpenAI communication error

I don't know why, but I'm suddenly experiencing communication failures with OpenAI; this has been the case from version 0.3 to 0.4. I'm not using langchain, just pure Python. I've checked my key and it should work normally. I have my VPN turned on.

AttributeError: module 'select' has no attribute 'epoll'.

Hey, I'm not able to run chainlit on my computer.

I just grabbed the demo code:

import chainlit as cl


@cl.on_message  # this function will be called every time a user inputs a message in the UI
def main(message: str):
    # this is an intermediate step
    cl.Message(author="Tool 1", content=f"Response from tool1", indent=1).send()

    # send back the final answer
    cl.Message(content=f"This is the final answer").send()

And run:

❯ chainlit run demo.py -w 
2023-06-02 14:48:47 - Created default config file at /home/pedro/dev/freela/ai-langchain-chatgpt-bot/.chainlit/config.toml
Traceback (most recent call last):
  File "/home/pedro/.local/bin/chainlit", line 5, in <module>
    from chainlit.cli import cli
  File "/home/pedro/.local/lib/python3.10/site-packages/chainlit/__init__.py", line 8, in <module>
    monkey.patch()
  File "/home/pedro/.local/lib/python3.10/site-packages/chainlit/lc/monkey.py", line 9, in patch
    import langchain
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 2, in <module>
    from langchain.agents.agent import (
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 15, in <module>
    from langchain.agents.tools import InvalidTool
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/agents/tools.py", line 8, in <module>
    from langchain.tools.base import BaseTool, Tool, tool
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/tools/__init__.py", line 27, in <module>
    from langchain.tools.playwright import (
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/tools/playwright/__init__.py", line 3, in <module>
    from langchain.tools.playwright.click import ClickTool
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/tools/playwright/click.py", line 11, in <module>
    from langchain.tools.playwright.base import BaseBrowserTool
  File "/home/pedro/.local/lib/python3.10/site-packages/langchain/tools/playwright/base.py", line 15, in <module>
    from playwright.async_api import Browser as AsyncBrowser
  File "/home/pedro/.local/lib/python3.10/site-packages/playwright/async_api/__init__.py", line 25, in <module>
    import playwright.async_api._generated
  File "/home/pedro/.local/lib/python3.10/site-packages/playwright/async_api/_generated.py", line 25, in <module>
    from playwright._impl._accessibility import Accessibility as AccessibilityImpl
  File "/home/pedro/.local/lib/python3.10/site-packages/playwright/_impl/_accessibility.py", line 17, in <module>
    from playwright._impl._connection import Channel
  File "/home/pedro/.local/lib/python3.10/site-packages/playwright/_impl/_connection.py", line 35, in <module>
    from pyee import EventEmitter
  File "/home/pedro/.local/lib/python3.10/site-packages/pyee/__init__.py", line 73, in <module>
    from pyee._trio import TrioEventEmitter  # noqa
  File "/home/pedro/.local/lib/python3.10/site-packages/pyee/_trio.py", line 4, in <module>
    import trio
  File "/home/pedro/.local/lib/python3.10/site-packages/trio/__init__.py", line 18, in <module>
    from ._core import (
  File "/home/pedro/.local/lib/python3.10/site-packages/trio/_core/__init__.py", line 27, in <module>
    from ._run import (
  File "/home/pedro/.local/lib/python3.10/site-packages/trio/_core/_run.py", line 2452, in <module>
    from ._io_epoll import EpollIOManager as TheIOManager
  File "/home/pedro/.local/lib/python3.10/site-packages/trio/_core/_io_epoll.py", line 188, in <module>
    class EpollIOManager:
  File "/home/pedro/.local/lib/python3.10/site-packages/trio/_core/_io_epoll.py", line 189, in EpollIOManager
    _epoll = attr.ib(factory=select.epoll)
AttributeError: module 'select' has no attribute 'epoll'. Did you mean: 'poll'?

Am I missing something here?

Problem with user session variables

It seems impossible to create user session variables.

This code snippet:

from chainlit import user_session
user_session.set("chat_history", "a b c")
print(f"Chat History: {user_session.get('chat_history')}")

produces: Chat History: None
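
A hedged explanation sketch: cl.user_session is scoped to a single websocket session, so it generally only works inside chainlit callbacks; at module import time (where the snippet above runs) there is no active session to write to.

import chainlit as cl

@cl.on_chat_start
def start():
    # Inside a callback an active session exists, so set/get should round-trip
    cl.user_session.set("chat_history", "a b c")
    print(f"Chat History: {cl.user_session.get('chat_history')}")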

Add drop-down button

Sometimes it's convenient to have a drop-down button with a list of available options, e.g., a "generate image" button whose default option is StableDiffusion 1.5 (or the last model the user used), while still letting the user pick one of the other models to generate with.
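
A hypothetical sketch of the requested widget; cl.Select does not exist in chainlit, and every name below is illustrative only:

import chainlit as cl

@cl.on_chat_start
def start():
    # Hypothetical drop-down element: not a real chainlit API
    model_picker = cl.Select(
        name="image_model",
        label="Generate image with",
        options=["StableDiffusion 1.5", "StableDiffusion XL", "DALL·E"],
        default="StableDiffusion 1.5",  # or the last model the user used
    )
    cl.Message(content="Pick a model to generate with:", elements=[model_picker]).send()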

Streaming=True not working when I integrate Langchain

import os
from langchain import PromptTemplate, OpenAI, LLMChain
import chainlit as cl

#os.environ["OPENAI_API_KEY"] = "YOUR_OPEN_AI_API_KEY"

template = """Question: {question}

Answer: Let's think step by step."""

llm = OpenAI(temperature=0, streaming=True)

@cl.langchain_factory
def factory():
    prompt = PromptTemplate(template=template, input_variables=["question"])
    llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)
    return llm_chain
