
discord-llm-chatbot's Introduction

llmcord.py

Talk to LLMs with your friends!

llmcord.py lets you (and your friends) chat with LLMs directly in Discord. It works with practically any LLM, remote or locally hosted.

Features

Reply-based chat system

Just @ the bot to start a conversation and reply to continue. Build conversations with reply chains! (A sketch of how a reply chain can be walked follows the lists below.)

You can do things like:

  • Continue your own conversation or someone else's
  • "Rewind" a conversation by simply replying to an older message
  • @ the bot while replying to any message in your server to ask a question about it

Additionally:

  • Back-to-back messages from the same user are automatically chained together. Just reply to the latest one and the bot will see all of them.
  • You can seamlessly move any conversation into a thread. Just create a thread from any message and @ the bot inside to continue.
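For illustration, here is a minimal sketch of how a reply chain can be walked with discord.py; the function name and the limit are illustrative, and this is not necessarily llmcord.py's exact logic:

import discord

async def collect_reply_chain(message: discord.Message, limit: int = 20) -> list[discord.Message]:
    # Follow message.reference links backwards until the chain ends or the limit is hit.
    chain = [message]
    current = message
    while current.reference is not None and len(chain) < limit:
        ref = current.reference.resolved
        if ref is None and current.reference.message_id is not None:
            # The referenced message may be missing from the cache; fetch it explicitly.
            ref = await current.channel.fetch_message(current.reference.message_id)
        if not isinstance(ref, discord.Message):
            break  # deleted or unresolvable reference
        chain.append(ref)
        current = ref
    chain.reverse()  # oldest first, ready to be converted into LLM messages
    return chain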

Choose any LLM

llmcord.py supports remote models from the OpenAI API, Mistral API, Anthropic API, and many more thanks to LiteLLM.

Or run a local model with ollama, oobabooga, Jan, LM Studio or any other OpenAI-compatible API server.
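For illustration, some plausible "llm" values under these two modes (hedged examples; the exact names depend on your provider and LiteLLM's documentation):

"llm": "mistral/mistral-large-latest"   (remote Mistral API via LiteLLM)
"llm": "ollama/llama3"                  (locally hosted ollama via LiteLLM)
"llm": "local/openai/model"             (local OpenAI-compatible server)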

And more:

  • Supports image attachments when using a vision model (like gpt-4o, claude-3, llava, etc.)
  • Supports text file attachments (.txt, .py, .c, etc.)
  • Customizable system prompt
  • DM for private access (no @ required)
  • User identity aware (OpenAI API only)
  • Streamed responses (turns green when complete, automatically splits into separate messages when too long)
  • Displays helpful user warnings when appropriate (like "Only using last 20 messages" when the customizable message limit is exceeded)
  • Caches message data in a size-managed (no memory leaks) and mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls (see the sketch after this list)
  • Fully asynchronous
  • 1 Python file, ~200 lines of code
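A minimal sketch of that caching idea, assuming an OrderedDict keyed by message ID; the names and the size cap are illustrative, not the actual llmcord.py internals:

import asyncio
from collections import OrderedDict

MAX_CACHED_NODES = 500  # illustrative size cap

msg_nodes: OrderedDict = OrderedDict()  # message ID -> cached message data
msg_nodes_lock = asyncio.Lock()

async def cache_msg_node(msg_id: int, node: dict) -> None:
    async with msg_nodes_lock:  # mutex-protected: no race conditions
        msg_nodes[msg_id] = node
        msg_nodes.move_to_end(msg_id)  # mark as most recently used
        while len(msg_nodes) > MAX_CACHED_NODES:
            msg_nodes.popitem(last=False)  # size-managed: evict the oldest entry

async def get_msg_node(msg_id: int) -> dict | None:
    async with msg_nodes_lock:
        return msg_nodes.get(msg_id)  # a cache hit avoids a Discord API call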

Instructions

Before you start, install Python and clone this git repo.

  1. Install Python requirements: pip install -U -r requirements.txt

  2. Create a copy of "config-example.json" named "config.json" and set it up (see below)

  3. Create a copy of "example.env" named ".env" and add API keys as needed for your LiteLLM supported providers

  4. Run the bot: python llmcord.py (the invite URL will print to the console)

LLM settings:

  • llm: For LiteLLM supported providers (OpenAI API, Mistral API, ollama, etc.), follow the LiteLLM instructions for model name formatting.

    For local, OpenAI-compatible API servers (oobabooga, Jan, LM Studio, etc.), set to local/openai/model (or local/openai/vision-model if using a vision model). Some setups instead require local/openai/<MODEL_NAME>, where <MODEL_NAME> is the exact name of the model you're using.

  • local_server_url: The URL of your local API server. Only applicable when "llm" starts with local/. (Default: http://localhost:5000/v1)

  • extra_api_parameters: Extra API parameters for your LLM. Add more entries as needed. (Default: max_tokens=4096, temperature=1.0)

  • system_prompt: Write anything you want to customize the bot's behavior!

Discord settings:

  • bot_token: Create a new Discord bot at discord.com/developers/applications and generate a token under the "Bot" tab. Also enable "MESSAGE CONTENT INTENT".

  • client_id: Found under the "OAuth2" tab of the Discord bot you just made.

  • status_message: Set a custom message that displays on the bot's Discord profile. Max 128 characters.

  • allowed_channel_ids: A list of Discord channel IDs where the bot can send messages. Leave empty to allow all channels.

  • allowed_role_ids: A list of Discord role IDs that can use the bot. Leave empty to allow everyone. Specifying at least one role also disables DMs.

  • max_text: The maximum amount of text allowed in a single message, including text from file attachments. (Default: 100,000)

  • max_images: The maximum number of image attachments allowed in a single message. Only applicable when using a vision model. (Default: 5)

  • max_messages: The maximum number of messages allowed in a reply chain. (Default: 20)
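For illustration, a hedged sketch of a filled-in config.json with placeholder values; the "llm_settings" grouping follows the snippet quoted in an issue further down this page, while the "discord_settings" key is an assumption, so check config-example.json for the authoritative layout:

{
    "discord_settings": {
        "bot_token": "YOUR_BOT_TOKEN",
        "client_id": "YOUR_CLIENT_ID",
        "status_message": "Chatting with LLMs",
        "allowed_channel_ids": [],
        "allowed_role_ids": [],
        "max_text": 100000,
        "max_images": 5,
        "max_messages": 20
    },
    "llm_settings": {
        "llm": "local/openai/model",
        "local_server_url": "http://localhost:5000/v1",
        "extra_api_parameters": {
            "max_tokens": 4096,
            "temperature": 1.0
        },
        "system_prompt": "You are a snarky Discord chatbot."
    }
}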

Notes

  • If you're having issues, try my suggestions here

  • Only models from OpenAI are "user identity aware" because only OpenAI API supports the message "name" property. Hopefully others support this in the future.

  • PRs are welcome :)


discord-llm-chatbot's People

Contributors

djstrong, jakobdylanc


discord-llm-chatbot's Issues

possible to add custom messages?

I have a use case where I would like to be able to post on behalf of the bot. What would I need to do in order to achieve that? Would it be creating a custom command?

Error Request reply

Really new to this, but I got a few errors. I mention the bot, then it tries to reply but never does. Here is the error:
[screenshot]
Afterward I got a bunch of errors:
[screenshot]
[screenshot]
DISCORD_BOT_TOKEN = "xxxx"
DISCORD_CLIENT_ID = "xxxxx"
DISCORD_STATUS_MESSAGE = "xxxx"
These are the only things I edited in the .env file.

Model not found in completion cost map

I get the following error:

litellm_logging.py:1271 - Model=bakllava not found in completion cost map. Setting 'response_cost' to None
2024-07-18 23:54:58,548 WARNING: Model=bakllava not found in completion cost map. Setting 'response_cost' to None

My current settings are:

"llm_settings": {
                "llm": "ollama/bakllava",
                "local_server_url": "http://localhost:11434",
                "extra_api_parameters": {
                        "max_tokens": 4096,
                        "temperature": 0.8,
                        "frequency_penalty": 1.17647
                },

ollama integration!

If I want to use the ollama llama3 model, what should I put in LLM = ???

Give me an example please :)
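Following the LiteLLM model name formatting described in the settings above (and the "ollama/bakllava" example quoted in another issue on this page), a plausible but unverified value would be:

LLM = ollama/llama3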

No image support

gpt-discord does not support images due to OpenAI API/model limitations. Other bots accomplish this by bringing in other APIs/models that have image support. But I want to keep gpt-discord as elegant and simple as possible, prioritizing GPT-4 support for its overall superiority.

Once OpenAI adds support, so will I. Hopefully soon!

Option to send as regular text

Hello, I was messing around with your bot and comparing it to chrisrude's oobabot plugin (when it was functional) for textgen webui. One thing that I think would enhance this software is giving the bot the option to send responses as regular text, like oobabot does, in addition to the current option of sending them as embeds.

Does ollama support images?

Is sending images to the bot supported using ollama?

This is what I get from the logs using llava model:

2024-04-10 21:25:09.161 INFO: Message received (user ID: 161792098901688320, attachments: 1, reply chain length: 1):
describe it
2024-04-10 21:25:09.480 INFO: HTTP Request: POST http://localhost:11434/api/generate "HTTP/1.1 400 Bad Request"
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/litellm/llms/ollama.py", line 270, in ollama_async_streaming
    raise OllamaError(
litellm.llms.ollama.OllamaError: b'{"error":"illegal base64 data at input byte 4"}'
2024-04-10 21:25:09.483 ERROR: Error while streaming response
Traceback (most recent call last):
  File "/root/discord-llm-chatbot/llmcord.py", line 176, in on_message
    async for curr_chunk in await acompletion(**kwargs):
  File "/usr/local/lib/python3.11/dist-packages/litellm/llms/ollama.py", line 284, in ollama_async_streaming
    raise e
  File "/usr/local/lib/python3.11/dist-packages/litellm/llms/ollama.py", line 270, in ollama_async_streaming
    raise OllamaError(
litellm.llms.ollama.OllamaError: b'{"error":"illegal base64 data at input byte 4"}'

Priv'd intents requirements missing from README

On start:

 python gpt-discord.py
2023-11-14 15:59:27.105 INFO: logging in using static token
Traceback (most recent call last):
  File "gpt-discord.py", line 110, in <module>
    if __name__ == "__main__": asyncio.run(main())
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "gpt-discord.py", line 109, in main
    async def main(): await discord_client.start(os.environ["DISCORD_BOT_TOKEN"])
  File "/home/someUser/.local/lib/python3.8/site-packages/discord/client.py", line 778, in start
    await self.connect(reconnect=reconnect)
  File "/home/someUser/.local/lib/python3.8/site-packages/discord/client.py", line 704, in connect
    raise PrivilegedIntentsRequired(exc.shard_id) from None
discord.errors.PrivilegedIntentsRequired: Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead.
2023-11-14 15:59:27.884 ERROR: Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7f1880b31220>, 36723372.991516)]']
connector: <aiohttp.connector.TCPConnector object at 0x7f187fd64e80>

Which intents do we need aside from "message content intent" (already checked)?

Try this before submitting an issue

  • Force-update to the latest version: git fetch && git reset --hard origin/main
  • Update your python packages: pip install -U -r requirements.txt
  • Make sure your config.json is up-to-date with config-example.json (this project is WIP so config options are still changing)

If the bot errors...

Make sure your .env is up-to-date with .env.example. I've added some new config options.

discord bot

The Discord bot just sits there typing but it doesn't reply.
[animated screenshot]

Issues when using Oobabooga

Trying to use this with Oobabooga atm, but getting this error:

2024-03-31 23:36:25.227 ERROR: Error while streaming response
Traceback (most recent call last):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 254, in __aiter__
    async for part in self._httpcore_stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 367, in __aiter__
    raise exc from None
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 363, in __aiter__
    async for part in self._stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 349, in __aiter__
    raise exc
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 341, in __aiter__
    async for chunk in self._connection._receive_response_body(**kwargs):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 210, in _receive_response_body
    event = await self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
  File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/llmcord.py", line 176, in on_message
    async for chunk in await acompletion(**kwargs):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/litellm/utils.py", line 9864, in __anext__
    raise e
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/litellm/utils.py", line 9748, in __anext__
    async for chunk in self.completion_stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 150, in __aiter__
    async for item in self._iterator:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 167, in __stream__
    async for sse in iterator:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 158, in _iter_events
    async for sse in self._decoder.aiter(self.response.aiter_lines()):
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/openai/_streaming.py", line 295, in aiter
    async for line in iterator:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 963, in aiter_lines
    async for text in self.aiter_text():
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 950, in aiter_text
    async for byte_content in self.aiter_bytes():
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 929, in aiter_bytes
    async for raw_bytes in self.aiter_raw():
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_models.py", line 987, in aiter_raw
    async for raw_stream_bytes in self.stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_client.py", line 149, in __aiter__
    async for chunk in self._stream:
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 253, in __aiter__
    with map_httpcore_exceptions():
  File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/mnt/XDDA/Downloads/discord-llm-chatbot/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The model I'm using is a GPTQ llama2 model running via Exllama.

Suggestion: Sampler settings

Not sure if I'm missing something, but having some measure of control over sampler settings would be nice. At a minimum, being able to control top_p and temperature would make this a lot more usable. I'm currently using this with Mixtral through OpenRouter and it's rambling like mad.
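For what it's worth, the extra_api_parameters setting described earlier already accepts arbitrary entries, so a hedged sketch of passing sampler settings that way would be the following; whether top_p is honored depends on the backend:

"extra_api_parameters": {
    "max_tokens": 4096,
    "temperature": 0.7,
    "top_p": 0.9
}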

Encountering error with OpenAI key

New to all this. Trying to run the script but keep getting this:

Traceback (most recent call last):
  File "C:\Users\Chase\PythonProjects\DiscordLLM\Discord-LLM-Chatbot\llmcord.py", line 19, in <module>
    "api_key": os.environ["OPENAI_API_KEY"],
               ~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "<frozen os>", line 685, in __getitem__
KeyError: 'OPENAI_API_KEY'


My .env file matches your spec exactly, with my recently generated OpenAI key: OPENAI_API_KEY = sk-cswxxxxxxxxxxxxxxxxxxxxxxxxxx

Bot real slow to post / edit.

Hi,

I had great success with this project when using ollama.

I swapped to a model that runs on koboldai (openai compatible) and now the bot responses take ages.

It processes the input quickly and has the answer ready in ~30 seconds max, but when the bot is posting the answer to Discord, python3 starts running at 100% on a single core and the bot takes ages to finish posting.

Any idea what I'm doing wrong?

Feature req: Please integrate apipie.ai

Users want access to as much AI as they can get. They don't want to manage 50 accounts; they want the fastest AI and the cheapest AI, and you can provide all of that for them with this update.

In addition to or in place of integrating with any other aggregators, please integrate APIpie so devs can access them all from one place/subscription. It also provides:

  • The most affordable, reliable and fastest AI available
  • One API to access ~500 models and growing
  • Language, embedding, voice, image, vision and more
  • Global AI load balancing, routing queries based on price or latency
  • Redundancy for major models, providing the greatest uptime possible
  • Global reporting of AI availability, pricing and performance

It's the same API format as OpenAI; just change the domain name and your API key and enjoy a plethora of models without changing any of your code other than how you handle the models list.

This is a win-win for everyone: any new AIs from any providers will be automatically integrated into your stack with this one integration. Not to mention all the other advantages.

Loading characters?

Is there any way I can load the character I made in oobabooga webui?
Really nice job btw, thanks!
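For what it's worth, the example .env quoted in a later issue on this page includes an OOBABOOGA_CHARACTER setting, which suggests (unverified) that a character can be named there:

# "MyCharacter" is a hypothetical placeholder name
OOBABOOGA_CHARACTER = MyCharacter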

Fresh Install - Trying to use ooba api

I did a fresh install and have the ooba API running, but I cannot get it to run; no matter what I change in the .env I keep getting this error:

Traceback (most recent call last):
  File "/home/aaron/Documents/GitHub/llmcord/llmcord.py", line 18, in <module>
    LOCAL_LLM: bool = env["LLM"].startswith("local/")
  File "/usr/lib/python3.10/os.py", line 680, in __getitem__
    raise KeyError(key) from None
KeyError: 'LLM'

Any suggestions on things I can try? I've installed all requirements.

My env

LLM = local/openai/model
CUSTOM_SYSTEM_PROMPT = You are a snarky Discord chatbot.
CUSTOM_DISCORD_STATUS = 

ALLOWED_CHANNEL_IDS = 
ALLOWED_ROLE_IDS = 
MAX_IMAGES = 5
MAX_MESSAGES = 20

LOCAL_SERVER_URL = http://localhost:5000/v1
LOCAL_API_KEY = 
OOBABOOGA_CHARACTER = 

# LiteLLM settings:
OPENAI_API_KEY = 
MISTRAL_API_KEY = 
# Add more as needed for other providers

Not sure what I am missing here, I used it for weeks up to this point without any issue.

Oobabooga Support

Hello! Do you have any plans to support Oobabooga, in addition to LM Studio, when using a local model?

msg_nodes dict needs size limit

The msg_nodes dict will keep growing indefinitely while the bot is kept running. Need to find the most elegant way to resolve this.

Probably need to use an OrderedDict... or maybe just auto-wipe the entire msg_nodes dict on a configurable schedule (e.g. every 24 hours).

Either way we would also need protection against race conditions (a msg_nodes entry being deleted while simultaneously being accessed).
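A minimal sketch of that auto-wipe alternative, with the dict and lock passed in explicitly; the 24-hour interval is the example given above, not a tested value:

import asyncio

WIPE_INTERVAL_SECONDS = 24 * 60 * 60  # configurable schedule, e.g. every 24 hours

async def wipe_msg_nodes_periodically(msg_nodes: dict, lock: asyncio.Lock) -> None:
    while True:
        await asyncio.sleep(WIPE_INTERVAL_SECONDS)
        async with lock:  # guards against entries being deleted while being accessed
            msg_nodes.clear()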

"WARNING: Shard ID None heartbeat blocked for more than XXX seconds." has been shown up

Hi guys,

I am using the latest code, but when I looked into the log file,

"WARNING: Shard ID None heartbeat blocked for more than XXX seconds." keeps showing up.

And then my Discord bot goes offline after several minutes.

I've checked the code around async and asyncio.Lock, and it looks fine.

So far I have no idea what is causing this issue.

Please help me if you have any solutions.

Thanks,

Best

Can't see images

After loading a model with vision capabilities into LM Studio and sending it an image through Discord, it says that it can't see images.
Is this a limitation on LM Studio's side or am I actually doing something wrong?

How to run this

How do I run this? After the setup steps, the instructions trail off into experienced-user territory. Us newbs can't get a break!

message shortened: design choice or wrong setup on my part?

@ZenitoGR

can you list 100 items of zen?

Zen Master AI

My friend, I sense that you are seeking a profound connection with the essence of zen. Very well, I shall provide you with 100 items that embody the spirit of zen. However, remember that true zen is not about quantity, but about the quality of the present moment.

Here's your list:

1. A gentle breeze on a summer day
2. The sound of a single raindrop on a leaf
3. A lotus flower blooming in the morning dew
4. Soft whispers of a lover's gentle voice
5. A warm cup of green tea on a chilly morning
...
96. The soft fur of a sleeping cat
97. A quiet walk in the forest at dawn
98. The beauty of impermanence
99. The simplicity of a blank canvas
100. The silence between two friends

As you peruse this list, remember that zen is not about the items themselves, but about the space between them. It's the pauses, the breaths, and the gaps that make life worth living.

Take a deep breath, my friend, and let the essence of zen permeate your being.

I use Groq and Llama3-70b

I asked for a list of 100 items and only got items 1-5 and 96-100.

Why did that happen?

chat/completions HTTP 404

I pinged gpt "Hello", which crashed the bot:

$ python gpt-discord.py
2023-11-14 16:12:39.884 INFO: logging in using static token
2023-11-14 16:12:40.679 INFO: Shard ID None has connected to Gateway (Session ID: some-id).
2023-11-14 16:12:44.978 INFO: Generating response for prompt:
you alive?
2023-11-14 16:12:45.189 INFO: HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 404 Not Found"
2023-11-14 16:12:45.192 ERROR: Ignoring exception in on_message
Traceback (most recent call last):
  File "/home/someUser/.local/lib/python3.8/site-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "gpt-discord.py", line 83, in on_message
    async for part in await openai_client.chat.completions.create(model=os.environ["GPT_MODEL"], messages=msgs, max_tokens=MAX_COMPLETION_TOKENS, stream=True):
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 1191, in create
    return await self._post(
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1474, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1275, in request
    return await self._request(
  File "/home/someUser/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1318, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-1106-preview` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

service does not start

Ubuntu 22, Python 3.10.
Can you help me?

root@gptsher:~/gpt-discord# python gpt-discord.py
Traceback (most recent call last):
  File "/root/gpt-discord/gpt-discord.py", line 7, in <module>
    encoding = tiktoken.get_encoding("cl100k_base")
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken/registry.py", line 73, in get_encoding
    enc = Encoding(**constructor())
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
    mergeable_ranks = load_tiktoken_bpe(
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken/load.py", line 117, in load_tiktoken_bpe
    return {
  File "/root/gpt-discord-bot/tutorial-env/lib/python3.10/site-packages/tiktoken/load.py", line 119, in <dictcomp>
    for token, rank in (line.split() for line in contents.splitlines() if line)
ValueError: not enough values to unpack (expected 2, got 1)

Ability to use plain text instead?

The title says it all. I get that you can show more info with embeds (the colour, the title and whatnot), but would it be possible to use plain text messages for the LLM responses so it looks more believable in the context of a Discord chat?
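For context, the difference in discord.py terms is roughly the following; a sketch, not the bot's actual code:

import discord

async def send_llm_response(channel: discord.abc.Messageable, text: str) -> None:
    # Current behavior described in this issue: the response arrives as an embed.
    await channel.send(embed=discord.Embed(description=text, color=discord.Color.green()))
    # Requested behavior: a plain text message, like one from a regular user.
    await channel.send(content=text)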

Suggestion: Consider adding a conda environment.yml file and containerizing it using Docker

Hey there,

Thank you for helping us to set up your chatbot on our Discord server. It's awesome! Very good job. I am also very glad that you started having traction. I am also trying to spread the word myself.

I would like to suggest the following:

  • Adding a conda environment.yml file (or a similar virtual environment file)
  • Adding a Dockerfile to use a Docker container

Both are good practices, because they ensure that other people who install it do not get the "but it works on my computer" problem.

For example, how would someone know what version of Python they should be using to install this? Or what version of pip? Yes, you could write it in your README file, but that's suboptimal. A virtual environment like conda helps eliminate or at least mitigate this problem.

Here's an example of an environment.yml from one of my repositories (using Apache License 2.0). I am not saying that it's the best, but it should be good enough to get you started:
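A generic sketch of what such an environment.yml could look like for this project; the Python version and layout are illustrative assumptions, not the poster's actual file:

name: llmcord
channels:
  - defaults
dependencies:
  - python=3.11  # illustrative; match the version you actually target
  - pip
  - pip:
      - -r requirements.txt  # reuse the repo's existing requirements file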

Happy to talk more about it over Discord, if you wish. 🙂

Want a guided install for Oobabooga/OpenAI. A newbie this side.

Hey Jakob,

I was trying to install your bot on my server and just could not succeed. I am just an oobabooga user and don't know much about the OpenAI API, but I dearly want to make it work. What do you suggest? Can you help me via a session? Can you add @nickoaiart or @AiModelMaya on Discord to discuss further? Of course a paid gig, if you are fine with that.

Not working anymore for local models (LLM=Local/model LM Studio)

Hi,
I had a previous version of Discord-LLM-ChatBot working well with LM Studio locally.

With the newest update, the bot will not respond and I am not seeing anything in the LM Studio logs.

I tried everything; what did I do incorrectly? I can provide error logs if that would help.

I use Python 3.11.7 in a conda env, pip 23.3.1, and LM Studio 0.2.13 on Windows.
(Not a native English speaker, apologies for my English.)

ollama vision

Add another option for ollama vision instead of openai!
For example, the text model would be llama3 and the vision model would be llava.

I get this error in my nohup error logs and the bot is sometimes not responding

async for raw_stream_bytes in self.stream:
  File "/usr/local/lib/python3.11/dist-packages/httpx/_client.py", line 149, in __aiter__
    async for chunk in self._stream:
  File "/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py", line 253, in __aiter__
    with map_httpcore_exceptions():
  File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
