
chatgpt-telegram-bot's People

Contributors

aes-alienrip, alexhtw, am1ncmd, bestmgmt, bjornb2, bugfloyd, carlsverre, deanxizian, dkvdm-bot, eyadmahm0ud, gianlucaalfa, gilcu3, ivanmilov, jnaskali, jokerqyou, jvican, k3it, kristaller486, ledybacer, mirmakhamat, muhammed540, n3d1117, nmeln, noriellecruz, peterdavehello, rafael-6fx, slippersheepig, stanislavlysenko0912, whyevenquestion1t, yurnov


chatgpt-telegram-bot's Issues

Fail to transcribe audio messages

Hello, I deployed v0.1.3 with Docker Compose, but the audio message transcription feature is not working. The bot always returns "Failed to transcribe text".

(FFmpeg is installed).

Error: 'Chatbot' object has no attribute 'headers'

I did everything you said, but I don't get anything back.
Running on Ubuntu 22.04.
Here are the logs:

(chatgpt-telegram-bot) root@linux:~/chatgpt-telegram-bot# python main.py
Logging in...
Error logging in (Probably wrong credentials)
Error refreshing session:
Error logging in
2022-12-07 14:36:54,351 - telegram.ext._application - INFO - Application started
2022-12-07 14:36:54,632 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:36:54,852 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
2022-12-07 14:37:35,805 - root - INFO - User @Archnet is not allowed to start the bot
2022-12-07 14:37:43,588 - root - INFO - Bot started
2022-12-07 14:37:47,411 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:37:47,516 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'
2022-12-07 14:37:54,966 - root - INFO - New message received from user @ArchLUL
2022-12-07 14:37:55,317 - root - INFO - Error while getting the response: 'Chatbot' object has no attribute 'headers'

And in Telegram I get the error shown in the attached screenshot.
What's the problem?

bot dead with httpx.LocalProtocolError

Occasionally the bot seems to die with the error below and stops processing messages. Restarting the Docker container fixes the problem. (It might be related to using non-English prompts, but I haven't narrowed it down.) I wonder if anyone else has run into this.

2023-03-19 18:02:54,155 - telegram.ext._updater - ERROR - Error while getting Updates: httpx.LocalProtocolError: Invalid input ConnectionInputs.SEND_HEADERS in state ConnectionState.CLOSED

Publish to Docker Hub

This project looks amazing! Could you publish it to Docker Hub so it’ll be easy to deploy on serverless platforms? Thanks! 😊

Prompt token consumption grows until /reset

Hi.
Thank you for the updates and support of the turbo model!

I noticed that each subsequent query within a conversation uses up more prompt tokens. This continues until I reset the session. Does this sound like the correct behavior?

Here is an example of the same prompt, with each iteration increasing token usage (screenshot attached).

Handle responses longer than telegram message limit

Hey, I encountered the Telegram error Message_too_long when transcribing a 6-minute audio file.
Any chance of splitting responses longer than the limit (I think 4096 characters) into multiple messages?
This might also apply to ChatGPT responses. I haven't managed to get a response that long yet, but in theory it could happen (4096 tokens can exceed 4096 characters).
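A minimal sketch of the requested splitting (illustrative only, not the bot's actual code): break the text into pieces that fit Telegram's 4096-character limit and send each piece as its own message.

# Illustrative sketch: split a long reply into chunks that fit Telegram's
# 4096-character message limit, then send each chunk separately.
TELEGRAM_MESSAGE_LIMIT = 4096

def split_into_chunks(text: str, limit: int = TELEGRAM_MESSAGE_LIMIT) -> list[str]:
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# for chunk in split_into_chunks(long_reply):
#     await update.message.reply_text(chunk)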

Thank you and the other contributors for all your hard work!

How to use .env and load_dotenv()

I am trying to set up the project and enter my own parameters into the .env file I've created. Should I use a .env.py extension instead? If I get an answer, I can add this information to the README so that any newbie can understand the configuration requirements.
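For what it's worth, the file should stay named .env (plain text, no .py extension); python-dotenv reads it and populates the environment. A minimal sketch:

import os
from dotenv import load_dotenv

load_dotenv()  # looks for a file literally named ".env" in the working directory

openai_api_key = os.environ.get("OPENAI_API_KEY")
telegram_token = os.environ.get("TELEGRAM_BOT_TOKEN")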

Transcribe then reply

When I set VOICE_REPLY_WITH_TRANSCRIPT_ONLY to false, could the bot transcribe my voice message and send the transcript first, and then reply to it? That would be more user-friendly.

Originally posted by @deanxizian in #38 (comment)
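A minimal sketch of the requested flow (not the project's actual code; it assumes the openai 0.27-style API that appears elsewhere in these logs): transcribe first, then feed the transcript to the chat model, so the bot can send the transcript as one message and the reply as a second one.

import openai

def transcribe_then_reply(audio_path: str) -> tuple[str, str]:
    # Transcribe the voice note with Whisper.
    with open(audio_path, "rb") as audio_file:
        transcript = openai.Audio.transcribe("whisper-1", audio_file)["text"]
    # Ask the chat model to reply to the transcript.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": transcript}],
    )
    return transcript, completion.choices[0].message.content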

v0.1.8 doesn't work

Hi.
I downloaded a new version of the bot (I tried the latest commit on master and release 0.1.8) to the server, and nothing but the help command works.
Moreover, if I roll back to the previous version, it works fine.
It does not work either in a group chat or in a private message.

New `.env` file:
OPENAI_API_KEY="my_openai_token"
TELEGRAM_BOT_TOKEN="my_bot_token"
# ALLOWED_TELEGRAM_USER_IDS="USER_ID_1,USER_ID_2" ###Yes I tried: ALLOWED_TELEGRAM_USER_IDS="*"
# MONTHLY_USER_BUDGETS="100.0,100.0"
# MONTHLY_GUEST_BUDGET="20.0"
# PROXY="http://localhost:8080"
OPENAI_MODEL="gpt-3.5-turbo"
ASSISTANT_PROMPT="You are a helpful assistant."
SHOW_USAGE=false
MAX_TOKENS=1200
MAX_HISTORY_SIZE=15
MAX_CONVERSATION_AGE_MINUTES=30
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=false
N_CHOICES=1
TEMPERATURE=0.5
PRESENCE_PENALTY=0
FREQUENCY_PENALTY=0
IMAGE_SIZE="256x256"
GROUP_TRIGGER_KEYWORD="help"
IGNORE_GROUP_TRANSCRIPTIONS=true
TOKEN_PRICE=0.002
IMAGE_PRICES="0.016,0.018,0.02"
TRANSCRIPTION_PRICE=0.006
Old `.env` file:
OPENAI_API_KEY="my_openai_token"
TELEGRAM_BOT_TOKEN="my_bot_token"
ALLOWED_TELEGRAM_USER_IDS="*"
ASSISTANT_PROMPT="You are a helpful assistant."
SHOW_USAGE=false
MAX_TOKENS=1200
MAX_HISTORY_SIZE=10
MAX_CONVERSATION_AGE_MINUTES=30
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=true

What I was doing:

cd chatgpt_bot_v2/  # old
git rev-parse HEAD  # 34b79a5e2eadfc6b237882cb08a5d11085098dc9 (the sixth commit after 0.1.5)
docker compose up -d  # (the first time I started it with the "--build" flag)
### Bot works fine ###
docker logs chatgpt_bot_v2-chatgpt-telegram-bot-1  # (2023-03-18 14:00:31,381 - telegram.ext._application - INFO - Application started)
docker compose down
cd ../chatgpt_bot_0.1.8/  # latest commit
git rev-parse HEAD  # 73d200c64e95e481e2986caeaad70d6b339fb1d9
docker compose up -d
docker logs chatgpt_bot_018-chatgpt-telegram-bot-1  # (2023-03-18 14:02:08,078 - telegram.ext._application - INFO - Application started)
### Bot doesn't work ###

If you need any additional data, I will try to provide it.
Thanks

I got an error about an expired token

{
    "detail": {
        "message": "Your authentication token has expired. Please try signing in again.",
        "type": "invalid_request_error",
        "param": null,
        "code": "token_expired"
    }
}

Maybe we could log in again every 12 hours?

Allow support for Python 3.9

Some Linux distributions have not yet included Python 3.10 in their official repositories. It would be beneficial to enable the bot to run on Python 3.9 as well.

Upon reviewing the codebase, it appears that the only Python 3.9 incompatible syntax is the usage of the new union operator (|) in the get_chat_response function of openai_helper.py.

Is there anything else that I might be overlooking? If not, we could consider adding legacy support for Python 3.9 by using the Union type from the typing module in place of the new union operator for greater compatibility.
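For illustration, this is the kind of substitution meant above (the function and annotations here are made up; the real signature in openai_helper.py may differ):

from typing import Optional, Union

# Python 3.10+ union syntax; on 3.9 this raises a TypeError when the module
# is imported (unless `from __future__ import annotations` is used):
# def get_price(amount: int | float) -> str | None: ...

# Python 3.9-compatible equivalent using typing.Union / typing.Optional:
def get_price(amount: Union[int, float]) -> Optional[str]:
    return f"{amount:.2f} USD" if amount >= 0 else None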

Please Add a Menu

Greetings,
the usability would improve if you added Menu Buttons for Telegram.

Thanks!

Extremely slow

Hey,
thanks for your awesome work.
The bot works, but it's extremely slow.
How can I increase its speed?
The server has enough resources.

/reset command can't fix "This model's maximum context length is 4096 tokens"

After sending the /reset command, the following error still appears. How can I clear the history? Thanks!

⚠️ OpenAI Invalid request ⚠️
This model's maximum context length is 4096 tokens. However, you requested 4728 tokens (3528 in the messages, 1200 in the completion). Please reduce the length of the messages or completion.
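A minimal sketch of what /reset has to do, assuming a simple in-memory history keyed by chat_id (an illustration of the idea, not the bot's actual data structures):

conversations: dict[int, list[dict]] = {}  # chat_id -> list of chat messages

def reset_chat_history(chat_id: int, prompt: str = "You are a helpful assistant.") -> None:
    # Drop everything except the system prompt, so the next request starts fresh.
    conversations[chat_id] = [{"role": "system", "content": prompt}]

If the error keeps appearing after /reset, the stored history for that chat_id is probably not being cleared (for example because a different chat_id is used in groups), so the old messages are still sent with every request.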

Need help

How can I enable debug logging for the revChatGPT package?

Please, someone help.

If we could allow the robot to be used by group members

If we could allow the bot to be used by group members, I think that would be a good idea:

  • The bot works in private mode by default, so it can only receive messages marked as REPLY.
  • Perhaps we also need to add an allow list of groups that are permitted to use the bot (see the sketch below).

However, there are also some limitations to this approach:

  • it could lead to multiple people sharing the same conversation,
  • ChatGPT's processing rate may be limited.
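A hypothetical sketch of such a group allow list (the ALLOWED_TELEGRAM_GROUP_IDS name is made up here, mirroring the existing ALLOWED_TELEGRAM_USER_IDS setting):

import os

# Hypothetical variable name, following the ALLOWED_TELEGRAM_USER_IDS pattern.
allowed_group_ids = {
    gid.strip()
    for gid in os.environ.get("ALLOWED_TELEGRAM_GROUP_IDS", "").split(",")
    if gid.strip()
}

def group_is_allowed(chat_id: int) -> bool:
    # "*" allows every group, like ALLOWED_TELEGRAM_USER_IDS="*" does for users.
    return "*" in allowed_group_ids or str(chat_id) in allowed_group_ids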

No direct answer on voice message

I first thought it was an error.
Would it be possible to answer (with ChatGPT) directly after Whisper has transcribed the audio message?

This would eliminate the manual step of copying the text after transcription.

Screenshot ChatGPTBot

Executing main.py fails with a timeout; how can I solve that? Thanks.

:~/chatgpt-telegram-bot# python main.py
Debugger enabled on OpenAIAuth
Logging in...
Debugger enabled on OpenAIAuth
Beginning auth process
Beginning part two
Beginning part three
Beginning part four
Beginning part five
Beginning part six
Beginning part seven
Request went through
Response code is 302
New state found
Beginning part eight
Beginning part nine
SUCCESS
Part eight called
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 130, in _call_sslobject_method
result = func(*args)
File "/usr/lib/python3.10/ssl.py", line 975, in do_handshake
self._sslobj.do_handshake()
ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:997)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 67, in start_tls
ssl_stream = await anyio.streams.tls.TLSStream.wrap(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 122, in wrap
await wrapper._call_sslobject_method(ssl_object.do_handshake)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/streams/tls.py", line 137, in _call_sslobject_method
data = await self.transport_stream.receive()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1265, in receive
await self._protocol.read_event.wait()
File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 76, in start_tls
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 66, in start_tls
with anyio.fail_after(timeout):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/anyio/_core/_tasks.py", line 118, in exit
raise TimeoutError
TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 253, in handle_async_request
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 237, in handle_async_request
response = await connection.handle_async_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 86, in handle_async_request
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 63, in handle_async_request
stream = await self._connect(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_async/connection.py", line 150, in _connect
stream = await stream.start_tls(**kwargs)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/backends/asyncio.py", line 64, in start_tls
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in exit
self.gen.throw(typ, value, traceback)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc)
httpcore.ConnectTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_httpxrequest.py", line 183, in do_request
res = await self._client.request(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1533, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1620, in send
response = await self._send_handling_auth(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1648, in _send_handling_auth
response = await self._send_handling_redirects(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1685, in _send_handling_redirects
response = await self._send_single_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_client.py", line 1722, in _send_single_request
response = await transport.handle_async_request(request)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
with map_httpcore_exceptions():
File "/usr/lib/python3.10/contextlib.py", line 153, in exit
self.gen.throw(typ, value, traceback)
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/chatgpt-telegram-bot/main.py", line 32, in
main()
File "/root/chatgpt-telegram-bot/main.py", line 28, in main
telegram_bot.run()
File "/root/chatgpt-telegram-bot/telegram_bot.py", line 125, in run
application.run_polling()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 670, in run_polling
return self.__run(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 858, in __run
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 847, in __run
loop.run_until_complete(self.initialize())
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_application.py", line 357, in initialize
await self.bot.initialize()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 252, in initialize
await super().initialize()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 499, in initialize
await self.get_me()
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 1639, in get_me
return await super().get_me(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 313, in decorator
result = await func(*args, **kwargs) # skipcq: PYL-E1102
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 644, in get_me
result = await self._post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 395, in _post
return await self._do_post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/ext/_extbot.py", line 306, in _do_post
return await super()._do_post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/_bot.py", line 426, in _do_post
return await request.post(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 167, in post
result = await self._request_wrapper(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 290, in _request_wrapper
raise exc
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_baserequest.py", line 276, in _request_wrapper
code, payload = await self.do_request(
File "/root/.local/share/virtualenvs/chatgpt-telegram-bot-IagjQVk0/lib/python3.10/site-packages/telegram/request/_httpxrequest.py", line 200, in do_request
raise TimedOut from err
telegram.error.TimedOut: Timed out

Hope to add HTTP proxy

In some environments I can only connect to Telegram and OpenAI through a proxy. This means I would need to enable a global proxy, which affects other programs running on the machine. I tried adding

os.environ["http_proxy"] = "http://127.0.0.1:1231"

but it doesn't seem to work.
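A hedged sketch of routing both Telegram and OpenAI traffic through an explicit proxy, assuming python-telegram-bot v20 and the openai 0.27 library (the newer .env example shown in the v0.1.8 issue above already includes a commented-out PROXY setting):

import openai
from telegram.ext import ApplicationBuilder

PROXY = "http://127.0.0.1:1231"

openai.proxy = PROXY  # OpenAI API requests

application = (
    ApplicationBuilder()
    .token("my_bot_token")
    .proxy_url(PROXY)              # Telegram Bot API requests
    .get_updates_proxy_url(PROXY)  # long-polling getUpdates requests
    .build()
)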

Captcha detected

I have a valid email and password in my .env, my VPS IP is in Germany, and my OpenAI account was registered in Germany too.

Beginning part three
Beginning part four
Beginning part five
Error in part five
Captcha detected
Login failed
Traceback (most recent call last):
  File "/home/ubuntu/chatgpt-telegram-bot/main.py", line 45, in <module>
    main()
  File "/home/ubuntu/chatgpt-telegram-bot/main.py", line 39, in main
    gpt3_bot = ChatGPT3Bot(config=chatgpt_config, debug=debug)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 55, in __init__
    self.refresh_session()
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 305, in refresh_session
    raise exc
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 302, in refresh_session
    self.login(self.config["email"], self.config["password"])
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 340, in login
    raise exc
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 333, in login
    auth.begin()
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 83, in begin
    self.part_two()
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 112, in part_two
    self.part_three(token=csrf_token)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 147, in part_three
    self.part_four(url=url)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 181, in part_four
    self.part_five(state=state)
  File "/home/ubuntu/.local/share/virtualenvs/chatgpt-telegram-bot-HCyYIvgi/lib/python3.10/site-packages/OpenAIAuth/OpenAIAuth.py", line 219, in part_five
    raise ValueError("Captcha detected")
ValueError: Captcha detected

Session management

Similar to the browser UI, it would be cool to be able to save and retrieve sessions in addition to /reset.
I'm thinking of the following (a rough sketch follows the list):

  • /save for saving a specific session, potentially with a name
  • /sessions for listing all sessions
  • /session session_id for opening a specific session
  • /reset for resetting or deleting a session
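A rough sketch of the storage such commands would need (purely illustrative; names and layout are made up):

# chat_id -> session name -> list of chat messages
sessions: dict[int, dict[str, list[dict]]] = {}

def save_session(chat_id: int, name: str, history: list[dict]) -> None:
    sessions.setdefault(chat_id, {})[name] = list(history)

def list_sessions(chat_id: int) -> list[str]:
    return sorted(sessions.get(chat_id, {}))

def open_session(chat_id: int, name: str) -> list[dict]:
    return list(sessions.get(chat_id, {}).get(name, []))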

Some Installation Suggestions

  1. Make sure Python 3.10+ is installed; if the server only has 3.7/3.8, it has to be updated.
  2. pip install python-dotenv
  3. pip install python-telegram-bot --pre (yes, make sure to include "--pre")
  4. pip install telegram
  5. pip3 install revChatGPT --upgrade

Add session persistence

Upon redeployment the bot shouldn't lose track of its context.
I've yet to look into how the browser UI handles multiple sessions, but I assume that there is some sort of unique ID you can set. If that is the case, a potential solution would be to utilize the chat_id as a unique session ID.
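A minimal sketch of that idea, assuming a simple JSON file keyed by chat_id (illustrative only; a real implementation might use a proper database):

import json
from pathlib import Path

STORE = Path("conversations.json")  # hypothetical storage location

def save_conversations(conversations: dict[int, list[dict]]) -> None:
    STORE.write_text(json.dumps(conversations))

def load_conversations() -> dict[int, list[dict]]:
    if not STORE.exists():
        return {}
    # JSON object keys are strings, so convert them back to integer chat IDs.
    return {int(cid): history for cid, history in json.loads(STORE.read_text()).items()}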

Network error

Hi, when I try to install I get an error:

chatgpt-telegram-bot-chatgpt-telegram-bot-1 | File "/usr/local/lib/python3.9/site-packages/telegram/request/_httpxrequest.py", line 223, in do_request
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | raise NetworkError(f"httpx.{err.__class__.__name__}: {err}") from err
chatgpt-telegram-bot-chatgpt-telegram-bot-1 | telegram.error.NetworkError: httpx.ConnectError: All connection attempts failed
chatgpt-telegram-bot-chatgpt-telegram-bot-1 exited with code 1

I tried installing the previous version and everything worked! Hoping for help :)

Add support for using inline mode in private conversations

Currently, the bot only supports using inline mode in group chats.
I would like to request a new feature that would allow inline mode to be used in private conversations as well.
This would be useful for users who prefer to interact with the bot one-on-one instead of in a group chat.

OpenAI invalid request

After using it for a while, the bot generates the following message even though I input a very short sentence:

⚠️ OpenAI Invalid request ⚠️
This model's maximum context length is 4096 tokens. However, you requested 4368 tokens (3168 in the messages, 1200 in the completion). Please reduce the length of the messages or completion.

After restarting the service, it returns to normal.

request xD

ALLOWED_TELEGRAM_CHAT_IDS="<CHAT_ID_1>,<CHAT_ID_2>,..."

Can everyone use it without selecting an ID? xD

login failed

Yesterday I successfully ran this project, but today I found I had lost the connection. (In my country the internet is not that good :<)
So I rebooted the process and found this login failure:

"Auth0 did not issue an access token"

  File "C:\Users\ting\anaconda3\envs\py310\lib\site-packages\OpenAIAuth\OpenAIAuth.py", line 309, in part_seven
    self.part_eight(old_state=state, new_state=new_state)
  File "C:\Users\ting\anaconda3\envs\py310\lib\site-packages\OpenAIAuth\OpenAIAuth.py", line 363, in part_eight
    raise Exception("Auth0 did not issue an access token")
Exception: Auth0 did not issue an access token

But I have checked my account and password, they are correct, and I can successfully log in from the Chrome browser.
After retrying main.py two more times, I got this:
"Exception: You have been rate limited."

Beginning auth process
Beginning part two
Beginning part three
You have been rate limited
Login failed
Traceback (most recent call last):
  File "/app/main.py", line 46, in <module>
    main()
  File "/app/main.py", line 40, in main
    gpt3_bot = ChatGPT3Bot(config=chatgpt_config, debug=debug)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 59, in __init__
    self.refresh_session()
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 309, in refresh_session
    raise exc
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.10/site-packages/asyncChatGPT/asyncChatGPT.py", line 306, in refresh_session
    self.login(self.config["email"], self.config["password"])

docker-compose ERROR: No matching distribution found for openaiauth

I am getting this error when running docker-compose on Docker Desktop (Windows 10).
From the Docker logs:

#0 9.421 [pipenv.exceptions.InstallError]: ERROR: Could not find a version that satisfies the requirement openaiauth==0.0.6 (from versions: none)
#0 9.421 [pipenv.exceptions.InstallError]: ERROR: No matching distribution found for openaiauth==0.0.6

When I try to install it using pip, I get the same error:

>  pip install openaiauth
ERROR: Could not find a version that satisfies the requirement openaiauth (from versions: none)
ERROR: No matching distribution found for openaiauth

What am I missing?

4000 tokens for the completion?

"I am receiving an error that I have exceeded the maximum number of tokens. In this example, I have used 152 tokens for the prompt and 4000 for the completion. Why is the completion token usage so high?"

2023-03-18 11:25:18,531 - root - INFO - New message received from user @AlyxAbyss
2023-03-18 11:25:18,909 - openai - INFO - error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
2023-03-18 11:25:18,910 - root - ERROR - This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
Traceback (most recent call last):
  File "/home/vicky/VickyAI/chatgpt-telegram-bot/openai_helper.py", line 49, in get_chat_response
    response = openai.ChatCompletion.create(
  File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/vicky/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4152 tokens (152 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
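The completion budget here is 4000 tokens (presumably the MAX_TOKENS setting), and 152 + 4000 exceeds the 4097-token context window. Lowering MAX_TOKENS in .env avoids this; a hedged sketch of clamping it automatically:

MODEL_CONTEXT_LIMIT = 4097  # gpt-3.5-turbo, per the error message above

def clamp_max_tokens(prompt_tokens: int, configured_max_tokens: int) -> int:
    # Never request more completion tokens than the context window leaves room for.
    return min(configured_max_tokens, MODEL_CONTEXT_LIMIT - prompt_tokens)

# Example from this log: clamp_max_tokens(152, 4000) == 3945, and 152 + 3945 fits.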

Please update

revChatGPT now has a V2; please update, the old version is no longer usable.

About "Live answer updating as the bot types" problem

Hi!
After I updated to this feature, there were some problems: sometimes when ChatGPT types a long paragraph, it often fails to load after about half of the text (this cannot be fixed by telling ChatGPT "continue"; in that situation ChatGPT really is not finished).

This feature also makes the "typing..." indicator in the Telegram chat appear for a while and then disappear, and I don't know what causes that. Without this feature, the bot keeps showing "typing..." until the message is sent. Relatively speaking, this feature may still need some fixes, or there could be an option letting users choose whether to enable the "live" mode.
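For reference, a hedged sketch of how such live updating typically works (illustrative, not this bot's implementation; it assumes the openai 0.27 streaming API and python-telegram-bot v20): stream the completion and edit the Telegram message only about once per second, since Telegram rate-limits edits and rejects edits whose text has not changed.

import asyncio
import openai

async def stream_answer(bot, chat_id: int, message_id: int, messages: list[dict]) -> None:
    stream = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages, stream=True)
    text, sent, last_edit = "", "", 0.0
    for chunk in stream:
        text += chunk["choices"][0]["delta"].get("content", "")
        now = asyncio.get_running_loop().time()
        # Throttle edits to roughly one per second to avoid Telegram flood limits.
        if text != sent and now - last_edit > 1.0:
            await bot.edit_message_text(text, chat_id=chat_id, message_id=message_id)
            sent, last_edit = text, now
    if text and text != sent:
        # Flush whatever arrived after the last throttled edit.
        await bot.edit_message_text(text, chat_id=chat_id, message_id=message_id)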
