Comments (8)
Hi @k3it, good catch. Yeah, I think it's showing the sum of the prompt tokens used within a conversation.
I would say that's the expected behavior, since I'm not doing any calculations myself, just printing the usage
object that is returned by the API. Anyway I will keep this issue open in case anyone has more knowledge on this!
from chatgpt-telegram-bot.
I did some more checking and it looks like "Tokens used" is an individual counter for each prompt/response transaction, not a cumulative total for the current conversation. So each new question within the same conversation becomes more expensive to ask.
After a long session without a /reset it may be cheaper to buy and weigh the banana yourself instead of asking the bot about it :)
Edit: I see now that the message query and answer history is added to each completion. That explains the growth, I think.
chatgpt-telegram-bot/openai_helper.py, lines 32 to 37 (commit 71209d6)
chatgpt-telegram-bot/openai_helper.py, line 56 (commit 71209d6)
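The growth described above can be sketched in a few lines. This is not the bot's actual code, just an illustration of why per-request prompt tokens climb when the full history is resent with every completion; token counts are crudely approximated as word counts.

```python
# Sketch (not the bot's implementation): each request's prompt contains the
# whole conversation so far, so prompt size grows with every turn.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer such as tiktoken.
    return len(text.split())

history = []

def ask(question: str, answer: str) -> int:
    """Record one exchange and return the prompt size of this request."""
    history.append(question)
    prompt_tokens = sum(count_tokens(m) for m in history)
    history.append(answer)  # the answer joins the history for the next turn
    return prompt_tokens

costs = [
    ask("How much does a banana weigh?", "About 120 grams on average."),
    ask("And a watermelon?", "Typically 9 kilograms."),
    ask("Which is heavier?", "The watermelon."),
]
print(costs)  # [6, 14, 20] -- strictly increasing prompt sizes
```

Even though each new question is short, its prompt carries every previous question and answer, so the per-request cost keeps rising until the history is reset or trimmed.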
Edit 2: here is what the bot itself thinks about this (might not be accurate?):
The GPT completion API can remember the context of the conversation by itself using its internal memory, without needing to send the full message history back each time a new message is sent.
When you create a new completion request using the GPT API, you can include some context that the endpoint can use to better understand the request. This context can come from the last few messages in the conversation, as well as any additional information you provide. The GPT model can then use this context to generate a more accurate and relevant response.
Indeed, it looks like sending the history also consumes tokens.
Here's what the docs say:
Including the conversation history helps when user instructions refer to prior messages. [...] Because the models have no memory of past requests, all relevant information must be supplied via the conversation. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way.
So I agree that the history needs to be truncated or summarized somehow. I commented on your PR with possible solutions.
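One way to truncate, as a minimal sketch rather than the bot's actual fix: drop the oldest exchanges until the conversation fits a token budget, while always keeping the system message. The `truncate_history` helper and the word-count tokenizer here are hypothetical.

```python
# Sketch: trim the oldest turns so the conversation fits a token budget.

def count_tokens(text: str) -> int:
    # Crude approximation; a real bot would use the model's tokenizer.
    return len(text.split())

def truncate_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep the first (system) message; drop the oldest turns until we fit."""
    system, rest = messages[:1], messages[1:]
    while rest and sum(count_tokens(m["content"]) for m in system + rest) > max_tokens:
        rest.pop(0)  # discard the oldest turn first
    return system + rest

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "old question about bananas"},
    {"role": "assistant", "content": "old answer"},
    {"role": "user", "content": "newest question"},
]
trimmed = truncate_history(messages, max_tokens=10)
# The oldest user turn is dropped; system message and newest question survive.
```

Summarizing the dropped turns instead of discarding them would preserve more context at the cost of an extra API call.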
@k3it Should be fixed in 946f6a4. Feel free to reopen if the issue persists.
Hello, and thanks a lot for the updates and fixes.
I've seen embeddings applied in another bot (link below) to solve the token issue. Would it be a good idea to implement this to further reduce token consumption?
https://github.com/LagPixelLOL/ChatGPTCLIBot
Hi @em108, I'm not familiar with embeddings or how they can be implemented in Python. What are the advantages?
From what I've gathered from multiple sources, including the article below, embeddings can provide long-term memory, a way to store conversation data. Depending on the application, they can cost roughly 5 to 9 times less than sending the full chat history.
Article:
https://towardsdatascience.com/generative-question-answering-with-long-term-memory-c280e237b144
Also an example of a notebook utilizing embeddings:
https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb
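The core idea in that notebook can be sketched without any API calls: embed each past exchange as a vector, and for a new question send only the top-k most similar exchanges as context instead of the entire history. The vectors below are hypothetical hand-made stand-ins; in practice each would come from an embeddings endpoint.

```python
# Sketch of embedding-based retrieval: send only the most relevant past
# exchanges as context, instead of the whole conversation history.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical stored exchanges; real vectors would come from an
# embeddings API, with hundreds or thousands of dimensions.
memory = [
    ("talked about bananas", [1.0, 0.1, 0.0]),
    ("talked about the weather", [0.0, 1.0, 0.2]),
    ("talked about fruit prices", [0.9, 0.2, 0.1]),
]

def retrieve(query_vec, k=2):
    """Return the k stored exchanges most similar to the query vector."""
    ranked = sorted(memory, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

context = retrieve([1.0, 0.0, 0.0])  # a query "about bananas"
# -> ["talked about bananas", "talked about fruit prices"]
```

Only the retrieved snippets would then be prepended to the prompt, so the prompt size stays roughly constant no matter how long the conversation runs.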
Very interesting, thanks @em108. It would be great, although I currently don't have the time to implement this.