Comments (14)

tedspare commented on July 21, 2024

Another approach could be to run history through an embeddings API, save the embeddings to a Vector DB, then do a lookup for relevant memories on each step.
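
For concreteness, here's a minimal, hedged sketch of that loop, assuming the OpenAI Python SDK (v1+) for embeddings and a plain in-memory list standing in for a real vector DB such as Pinecone; all names are illustrative:

```python
# Sketch: embed each history entry, keep the vectors, and look up the
# most relevant memories on each step. The in-memory list stands in for
# a real vector DB; a production version would upsert/query there instead.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

memories: list[tuple[str, np.ndarray]] = []

def remember(text: str) -> None:
    memories.append((text, embed(text)))

def relevant(query: str, k: int = 5) -> list[str]:
    q = embed(query)
    # Cosine similarity against every stored vector (a vector DB does this at scale).
    scored = sorted(
        memories,
        key=lambda m: float(np.dot(q, m[1]) / (np.linalg.norm(q) * np.linalg.norm(m[1]))),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]
```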

jantic commented on July 21, 2024

Another approach could be to run history through an embeddings API, save the embeddings to a Vector DB, then do a lookup for relevant memories on each step.

I really think this is an excellent idea. In fact, it might be a huge win: it would effectively give you an indefinite context window for "long-term" memory. Of course, the discarding of "irrelevant" info in any given call to the model will be imperfect, but I'd bet it'll work pretty well.

I was thinking about this myself this morning and wondered if anybody else had already mentioned it. Basically I see it as an "associative memory", much like what we have in our own minds. You could perhaps have the GPT model generate a few orthogonal short summaries of what it just output and responded to (top 5?), store these in the vector DB, and then retrieve the most relevant "memories" for subsequent calls via this same process.

So combine these "N closest" memories with the most recent ones, and I think you'll get a very effective long-term memory mechanism.

Is there anyone out there who sees problems with this idea or has a way to improve upon it? It seems super awesome to me...
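
A rough sketch of that scheme, reusing the hypothetical client, remember(), and relevant() helpers from the sketch further up; the model name and prompt wording are placeholders, not a tested recipe:

```python
# After each exchange, ask the model for a few short, distinct summaries
# and store each as a separate memory; then build the next prompt from
# the k closest memories plus the most recent messages.
def store_summaries(last_exchange: str, n: int = 5) -> None:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Write {n} short, mutually distinct summaries of this "
                       f"exchange, one per line:\n\n{last_exchange}",
        }],
    )
    for line in resp.choices[0].message.content.splitlines():
        if line.strip():
            remember(line.strip())

def build_context(task: str, recent: list[str], k: int = 5) -> str:
    # Combine the k closest memories with the most recent conversation.
    return ("Relevant memories:\n" + "\n".join(relevant(task, k))
            + "\n\nRecent conversation:\n" + "\n".join(recent))
```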

dschonholtz commented on July 21, 2024

@Torantulino I'm going to pick this up if it is ok with you.
Here is my laundry list:

  1. Store long-term memory in Pinecone: https://www.pinecone.io/. There are lots of options here; this one is fairly simple and is what babyagi uses: https://github.com/yoheinakajima/babyagi
  2. Pull in the n closest memories. Default n to 5, but make it configurable. (Do some experimentation on what seems most useful.)
  3. Make this memory object an optional class. Map the delete and add operations on the current memory dict object to Pinecone operations. I'll try to keep this fairly extensible so we could easily write classes with the same interface for different vector DBs (a sketch of such an interface follows at the end of this comment).
  4. Add a Pinecone API key to .env.template.
  5. Update the README to tell people how to use it.
  6. If no API key is specified, tell the user they are using local memory (the current implementation). Also support an explicit local-memory option.

Let me know if there's anything here you'd like me to change. I should have a working version of this by EOD tomorrow, EST.

I'd then hope to extend this to processing files in large repos too, and eventually to feed it into the self-improvement pipeline so the agent remembers which local files are relevant to large tasks.
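
A hypothetical sketch of the interface from items 3 and 6, with a local fallback when no API key is set; class and method names are illustrative, not the actual Auto-GPT code:

```python
# Pluggable memory backends behind one interface: a Pinecone-backed class
# and the current dict-based local memory would both implement it.
import os
from abc import ABC, abstractmethod

class MemoryProvider(ABC):
    @abstractmethod
    def add(self, key: str, text: str) -> None: ...
    @abstractmethod
    def delete(self, key: str) -> None: ...
    @abstractmethod
    def get_relevant(self, query: str, n: int = 5) -> list[str]: ...

class LocalMemory(MemoryProvider):
    """Dict-based behaviour, kept as the no-API-key fallback."""
    def __init__(self) -> None:
        self.store: dict[str, str] = {}
    def add(self, key: str, text: str) -> None:
        self.store[key] = text
    def delete(self, key: str) -> None:
        self.store.pop(key, None)
    def get_relevant(self, query: str, n: int = 5) -> list[str]:
        # Naive substring match; a PineconeMemory subclass would run a vector query here.
        return [t for t in self.store.values() if query.lower() in t.lower()][:n]

def get_memory() -> MemoryProvider:
    if os.getenv("PINECONE_API_KEY"):
        raise NotImplementedError("A PineconeMemory would be constructed here")
    return LocalMemory()
```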

alreadydone commented on July 21, 2024

Actually, maybe we can make GPT models aware of the memory tool using the system message, without the need for finetuning, since it's just a single simple tool. Something like:

You are a language model with limited memory (or context length), so you'll forget what was said 8,000 tokens (3,000 words?) earlier. However, you now have access to a key-value database that serves as your long-term memory. If you are about to forget something important, you may say <remember "k" "v"> to store it in the database, which you can later recall by saying <recall "k">.

I'm not experienced in prompt engineering, so there's definitely room for improvement. Notice that

In general, gpt-3.5-turbo-0301 does not pay strong attention to the system message, and therefore important instructions are often better placed in a user message.

so this should work better with GPT-4 than 3.5. If you have access, please try!
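
A minimal sketch of how a harness could interpret those tags, assuming a plain dict as the key-value database; only the tag syntax comes from the prompt above, the rest is illustrative:

```python
# Scan model output for <remember "k" "v"> and <recall "k"> tags,
# storing on remember and substituting the stored value on recall so
# it re-enters the context on the next turn.
import re

db: dict[str, str] = {}

REMEMBER = re.compile(r'<remember "([^"]+)" "([^"]+)">')
RECALL = re.compile(r'<recall "([^"]+)">')

def process_output(text: str) -> str:
    for key, value in REMEMBER.findall(text):
        db[key] = value
    return RECALL.sub(lambda m: db.get(m.group(1), "(nothing stored)"), text)

print(process_output('<remember "goal" "refactor memory"> ok'))
print(process_output('Current goal: <recall "goal">'))  # -> Current goal: refactor memory
```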

dschonholtz commented on July 21, 2024

This works. It's hard to test this kind of thing concretely, but anecdotally it seems much smarter now.
I'm implementing a way to actually track memory usage (the number of memory keys taken up, or the number of vectors in the DB) and output it between thoughts.
Then I'm going to do another pass with the debugger and, assuming it appears to be doing what I think it's doing, I'll put it up for review.
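
Something like this hypothetical report between thoughts (all field names illustrative):

```python
# A Pinecone backend could fill vector_count from its index stats,
# the local backend from len(store); approx_tokens is a rough estimate
# of how much retrieved memory could be injected into the context.
from dataclasses import dataclass

@dataclass
class MemoryStats:
    vector_count: int   # vectors (or keys) currently stored
    approx_tokens: int  # rough size of what retrieval may inject

def report(stats: MemoryStats) -> str:
    return f"MEMORY: {stats.vector_count} vectors, ~{stats.approx_tokens} tokens retrievable"

print(report(MemoryStats(vector_count=42, approx_tokens=1800)))
```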

claysauruswrecks commented on July 21, 2024

From what I was reading, you can take the context window and compress chunks at the rear into summaries.
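
A sketch of that rear-compression, with summarize() standing in for a model call and a rough 4-characters-per-token estimate; nothing here is the actual implementation:

```python
# When history exceeds the token budget, fold the oldest chunk into a
# running summary until the whole context fits again.
def approx_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, good enough for a budget check

def compress_rear(history: list[str], summary: str, budget: int,
                  summarize=lambda s: s[:200]) -> tuple[list[str], str]:
    while history and approx_tokens(summary + "".join(history)) > budget:
        oldest = history.pop(0)
        summary = summarize(summary + "\n" + oldest)  # real code would call the model
    return history, summary
```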

Torantulino commented on July 21, 2024

Interesting idea! This would expand short-term memory.

Currently Auto-GPT manages its own "Long-Term Memory", which is "pinned" to the start of the context.

Torantulino commented on July 21, 2024

I've been meaning to look into this.
Is it practical to regularly rebuild or add to an embedding index?

Forgive my ignorance, I've never used them.

tedspare commented on July 21, 2024

All good! Thanks for your reply. In my (limited) understanding, adding embeddings is no more than adding a row to a DB (but with vector data).
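
For illustration, adding a memory with the Pinecone client of that era (2.x) is a single upsert rather than a rebuild; method names may differ in newer releases, and the index name and vector here are made up:

```python
# Adding a memory is an incremental upsert, not a rebuild of the index.
import pinecone

pinecone.init(api_key="...", environment="us-east1-gcp")
index = pinecone.Index("auto-gpt-memory")

# One new memory = one new row: (id, vector, metadata).
index.upsert(vectors=[("memory-0001", [0.1, 0.2, 0.3], {"text": "example memory"})])
```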

alreadydone commented on July 21, 2024

I believe it's possible to simply use a key-value store as memory and make it available to Auto-GPT as a tool, letting the model itself decide when and what to read from and write to memory. Auto-GPT already has code execution implemented, so it has all Python functions available as tools, and this is just one more tool.

To make the model aware of the memory tool and good at utilizing it, we would have to finetune it (e.g. using the Toolformer approach; there are two open-source implementations, one more popular than the official one), and we would need to collect some usage data (AFAIK there isn't any paper or implementation that uses a memory tool yet). Finetuning is available for ChatGPT-3.5 but not GPT-4, but I think we'll need to finetune anyway if we want Auto-GPT to create new tools and self-improve. We could also use an open model (many of them have LoRA finetuning implementations); these are less powerful, but we could expose the GPT-4 API to the open model and train it to use that API as a tool, so the whole system would not be less powerful.

dschonholtz commented on July 21, 2024

See pull: #122

Pwuts commented on July 21, 2024

Is this resolved with the output of --debug?

Boostrix commented on July 21, 2024

I'm implementing a way to actually track memory usage (the number of memory keys taken up, or the number of vectors in the DB) and output it between thoughts.

Auto-GPT should be aware of its short- and long-term memory usage so that it knows when something is going to be deleted from its memory due to context limits.

This would ideally be part of a "quota"-like system, so that sub-agents could be managed by agents higher up in the chain whenever there is a soft or hard quota/constraint violation, as per #3466. A rough sketch of such a quota follows.
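
A hypothetical sketch of such a quota object; all names and thresholds are illustrative:

```python
# An agent gives each sub-agent soft/hard memory limits and reacts when
# they are crossed: soft means the sub-agent should compress or prune,
# hard means the parent agent must intervene before data is silently lost.
from dataclasses import dataclass

@dataclass
class MemoryQuota:
    soft_limit: int  # tokens: warn / start summarizing
    hard_limit: int  # tokens: refuse / evict

    def check(self, used: int) -> str:
        if used >= self.hard_limit:
            return "hard"
        if used >= self.soft_limit:
            return "soft"
        return "ok"

quota = MemoryQuota(soft_limit=6000, hard_limit=8000)
print(quota.check(6500))  # -> "soft"
```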

github-actions commented on July 21, 2024

This issue was closed automatically because it has been stale for 10 days with no activity.
