
Comments (15)

stan-voo avatar stan-voo commented on July 2, 2024 6

It did it! Completed a task on v.3.5!
Here's a screencast if anybody is curious: https://www.loom.com/share/9bf888d9c925474899257d072f1a562f

from auto-gpt.

Torantulino avatar Torantulino commented on July 2, 2024 6

Wow! 🤯
I didn't know that was possible, great work guys! @stan-voo @Koobah

I'd tried getting 3.5 to work in the past and it refused to acknowledge the prompt. Great idea asking it to parse its final output as JSON.
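The JSON trick could look something like this — a minimal sketch, with a fallback for when GPT-3.5 wraps the JSON in extra prose (the `raw` fallback key is an assumption, not Auto-GPT's actual schema):

```python
import json

def parse_reply(reply: str):
    """Parse the model's final output as JSON, falling back gracefully."""
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # GPT-3.5 sometimes wraps the JSON in extra prose; try to extract
        # the outermost {...} span before giving up.
        start, end = reply.find("{"), reply.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(reply[start:end + 1])
            except json.JSONDecodeError:
                pass
        return {"raw": reply}
```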

If you want to submit a pull request that would be a huge help!:

  • Add a model argument (if none is provided, default to gpt-4).
  • Create different prompt.txt files for each model (this appears to be necessary), and load the appropriate one when building the prompt.
  • Add model to Config, set it on start-up, and read it everywhere gpt-4 is currently hard-coded.

GPT3.5 is so much cheaper, allowing for much more testing and development.
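The three steps above could be wired together roughly like this — a sketch only; the file-naming scheme and the `Config` shape are assumptions based on the discussion, not the actual Auto-GPT layout:

```python
import argparse
import os

class Config:
    """Holds the model chosen at start-up; read it wherever gpt-4 is hard-coded today."""
    def __init__(self, model: str = "gpt-4"):
        self.model = model

def load_prompt(model: str) -> str:
    """Load a per-model prompt file (e.g. prompt_gpt-3.5-turbo.txt), falling back to prompt.txt."""
    path = f"prompt_{model}.txt"
    if not os.path.exists(path):
        path = "prompt.txt"  # default prompt file
    with open(path) as f:
        return f.read()

def parse_args(argv=None) -> Config:
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default="gpt-4",
                        help="chat model to use (default: gpt-4)")
    args = parser.parse_args(argv)
    return Config(model=args.model)
```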

In future we could even get GPT4 instances of Auto-GPT to cheaply spin up entire sub-instances of Auto-GPT running GPT3.5 for multiple step tasks...


Koobah avatar Koobah commented on July 2, 2024 3

Guys, glad that it helped you. Just be very cautious when using my code. This is my very first programming attempt. I have no clue what I am doing :)


Koobah avatar Koobah commented on July 2, 2024 2

Btw, the idea I am working on is to create specialist GPT instances (project managers, marketers, operators), where the bot would have its own complex prompt replicating what a person in such a role would do.


stan-voo avatar stan-voo commented on July 2, 2024 1

I'm playing with v3.5 thanks to @Koobah's fork: https://github.com/Koobah/Auto-GPT
It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28

Got stopped by this error:

Traceback (most recent call last):
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/main.py", line 134, in <module>
    assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/chat.py", line 50, in chat_with_ai
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4121 tokens. Please reduce the length of the messages.
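One way to avoid this error is to trim the oldest conversation turns before each API call so the prompt fits the 4097-token window. A minimal sketch, approximating tokens as ~4 characters each (a real fix would use a tokenizer such as tiktoken); the function name and the reply budget are assumptions:

```python
def trim_history(messages, token_limit=4097, reply_budget=1000):
    """Drop the oldest non-system messages until the prompt fits the context window.

    Messages follow the OpenAI chat format: [{"role": ..., "content": ...}, ...].
    Token counts are crudely estimated as len(content) // 4 plus per-message overhead.
    """
    def estimate(msgs):
        return sum(len(m["content"]) // 4 + 4 for m in msgs)

    budget = token_limit - reply_budget
    msgs = list(messages)
    # Keep the system prompt (index 0); trim the oldest turns after it.
    while estimate(msgs) > budget and len(msgs) > 2:
        del msgs[1]
    return msgs
```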


Torantulino avatar Torantulino commented on July 2, 2024 1

Brilliant! Make sure you're up-to-date, as I'm adding features all the time.

Please share your journey with us over at Discussions, I'd love to see how things progress as you go.


0xcha05 avatar 0xcha05 commented on July 2, 2024 1

#19


Taytay avatar Taytay commented on July 2, 2024 1

Nice!

I think we should use both. I prefer dotenv, and was thrilled to see that PR, but was trying to keep my PR from fixing all the things at once. ;)


0xcha05 avatar 0xcha05 commented on July 2, 2024 1

I agree, @jcp. I think this decision makes my PRs not useful anymore. Lessons learned: don't format on save (when contributing to OSS), keep PRs small, and address each issue separately.


Torantulino avatar Torantulino commented on July 2, 2024

You can apply for access to the GPT-4 API here:

https://openai.com/waitlist/gpt-4-api

Unfortunately, in my testing, when I've tried to run Auto-GPT using GPT-3.5 it does not function at all, as the model does not understand its task.


xSNYPSx avatar xSNYPSx commented on July 2, 2024

(quoting stan-voo's comment above, including the same context-length traceback)

I don't understand it. I downloaded Koobah's fork with git clone https://github.com/Koobah/Auto-GPT, but how do I run it with a GPT-3.5 key? I can't find main.py in his repository.

Update: I got it running. You just need to create keys.py (read the readme) in the AutonomousAI path and run main.py!


alreadydone avatar alreadydone commented on July 2, 2024

Note that another "autonomous agent" experiment by Yohei (which is quite popular on Twitter, but not open source) has produced impressive demos using GPT-3. Yohei has recently publicized the system's architecture, and I think there are things to learn from it, e.g. using a task queue, and using a vector store for long-term memory rather than files. But that system doesn't implement code execution yet, let alone code improvement.
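The task-queue idea from Yohei's architecture can be sketched in a few lines. The `create_tasks` and `execute` callables here are hypothetical stand-ins for LLM calls, and the list-based memory stands in for the vector store used in the real system:

```python
from collections import deque

def run_agent(objective, create_tasks, execute):
    """Minimal task-queue loop: execute tasks, store results, enqueue follow-ups."""
    queue = deque(create_tasks(objective, result=None))  # seed the queue
    memory = []
    while queue:
        task = queue.popleft()
        result = execute(task)
        memory.append((task, result))          # simple long-term memory
        queue.extend(create_tasks(objective, result))  # follow-up tasks, if any
    return memory
```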


jcp avatar jcp commented on July 2, 2024

@Taytay and @0xcha05, an alternative solution involves PR #39. Rather than using command line arguments, you could introduce an environment variable that overrides the default model. This approach offers more flexibility, especially as the codebase becomes more modular.
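The environment-variable approach could look like this — a sketch; the variable name SMART_LLM_MODEL is an assumption for illustration, not necessarily what PR #39 uses:

```python
import os

def get_model(default: str = "gpt-4") -> str:
    """Return the chat model, letting an environment variable override the default."""
    # SMART_LLM_MODEL is a hypothetical variable name.
    return os.environ.get("SMART_LLM_MODEL", default)
```

Combined with python-dotenv's `load_dotenv()` at start-up, the override could live in a `.env` file rather than the shell environment.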


Torantulino avatar Torantulino commented on July 2, 2024

Absolutely agree with the "keep PRs small" part.
Big pull requests are actually slowing things down right now; there's a lot to get through.


PurrsianMilkman avatar PurrsianMilkman commented on July 2, 2024

I would love to see local model support soon, because I now have my API set up and would love to use my own language model. OpenAI gets expensive.

