Comments (15)
It did it! Completed a task on v.3.5!
Here's a screencast if anybody is curious: https://www.loom.com/share/9bf888d9c925474899257d072f1a562f
from auto-gpt.
Wow! 🤯
I didn't know that was possible, great work guys! @stan-voo @Koobah
I'd tried getting 3.5 to work in the past, but it refused to acknowledge the prompt. Great idea asking it to parse its final output as JSON.
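The JSON-parsing trick mentioned above can be sketched roughly like this (the function name and fallback logic are illustrative, not the actual Auto-GPT code):

```python
import json

def parse_assistant_reply(reply: str):
    """Attempt to parse the model's final output as JSON.

    Returns None if the reply is not valid JSON, so the caller can
    re-prompt the model instead of crashing.
    """
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # Models sometimes wrap the JSON in extra prose; try to extract
        # the first {...} span before giving up.
        start, end = reply.find("{"), reply.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(reply[start:end + 1])
            except json.JSONDecodeError:
                pass
        return None
```

The fallback extraction matters in practice: GPT-3.5 is more likely than GPT-4 to prepend "Sure, here's the JSON:" to its answer.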
If you want to submit a pull request, that would be a huge help:
- Add a model argument (if none is provided, default to gpt-4).
- Create different prompt.txt files for each model (this appears to be necessary), and load the appropriate one when building the prompt.
- Add model to Config, set it on start-up, and read it everywhere gpt-4 is currently hard-coded.
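The three steps above could look something like this (argument names and the Config shape are illustrative, not the actual Auto-GPT implementation):

```python
import argparse

class Config:
    """Minimal config object holding the selected model."""
    def __init__(self):
        self.model = "gpt-4"  # default when no --model argument is given

def load_prompt(model: str) -> str:
    """Load a model-specific prompt file, e.g. prompt_gpt-3.5-turbo.txt
    (the naming scheme here is an assumption)."""
    with open(f"prompt_{model}.txt") as f:
        return f.read()

def parse_args(argv=None) -> Config:
    """Parse the command line and return a populated Config."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default="gpt-4",
                        help="model to use, e.g. gpt-3.5-turbo")
    args = parser.parse_args(argv)
    cfg = Config()
    cfg.model = args.model
    return cfg
```

Everything downstream would then read `cfg.model` instead of the literal string "gpt-4".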
GPT-3.5 is so much cheaper, allowing for much more testing and development.
In the future, we could even have GPT-4 instances of Auto-GPT cheaply spin up entire sub-instances of Auto-GPT running GPT-3.5 for multi-step tasks...
Guys, glad that it helped you. Just be very cautious when using my code. This is my very first programming attempt. I have no clue what I am doing :)
Btw, the idea I am working on is to create specialist GPT instances (project managers, marketers, operators), where each bot would have its own complex prompt replicating what a person in such a role would do.
I'm playing with v3.5 thanks to @Koobah fork: https://github.com/Koobah/Auto-GPT
It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28
Got stopped by this error:
Traceback (most recent call last):
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/main.py", line 134, in <module>
    assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/stas/dev/Auto-GPT/AutonomousAI/chat.py", line 50, in chat_with_ai
    response = openai.ChatCompletion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4121 tokens. Please reduce the length of the messages.
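The usual fix for this context-length error is to trim the oldest messages from the history before calling the API. A minimal sketch, using a crude ~4-characters-per-token estimate (a real fix would count tokens with tiktoken for the specific model):

```python
def trim_history(messages, token_limit=4097, reserve=1000):
    """Drop the oldest non-system messages until the (estimated)
    token count fits under the model's context limit.

    `reserve` leaves headroom for the model's reply. Uses a rough
    4-characters-per-token estimate rather than a real tokenizer.
    """
    def estimate(msgs):
        # ~4 chars per token, plus a small per-message overhead
        return sum(len(m["content"]) // 4 + 4 for m in msgs)

    trimmed = list(messages)
    # Keep the first message (the system prompt) and the newest turns.
    while len(trimmed) > 2 and estimate(trimmed) > token_limit - reserve:
        del trimmed[1]  # remove the oldest conversational message
    return trimmed
```

Called on `full_message_history` before `chat_with_ai` builds the request, this keeps the prompt under the 4097-token limit that GPT-3.5 enforces.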
Brilliant! Make sure you're up-to-date, as I'm adding features all the time.
Please share your journey with us over at Discussions, I'd love to see how things progress as you go.
Nice!
I think we should use both. I prefer dotenv, and was thrilled to see that PR, but was trying to keep my PR from fixing all the things at once. ;)
I agree, @jcp. I think this decision makes my PRs obsolete. Lessons learned: don't format on save (when contributing to OSS), keep PRs small, and address each issue separately.
You can apply for access to the GPT-4 API here:
https://openai.com/waitlist/gpt-4-api
Unfortunately, in my testing, when I've tried to run Auto-GPT using GPT-3.5 it does not function at all, as the model does not understand its task.
I don't understand it. I downloaded Koobah's fork with git clone https://github.com/Koobah/Auto-GPT, but how do I run it with a GPT-3.5 key? I can't find main.py in his repository.
Update: I got it running. You just need to create keys.py (see the README) in the AutonomousAI directory and run main.py!
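For anyone else setting this up, the keys.py module is just a file that exposes your API key. A minimal sketch (the variable name here is an assumption; check the fork's README for the exact name it expects):

```python
# AutonomousAI/keys.py -- minimal sketch; the variable name the fork
# actually imports may differ, so check its README before relying on this.
OPENAI_API_KEY = "sk-..."  # paste your OpenAI API key here
```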
Note that another "autonomous agent" experiment by Yohei (which is quite popular on Twitter, but not open source) has produced impressive demos using GPT-3. Yohei recently published the system's architecture, and I think there are things to learn from it, e.g. using a task queue, and using a vector store for long-term memory rather than files. But that system doesn't implement code execution yet, let alone code improvement.
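The task-queue idea can be sketched as a simple loop (a toy illustration of the architecture described, not Yohei's actual code; `execute` and `create_new_tasks` stand in for LLM calls):

```python
from collections import deque

def run_agent(initial_task, execute, create_new_tasks, max_steps=10):
    """Minimal task-queue agent loop: pop a task, execute it, and
    enqueue any follow-up tasks the result suggests.

    `execute(task)` and `create_new_tasks(task, result)` are callables
    standing in for the LLM calls a real agent would make.
    """
    queue = deque([initial_task])
    results = []
    steps = 0
    while queue and steps < max_steps:
        task = queue.popleft()
        result = execute(task)
        results.append((task, result))
        queue.extend(create_new_tasks(task, result))
        steps += 1
    return results
```

The `max_steps` cap matters: without it, a task generator that keeps spawning follow-ups loops (and bills) forever.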
@Taytay and @0xcha05, an alternative solution involves PR #39. Rather than using command line arguments, you could introduce an environment variable that overrides the default model. This approach offers more flexibility, especially as the codebase becomes more modular.
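An environment-variable override could look like this (the variable name SMART_LLM_MODEL is illustrative, not necessarily what PR #39 uses):

```python
import os

def get_model(default: str = "gpt-4") -> str:
    """Return the model name, letting an environment variable override
    the default. With python-dotenv, a .env file would populate
    os.environ before this runs."""
    return os.environ.get("SMART_LLM_MODEL", default)
```

This composes nicely with the dotenv approach discussed earlier: the same code path works whether the value comes from a .env file, the shell, or a deployment environment.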
Absolutely agree with the "keep PRs small" part.
Big pull requests are actually slowing things down right now, there's a lot to get through.
I would love to see local model support soon, because I now have my API set up and would love to use my own language model. OpenAI gets expensive.