Comments (94)

Torantulino avatar Torantulino commented on June 22, 2024 40

I like the idea of running it offline too, we're looking into it! It would make Auto-GPT that much more accessible.

Thanks for the outstanding interest.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024 20

I might be able to swing that. Let's see if this merge gets approved. I'm also looking at implementing GPT4all.

from auto-gpt.

ResourceHog avatar ResourceHog commented on June 22, 2024 18

Would be huge if it can run llama.cpp locally.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024 12

They moved the API call to GPT-4 to an external library in main.py, but there are still some scripts that call openai directly, like chat.py, browse.py etc.

GPT4all doesn't support x64 architecture. I also tried some APIs on hugging face, but it seems that it truncates responses on the free API endpoints.

I just converted BabyAGI to Oobabooga, but it's untested. Should be getting to that tonight. If it works, I will start working on porting AutoGPT to Oobabooga as well. The nice thing about this method is that it allows for local or remote hosting, and can handle many different language models without much issue.

Hugging face should work, though. I need to review Microsoft/Jarvis. They make heavy use of HF APIs.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 11

Ooba's UI is a lot of overhead just to send and receive requests from a different model.

AFAIK ooba supports two types of models: HuggingFace models and GGML (llama.cpp) models (like GPT4All). The former run through the HuggingFace libraries and the latter through these Python bindings for llama.cpp: https://github.com/thomasantony/llamacpp-python

Adding even basic support for just one of these would surely bring in waves of developers who want local models and who would then contribute to improving Auto-GPT.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024 7

There's an issue with this. Auto-GPT relies on specifically structured prompts in order to function correctly. Llama does not do well at producing responses structured in the exact format that is required. Vicuna does a much better job. It's not perfect, but could probably get there with some fine-tuning.

I have a fork of an older version of Auto-GPT that I am planning to hook up to vicuna. Right now, I am waiting on Oobabooga to fix a bug in their API. I've been working with BabyAGI at the moment because it is simpler than AutoGPT. Once BabyAGI is working, I will migrate the changes to AutoGPT as well.

from auto-gpt.

drusepth avatar drusepth commented on June 22, 2024 7

Of note, Basaran didn't support Llama-based models until today, but today's release finally adds support for them.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 6

well in the meantime i think i'll fork it to use llama instead, i got gpt4 access but i like the idea of being able to let it run for very long without worrying about cost or api overuse.

I think a lot of people want this but just don't know it yet. There are lots of interesting use cases that would rack up a huge OpenAI bill but that LLaMA-30B or 65B can probably handle fine, for just the cost of powering a 150-watt, $200 Nvidia Tesla P40.

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024 6

✨ FULL GUIDE POSTED on how to get Auto-GPT running with llama.cpp via gpt-llama.cpp in keldenl/gpt-llama.cpp#2 (comment).

Huge shoutout to @DGdev91 for the PR and hope it gets merged soon!

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 5

With minimal finetuning LLaMA can easily do better (yes better*) than GPT-4. Finetuning goes a long way and LLaMA is a very capable base model. The Vicuna dataset (ShareGPT) is available for finetuning here: https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main/HTML_cleaned_raw_dataset

The ideal finetuning would be based on a dataset of GPT-4's interactions with Auto-GPT though.

*To be fair, GPT-4 could do better than it already does "out of the box" with a few tweaks like using embeddings, but that is beside the point.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024 5

@MarkSchmidty We absolutely could export the prompts and responses from autogpt to a file and use that for fine-tuning Vicuna. I don't have a GPU, however, so I'm not able to perform the operation, and I don't have the money for GPT-4.

from auto-gpt.

drusepth avatar drusepth commented on June 22, 2024 5

Been following this thread while I implement local models in babyagi, but just wanted to pop in and voice my desire to see local models in this project, too. OpenAI is easy to use and implement, but local models have huge benefits in price and customization which seem paramount to optimize for in projects like these.

from auto-gpt.

peakji avatar peakji commented on June 22, 2024 5

Basaran's author here, thanks for recommending our project!

Compared to other LLaMA-specific projects, the advantage of integrating Basaran is that it allows users to freely choose various models from the past and future, not just limited to LLaMA variants. We believe this is particularly important for non-English speaking users, as currently LLaMA is mainly trained on Latin languages.

For compatibility, currently you only need to modify one environment variable to migrate from the OpenAI text completion API to Basaran, without modifying a single line of code. In the next few weeks, we will also add message template support for models that have undergone instruction tuning, making them fully compatible with the OpenAI Chat API.

from auto-gpt.

mudler avatar mudler commented on June 22, 2024 5

for anyone that wants to try autogpt locally, I've created an example for LocalAI that you can run easily with docker-compose in just one command: https://github.com/go-skynet/LocalAI/tree/master/examples/autoGPT

there is no need to make any changes to AutoGPT, as it is enough to set OPENAI_API_BASE as an environment variable pointing to the LocalAI instance.
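For anyone wiring this up by hand, a minimal sketch of the same idea from Python, assuming the openai 0.x client that Auto-GPT used at the time (the URL and model name below are placeholders for whatever your local server exposes):

import os
import openai

# Point the client at a local OpenAI-compatible server (e.g. a LocalAI instance).
openai.api_base = os.getenv("OPENAI_API_BASE", "http://localhost:8080/v1")
openai.api_key = os.getenv("OPENAI_API_KEY", "sk-local")  # most local servers ignore the key

response = openai.ChatCompletion.create(
    model="ggml-gpt4all-j",  # placeholder: any model your local server serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["choices"][0]["message"]["content"])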

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 4

GPT4all supports x64 and every architecture llama.cpp supports, which is every architecture (even non-POSIX, and WebAssembly). Their motto is "Can it run Doom LLaMA?" for a reason.

Ooba supports GPT4all (and all llama.cpp ggml models), since it packages llama.cpp and the llamacpp Python bindings library. So porting it to ooba would effectively resolve this.

from auto-gpt.

Torantulino avatar Torantulino commented on June 22, 2024 3

Excellent work. Lots of people are asking for this; submit a pull request!

In order to fully support gpt3.5 (and other models) we need to harden the prompt.

@Koobah had some success by adding this line to the end of prompt.txt:

Before submitting the response, simulate parsing the response with Python json.loads. Don't submit unless it can be parsed.

This would also help out #21

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024 3

well in the meantime i think i'll fork it to use llama instead, i got gpt4 access but i like the idea of being able to let it run for very long without worrying about cost or api overuse.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 3

@MarkSchmidty I've been looking for one. We've tried llama.cpp, oobabooga, and huggingface. Is there another option you have working with vicuna?

https://github.com/hyperonym/basaran works with all HuggingFace format GPU models.
"Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models."


https://github.com/alexanderatallah/Alpaca-Turbo is a local server implementation of the OpenAI API for Alpaca using alpaca.cpp, which can be easily modified for other LLaMA-based models. Like Basaran, it's a drop-in replacement endpoint for the OpenAI API anywhere the OpenAI API works.

Are we talking local server or?

Yes

Maybe this? Maybe I have seen one with more stars but I don't remember where.

That looks similar to alpaca-turbo.

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024 3

i made a drop-in replacement for openai via llama.cpp (had some issues with the python binding llama-cpp-python). almost got it working with autogpt, just need to fully support same-dimension embeddings, if anybody has any pointers pls lmk, i got auto-gpt working for a couple cycles but it's really inconsistent

https://github.com/keldenl/gpt-llama.cpp

update: i JUST got autogpt working with llama.cpp! see keldenl/gpt-llama.cpp#2 (comment)

i'm using vicuna for embeddings and generation but it's struggling a bit to generate proper commands to not fall into an infinite loop of attempting to fix itself X( will look into this tmr but super exciting cuz i got the embeddings working! (turns out it was a bug on my end lol)

had to make some changes to autogpt (add base_url to openai_base_url and adjust the dimensions of the vector), but otherwise left it alone

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 3

@keldenl Have you heard of SuperCOT? It's a LoRA made for use with LangChain. It may work well merged with Vicuna as an Auto-GPT backend.

from auto-gpt.

Pwuts avatar Pwuts commented on June 22, 2024 3

Coming soon... :)

from auto-gpt.

github-actions avatar github-actions commented on June 22, 2024 3

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

from auto-gpt.

ntindle avatar ntindle commented on June 22, 2024 3

We've made lots of progress on this front recently with @Pwuts's work on additional providers. We can now begin the work on more open providers. We will likely be starting with llamafile and then moving out from there.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024 2

@alkeryn I managed to perform this using the sentence_transformers library. This appears to work for Vicuna and Pinecone, but you have to change your index dimensions from 1536 to 768 on Pinecone. I think the model dictates the index dimensions. I couldn't find a way to adjust the dimensions otherwise.


from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/LaBSE')

def get_ada_embedding(text):
    # Get the embedding for the given text
    embedding = model.encode([text])
    return embedding[0]
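If it helps anyone reproducing this, here is a rough sketch of recreating the Pinecone index with the smaller dimension, assuming the pinecone-client 2.x API (the key, environment, and index name are placeholders):

import pinecone

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="us-east1-gcp")  # placeholders

index_name = "auto-gpt"  # placeholder index name
if index_name in pinecone.list_indexes():
    pinecone.delete_index(index_name)  # dimension can't be changed on an existing index
pinecone.create_index(index_name, dimension=768, metric="cosine")  # 768 matches LaBSE output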

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 2

Neither of us can read @Torantulino's mind.

But if you're right and people want this functionality as much as I suspect they do, either a wave of enthusiastic support for it will sway his mind or the interest will turn into a fork more capable than Auto-GPT (due to the benefits outlined in #348).

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024 2

Nice. All three of these look promising. I'll test sometime this week.

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024 2

@MarkSchmidty wait that's great, if it works we can basically run it on local models with no modifications whatsoever.

from auto-gpt.

DGdev91 avatar DGdev91 commented on June 22, 2024 2

Thanks to @keldenl for his work!

I made a pull request for the changes he mentioned: #2594

from auto-gpt.

chymian avatar chymian commented on June 22, 2024 2

One Proxy to rule them all!

https://github.com/BerriAI/litellm/

is an API proxy with a vast choice of backends, like Replicate, OpenAI, Petals, ...
and it works like a charm.
pls implement!

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 1

There are already OpenAI API server clones for local models like LLaMA/Alpaca/Vicuna/GPT4All. All you need to do is change the endpoint to point at one. It's very much feasible.

from auto-gpt.

st4s1k avatar st4s1k commented on June 22, 2024 1

Would be huge if it can run llama.cpp locally.

  • also train it (fine tune it) in parallel with communicating with it (for locally installed LLMs)
  • having backup checkpoints

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 1

StableLM was released today. It has a 4096-token context window. StableLM 15B will be trained on 1.5T tokens (more than LLaMA).

from auto-gpt.

InfernalDread avatar InfernalDread commented on June 22, 2024 1

@keldenl Fantastic! Once again, thank you for all your efforts in making this dream a reality!

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024 1

Quick follow-up: although gpt-llama.cpp worked with auto-gpt at the time of posting the guide, there were various bugs that kept it from working consistently. I've gone ahead and fixed all the ones I could find yesterday and today, and it's running quite well! (full details here, but it runs continuously forever: keldenl/gpt-llama.cpp#2 (comment))

now it's down to specifically getting better responses from vicuna :')

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024 1

LoRAs are always optional. They're small files which modify a model. In this case, SuperCOT is a LoRA which modifies a model to make it work with LangChain (and Auto-GPT) better. Any model will work with Auto-GPT. But some work better than others. You can use the SuperCOT LoRAs with any LLaMA based model (including the two you listed). But they're in different formats if they're for cpp or for GPU models. (The different formats are clearly labeled at those links.)

from auto-gpt.

manyMachines avatar manyMachines commented on June 22, 2024 1

Documentation for Google's PaLM APIs:

https://developers.generativeai.google/api

Would be nice to optionally use their embeddings as well in Auto-GPT.

Currently, for preview users of text-bison-001, the input token limit is 8196, the output limit is 1024, and it is rate limited to 30 requests a minute.

from auto-gpt.

lc0rp avatar lc0rp commented on June 22, 2024 1

Linking similar LLM-related comments here and closing them:

GPT4ALL:

from auto-gpt.

Boostrix avatar Boostrix commented on June 22, 2024 1

The ini file is a bit funny given how everything else already uses yaml 👍😉

from auto-gpt.

isaaclepes avatar isaaclepes commented on June 22, 2024 1

I still dream of a day when we can use Petals.dev's API for distributed processing. It even allows you to make your own private swarm. I am picking up old, free computers off Craigslist and adding cheap/free GPUs to them for my private swarm.

from auto-gpt.

DGdev91 avatar DGdev91 commented on June 22, 2024 1

I'm new to this. I'm wondering if autogpt is able to call a custom URL (other than the OpenAI API) to get responses? So that we can use other serving systems like TGI or vLLM to serve our own LLM.

Oh, is this thread still open?

Well, it's possible now. Just set the OPENAI_API_BASE variable and you can use any service which is compliant with OpenAI's API.

...But local LLMs aren't as good as GPT-4 and I never got much out of them, even though it is technically possible to use them. So I gave up some time ago.

Maybe some recent long-context LLMs like LLongMA and so on can actually work, but I never tried that.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

I see you looking over my shoulder. Thoughts?

I've got a friend who is going to clone the branch and test for me. (hopefully) I don't have a working environment atm.

from auto-gpt.

Koobah avatar Koobah commented on June 22, 2024

I also changed the user prompt from NEXT COMMAND to GENERATE NEXT COMMAND JSON, basically reminding it to use JSON whenever possible.
It's still not 100% working, though.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

https://github.com/DataBassGit/Auto-GPT/blob/master/scripts/ai_function_lib.py

@Koobah This is basically what I'm working with atm. I think we can probably add a verify_json function to a gpt-3.5 segment of that function.
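For illustration, a hypothetical sketch of what such a verify_json helper could look like (this is not code from that branch; the name and behavior are assumptions):

import json
from typing import Optional

def verify_json(response_text: str) -> Optional[dict]:
    # Return the parsed dict if the model's response is valid JSON,
    # otherwise None so the caller can re-prompt (e.g. via gpt-3.5).
    try:
        parsed = json.loads(response_text)
    except json.JSONDecodeError:
        return None
    return parsed if isinstance(parsed, dict) else None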

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

Pull request on this is submitted. I'm going to start looking into more models and platforms that can be incorporated.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

@DataBassGit I see that PR got closed. What's the status of your fork?

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

The Python Client for gpt4all only supports x86 Linux and ARM Mac.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

I'm running it on x64 right now.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

To be clear, the x86 architecture for gpt4all should really be called x86/x64. It supports either.

But none of the gpt4all libraries are required to run inference with gpt4all. They have their own fork to load the pre-prompt automatically. But you can load the same pre-prompt with one click in ooba's UI with standard llama.cpp and the regular llama.cpp python bindings as a back-end. You don't need anything but the model .bin and ooba's webui repo.

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024

wouldn't it be simpler to just make an api call to ooba's gui instead of managing the loading of models ?
it may be easier to just have a standardized api so you don't have to care about implementation details.

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024

@MarkSchmidty i see what you mean, though, maybe it would be simpler to make a separate project that exposes a standardized api and maybe some extensibility through plugins and not much else, so that other projects can just use the api without having to care about how to implement the various models and techniques.

either way in the long run i think it may be better if we have a standard api that was well thought out, just like language servers made our editors nicer, it would be nice to have a llm or even ai standardized api.

if we could avoid fragmentation that'd be great and there is no better time than now to do so.

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024

@DataBassGit yes this is what i found while trying to implement it, and that's before the pinecone update; after the pinecone update there is an additional use of openai to generate embeddings, which would also need to be handled differently.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

@alkeryn I managed to perform this using the sentence_transformers library. This appears to work for Vicuna and Pinecone, but you have to change your index dimensions from 1536 to 768 on Pinecone. I think the model dictates the index dimensions. I couldn't find a way to adjust the dimensions otherwise.


from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/LaBSE')

def get_ada_embedding(text):
    # Get the embedding for the given text
    embedding = model.encode([text])
    return embedding[0]

Awesome!

There are offline embedding replacements for pinecone that might be more ideal. For example, https://github.com/wawawario2/long_term_memory is a fork of ooba which produces and stores embeddings locally using zarr and Numpy. See https://github.com/wawawario2/long_term_memory#how-it-works-behind-the-scenes
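As a rough illustration of the idea (not the long_term_memory implementation itself), a local replacement can be as simple as a NumPy matrix searched by cosine similarity; the class and names below are made up for the sketch:

import numpy as np

class LocalMemory:
    # Toy stand-in for Pinecone: keeps normalised embeddings in a NumPy matrix.
    def __init__(self, dim=768):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.texts = []

    def add(self, text, embedding):
        vec = np.asarray(embedding, dtype=np.float32)
        vec = vec / np.linalg.norm(vec)               # normalise for cosine similarity
        self.vectors = np.vstack([self.vectors, vec])
        self.texts.append(text)

    def query(self, embedding, k=5):
        if not self.texts:
            return []
        vec = np.asarray(embedding, dtype=np.float32)
        vec = vec / np.linalg.norm(vec)
        scores = self.vectors @ vec                   # cosine similarity via dot product
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]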

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

Unfortunately, that would take a lot of chopping to apply to what I am using it for. This is designed for the webui, which I am not using. We're loading ooba in API mode so no --chat or --cai-chat flag.

python server.py --auto-devices --listen --no-stream

This is how I am initiating the server.

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

Right, it would have to be re-implemented specifically for Auto-GPT. I just thought I'd point out that it is a future possibility.

I suppose local embeddings is a separate issue / feature we can look into.

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

I don't expect that @Torantulino will implement local anything. That opens a window for lots of bugs and extra tech debt that he doesn't need. My intention was only to add the capacity for others to replace the api library with one of their own choosing, with the understanding that it's not supported. Thus offloading the api calls to a separate library would give us the ability to build an API interface for whatever we needed, without him needing to support it.
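To illustrate what offloading the API calls into a swappable layer could look like, here is a hypothetical minimal interface; the names are made up for the sketch and this is not Auto-GPT code:

from abc import ABC, abstractmethod

class ChatBackend(ABC):
    @abstractmethod
    def chat(self, messages, model):
        # Return the assistant's reply text for a list of chat messages.
        ...

class OpenAIBackend(ChatBackend):
    def chat(self, messages, model="gpt-3.5-turbo"):
        import openai  # assumes the openai 0.x client
        resp = openai.ChatCompletion.create(model=model, messages=messages)
        return resp["choices"][0]["message"]["content"]

# A local backend (Oobabooga, gpt-llama.cpp, Basaran, ...) would implement the same
# single method, and the rest of the agent never has to know which one is in use.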

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

Been following this thread while I implement local models in babyagi, but just wanted to pop in and voice my desire to see local models in this project, too. OpenAI is easy to use and implement, but local models have huge benefits in price and customization which seem paramount to optimize for in projects like these.

Local embeddings have these benefits and more as well.

If you like the sound of that, check out my meta-feature request #348, Fully Local/Offline Auto-GPT, and give it a boost. :)


from auto-gpt.

9cento avatar 9cento commented on June 22, 2024

+1

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

I've been working on making an offline port of BabyAGI because it's a much simpler project. One of the issues with offline ports is that each model has a different input format for interpretation. It can also change based on the host API interface. Does it take plain text? JSON?

You also have different token windows per model and different dimensions per memory index. I'm not sure that building a universal agi that can interface with many different models without having to rebuild from scratch is feasible.
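One way to keep that per-model variation manageable is a small profile per backend. Here is a hypothetical sketch of what that record might hold; the field names are made up, and the numbers simply reflect the models discussed in this thread:

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelProfile:
    name: str
    context_window: int                         # max tokens the model accepts
    embedding_dim: int                          # dimension of the memory index it needs
    format_prompt: Callable[[str, str], str]    # (system, user) -> final prompt string

vicuna = ModelProfile(
    name="vicuna-13b",
    context_window=2048,
    embedding_dim=768,   # e.g. sentence-transformers/LaBSE embeddings
    format_prompt=lambda system, user: f"{system}\n\nUSER: {user}\nASSISTANT:",
)

gpt4 = ModelProfile(
    name="gpt-4",
    context_window=8192,
    embedding_dim=1536,  # text-embedding-ada-002
    format_prompt=lambda system, user: user,  # chat API takes structured messages instead
)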

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

@MarkSchmidty I've been looking for one. We've tried llama.cpp, oobabooga, and huggingface. Is there another option you have working with vicuna?

from auto-gpt.

alreadydone avatar alreadydone commented on June 22, 2024

Maybe this? Maybe I have seen one with more stars but I don't remember where.

from auto-gpt.

0xgeert avatar 0xgeert commented on June 22, 2024

Just to add one: a drop-in OpenAI API replacement as a frontend for llama.cpp. With 400+ stars atm, it seems to be one of the more popular options:

https://github.com/abetlen/llama-cpp-python

from auto-gpt.

9cento avatar 9cento commented on June 22, 2024

Basaran's author here, thanks for recommending our project!

Compared to other LLaMA-specific projects, the advantage of integrating Basaran is that it allows users to freely choose various models from the past and future, not just limited to LLaMA variants. We believe this is particularly important for non-English speaking users, as currently LLaMA is mainly trained on Latin languages.

For compatibility, currently you only need to modify one environment variable to migrate from the OpenAI text completion API to Basaran, without modifying a single line of code. In the next few weeks, we will also add message template support for models that have undergone instruction tuning, making them fully compatible with the OpenAI Chat API.

BASEDRAN

from auto-gpt.

PurpleStarChild avatar PurpleStarChild commented on June 22, 2024

I'm looking at doing something similar to this to make it so the oobabooga API is used for accessing a local LLM. I am mostly looking at creating a proof-of-concept integration, but I'm still learning about the overall code structure of AutoGPT.

Could someone point me in the direction of the key areas that would need to be touched in order to do this? Or alternatively, the areas that make direct calls to OpenAI, so I know where to start?

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

I'm looking at doing something similar to this to make it so the oobabooga API is used for accessing a local LLM. I am mostly looking at creating a proof-of-concept integration, but I'm still learning about the overall code structure of AutoGPT.

Could someone point me in the direction of the key areas that would need to be touched in order to do this? Or alternatively, the areas that make direct calls to OpenAI, so I know where to start?

Oobabooga's API is not OpenAI Completions API compatible, so it will not work with Auto-GPT. Basaran is a local OpenAI Completions API server for local models. You'll want to use that instead. https://github.com/hyperonym/basaran

from auto-gpt.

PurpleStarChild avatar PurpleStarChild commented on June 22, 2024

Ahh thank you! I'll give that a try.

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024

i made a drop-in replacement for openai via llama.cpp (had some issues with the python binding llama-cpp-python). almost got it working with autogpt, just need to fully support same-dimension embeddings, if anybody has any pointers pls lmk, i got auto-gpt working for a couple cycles but it's really inconsistent

https://github.com/keldenl/gpt-llama.cpp

from auto-gpt.

Vitorhsantos avatar Vitorhsantos commented on June 22, 2024

There is this one too: https://github.com/rhohndorf/Auto-Llama-cpp

We face the same problem here: Vicuna understands the commands given via prompt differently from GPT-4, so the commands need to be re-worked.

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024

i wonder if there's another similar agent based package that is designed with gpt3 in mind and shorter prompts that could help with vicuna generation

from auto-gpt.

InfernalDread avatar InfernalDread commented on June 22, 2024

i made a drop-in replacement for openai via llama.cpp (had some issues with the python binding llama-cpp-python). almost got it working with autogpt, just need to fully support same-dimension embeddings, if anybody has any pointers pls lmk, i got auto-gpt working for a couple cycles but it's really inconsistent
https://github.com/keldenl/gpt-llama.cpp

update: i JUST got autogpt working with llama.cpp! see keldenl/gpt-llama.cpp#2 (comment)

i'm using vicuna for embeddings and generation but it's struggling a bit to generate proper commands to not fall into an infinite loop of attempting to fix itself X( will look into this tmr but super exciting cuz i got the embeddings working! (turns out it was a bug on my end lol)

had to make some changes to autogpt (add base_url to openai_base_url and adjust the dimensions of the vector), but otherwise left it alone

Will you be sharing your fork of AutoGPT so that we can test it out? Thank you by the way for all the work you have done!

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024

@InfernalDread i'll do it after work today! will post in the issue in the repo but i'll post it here too

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024

awesome stuff @DGdev91 , i'll write up a quick guide tonight on using that fork + gpt-llama.cpp

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024

@MarkSchmidty i have not! i'm going to go ahead and try it tonight and report the results here~

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

@keldenl The merged versions you can find on HuggingFace are merged with LLaMA. They may perform well. But you can merge with a Vicuna model yourself or keep your eye on this Vicuna/SuperCOT finetune currently in the works: https://huggingface.co/reeducator/vicuna-13b-free/discussions/11#64453a62f993c804b0338fa8

Right now they're merging the two datasets and working on removing things like "As an AI language model, I can't do X" from the Vicuna dataset.

from auto-gpt.

keldenl avatar keldenl commented on June 22, 2024

I'm gonna try out the llama supercot version first, and i'll keep an eye on the vicuna version 👀. thank you! hadn't even heard about supercot before this

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

I forgot, llama.cpp supports applying LoRAs as of a few days ago. There's a GGML version of the SuperCOT LoRA which can be applied to Vicuna-13B at run time here: https://huggingface.co/kaiokendev/SuperCOT-LoRA/tree/main/13b/ggml/cutoff-2048

You might try that as well as the LLaMA merged version.

from auto-gpt.

9cento avatar 9cento commented on June 22, 2024

I forgot, llama.cpp supports applying LoRAs as of a few days ago. There's a GGML version of the SuperCOT LoRA which can be applied to Vicuna-13B at run time here: https://huggingface.co/kaiokendev/SuperCOT-LoRA/tree/main/13b/ggml/cutoff-2048

You might try that as well as the LLaMA merged version.

Hi there, quick question: does this also work with more demanding models like vicuna-13B-4bits-128 and gpt4-x-alpaca, or is it specifically tailored to llama.cpp/.cpp models in general? Thanks

from auto-gpt.

MarkSchmidty avatar MarkSchmidty commented on June 22, 2024

There are SuperCOT LoRAs for both GPU and CPU in 30B, 13B, and 7B sizes, as well as merged models for GPU. Some of them are listed in the readme here: https://huggingface.co/kaiokendev/SuperCOT-LoRA You can use the LoRA with any finetune. But they'll work best with finetunes trained with the same prompting format.

Others are buried in the files here: https://huggingface.co/kaiokendev/SuperCOT-LoRA/tree/main

For GPU 13B you would want one of these two LoRAs, depending on the cutoff length of the finetune you're using it with: https://huggingface.co/kaiokendev/SuperCOT-LoRA/tree/main/13b/gpu (If you don't know, just try both.)

from auto-gpt.

9cento avatar 9cento commented on June 22, 2024

There are SuperCOT LoRAs for both GPU and CPU in 30B, 13B, and 7B sizes, as well as merged models for GPU. Some of them are listed in the readme here: https://huggingface.co/kaiokendev/SuperCOT-LoRA You can use the LoRA with any finetune. But they'll work best with finetunes trained with the same prompting format.

Others are buried in the files here: https://huggingface.co/kaiokendev/SuperCOT-LoRA/tree/main

For GPU 13B you would want one of these two LoRAs, depending on the cutoff length of the finetune you're using it with: https://huggingface.co/kaiokendev/SuperCOT-LoRA/tree/main/13b/gpu (If you don't know, just try both.)

Sorry but I'm relatively new to this stuff so I'll ask you two more questions. First, I just would like to know if your method works for virtually every model (not .cpp only, just to be clear) and then if a LoRA is mandatory or optional. Again, forgive my confusion.

from auto-gpt.

Boostrix avatar Boostrix commented on June 22, 2024

Also see: #2158 and #25 or #348 / #347

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024

@mudler neat, someone also mentioned basaran earlier: https://github.com/hyperonym/basaran
so how does it perform in your experience?

from auto-gpt.

mudler avatar mudler commented on June 22, 2024

@mudler neat, someone also mentioned basaran earlier: https://github.com/hyperonym/basaran
so how does it perform in your experience?

I haven't tried Basaran; I've only tried ggml models with LocalAI. WizardLM and vicuna-cot seem promising. Not at OpenAI levels, but definitely heading in the right direction!

from auto-gpt.

mudler avatar mudler commented on June 22, 2024

The ini file is a bit funny given how everything else already uses yaml 👍😉

The env file, you mean? You can also put the env variables in the docker compose file and make it one file only :)

from auto-gpt.

xloem avatar xloem commented on June 22, 2024

It’s great to learn of SuperCOT, but of course any efforts to collect and/or curate data specifically for Auto-GPT will produce an even more powerful model.

from auto-gpt.

haochuan-li avatar haochuan-li commented on June 22, 2024

I'm new to this. I'm wondering if autogpt is able to call a custom URL (other than the OpenAI API) to get responses? So that we can use other serving systems like TGI or vLLM to serve our own LLM.

from auto-gpt.

9cento avatar 9cento commented on June 22, 2024

I'm new to this. I'm wondering if autogpt is able to call a custom URL (other than the OpenAI API) to get responses? So that we can use other serving systems like TGI or vLLM to serve our own LLM.

Oh, is this thread still open?

Well, it's possible now. Just set the OPENAI_API_BASE variable and you can use any service which is compliant with OpenAI's API.

...But local LLMs aren't as good as GPT-4 and I never got much out of them, even though it is technically possible to use them. So I gave up some time ago.

Maybe some recent long-context LLMs like LLongMA and so on can actually work, but I never tried that.

Did you give Llama 2 a try? CodeLlama?

from auto-gpt.

DataBassGit avatar DataBassGit commented on June 22, 2024

The issue with open source models is that they are trained differently. Getting a response in a specific format requires either fine tuning of the model or modification of the prompts. AutoGPT wasn't designed to make it easy to edit the prompts, and fine tuning is expensive. Eventually, I just built my own agent framework.

from auto-gpt.

Wladastic avatar Wladastic commented on June 22, 2024

One Proxy to rule them all!

https://github.com/BerriAI/litellm/

is an API proxy with a vast choice of backends, like Replicate, OpenAI, Petals, ... and it works like a charm. pls implement!

Thank you for your suggestion,
I will take a look at this.

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024

@Wladastic just so you know, textgen webui has an OpenAI-like API now; you can look in the wiki for how to point the openai client at its API.

you can see how to set it up here:
https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API

you will want to do something like: export OPENAI_API_BASE=http://localhost:5000/v1

from auto-gpt.

Wladastic avatar Wladastic commented on June 22, 2024

I know about that one,
I was talking about the project mentioned above

from auto-gpt.

impredicative avatar impredicative commented on June 22, 2024

This request is more relevant if Claude 3 Opus is actually better than GPT-4, at least for some types of tasks.

from auto-gpt.

GoZippy avatar GoZippy commented on June 22, 2024

I got your activity

from auto-gpt.

alkeryn avatar alkeryn commented on June 22, 2024

i mean at that point it is pretty easy to tell it to use other OpenAI-compatible backends with the env variables.
and there must be proxies for other providers too, i.e. converting a non-OpenAI API into an OpenAI-compatible one; if not, it's pretty trivial to build.

from auto-gpt.
