Comments (7)
As non-staff, I am not sure how to measure something that doesn't exist.
If a self-improving superhuman system exists, we can use that to measure gorilla.
@fire Thanks for engaging : )
> As non-staff, I am not sure how to measure something that doesn't exist.
Are you claiming "potential for systems built with gorilla to FOOM" doesn't exist? Proof of that claim is exactly what I'm asking for.
> If a self-improving superhuman system exists, we can use that to measure gorilla.
My understanding is that "we" can't use a self-improving superhuman system for anything. It decides what to do based on whatever encoding of goals was in it when it went through self-improvement, and encoding goals is currently similarly not understood.
Also, what is FOOM?
Also, I am unsure of the timeline, since OpenAI's models are stronger and the promised apizoo integration work isn't done.
As far as I know, GPT4 is not considered superhuman.
Also, from a resources point of view, we don't have self-fabricating AIs yet that are also self-improving.
FOOM is an acronym: Fast Onset of Overwhelming Mastery. It refers to the hypothesized point when an AI system becomes capable enough to modify its own software and improve itself, and in doing so recursively improves its ability to improve itself.
GPT4 is in many ways superhuman. Its breadth of knowledge and speed of processing are vastly beyond any human. Only its depth and persistence of thought are lacking, and it may be possible to improve those with add-ons that don't require re-training the underlying model. So it's my view that tools built with GPT4 are ambiguously FOOM-capable, and nobody has any real proof one way or the other right now.
The issue is that if an AI self-improves, we don't know where its capabilities will plateau. It may get far enough to gain a decisive strategic advantage over all other planning agents (humanity) and then optimize the universe for whatever (poorly) specified goal it was optimizing when it underwent self-improvement.
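To make the plateau-vs-takeoff distinction concrete, here is a minimal toy sketch. This is my own illustration, not anything from the gorilla codebase, and every number in it is a made-up assumption: the whole question reduces to whether each round of self-improvement makes the next round's improvement bigger or smaller, and nobody knows how to measure that for real systems.

```python
# Toy model of recursive self-improvement (illustrative only; the
# initial step size and the "returns" parameter are assumptions,
# not measurements of any real system).

def self_improvement_trajectory(capability: float,
                                returns: float,
                                rounds: int) -> list:
    """Simulate capability over successive self-improvement rounds."""
    step = 0.1 * capability      # size of the first improvement (assumed)
    trajectory = [round(capability, 2)]
    for _ in range(rounds):
        capability += step       # apply this round's self-improvement
        step *= returns          # returns < 1: steps shrink -> plateau
                                 # returns > 1: steps grow -> FOOM-like takeoff
        trajectory.append(round(capability, 2))
    return trajectory


if __name__ == "__main__":
    print("diminishing returns:", self_improvement_trajectory(1.0, 0.5, 10))
    print("increasing returns: ", self_improvement_trajectory(1.0, 1.5, 10))
```

With returns below 1 the trajectory converges to a fixed ceiling; above 1 the step sizes grow geometrically and the trajectory takes off. The disagreement in this thread is, in effect, about which regime real systems are in.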
Also, self-fabrication is unnecessary for improving algorithms and making better use of existing hardware.
I understand there is a common view that these systems are far from human intelligence and far from dangerous. But they have been advancing very quickly over the last 10 years, and we really don't understand how they work, so I'd feel a lot more relaxed if I thought the people working on giving them agency and more capabilities were aware of the state of AI safety and AI alignment research.
Thanks again for your time reading my thoughts : )
Can you elaborate on why FOOM is needed?
FOOM is something we want to avoid. If a system fooms, it may spread through cryptographic and/or social exploits and cause unbounded harm by pursuing misaligned goals.
The fact that nobody knows what is required for a system to foom is a problem. Many people look at the state of AI today and think "this is obviously fine", while others think "we're already way past what is safe". We need to get people on the same page about this, and red-teaming FOOM risk is, I think, a good step in that direction.
Related Issues (20)
- how to test new model on BFCL?
- [bug] openfunctions-v2 default chat template
- [feature] Add multi-turn conversational function calling category for benchmarking
- the evaluation of class relevance in BFCL maybe unfair
- What format was used for the final fine-tuning of LLaMA2-7B in RAFT?
- [bug] Hosted Gorilla: <Issue>
- The Urban Dictionary from the RapidAPI is not serving, can't evaluate execution data
- auto fill missed mandatory param is a nightmare
- [bug] Hosted Gorilla: <Issue>
- [bug] Hosted Gorilla: <Issue>
- [bug] Hosted Gorilla: <Issue>
- Rapid API error (Yahoo Finance, https://rapidapi.com/sparior/api/yahoo-finance15) is inaccessible
- Local CUDA Support for RAFT
- Revamp Landing README
- [bug] OpenFunctions-v2: <Issue>
- [bug] OpenFunctions-v2: <HTTP code 502>
- When [Evaluate the Response with AST tree matching]: TypeError: __init__() takes exactly 1 argument (2 given)
- Data issue
- Question about AST evaluation for Java and JavaScript
- [RAFT] Publish Pypi package with raft, eval and format scripts