nat / openplayground
An LLM playground you can run on your laptop
License: MIT License
We use Azure in-house and would love to be able to use it as a provider in openplayground.
Running openplayground -e .env will not respect the existing .env file and instead overwrites it with a blank file.
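For reference, the behavior the reporter expects can be sketched in a few lines: read key=value pairs from an existing .env and only create the file when it is missing. This is a simplified assumption about the intended semantics (no quoting or escape handling), not the project's actual loader.

```python
import os

def load_env(path=".env"):
    """Read key=value pairs from an existing .env; never truncate it."""
    if not os.path.exists(path):
        open(path, "a").close()  # create an empty file only when missing
        return {}
    pairs = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blanks and comments; split only on the first '='
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                pairs[key.strip()] = value.strip()
    return pairs
```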
As there are no Discussions, this is an "Issue". How was that demo video made? It is too... smooth and perfect to be a screen capture. A good UI showcase indeed.
It would be great to make the gpt-4-32k model available.
If you write a prompt in the Playground and then switch to compare without hitting Submit, the playground won't save your prompt and it will be lost when you return to the Playground tab.
When I do openplayground run, this is what I'm getting:
Initializing download manager...
Download loop started...
* Serving Flask app 'server.app'
* Debug mode: off
An attempt was made to access a socket in a way forbidden by its access permissions
I ran the cmd prompt with admin and still got the same message.
Just want to confirm this is/is not possible atm. Can't see an option?
First, thank you very much, Nat, and everybody who contributed to providing this as a hosted service as well as open-sourcing it.
This is genuinely really cool, especially considering that some of the models are themselves open source.
Would you consider adding a GDPR-compliant privacy policy (even just the IP address is considered personal data)?
That would be much appreciated.
I run the software on localhost, and text-davinci-003's maximum token limit is 1024. How can I make it 4000?
Wondering how I can use this instead of the OpenAI API; does this program provide a REST API?
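The server logs quoted later in this thread show a POST /api/inference/text/stream endpoint, so in principle the local server can be driven over HTTP. The sketch below builds such a request with the standard library; note that the JSON payload shape is an assumption, not the documented API.

```python
import json
import urllib.request

def build_inference_request(prompt, base="http://localhost:5432"):
    # Endpoint path taken from the server logs in this thread; the
    # {"prompt": ...} payload shape is a hypothetical placeholder.
    return urllib.request.Request(
        base + "/api/inference/text/stream",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )

# urllib.request.urlopen(build_inference_request("Hello")) would then
# return the SSE response stream the logs mention.
```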
The entire OS freezes up for me after downloading a few models. It happens on both Manjaro and Ubuntu (server).
After pip install and run, localhost:5432 shows a white page; only the tab title reads OpenPlayground.
I opened the developer tools, and the console shows this error message:
Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/plain". Strict MIME type checking is enforced for module scripts per HTML spec.
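One common cause of this "text/plain" error is the server not mapping .js to a JavaScript MIME type (on Windows, a registry entry can override it). In Python, the stdlib mimetypes registry, which static file handlers typically consult, can be corrected explicitly; whether this is the cause here is an assumption.

```python
import mimetypes

# Register the mapping so module scripts are served as JavaScript
# rather than text/plain.
mimetypes.add_type("text/javascript", ".js")

# Static file handlers that call guess_type() will now emit the
# correct Content-Type for bundled scripts like index.d826928d.js.
```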
I added a Hugging Face read API key, but when I try to run a query I get a 503.
Why, when I put in my Hugging Face API key and select a GPT-4 model, does it just say "pending"? What am I supposed to do?
What does "use a WSGI server" mean?
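For context: "use a WSGI server" means hosting the Flask app under a server that speaks the WSGI interface, instead of Flask's built-in development server (the thing the startup logs warn about). A WSGI app is just a callable; the stdlib wsgiref server can host one for demos, while production deployments typically use something like gunicorn or waitress. A minimal sketch:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # The simplest possible WSGI application: report 200 and a body.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a WSGI server"]

# make_server("localhost", 5432, app).serve_forever()  # blocking call
```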
I'm trying to get the OpenAI ChatGPT models working, but I don't know how.
I have paid $5 for another email account on nat.dev, but not for the Gmail account. Is there any way to refund the money?
Can you please provide an example of how to load a .bin model? And for the JSON files I downloaded to my PC, which ones do I need and how do I use them?
I have Dolly installed locally; when I try running a query, it just throws a generic error.
INFO:server.app:Received inference request huggingface-local
INFO:server.lib.inference:Requesting inference from databricks/dolly-v2-12b on huggingface-local
INFO:server.lib.inference:Starting inference for 1 - databricks/dolly-v2-12b
Error
Unknown error
I have been running into the error below when building the frontend app with npx parcel build src/index.html --no-cache. I'm running Node version 18.15. Is there any dependency that I misconfigured?
The error message is:
Error: Invalid regular expression: /^[$_\p{ID_Start}][$_\u200C\u200D\p{ID_Continue}]*$/: Invalid property name in character class
at /openplayground/app/node_modules/@parcel/packager-js/lib/ScopeHoistingPackager.js:91
@nat
Regrettably, I accidentally deleted my account while attempting to log out. I noticed that there are two buttons on the page, one for deleting the account and the other for logging out, which can be easily confused. When I logged back in, I realized that my balance was missing.
I was wondering if you could help me recover my balance. My Google account associated with this nat.dev account is [email protected]. I would greatly appreciate any assistance you can provide to help resolve this issue.
Please increase the GPT-4 model's maximum tokens above 8192 on the https://nat.dev/ website. I need more than 8192; if possible, make it 16000.
Please help: when I ask a question with GPT-4, halfway through the answer I get a pop-up message saying "model error: no response from OpenAI after 60 seconds".
Why can't I use it anymore after topping up money?
Curious to know what build jobs you'd like to happen. Assuming these:
Let me know how I can help.
Can we get LLaMA and Vicuna here as well? There are also the Open Assistant LLaMA models and RWKV.
Please help: I have tried many cards and get the same issue, "Your card has been declined." This error message pops up after clicking Buy.
Add OpenAI API key
Select model gpt-4
Enter prompt
Press Submit
Result: OpenAI API request was invalid: The model gpt-4 does not exist
MacBook with M1 chip.
This is the error:
OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
fish: Job 1, 'openplayground run' terminated by signal SIGABRT (Abort)
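The OMP hint above names KMP_DUPLICATE_LIB_OK=TRUE as an unsafe, unsupported workaround. One way to apply it, assuming no safer fix is available, is to set the variable before the conflicting libraries load, e.g. at the very top of the entry point:

```python
import os

# Unsafe workaround from the OMP hint above: allow duplicate OpenMP
# runtimes. Per the hint, this may crash or silently corrupt results.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```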
Recently Dolly 2 came out and it's completely open source (not based on Llama). Some models from Open Assistant are already available and the official release is about to happen.
It would be nice to have Dolly 2 and Open Assistant support.
INFO:server.app:Received inference request huggingface-local
INFO:server.lib.inference:Requesting inference from chavinlo/gpt4-x-alpaca on huggingface-local
INFO:server.lib.inference:Starting inference for 1 - chavinlo/gpt4-x-alpaca
INFO:werkzeug:127.0.0.1 - - [25/Apr/2023 00:46:59] "POST /api/inference/text/stream HTTP/1.1" 200 -
ERROR:server.lib.inference:Error: Couldn't build proto file into descriptor pool: duplicate file name sentencepiece_model.proto
INFO:server.lib.inference:Completed inference for chavinlo/gpt4-x-alpaca on huggingface-local
INFO:server.lib.api.inference:Done streaming SSE
Given the existence of ChatGPT and the chat-mode completions OpenAI provides, it makes sense to be able to build a chat interface in openplayground.
However, I wonder if framing this feature as a "chat UI" is too limiting? The chat UI itself is really just a specific implementation of a stateful workflow: "process user input (in the chat case, wrap it in JSON), run completion with some history, process model output (in the chat case, parse JSON and extract/parse the response)."
Without going so far as saying "openplayground needs to be natto.dev", I wonder if there's a middle ground here.
🤔
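The "stateful workflow" framing above can be sketched in a few lines: pre-process the input, run a completion with the accumulated history, post-process the output. Here `complete` is a hypothetical stand-in for any provider call, not an openplayground API.

```python
def make_chat(complete):
    """Build a send() function closing over the conversation history."""
    history = []
    def send(user_input):
        history.append({"role": "user", "content": user_input})
        reply = complete(history)  # run completion with the full history
        history.append({"role": "assistant", "content": reply})
        return reply
    return send

# A chat UI, a summarizer, or an agent loop are then just different
# `complete` and pre/post-processing choices in the same skeleton.
```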
I used to have 4 dollars; it just disappeared without any usage.
When I choose the 'gpt-4' model and ask it "Which model are you, GPT-3 or GPT-4?", it just says something like "the model is GPT-3.5 Turbo; the GPT-4 model is not released". And when I compare gpt-4 with gpt-3.5-turbo, I can't find any conspicuous evidence that the gpt-4 option actually uses the GPT-4 model. Can somebody explain why?
Hello,
I have tried the web GUI and found it quite nice, but I was wondering if it would be possible to run even just one model completely "locally" as a standalone application, outside of a web server. I don't know how to explain this better: basically, the ability to run models with a GUI as a desktop application, for a ChatGPT-like experience entirely outside the browser, like searching in Explorer or Finder (even if that's a terrible example).
Thanks!
Hard line breaks aren't working, e.g. copy-paste this text into the playground:
https://gist.github.com/nat/8b8788b5e4bf8fa3863f80da0c1332c9
Put code output in a Markdown code block, accompanied by a copy button.
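A sketch of the requested rendering step: wrap model output in a Markdown fenced code block (the copy button itself would be a frontend concern, not shown here).

```python
def to_code_block(text, lang=""):
    """Wrap text in a Markdown fenced code block with an optional language."""
    fence = "`" * 3  # a ``` fence, built indirectly to keep this sample nestable
    return f"{fence}{lang}\n{text}\n{fence}"
```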
Is this a scam website? I topped up five dollars, used only a few cents, and then my balance was cleared to zero. They are asking me to top up again.
$ openplayground run
INFO:server.lib.sseserver:SUBSCRIBING TO: inferences
INFO:server.lib.sseserver:SUBSCRIBING TO: notifications
INFO:server.lib.sseserver:GETTING TOPIC: notifications
INFO:server.lib.sseserver:GETTING TOPIC: inferences
INFO:server.app:Initializing download manager...
INFO:server.app:Download loop started...
* Serving Flask app 'server.app'
* Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://localhost:5432
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:20] "GET / HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:20] "GET /index.d826928d.js HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:20] "GET /index.e96bf64d.css HTTP/1.1" 200 -
INFO:server.lib.api:Getting enabled models
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:20] "GET /api/models-enabled HTTP/1.1" 200 -
INFO:server.lib.api:Received notification request
INFO:server.lib.sseserver:LISTENING TO: notifications
INFO:server.lib.sseserver:LISTENING
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:20] "GET /favicon.ico HTTP/1.1" 200 -
INFO:server.lib.api:Getting enabled models
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:23] "GET /api/models-enabled HTTP/1.1" 200 -
INFO:server.lib.api:Getting enabled models
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:29] "GET /api/models-enabled HTTP/1.1" 200 -
INFO:server.lib.api:Getting providers with models
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:36] "GET /api/providers-with-key-and-models HTTP/1.1" 200 -
INFO:server.lib.api.provider:Searching Provider Models openplayground
ERROR:server.app:Exception on /api/provider/openplayground/models/search [GET]
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fee4b826bc0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/usr/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openplayground.filler', port=443): Max retries exceeded with url: /api/search?q=gpt4 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fee4b826bc0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ettinger/.local/lib/python3.10/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/ettinger/.local/lib/python3.10/site-packages/server/lib/api/provider.py", line 111, in provider_models_search
response = requests.get(search_url)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openplayground.filler', port=443): Max retries exceeded with url: /api/search?q=gpt4 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fee4b826bc0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:02:57] "GET /api/provider/openplayground/models/search?query=gpt4 HTTP/1.1" 500 -
INFO:server.lib.api.provider:Searching Provider Models openplayground
ERROR:server.app:Exception on /api/provider/openplayground/models/search [GET]
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fee4b824610>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/usr/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openplayground.filler', port=443): Max retries exceeded with url: /api/search?q=gpt4 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fee4b824610>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ettinger/.local/lib/python3.10/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ettinger/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/ettinger/.local/lib/python3.10/site-packages/server/lib/api/provider.py", line 111, in provider_models_search
response = requests.get(search_url)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openplayground.filler', port=443): Max retries exceeded with url: /api/search?q=gpt4 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fee4b824610>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:03:00] "GET /api/provider/openplayground/models/search?query=gpt4 HTTP/1.1" 500 -
INFO:server.lib.api:Getting enabled models
INFO:werkzeug:127.0.0.1 - - [23/Apr/2023 00:03:11] "GET /api/models-enabled HTTP/1.1" 200 -
INFO:server.lib.api:Getting enabled models
Where is this file?
If I run the server and go to localhost:5432/models.json there's no file to edit there.
Also, are there any templates for the LLaMA models? I mean, I'm grateful for this entire project for free, but it seems kind of strange to have to write the template for each model manually. I don't know... 🤷‍♂️ I don't even know what all the parameters are or their ranges.
I had 5 bucks, it just disappeared without being used
I get this error every time I ask for a long answer with GPT-4; the error comes up after 3 minutes and cuts off the answer. Please fix it, it costs money.