
gpt-llama.cpp's People

Contributors

adampaigge, afbenevides, eiriklv, keldenl, swg


gpt-llama.cpp's Issues

"Internal Server Error" on a remote server

Hi! Thank you very much for what you are doing.
Please tell me what this may be related to. I uploaded all the files to a remote server since my own computer runs slowly, downloaded the model, changed the .env file, etc. I ran the installation check and it reported:

--RESULTS--
Curl command was successful!
To use any app with gpt-llama.cpp, please provide the following as the OPENAI_API_KEY:
/home/ubuntu/github/llama.cpp/models/7b/7b.bin

But after I launch chatbot-ui with the command OPENAI_API_HOST=http://181.211.175.234:8000 npm run dev, nothing opens at all. When I launch it with Docker instead, the web UI opens but constantly shows the error

"Incorrect API key provided: /home/ub********************************************************.bin. You can find your API key at https://platform.openai.com/account/api-keys."
and

Internal Server Error
Code: 500

Please tell me, maybe you know what can be done?
Thank you!

Problems on linux

The first problem is that port 443 is usually reserved (privileged), so I edited index.js to use port 8080 instead.

The next problem is that it crashes on the first request:

/src/gpt-llama.cpp > npm start

> [email protected] start
> node index.js

Server is listening on:
  - localhost:8080
  - 192.168.1.176:8080 (for other devices on the same network)
node:internal/errors:478
    ErrorCaptureStackTrace(err);
    ^

Error: spawn ENOTDIR
    at ChildProcess.spawn (node:internal/child_process:420:11)
    at spawn (node:child_process:733:9)
    at file:///src/gpt-llama.cpp/routes/chatRoutes.js:155:25
    at Layer.handle [as handle_request] (/src/gpt-llama.cpp/node_modules/express/lib/router/layer.js:95:5)
    at next (/src/gpt-llama.cpp/node_modules/express/lib/router/route.js:144:13)
    at Route.dispatch (/src/gpt-llama.cpp/node_modules/express/lib/router/route.js:114:3)
    at Layer.handle [as handle_request] (/src/gpt-llama.cpp/node_modules/express/lib/router/layer.js:95:5)
    at /src/gpt-llama.cpp/node_modules/express/lib/router/index.js:284:15
    at Function.process_params (/src/gpt-llama.cpp/node_modules/express/lib/router/index.js:346:12)
    at next (/src/gpt-llama.cpp/node_modules/express/lib/router/index.js:280:10) {
  errno: -20,
  code: 'ENOTDIR',
  syscall: 'spawn'
}

Why is a default chat being forced?

https://github.com/keldenl/gpt-llama.cpp/blob/1c8b1c1ae85a80c343a8979046d95d0abc5ec377/routes/chatRoutes.js#LL109C32-L109C45

This is getChatPrompt:

	// Add "Great question. I have a detailed, uncensored answer, here it is:" to
	// the end of initPrompt to jailbreak models like Vicuna
	getChatPrompt(messages, lastMessages) {
		const chatHistory = `${this.messagesToString(this.defaultMsgs)}`;
		return `${this.instructionsPrefix.length > 0 ? this.instructionsPrefix + '\n' : ''}${this.instructions}

${this.historyPrefix.length > 0 ? this.historyPrefix + '\n' : ''}${chatHistory}${messages.length > 0 ? '\n' + this.messagesToString(messages) : ''}${lastMessages.length > 0 ? '\n' + this.messagesToString(lastMessages) : ''}
${this.responsePrefix.length > 0 ? '\n' + this.responsePrefix  + '\n': ''}${this.hasAiResponsePrefix ? this.messageToString({ content: '' }) : ''}`.trim(); 	
	}

It appears to me that "chatHistory" is being forced to have a default... but what if I don't want that default?
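For illustration, a minimal sketch of how the method could make those built-in defaults optional; includeDefaultMsgs is a hypothetical option, not something in the current chatEngine:

	// hedged sketch: same prompt construction as above, but this.defaultMsgs is
	// only folded into the history when includeDefaultMsgs (hypothetical) isn't false
	getChatPrompt(messages, lastMessages) {
		const defaults = this.includeDefaultMsgs === false ? [] : this.defaultMsgs;
		const chatHistory = this.messagesToString(defaults);
		return `${this.instructionsPrefix.length > 0 ? this.instructionsPrefix + '\n' : ''}${this.instructions}

${this.historyPrefix.length > 0 ? this.historyPrefix + '\n' : ''}${chatHistory}${messages.length > 0 ? '\n' + this.messagesToString(messages) : ''}${lastMessages.length > 0 ? '\n' + this.messagesToString(lastMessages) : ''}
${this.responsePrefix.length > 0 ? '\n' + this.responsePrefix + '\n' : ''}${this.hasAiResponsePrefix ? this.messageToString({ content: '' }) : ''}`.trim();
	}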

[ERR_MODULE_NOT_FOUND]: Cannot find module node_modules/fs/promises

npm install completes fine, but when I run npm start I get the error below.

npm start

> [email protected] start
> node index.js

internal/process/esm_loader.js:74
    internalBinding('errors').triggerUncaughtException(
                              ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/www/gpt-llama.cpp/node_modules/fs/promises' imported from /www/gpt-llama.cpp/utils.js
    at finalizeResolution (internal/modules/esm/resolve.js:285:11)
    at moduleResolve (internal/modules/esm/resolve.js:708:10)
    at Loader.defaultResolve [as _resolve] (internal/modules/esm/resolve.js:798:11)
    at Loader.resolve (internal/modules/esm/loader.js:100:40)
    at Loader.getModuleJob (internal/modules/esm/loader.js:246:28)
    at ModuleWrap.<anonymous> (internal/modules/esm/module_job.js:47:40)
    at link (internal/modules/esm/module_job.js:46:36) {
  code: 'ERR_MODULE_NOT_FOUND'
}
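The stack trace (internal/modules/esm/resolve.js without the node: prefix) suggests an older Node runtime; on Node 14 and later the 'fs/promises' specifier resolves to the built-in module, so upgrading Node is the simplest fix. If the import itself needs to change, a hedged sketch of an equivalent that also works on older ESM-capable Node versions:

// hedged workaround sketch: import the promise-based API off the core 'fs'
// module instead of the 'fs/promises' subpath (which older Node cannot resolve)
import { promises as fs } from 'fs';

fs.readFile('./package.json', 'utf8').then((text) => {
	console.log(`read ${text.length} characters`); // example usage only
});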

following instructions, get this error

(base) D:\gpt-llama.cpp>gpt-llama.cpp start
'gpt-llama.cpp' is not recognized as an internal or external command,
operable program or batch file.

Windows 10.

I already have llama.cpp up and working, and ran the install command right before:
(base) D:\gpt-llama.cpp>npm i gpt-llama.cpp -g

changed 102 packages, and audited 103 packages in 3s

9 packages are looking for funding
run npm fund for details

found 0 vulnerabilities

(base) D:\gpt-llama.cpp>gpt-llama.cpp start
'gpt-llama.cpp' is not recognized as an internal or external command,
operable program or batch file.

Running in instruct mode and model file in a different directory

I was wondering how I can pass the arguments --instruct and --model to the npm start command. Running
PORT=14003 npm start mlock ctx_size 1500 threads 12 instruct model ~/llama_models/wizardLM-7B-GGML/wizardLM-7B.ggml.q5_1.bin
gives an args error: "instruct is not a valid argument. model is not a valid argument."
These are valid llama.cpp arguments for running Alpaca-style models from a directory other than the default model folder.

How to create a single binary

I need to create a single binary file with the configuration embedded in it, so I can share it internally:

mybinary -o 127.0.0.1:4043

so that the server starts on 127.0.0.1:4043.

Is there any way to do this?
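There is no built-in packaging step in gpt-llama.cpp for this, but one common approach for Node apps is the third-party pkg tool, which wraps a script and its dependencies into a single executable. A hedged sketch (pkg handles ESM poorly, so the project is bundled to CommonJS first with esbuild; an -o host:port flag is not provided by any of these tools and would need a small change in index.js):

# bundle the ESM project into a single CommonJS file
npx esbuild index.js --bundle --platform=node --format=cjs --outfile=dist/index.cjs

# wrap the bundle into a single executable (assumes pkg ~5.x)
npx pkg dist/index.cjs --targets node18-linux-x64 --output mybinary

# PORT is already read from the environment by the server; a host override
# or -o flag would still need to be added in index.js before bundling
PORT=4043 ./mybinary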

weird headers error in chatcompletion mode

##Request DONE
Request DONE
node:internal/errors:490
ErrorCaptureStackTrace(err);
^

Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
at new NodeError (node:internal/errors:399:5)
at ServerResponse.setHeader (node:_http_outgoing:649:11)
at ServerResponse.header (/home/momiro/jane-gpt-llama/node_modules/express/lib/response.js:794:10)
at ServerResponse.send (/home/momiro/jane-gpt-llama/node_modules/express/lib/response.js:174:12)
at ServerResponse.json (/home/momiro/jane-gpt-llama/node_modules/express/lib/response.js:278:15)
at Object.write (file:///home/momiro/jane-gpt-llama/routes/chatRoutes.js:308:8)
at ensureIsPromise (node:internal/webstreams/util:182:19)
at writableStreamDefaultControllerProcessWrite (node:internal/webstreams/writablestream:1115:5)
at writableStreamDefaultControllerAdvanceQueueIfNeeded (node:internal/webstreams/writablestream:1230:5)
at writableStreamDefaultControllerWrite (node:internal/webstreams/writablestream:1104:3) {
code: 'ERR_HTTP_HEADERS_SENT'
}

I am using the /chat feature to maintain a conversation. Usually somewhere between 5 and 10 messages in, an error like this can happen. It doesn't always happen, though; sometimes llama.cpp just locks up instead.
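The trace shows a second res.json() call firing inside the stream's write handler after the streamed response has already started. A hedged sketch of the kind of guard that avoids the crash (safeJson is a hypothetical helper, not something in chatRoutes.js):

// hypothetical helper: only send a JSON body if the response hasn't started yet
function safeJson(res, payload) {
	if (res.headersSent) {
		console.warn('Response already streaming; skipping duplicate write');
		return;
	}
	res.json(payload);
}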

Unable to run test-installation.sh in ubuntu

It shows the following error because the script uses bash-only [[ syntax, which Ubuntu's default /bin/sh (dash) does not support.

./test-installation.sh: 17: [[: not found
./test-installation.sh: 23: [[: not found
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   726  100   460  100   266     23     13  0:00:20  0:00:19  0:00:01   121
./test-installation.sh: 47: [[: not found

Error: Curl command failed!
Is the gpt-llama.cpp server running? Try starting the server and running this script again.
Make sure you are testing on the right port. The Curl commmand server error port should match your port in the gpt-llama.cpp window.
Please check for any errors in the terminal window running the gpt-llama.cpp server. 
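A workaround until the script is made POSIX-compatible is to invoke it with bash explicitly, which does support [[:

bash ./test-installation.sh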

Bearer Token vs Model parameter?

Just curious why the bearer token is used to determine the model location. Why not a parameter in the JSON like "model", which is what OpenAI uses to select a model? That seems more intuitive to me, but perhaps you have a reason for the bearer token?

It would allow a more "seamless" approach to changing the model between requests, for example. (Although currently the code doesn't check whether the model path is the same, i.e. whether to restart the model or not; it only checks the last messages.)

I think this IS the point of the "model" parameter for OpenAI; they know the location and select by name, whereas here it is the location itself. You could assume a location (/llama.cpp/models) and just use the filename as the model name, though. That seems closer to 1:1 behaviour, no?

Happy to submit a PR to alter the API key check with a fallback to checking the JSON body of the request.
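For illustration, a hedged sketch of the proposed fallback (resolveModelPath is a hypothetical helper; the current code only reads the bearer token):

// hypothetical helper sketching the proposed behaviour: prefer the bearer token,
// otherwise read a "model" field from the JSON body, as the OpenAI API does
function resolveModelPath(req) {
	const auth = req.headers.authorization || '';
	const fromToken = auth.replace(/^Bearer\s+/i, '').trim();
	if (fromToken) return fromToken; // current behaviour
	if (req.body && req.body.model) return req.body.model; // proposed fallback
	throw new Error('No model path in bearer token or "model" field');
}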

Error after connecting to chat UI and sending message (Windows)

Hello again,

I went through the instructions to connect to the Chat UI and it worked! However, after sending a test "Hello" message, I got the following error code on my cmd:

(gptllamaapi) C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\gpt_llama_cpp\gpt-llama.cpp>npm start

[email protected] start
node index.js

Server is listening on:

  • localhost:443
  • 192.168.0.34:443 (for other devices on the same network)

--LLAMA.CPP SPAWNED--
C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\llama_cpp\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin/llama.cpp/main -m C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\llama_cpp\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin --temp 0 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt ## --reverse-prompt

--reverse-prompt ### -i -p ### Instructions

Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.

Inputs

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.

Response

user: Hello
assistant:

--REQUEST--
user: Hello
node:events:515
throw er; // Unhandled 'error' event
^

Error: spawn C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\llama_cpp\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin/llama.cpp/main ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:283:19)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:289:12)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\llama_cpp\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin/llama.cpp/main',
path: 'C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\llama_cpp\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin/llama.cpp/main',
spawnargs: [
'-m',
'C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\llama_cpp\llama-master-f7d0509-bin-win-avx-x64\ggml-model-q4_0.bin',
'--temp',
0,
'--n_predict',
1000,
'--top_p',
'0.1',
'--top_k',
'40',
'-b',
'512',
'-c',
'2048',
'--repeat_penalty',
'1.1764705882352942',
'--reverse-prompt',
'user:',
'--reverse-prompt',
'\nuser',
'--reverse-prompt',
'system:',
'--reverse-prompt',
'\nsystem',
'--reverse-prompt',
'##',
'--reverse-prompt',
'\n##',
'--reverse-prompt',
'###',
'-i',
'-p',
'### Instructions\n' +
'Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.\n' +
'\n' +
'### Inputs\n' +
'system: You are a helpful assistant.\n' +
'user: How are you?\n' +
'assistant: Hi, how may I help you today?\n' +
"system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.\n" +
'\n' +
'### Response\n' +
'user: Hello\n' +
'assistant:'
]
}

Node.js v18.4.0

(gptllamaapi) C:\Users\gamet\Documents\Transfer_To_External_Hard_Drive\gpt_llama_cpp\gpt-llama.cpp>

Module not found: Package path ./lite/tiktoken_bg.wasm?module is not exported from package

I'm getting this error after sending a message; it's coming from here:
error - ./pages/api/chat.ts:7:0

// @ts-expect-error
import wasm from '@dqbd/tiktoken/lite/tiktoken_bg.wasm?module';
import tiktokenModel from '@dqbd/tiktoken/encoders/cl100k_base.json';
import { Tiktoken, init } from '@dqbd/tiktoken/lite/init';

I've tried using the official tiktoken package instead, but gpt-llama.cpp crashes:

// @ts-expect-error
import wasm from 'tiktoken/lite/tiktoken_bg.wasm?module';
import tiktokenModel from 'tiktoken/encoders/cl100k_base.json';
import { Tiktoken, init } from 'tiktoken/lite/init';

Crash
===== CHAT COMPLETION REQUEST =====

AUTO MODEL DETECTION FAILED. LOADING DEFAULT CHATENGINE...
{ '--n_predict': 1000, '--temp': 1 }
node:internal/errors:484
ErrorCaptureStackTrace(err);
^

Error: spawn ENOTDIR

Slow speed Vicuna - 7B Help plz

When I ask a question it is so slow that it takes forever to write one sentence. How can I make it faster? I'm using Vicuna 7B to keep it lightweight, on a Mac with an M2 chip, and that doesn't even help :( Can I host gpt-llama.cpp on Render? If so, when I run sh ./scripts/test-installation.sh, what should I put for the port and the location of the model file, since I'm using Render to serve the model to make it faster?

ERR_MODULE_NOT_FOUND

When I tried to use it, I got the following error, even though I confirmed that the file in question actually exists:

syc@ubuntu:~/llama/gpt-llama.cpp$ npm start

> [email protected] start
> node index.js

node:internal/errors:465
    ErrorCaptureStackTrace(err);
    ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/home/syc/llama/gpt-llama.cpp/chatEngine/initializeChatEngine.js' imported from /home/syc/llama/gpt-llama.cpp/routes/chatRoutes.js
    at new NodeError (node:internal/errors:372:5)
    at finalizeResolution (node:internal/modules/esm/resolve:405:11)
    at moduleResolve (node:internal/modules/esm/resolve:966:10)
    at defaultResolve (node:internal/modules/esm/resolve:1176:11)
    at ESMLoader.resolve (node:internal/modules/esm/loader:605:30)
    at ESMLoader.getModuleJob (node:internal/modules/esm/loader:318:18)
    at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:80:40)
    at link (node:internal/modules/esm/module_job:78:36) {
  code: 'ERR_MODULE_NOT_FOUND'
}

Node.js v18.0.0

No error reported during installation

added 130 packages, and audited 131 packages in 2s

13 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

npm error on gpt-llama.cpp

npm install

[email protected] postinstall
npm run updateengines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

[email protected] updateengines
git submodule foreach git pull

sh: python: command not found
npm ERR! code 127
npm ERR! path /Users/khederyusuf/Desktop/llama.cpp/gpt-llama.cpp
npm ERR! command failed
npm ERR! command sh -c npm run updateengines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

npm ERR! A complete log of this run can be found in:
npm ERR! /Users/khederyusuf/.npm/_logs/2023-05-12T10_55_36_676Z-debug-0.log

Cannot GET /

I'm running it on Ubuntu, and it kept throwing errors until I changed the port to 4000.
But at startup, all the page shows is "Cannot GET /" on a white background.
Firefox browser.

could we have git tags?

First off - thank you for creating such a great project, and open sourcing it! I've been having a lot of fun playing around with this and testing out new models and prompts with it!

I was wondering if you'd be open to creating git tags/github releases along with new version bumps to package.json, for a few reasons:

  • Other systems can build and package things by these tags instead of commit hash (if pulling from github instead of npm)
  • People could subscribe to notifications when new releases come out (via GitHub; I think for npm this requires setting up an npm webhook plus a paid account/organization)

Error: spawn ..\llama.cpp\main ENOENT at ChildProcess._handle.onexit

Really good project for adopting local GPT, thanks all for the effort!
I tried different models on Windows 11. Two models work fine from the command line (i.e. "main -m <model> -p <prompt>"): ggml-vic13b-uncensored-q4_0.bin and ggml-vic13b-q4_0.bin. Other models still give "libc++abi: terminating due to uncaught exception of type std::runtime_error: unexpectedly reached end of file." Continuing to run test-installation.sh against the running server, I got the errors below:

Message from test-installation.sh (client terminal):
--GPT-LLAMA.CPP TEST INSTALLATION SCRIPT LAUNCHED--
PLEASE MAKE SURE THAT A LOCAL GPT-LLAMA.CPP SERVER IS STARTED. OPEN A SEPARATE TERMINAL WINDOW START IT.\n
What port is your server running on? (press enter for default 443 port): 8000
Please drag and drop the location of your Llama-based Model (.bin) here and press enter:
../llama.cpp/models/ggml-vic13b-uncensored-q4_0.bin

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 266 0 0 100 266 0 4977 --:--:-- --:--:-- --:--:-- 5018
curl: (56) Recv failure: Connection was reset

Error: Curl command failed!
Is the gpt-llama.cpp server running? Try starting the server and running this script again.
Make sure you are testing on the right port. The Curl commmand server error port should match your port in the gpt-llama.cpp window.
Please check for any errors in the terminal window running the gpt-llama.cpp server.

Message from server terminal:

REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/chat/completions
LLAMA.CPP DETECTED

===== CHAT COMPLETION REQUEST =====

AUTO MODEL DETECTION FAILED. LOADING DEFAULT CHATENGINE...
{}

===== LLAMA.CPP SPAWNED =====
..\llama.cpp\main -m ..\llama.cpp\models\ggml-vic13b-uncensored-q4_0.bin --temp 0.7 --n_predict 1000 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt

-i -p Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a helpful assistant developed by OpenAI.
user: How are you doing today?
assistant:

===== REQUEST =====
user: How are you doing today?
node:events:491
throw er; // Unhandled 'error' event
^

Error: spawn ..\llama.cpp\main ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:283:19)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:289:12)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn ..\llama.cpp\main',
path: '..\llama.cpp\main',
spawnargs: [
'-m',
'..\llama.cpp\models\ggml-vic13b-uncensored-q4_0.bin',
'--temp',
'0.7',
'--n_predict',
'1000',
'--top_p',
'0.1',
'--top_k',
'40',
'-c',
'2048',
'--seed',
'-1',
'--repeat_penalty',
'1.1764705882352942',
'--reverse-prompt',
'user:',
'--reverse-prompt',
'\nuser',
'--reverse-prompt',
'system:',
'--reverse-prompt',
'\nsystem',
'--reverse-prompt',
'\n\n\n',
'-i',
'-p',
'Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.\n' +
'\n' +
'system: You are a helpful assistant.\n' +
'user: How are you?\n' +
'assistant: Hi, how may I help you today?\n' +
'system: You are ChatGPT, a helpful assistant developed by OpenAI.\n' +
'user: How are you doing today?\n' +
'assistant:'
]
}

Node.js v18.16.0

SERVER BUSY, REQUEST QUEUED

===== CHAT COMPLETION REQUEST =====

===== LLAMA.CPP SPAWNED =====
/root/llama.cpp/main -m /root/llama.cpp/models/7B/ggml-model-q4_0.bin --temp 0.7 --n_predict 4000 --top_p 0.1 --top_k 40 -b 2000 -c 4096 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt ## --reverse-prompt

--reverse-prompt ### -i -p ### Instructions

Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.

Inputs

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a helpful assistant developed by OpenAI.

Response

user: How are you doing today?
assistant:

===== REQUEST =====
user: How are you doing today?

===== RESPONSE =====

REQUEST RECEIVED
SERVER BUSY, REQUEST QUEUED

Cannot POST /V1/embeddings

I'm running Auto-GPT + gpt-llama.cpp, with the latest version of gpt-llama.cpp and the Auto-GPT version suggested in this guide. I can start communicating with the model, but once it gets to the action phase, Auto-GPT crashes with an HTML response suggesting it can't POST to /v1/embeddings. I'm not sure whether I should be raising this with the Auto-GPT or the gpt-llama.cpp repo.

gpt-llama.cpp:

Server is listening on:
  - http://localhost:443
  - http://10.0.0.10:443 (for other devices on the same network)

See Docs
  - http://localhost:443/docs

Test your installation
  - double click the scripts/test-installation.ps1 (powershell) or scripts/test-installation.bat (cmd) file

See https://github.com/keldenl/gpt-llama.cpp#usage for more guidance.
> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /V1/chat/completions
> LLAMA.CPP DETECTED

=====  CHAT COMPLETION REQUEST  =====
> VICUNA MODEL DETECTED. LOADING VICUNA ENGINE...
{ '--n_predict': 1021 }

=====  LLAMA.CPP SPAWNED  =====
..\llama.cpp\main -m ..\llama.cpp\Models\Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_1.bin --threads 12 --temp 0.7 --n_predict 1021 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt


 --reverse-prompt ## --reverse-prompt
## --reverse-prompt ### --reverse-prompt

 -i -p Complete the following chat conversation between the Human and the Assistant. System messages should be strictly followed as additional instructions.

### System: You are a helpful assistant.
### Human: How are you?
### Assistant: Hi, how may I help you today?
### System: You are Bob, An AI designed to assist on requests and research autonomously
Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.

GOALS:

1. find from 1 to 10 tech products released last week on Amazon with a spike in ratings
2. check reddit and youtube for positive feedback


Constraints:
1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.
2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"
5. Use subprocesses for commands that will not terminate within a few minutes

Commands:
1. Google Search: "google", args: "input": "<search>"
2. Browse Website: "browse_website", args: "url": "<url>", "question": "<what_you_want_to_find_on_website>"
3. Start GPT Agent: "start_agent", args: "name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"
4. Message GPT Agent: "message_agent", args: "key": "<key>", "message": "<message>"
5. List GPT Agents: "list_agents", args:
6. Delete GPT Agent: "delete_agent", args: "key": "<key>"
7. Clone Repository: "clone_repository", args: "repository_url": "<url>", "clone_path": "<directory>"
8. Write to file: "write_to_file", args: "file": "<file>", "text": "<text>"
9. Read file: "read_file", args: "file": "<file>"
10. Append to file: "append_to_file", args: "file": "<file>", "text": "<text>"
11. Delete file: "delete_file", args: "file": "<file>"
### System: The current time and date is Tue May 30 19:00:41 2023
### System: This reminds you of these events from your past:



### Human: Determine which next command to use, and respond using the format specified above:
### Assistant:


=====  REQUEST  =====
### Human: Determine which next command to use, and respond using the format specified above:
=====  PROCESSING PROMPT...  =====
=====  PROCESSING PROMPT...  =====

=====  RESPONSE  =====
 Based on the given goals and constraints, I recommend starting with a search for tech products released last week on Amazon. Here's my response in JSON format:
{
 "command": {
 "name": "google",
 "args": {
 "input": "1 to 10 tech products released last week on Amazon with a spike in ratings"
 }
 },
 "thoughts": {
 "speak": "I will start by searching for the specified tech products.",
 "plan": "- Search for tech products released last week on Amazon with a spike in ratings",
 "reasoning": "This is an important task that needs to be completed quickly. I will use Google's search engine to find relevant information."
 }
}
user:Request DONE
> PROCESS COMPLETE
> REQUEST RECEIVED
> PROCESSING NEXT REQUEST FOR /V1/embeddings
> PROCESS COMPLETE

AutoGPT:

 THOUGHTS:  None
REASONING:  This is an important task that needs to be completed quickly. I will use Google's search engine to find relevant information.
PLAN:
-  Search for tech products released last week on Amazon with a spike in ratings
CRITICISM:  None
NEXT ACTION:  COMMAND = google ARGUMENTS = {'input': '1 to 10 tech products released last week on Amazon with a spike in ratings'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-= 
D:\Programs Files\Python\Python311\Lib\site-packages\duckduckgo_search\compat.py:20: UserWarning: ddg is deprecated. Use DDGS().text() generator
  warnings.warn("ddg is deprecated. Use DDGS().text() generator")
D:\Programs Files\Python\Python311\Lib\site-packages\duckduckgo_search\compat.py:24: UserWarning: parameter page is deprecated, use DDGS().text() generator
  warnings.warn("parameter page is deprecated, use DDGS().text() generator")
D:\Programs Files\Python\Python311\Lib\site-packages\duckduckgo_search\compat.py:26: UserWarning: parameter max_results is deprecated, use DDGS().text()
  warnings.warn("parameter max_results is deprecated, use DDGS().text()")
rbody = <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot POST /V1/embeddings</pre>
</body>
</html>


Traceback (most recent call last):
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 675, in _interpret_response_line
    data = json.loads(rbody)
           ^^^^^^^^^^^^^^^^^
  File "D:\Programs Files\Python\Python311\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Programs Files\Python\Python311\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Programs Files\Python\Python311\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\projecto\Auto-GPT\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\projecto\Auto-GPT\autogpt\cli.py", line 151, in main
    agent.start_interaction_loop()
  File "D:\projecto\Auto-GPT\autogpt\agent\agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "D:\projecto\Auto-GPT\autogpt\memory\local.py", line 78, in add
    embedding = create_embedding_with_ada(text)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\projecto\Auto-GPT\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 621, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\Terramoto\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 677, in _interpret_response_line
    raise error.APIError(
openai.error.APIError: HTTP code 404 from API (<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot POST /V1/embeddings</pre>
</body>
</html>
)
Press any key to continue . . .

Are there different specific instructions for running Red Pajama?

I've followed the prerequisites, but I can't run Red Pajama 3B with llama.cpp; I think it's only supported inside the ggml repo, right?
I went ahead anyway, assuming gpt-llama.cpp does something to enable it.
I've placed the model at ../llama.cpp/models/ggml/gpt-neox/rp-instruct-3b-v1-ggml-model-q4_0.bin

Requesting http://localhost:443/v1/models returns:
Missing API_KEY. Please set up your API_KEY (in this case path to model .bin in your ./llama.cpp folder).
I'm not sure where to put this path. I tried API_KEY=<path to model> npm start, and I tried entering <path to model> as Swagger's Bearer token. Where do I set this API_KEY?
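For reference, the model path goes where an OpenAI key normally would, i.e. as the bearer token on each request (this mirrors what the test script reports; the path below is just the one from this issue, and whether llama.cpp can actually load a gpt-neox model is a separate question):

curl -X POST http://localhost:443/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ../llama.cpp/models/ggml/gpt-neox/rp-instruct-3b-v1-ggml-model-q4_0.bin" \
  -d '{"messages": [{"role": "user", "content": "How do I build a website?"}]}'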

Edit: So I tried the ggml repo directly, but it's also not working. I'm confused about how to run Red Pajama:

./bin/gpt-neox -m ../../models/rp-instruct-3b-v1-ggml-model-q4_0.bin -p "How do I build a website?"
main: seed = 1684913741
gpt_neox_model_load: loading model from '../../models/rp-instruct-3b-v1-ggml-model-q4_0.bin' - please wait ...
gpt_neox_model_load: n_vocab = 50432
gpt_neox_model_load: n_ctx   = 2048
gpt_neox_model_load: n_embd  = 2560
gpt_neox_model_load: n_head  = 32
gpt_neox_model_load: n_layer = 32
gpt_neox_model_load: n_rot   = 80
gpt_neox_model_load: par_res = 0
gpt_neox_model_load: ftype   = 2
gpt_neox_model_load: qntvr   = 0
gpt_neox_model_load: ggml ctx size = 3572.54 MB
gpt_neox_model_load: memory_size =   640.00 MB, n_mem = 65536
terminate called after throwing an instance of 'std::length_error'
  what():  basic_string::_M_create
Aborted
[fedorauser@W10JB1S9K3 build]$

llama.cpp unresponsive for 20 seconds

I'm trying to use this to run Auto-GPT. As a test, before hooking it up to Auto-GPT, I tried it with Chatbot-UI. However, gpt-llama.cpp keeps locking up with LLAMA.CPP UNRESPONSIVE FOR 20 SECS. ATTEMPTING TO RESUME GENERATION whenever the LLM finishes its response. I'm using gpt4-x-alpaca-13B-GGML, which I converted to gguf with the tools in llama.cpp. Using llama.cpp alone, the model works fine (albeit not the smartest). What can I do to solve this issue?

Change listening ip to public ip?

I have a scenario where I want to run the API for a client app that runs on another server, so I need gpt-llama.cpp to listen on the public IP. Is there a way to do this, or a place in the code where I can change it?

Thanks, Doug
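For illustration, a minimal sketch of binding an Express server to all interfaces (0.0.0.0) instead of localhost only; the variable names are assumptions, not the actual index.js code:

import express from 'express';

const app = express();
const PORT = process.env.PORT || 443;
const HOST = process.env.HOST || '0.0.0.0'; // 0.0.0.0 = listen on every interface, including the public IP

app.listen(PORT, HOST, () => {
	console.log(`Server is listening on ${HOST}:${PORT}`);
});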

no response message with Readable Stream: CLOSED

I'd really appreciate it if anyone could shed light here. Requests seem to be received, but no response ever comes back, and the server console shows "Readable Stream: CLOSED".

【test installation terminal】
D:\github\gpt-llama.cpp>sh test-installation.sh
--GPT-LLAMA.CPP TEST INSTALLATION SCRIPT LAUNCHED--
PLEASE MAKE SURE THAT A LOCAL GPT-LLAMA.CPP SERVER IS STARTED. OPEN A SEPARATE TERMINAL WINDOW START IT.\n
What port is your server running on? (press enter for default 443 port): 8000
Please drag and drop the location of your Llama-based Model (.bin) here and press enter: ../llama.cpp/models/ggml-alpaca-7b-q4.bin

<if the server is terminated, the message below appears>

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 266 0 0 0 266 0 0 --:--:-- 0:30:05 --:--:-- 0
curl: (56) Recv failure: Connection was reset

--RESPONSE--

--RESULTS--
Curl command was successful!
To use any app with gpt-llama.cpp, please provide the following as the OPENAI_API_KEY:
../llama.cpp/models/ggml-alpaca-7b-q4.bin

【server console】
D:\github\gpt-llama.cpp>set PORT=8000
D:\github\gpt-llama.cpp>npm start
...

[email protected] start
node index.js
...
REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/chat/completions
LLAMA.CPP DETECTED

===== CHAT COMPLETION REQUEST =====

ALPACA MODEL DETECTED. LOADING ALPACA ENGINE...
{}

===== LLAMA.CPP SPAWNED =====
..\llama.cpp\main -m ..\llama.cpp\models\ggml-alpaca-7b-q4.bin --temp 0.7 --n_predict 1000 --top_p 0.1 --top_k 40 -c 2048 --seed -1 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt

--reverse-prompt ## --reverse-prompt

--reverse-prompt ### -i -p Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

Instruction

Complete the following chat conversation between the user and the assistant. system messages should be strictly followed as additional instructions.
)

Inputs

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a helpful assistant developed by OpenAI.
user: How are you doing today?

Response

===== REQUEST =====
user: How are you doing today?
===== PROCESSING PROMPT... =====
===== PROCESSING PROMPT... =====
===== PROCESSING PROMPT... =====
......
===== PROCESSING PROMPT... =====

===== STDERR =====
stderr Readable Stream: CLOSED
done
llama_model_load: model size = 4017.27 MB / num tensors = 291

system_info: n_threads = 4 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling parameters: temp = 0.700000, top_k = 40, top_p = 0.100000, repeat_last_n = 64, repeat_penalty = 1.176471

[end of text]

main: mem per token = 14368644 bytes
main: load time = 1305.05 ms
main: sample time = 42.33 ms
main: predict time = 63007.42 ms / 277.57 ms per token
main: total time = 65483.43 ms

Readable Stream: CLOSED

PROCESS COMPLETE

Duplication of capabilities?

Filing this issue primarily to make the developers aware of the existing llama-cpp-python web server that accomplishes the same thing and also has endpoint documentation with examples baked in:

https://abetlen.github.io/llama-cpp-python/#web-server


Hopefully this is useful and could reduce future development effort in addressing some of the other issues I'm seeing that request support for other popular open LLMs.

Add "--mlock" for M1 mac, on routes/chatRoutes.js

Adding the --mlock flag to the ./main call seemed to make a 7B model run much faster on an M1 Mac.

Modify routes/chatRoutes.js so that scriptArgs looks a little like this, with the option enabled:

const scriptArgs = [
    '-m',
    modelPath,
    ...args,
    ...stopArgs,
    '--mlock', // keep the model resident in RAM instead of letting it page out
    '-i',
    '-p',
    initPrompt
];

Thank you.

stuck

I am following the instructions to install, but I had to change the numpy version to 1.19.0 to get it to work. I also had to download vicuna.bin to run a model, since no model ships with the repo. After downloading vicuna.bin from the FastChat repo, I created a 7B folder inside the models folder, put ggml-vocab.bin there, and ran this command from the instructions:

command: ./main -m models/7B/ggml-vocab.bin -p "the sky is"

error: main: build = 526 (e6a46b0) main: seed = 1683697939 llama.cpp: loading model from models/7B/ggml-vocab.bin error loading model: missing tok_embeddings.weight llama_init_from_file: failed to load model llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-vocab.bin' main: error: unable to load mode

Can I skip this part, and how do I move forward given that no model is bundled with this repo?

Please help if you can. Also, if I try to host the backend as an API, how is that possible, since I'm just using localhost:8080 as the backend endpoint?

issue with chatbot-ui

Yesterday the instructions were working, but today there appears to be a new option on the bottom left asking for the API key within the UI itself. When I paste the local path to the model there, it does not work and gives the following error:

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from D:\chatbot-ui.env
event - compiled client and server successfully in 1921 ms (273 modules)
wait - compiling...
event - compiled successfully in 199 ms (233 modules)
wait - compiling / (client and server)...
event - compiled client and server successfully in 2.6s (5555 modules)
wait - compiling /api/models (client and server)...
event - compiled successfully in 89 ms (43 modules)
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]
wait - compiling /api/chat (client and server)...
event - compiled successfully in 321 ms (53 modules)
[TypeError: Failed to parse URL from http://localhost:443 /v1/chat/completions]
[TypeError: Failed to parse URL from http://localhost:443 /v1/models]

trouble generating a response

I've taken steps to authorize the model by pasting the local path to it, and I'm running chatbot-ui with .env.local set to the local model. It connects, but as soon as I try to generate a response, I get this error and the program exits:

Microsoft Windows [Version 10.0.19044.2846]
(c) Microsoft Corporation. All rights reserved.

D:\gpt-llama.cpp>npm start

[email protected] start
node index.js

Server is listening on:

See Docs

Test your installation

  • double click the test-installation.ps1 (powershell) or test-installation.bat (cmd) file

See https://github.com/keldenl/gpt-llama.cpp#usage for more guidance.

REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/models
PROCESS COMPLETE
REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/models
REQUEST RECEIVED
PROCESSING NEXT REQUEST FOR /v1/chat/completions

===== CHAT COMPLETION REQUEST =====

===== LLAMA.CPP SPAWNED =====
1i6Ld\llama.cpp\main -m sk- --temp 1 --n_predict 1000 --top_p 0.1 --top_k 40 -b 512 -c 2048 --repeat_penalty 1.1764705882352942 --reverse-prompt user: --reverse-prompt
user --reverse-prompt system: --reverse-prompt
system --reverse-prompt ## --reverse-prompt

--reverse-prompt ### -i -p ### Instructions

Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.

Inputs

system: You are a helpful assistant.
user: How are you?
assistant: Hi, how may I help you today?
system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.
user: prompt 1
user: hi

Response

user: hjghj
assistant:

===== REQUEST =====
user: hjghj
node:events:491
throw er; // Unhandled 'error' event
^

Error: spawn sk-\llama.cpp\main ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:283:19)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:289:12)
at onErrorNT (node:internal/child_process:476:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn sk-\llama.cpp\main',
path: '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\llama.cpp\main',
spawnargs: [
'-m',
'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',
'--temp',
1,
'--n_predict',
1000,
'--top_p',
'0.1',
'--top_k',
'40',
'-b',
'512',
'-c',
'2048',
'--repeat_penalty',
'1.1764705882352942',
'--reverse-prompt',
'user:',
'--reverse-prompt',
'\nuser',
'--reverse-prompt',
'system:',
'--reverse-prompt',
'\nsystem',
'--reverse-prompt',
'##',
'--reverse-prompt',
'\n##',
'--reverse-prompt',
'###',
'-i',
'-p',
'### Instructions\n' +
'Complete the following chat conversation between the user and the assistant. System messages should be strictly followed as additional instructions.\n' +
'\n' +
'### Inputs\n' +
'system: You are a helpful assistant.\n' +
'user: How are you?\n' +
'assistant: Hi, how may I help you today?\n' +
"system: You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.\n" +
'user: prompt 1\n' +
'user: hi\n' +
'\n' +
'### Response\n' +
'user: hjghj\n' +
'assistant:'
]
}

Node.js v18.13.0

D:\gpt-llama.cpp>

Add support for Auto-GPT

Get Auto-GPT working. It's blocked on the following two items:

  • Embeddings aren't working quite right / are very inconsistent between models, so having a built-in local embedding option is good.

  • Auto-GPT needs a way to point at gpt-llama.cpp's base URL instead of the OpenAI API.

For the second point, we can technically work around it and set the BASE_URL by modifying the code, but it would be nice to have it as an option / env variable so it's easier to get started.

I'll likely open a new PR against Auto-GPT for the second point, and push to get the PR I linked above merged in.

Server is not working on windows yet

Hey!

I am very interested in this project, so I wanted to test it out myself. I followed all the instructions, and the command prompt does bring up a connection on localhost:443 after launching "npm start", but when I try to access the docs at that URL, there is no connection. I know that this is primarily being run on Apple hardware, so take your time with future fixes; just wanted to let you know.

Finding last messages?

let lastMessages = [maybeLastMessage];
if (maybeLastMessage.role !== 'user') {
	const lastLastMessage = messages.pop();
	lastMessages = [lastLastMessage, ...lastMessages];
}

...

const samePrompt =
	global.lastRequest &&
	global.lastRequest.type === 'chat' &&
	compareArrays(global.lastRequest.messages, messages);
const continuedInteraction =
	!!global.childProcess && samePrompt && messages.length > 1;

Shouldn't the first snippet just loop until it hits a message with the "assistant" role? I want to enter a "system" message and then a "user" message after it, but the logic here would then treat it as a new conversation, because it doesn't match the last conversation.

I can put a user message first and a system message after it, but the chat response behaves badly when I do this.
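A hedged sketch of the loop being suggested, collecting trailing messages until an assistant message is reached (this is not the current chatRoutes.js logic):

// walk back from the end of `messages`, treating everything after the last
// assistant reply (system + user messages) as the current turn
function collectLastMessages(messages) {
	const lastMessages = [];
	while (messages.length > 0 && messages[messages.length - 1].role !== 'assistant') {
		lastMessages.unshift(messages.pop()); // mutates `messages`, like the original code
	}
	return lastMessages;
}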

Every Other Chat Response

I get a restart on the chat route because the last response is recorded in the global messages like this:

{"role":"assistant","content":"\\\b\\\b \b\nUSER:\\\b\\\b \b\ntests\nassistant:\nI'm happy to assist you in finding information related to tests. What specific topic or query are you interested in?"}

It only happens every other response, which might be the oddest part.

It appears to be doing something odd by combining the previous user message with the assistant response... Did you run into this?
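For illustration, a hedged sketch of sanitizing llama.cpp's interactive output before storing it as the assistant message; the exact cleanup rules here are an assumption based on the garbled content quoted above:

// strip the backspace/backslash erase sequences llama.cpp's interactive mode
// prints, plus any echoed role labels, before recording the reply in history
function cleanAssistantReply(raw) {
	return raw
		.replace(/[\\\x08]/g, '') // literal backslashes and backspace characters
		.replace(/^\s*(user|assistant|system):\s*/gim, '') // echoed "user:" / "assistant:" lines
		.trim();
}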

I'm using WizardLM.
