
samuraigpt / embedai

2.8K stars · 2.8K forks · 298 issues · 99 KB

An app to interact with your documents using the power of GPT: 100% private, no data leaks

Home Page: https://www.thesamur.ai/?utm_source=github&utm_medium=link&utm_campaign=github_privategpt

License: MIT License

Python 39.40% JavaScript 54.03% CSS 6.57%
chatbot chatgpt embedai embeddings generative gpt gpt4 gpt4all langchain models openai privategpt vectorstore whisper

embedai's People

Contributors

anil-matcha, ashtrindade, djibe, joemccann, ncodepro, revskill10, staeiou, thephabulousphantom, vadootvpeer


embedai's Issues

weird response

I trained the model using a PDF file of a book on ASO.
I asked "What is ASO?"
I get this answer (copied from the browser console to get the real response):

{"answer":"H9E,:?&+8&1<*<2E(#!=)G68/?5C/BGC,48BC292;3;1=%D$EC!;F%'-,81261/EB!52B159F26E49:15E!F4C(H17'&+#<\"'=F'A$)H;C?<(:9C\"H)F6=)8<+?.$'+B*2G/F:B7-4#,95C\"&E)&(,'12<);'(54<G37=,A'D#67E+\")E2*=+F$H*>$7$7BCE?#<1!:!=5<#6G\"*-D!>8'F8,B5A.;6*B!BA%(7/&>:G##=)3C->**'F-;GD+,$2","query":"how can i improve my app description","source":[{"name":"source_documents/The-Advanced-App-Store-Optimization_eBook_2022.pdf"},{"name":"source_documents/The-Advanced-App-Store-Optimization_eBook_2022.pdf"}]}

I tried imartinez's privateGPT and there I get a valid, readable text response. I also tried https://dencode.com/en/string but couldn't identify any encoding, so I'm not really sure what is going on. Does someone have an idea?

install vicuna13b?

I have tried changing the code but have not succeeded yet. Has someone already managed to run it with the model ggml-vic13b-q5_1.bin? Thank you.
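For what it's worth, ggml-vic13b is a LLaMA-family model, so the GPT4All wrapper with backend='gptj' used elsewhere in these issues won't load it; langchain-era forks typically route LLaMA-style weights through LlamaCpp instead. A minimal sketch of that dispatch (the function and the filename heuristics are illustrative, not the repo's actual code):

```python
def wrapper_for_model(filename: str) -> str:
    """Pick a langchain LLM wrapper based on the ggml weight filename.

    Assumption: GPT-J-based gpt4all weights load via GPT4All(backend='gptj'),
    while Vicuna/LLaMA-derived weights (ggml-vic*, ggml-model-q4_0, ...)
    need LlamaCpp. A real loader would construct the wrapper here.
    """
    name = filename.lower()
    if "gpt4all-j" in name:
        return "GPT4All"
    if name.startswith("ggml-vic") or "llama" in name or "q4_0" in name:
        return "LlamaCpp"
    raise ValueError(f"unknown model family: {filename}")

print(wrapper_for_model("ggml-vic13b-q5_1.bin"))  # LlamaCpp
```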

unable to run npm run dev

C:>cd C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client

C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client>npm run dev

[email protected] dev C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client
next dev

C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client\node_modules\next\dist\cli\next-dev.js:256
showAll: args["--show-all"] ?? false,
^

SyntaxError: Unexpected token '?'
at Module._compile (internal/modules/cjs/loader.js:892:18)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:973:10)
at Module.load (internal/modules/cjs/loader.js:812:32)
at Function.Module._load (internal/modules/cjs/loader.js:724:14)
at Module.require (internal/modules/cjs/loader.js:849:19)
at require (internal/modules/cjs/helpers.js:74:18)
at Object.dev (C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client\node_modules\next\dist\lib\commands.js:15:30)
at Object.<anonymous> (C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client\node_modules\next\dist\bin\next:150:28)
at Module._compile (internal/modules/cjs/loader.js:956:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:973:10)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] dev: next dev
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\User\AppData\Roaming\npm-cache_logs\2023-05-29T07_17_34_828Z-debug.log

C:\Users\User\Documents\Learning\privateGPT2\privateGPT-main\client>
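The `??` at the failing line is the nullish coalescing operator, which Node.js only supports from version 14 on; `SyntaxError: Unexpected token '?'` almost always means an older runtime (the bare `internal/modules/cjs/loader.js` paths in the stack are consistent with Node 12). A quick sanity check on a `node --version` string, as a standalone sketch rather than anything from the repo:

```python
def node_supports_nullish_coalescing(version: str) -> bool:
    """True if a `node --version` string (e.g. 'v12.22.12') is new enough
    for the `??` operator, which landed in Node 14."""
    major = int(version.lstrip("v").split(".")[0])
    return major >= 14

print(node_supports_nullish_coalescing("v12.22.12"))  # False: upgrade Node
print(node_supports_nullish_coalescing("v18.16.0"))   # True
```

If the check fails, upgrading Node (e.g. via nvm) and re-running `npm run dev` is the usual fix.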

spanish language

I want to know how to configure it so that it speaks to me in Spanish: I ask in Spanish and it answers in Spanish. I don't understand why, when I ask in Spanish, it answers me in English.
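Small local models like gpt4all-j tend to answer in English unless told otherwise. One common workaround, assuming the fork lets you customize the prompt passed to the QA chain (the template below is illustrative, not the repo's built-in one), is to pin the answer language in the prompt itself:

```python
# Hypothetical QA prompt that forces Spanish answers.
SPANISH_QA_TEMPLATE = (
    "Usa el siguiente contexto para responder la pregunta.\n"
    "Responde siempre en español.\n\n"
    "Contexto: {context}\n"
    "Pregunta: {question}\n"
    "Respuesta:"
)

def build_prompt(context: str, question: str) -> str:
    # With langchain this string would become a PromptTemplate passed via
    # chain_type_kwargs; plain .format() shows the resulting prompt text.
    return SPANISH_QA_TEMPLATE.format(context=context, question=question)

print(build_prompt("ASO significa App Store Optimization.", "¿Qué es ASO?"))
```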

client run error

(pgweb) rex@alienware17B:~/pgweb/privateGPT/client$ npm run dev

[email protected] dev
next dev

/home/rex/pgweb/privateGPT/client/node_modules/next/dist/cli/next-dev.js:256
showAll: args["--show-all"] ?? false,
^

SyntaxError: Unexpected token '?'
at wrapSafe (internal/modules/cjs/loader.js:915:16)
at Module._compile (internal/modules/cjs/loader.js:963:27)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
at Module.load (internal/modules/cjs/loader.js:863:32)
at Function.Module._load (internal/modules/cjs/loader.js:708:14)
at Module.require (internal/modules/cjs/loader.js:887:19)
at require (internal/modules/cjs/helpers.js:74:18)
at Object.dev (/home/rex/pgweb/privateGPT/client/node_modules/next/dist/lib/commands.js:15:30)
at Object.<anonymous> (/home/rex/pgweb/privateGPT/client/node_modules/next/dist/bin/next:150:28)
at Module._compile (internal/modules/cjs/loader.js:999:30)

500 Internal Server Error

After uploading a document, when asking questions, I get:
Error getting data.<!doctype html> <title>500 Internal Server Error</title>

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.10/dist-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/it/Scripts/PrivateChatGPT/server/privateGPT.py", line 146, in get_answer
res = qa(query)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/retrieval_qa/base.py", line 119, in _call
docs = self._get_docs(question)
File "/usr/local/lib/python3.10/dist-packages/langchain/chains/retrieval_qa/base.py", line 181, in _get_docs
return self.retriever.get_relevant_documents(question)
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/base.py", line 377, in get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py", line 182, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py", line 229, in similarity_search_with_score
results = self.__query_collection(
File "/usr/local/lib/python3.10/dist-packages/langchain/utils.py", line 52, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py", line 121, in __query_collection
return self._collection.query(
File "/usr/local/lib/python3.10/dist-packages/chromadb/api/models/Collection.py", line 227, in query
return self._client._query(
File "/usr/local/lib/python3.10/dist-packages/chromadb/api/local.py", line 437, in _query
uuids, distances = self._db.get_nearest_neighbors(
File "/usr/local/lib/python3.10/dist-packages/chromadb/db/clickhouse.py", line 585, in get_nearest_neighbors
uuids, distances = index.get_nearest_neighbors(embeddings, n_results, ids)
File "/usr/local/lib/python3.10/dist-packages/chromadb/db/index/hnswlib.py", line 240, in get_nearest_neighbors
raise NoIndexException(
chromadb.errors.NoIndexException: Index not found, please create an instance before querying
192.168.70.36 - - [30/May/2023 08:26:06] "POST /get_answer HTTP/1.1" 500 -
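The NoIndexException at the bottom means /get_answer ran before any embeddings were persisted, i.e. the ingest step never actually completed for this vector store. A cheap guard the server could apply before querying (the db/ directory name is an assumption, matching the "data will be stored in: db/" lines elsewhere in these issues; the helper is a sketch, not the repo's code):

```python
from pathlib import Path

def index_ready(persist_dir: str = "db") -> bool:
    """Return True only if the vector store has persisted something.

    Assumption: Chroma persists its index under `persist_dir`. Querying
    before a successful ingest leaves that directory missing or empty,
    which is exactly when chromadb raises NoIndexException.
    """
    p = Path(persist_dir)
    return p.is_dir() and any(p.iterdir())

# The /get_answer handler could return a 400 with a clear message when
# index_ready() is False, instead of letting the 500 traceback escape.
```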

npm run dev hangs at a certain point

When I run the above command in PowerShell, it hangs at a certain point and does not continue.

Import trace for requested module:

./node_modules/next/dist/lib/is-api-route.js
./node_modules/next/dist/shared/lib/router/router.js

./node_modules/next/dist/lib/is-error.js
There are multiple modules with names that only differ in casing.
This can lead to unexpected behavior when compiling on a filesystem with other case-semantic.
Use equal casing. Compare these module identifiers:

  • D:\InnoGPT\WebPrivateGPT\privateGPT\client\node_modules\next\dist\lib\is-error.js
    Used by 3 module(s), i. e.
    D:\InnoGPT\WebPrivateGPT\privateGPT\client\node_modules\next\dist\compiled@next\react-refresh-utils\dist\loader.js!D:\InnoGPT\WebPrivateGPT\privateGPT\client\node_modules\next\dist\build\webpack\loaders\next-swc-loader.js??ruleSet[1].rules[3].oneOf[5].use[1]!D:\InnoGPT\WebPrivateGPT\privateGPT\client\node_modules\next\dist\client\router.js
  • D:\innoGPT\webprivategpt\privategpt\client\node_modules\next\dist\lib\is-error.js
    Used by 4 module(s), i. e.
    D:\InnoGPT\WebPrivateGPT\privateGPT\client\node_modules\next\dist\compiled@next\react-refresh-utils\dist\loader.js!D:\InnoGPT\WebPrivateGPT\privateGPT\client\node_modules\next\dist\build\webpack\loaders\next-swc-loader.js??ruleSet[1].rules[3].oneOf[5].use[1]!D:\innoGPT\webprivategpt\privategpt\client\node_modules\next\dist\client\index.js

Import trace for requested module:
./node_modules/next/dist/lib/is-error.js

It stops here and does not continue even after a few hours. How can I tell whether it completed? What causes this error? When I opened localhost to download the language models, it gave me an error as well.

Error downloading model

Hi, when I hit "Download model" it returns this error:

Unhandled Runtime Error
ReferenceError: response is not defined

Source
components/ConfigSideNav.js (27:3) @ response

  25 |   } catch (error) {
  26 |     setIsLoading(false);
> 27 | 	response.text().then(text => {toast.error("Error Ingesting data."+text);})
     | 	 ^
  28 |   }
  29 | };
  30 | 

gpt_tokenize: unknown token '?

from flask import Flask,jsonify, render_template, flash, redirect, url_for, Markup, request
gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 4505.45 MB
gptj_model_load: memory_size = 896.00 MB, n_mem = 57344
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
LLM0 GPT4All
Params: {'model': 'models/ggml-gpt4all-j-v1.3-groovy.bin', 'n_predict': 256, 'n_threads': 4, 'top_k': 40, 'top_p': 0.95, 'temp': 0.8}

  • Serving Flask app 'privateGPT'
  • Debug mode: off
    [2023-05-31 10:39:11,833] {_internal.py:186} INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
  • Running on all addresses (0.0.0.0)
  • Running on http://127.0.0.1:5000
  • Running on http://10.253.1.21:5000
    [2023-05-31 10:39:11,834] {_internal.py:186} INFO - Press CTRL+C to quit
    Loading documents from source_documents
    Loaded 1 documents from source_documents
    Split into 90 chunks of text (max. 500 characters each)
    [2023-05-31 10:39:47,710] {_internal.py:186} INFO - 127.0.0.1 - - [31/May/2023 10:39:47] "GET /ingest HTTP/1.1" 200 -
    [2023-05-31 10:40:04,057] {_internal.py:186} INFO - 127.0.0.1 - - [31/May/2023 10:40:04] "OPTIONS /get_answer HTTP/1.1" 200 -
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '€'
    gpt_tokenize: unknown token '?
    gpt_tokenize: unknown token '?

How can I fix this issue?
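The '€' in those warnings is a telltale: it is byte 0x80, the middle byte of a UTF-8 curly quote (’, bytes E2 80 99) that was decoded as Windows-1252. In other words, the ingested text contains mojibake like "â€™", and the tokenizer has no tokens for those characters. Re-encoding repairs it, assuming the whole string was mis-decoded the same way; this is a sketch you could apply to document text before ingestion, not code from the repo:

```python
def fix_mojibake(text: str) -> str:
    """Undo a UTF-8-read-as-Windows-1252 decode, e.g. 'â€™' -> '’'.

    If the text was not actually mis-decoded, the round trip fails and
    we return it unchanged rather than corrupt it further.
    """
    try:
        return text.encode("cp1252").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return text

print(fix_mojibake("donâ€™t"))  # don’t
```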

Langchain version

Upgrading the langchain version to 0.0.177 in requirements.txt optimizes RAM usage very well; I really recommend it.

Can't run on iMac M1

I removed the pywin32 module from the requirements, but I'm still getting errors when running pip install -r requirements.txt. This is the one I'm getting:

ERROR: Could not find a version that satisfies the requirement putz-deprecation-shim==0.1.0.post0 (from versions: none) 
ERROR: No matching distribution found for putz-deprecation-shim==0.1.0.post0

And when running privateGPT.py, I'm getting the error below:

File "/Users/XX/Desktop/privateGPT/server/privateGPT.py", line 1, in <module>
    from flask import Flask,jsonify, render_template, flash, redirect, url_for, Markup, request
ModuleNotFoundError: No module named 'flask'

(base) XX@XXXX-iMac server % python3 privateGPT.py           
Traceback (most recent call last):
  File "/Users/am/Desktop/privateGPT/server/privateGPT.py", line 1, in <module>
    from flask import Flask,jsonify, render_template, flash, redirect, url_for, Markup, request

ModuleNotFoundError: No module named 'flask'

Appreciate any support! :)
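Two separate things seem to be failing here: the requirements file contains Windows-only pins (pywin32 was one), and the ModuleNotFoundError for flask suggests the interpreter running privateGPT.py is not the one the packages were installed into (the `(base)` prompt points at conda's base env rather than a venv). A sketch of stripping known Windows-only pins before installing on macOS; the package set is an assumption and not exhaustive:

```python
WINDOWS_ONLY = {"pywin32", "pywin32-ctypes"}  # assumption: extend as needed

def portable_requirements(lines):
    """Drop Windows-only pins so pip can resolve on macOS/Linux."""
    kept = []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if name not in WINDOWS_ONLY:
            kept.append(line)
    return kept

reqs = ["flask==2.3.2", "pywin32==306", "langchain==0.0.177"]
print(portable_requirements(reqs))  # ['flask==2.3.2', 'langchain==0.0.177']
```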

Client folder

When I run "npm run dev" the console prints: '"next" is not recognized as an internal or external command, operable program or executable file.'

Traceback Error while Downloading a Model

I finally got it to start. I uploaded a file and ingested the data; the model then downloaded to 32%, and then this happened:

Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\response.py", line 705, in _error_catcher
yield
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\response.py", line 830, in _raw_read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
urllib3.exceptions.IncompleteRead: IncompleteRead(1236664320 bytes read, 2548583961 more expected)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\models.py", line 816, in generate
yield from self.raw.stream(chunk_size, decode_content=True)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\response.py", line 935, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\response.py", line 874, in read
data = self._raw_read(amt)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\response.py", line 808, in _raw_read
with self._error_catcher():
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\response.py", line 722, in _error_catcher
raise ProtocolError(f"Connection broken: {e!r}", e) from e
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(1236664320 bytes read, 2548583961 more expected)', IncompleteRead(1236664320 bytes read, 2548583961 more expected))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\flask_cors\extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "C:\Users\User\Downloads\privateGPT-main\server\privateGPT.py", line 190, in download_and_save
for chunk in response.iter_content(chunk_size=4096):
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\models.py", line 818, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(1236664320 bytes read, 2548583961 more expected)', IncompleteRead(1236664320 bytes read, 2548583961 more expected))
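IncompleteRead means the connection dropped about 1.2 GB into a roughly 3.8 GB download, and the server code (per the traceback, a plain response.iter_content loop) starts over from zero each time. A resumable download keeps the partial file and asks the server to continue from its current size via an HTTP Range header. The helper below sketches just the offset/header part; it assumes the host serving the model honors range requests:

```python
import os

def resume_offset(path: str) -> int:
    """Bytes already on disk; where a resumed download should continue."""
    return os.path.getsize(path) if os.path.exists(path) else 0

def resume_headers(offset: int) -> dict:
    """Range header to pass as requests.get(..., headers=resume_headers(n))."""
    return {"Range": f"bytes={offset}-"} if offset else {}

print(resume_headers(1236664320))  # {'Range': 'bytes=1236664320-'}
```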

error running python privateGPT. help

Running in /server

Running python privateGPT. gives the error:

Command 'python' not found, did you mean:
command 'python3' from deb python3
command 'python' from deb python-is-python3

and running python3 privateGPT. gives the error:

python3: can't open file '/home/ai1/privateGPT/server/privateGPT.': [Errno 2] No such file or directory

Queries are very slow

Queries are very slow. What can I try to make them faster? Thank you all. Could an embedded FAISS database solve the slow queries?
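Retrieval is rarely the main bottleneck here (most of the latency is LLM inference on CPU), but the retrieval step really is just top-k nearest neighbours over the stored embeddings, and that is what FAISS accelerates. As a dependency-free illustration of what that step computes (FAISS replaces this exact brute-force loop with an optimized index; this is a sketch, not the repo's code):

```python
def top_k(query, vectors, k=2):
    """Return indices of the k vectors closest to query (squared L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(range(len(vectors)), key=lambda i: dist(query, vectors[i]))
    return ranked[:k]

docs = [(0.0, 0.0), (1.0, 1.0), (0.9, 1.1), (5.0, 5.0)]
print(top_k((1.0, 1.0), docs))  # [1, 2]
```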

Error when uploading the file and ingesting data

Unhandled Runtime Error
ReferenceError: response is not defined

Source
components\ConfigSideNav.js (27:3) @ response

25 | } catch (error) {
26 | setIsLoading(false);

27 | response.text().then(text => {toast.error("Error Ingesting data."+text);})
| ^
28 | }
29 | };
30 |

Loading models

I have noticed that the app loads the model twice, using more RAM than needed. Is there a reason for that, or is it just a bug?
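The tracebacks in other issues here show GPT4All(...) constructed in both download_and_save and load_model, which may be where the duplication comes from; if so, memoizing the loader is the usual fix. A sketch with a counter standing in for the expensive GPT4All construction (names are illustrative, not the repo's):

```python
from functools import lru_cache

LOAD_COUNT = 0

@lru_cache(maxsize=1)
def get_llm(model_path: str):
    """Construct the LLM once per path; later calls reuse the instance."""
    global LOAD_COUNT
    LOAD_COUNT += 1
    return f"<llm loaded from {model_path}>"  # stand-in for GPT4All(...)

get_llm("models/ggml-gpt4all-j-v1.3-groovy.bin")
get_llm("models/ggml-gpt4all-j-v1.3-groovy.bin")
print(LOAD_COUNT)  # 1
```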

Runtime Error

I'm getting the following runtime error when running locally, after installing all the npm packages with npm i, creating a venv for the Python packages, and installing all the requirements with pip.

"Unhandled Runtime Error
TypeError: event.target.file is undefined"


Error getting data

I'm facing an issue after uploading the document and successfully ingesting the data.
When I send a message in the prompt, it says "Error getting data" and I get no response.
The server-side terminal displays this:

127.0.0.1 - - [26/May/2023 16:13:19] "OPTIONS /get_answer HTTP/1.1" 200 -
Using embedded DuckDB with persistence: data will be stored in: db/
127.0.0.1 - - [26/May/2023 16:13:20] "POST /get_answer HTTP/1.1" 400 -

PrivateGPT gives errors

I have followed the instructions, but when I go to localhost:3000 it gives the following errors.

When uploading a document: "Error uploading document"

When downloading the model: "Error downloading model"

When writing anything in the chat: "Error fetching answer"

I tried to run privateGPT.py and it gave me this:

File "C:\Users\User\Downloads\privateGPT-main\server\privateGPT.py", line 1, in <module>
    from flask import Flask,jsonify, render_template, flash, redirect, url_for, Markup, request
ModuleNotFoundError: No module named 'flask'

I want to push a fix for a client problem

I updated your client code where the file-upload function was failing. This is the updated code:

const handleFileChange = (event) => {
  if (event.target.files && event.target.files.length > 0) {
    setSelectedFile(event.target.files[0]);
  }
};

Please update your code. Thanks.

error fetching answer | model not loaded.

I have downloaded ggml-gpt4all-j-v1.3-groovy.bin and ggml-model-q4_0.bin into the server/models folder. I then uploaded my PDF, and the ingest completed successfully, but when I query anything it gives the error "fetching answer | model not downloaded". How do I resolve this issue?

Switch default language

Hey there! Thank you for providing such a tool, very cool!
Is it possible to switch the default language? I would really like to test in German.

No such file or directory 'models/ggml-gpt4all-j-v1.3-groovy.bin'

I got this all set up with only a few issues; nothing went wrong when setting up the local environment. I took a simple text document, uploaded it, and ingested the data. Then, when I went to download the model, this is the error that showed up:

No such file or directory 'models/ggml-gpt4all-j-v1.3-groovy.bin'
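That message usually means the relative models/ path does not resolve: the server has to be started from the server/ directory (or the path made absolute), and the weight file must actually have finished downloading. A quick pre-flight check along those lines; the filename comes from the error above, while the helper itself is a sketch:

```python
from pathlib import Path

MODEL_FILE = "models/ggml-gpt4all-j-v1.3-groovy.bin"

def model_available(path: str = MODEL_FILE) -> bool:
    """True only if the weight file exists relative to the current working
    directory, which is why starting the server from outside server/
    produces 'No such file or directory'."""
    return Path(path).is_file()

print(model_available("definitely/missing.bin"))  # False
```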

npm run dev error

[email protected] dev
next dev

(node:58556) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
(Use node --trace-warnings ... to show where the warning was created)

  • ready started server on 0.0.0.0:3000, url: http://localhost:3000
    (node:58557) ExperimentalWarning: stream/web is an experimental feature. This feature could change at any time
    (Use node --trace-warnings ... to show where the warning was created)
    node:events:368
    throw er; // Unhandled 'error' event
    ^

Error: listen EADDRNOTAVAIL: address not available 10.33.42.50
at Server.setupListenHandle [as _listen2] (node:net:1317:21)
at listenInCluster (node:net:1382:12)
at GetAddrInfoReqWrap.doListen [as callback] (node:net:1520:7)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:73:8)
Emitted 'error' event on Server instance at:
at emitErrorNT (node:net:1361:8)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'EADDRNOTAVAIL',
errno: -49,
syscall: 'listen',
address: '10.33.42.50'
}

chromadb.errors.NoIndexException

I downloaded a model and uploaded the Bitcoin white paper.

I asked "What is that paper about?" and got:

Using embedded DuckDB with persistence: data will be stored in: db/
[2023-05-29 00:45:56,320] ERROR in app: Exception on /get_answer [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/flask_cors/extension.py", line 165, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "privateGPT.py", line 146, in get_answer
    res = qa(query)
  File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.8/dist-packages/langchain/chains/retrieval_qa/base.py", line 119, in _call
    docs = self._get_docs(question)
  File "/usr/local/lib/python3.8/dist-packages/langchain/chains/retrieval_qa/base.py", line 181, in _get_docs
    return self.retriever.get_relevant_documents(question)
  File "/usr/local/lib/python3.8/dist-packages/langchain/vectorstores/base.py", line 377, in get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/langchain/vectorstores/chroma.py", line 182, in similarity_search
    docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
  File "/usr/local/lib/python3.8/dist-packages/langchain/vectorstores/chroma.py", line 229, in similarity_search_with_score
    results = self.__query_collection(
  File "/usr/local/lib/python3.8/dist-packages/langchain/utils.py", line 52, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/langchain/vectorstores/chroma.py", line 121, in __query_collection
    return self._collection.query(
  File "/usr/local/lib/python3.8/dist-packages/chromadb/api/models/Collection.py", line 227, in query
    return self._client._query(
  File "/usr/local/lib/python3.8/dist-packages/chromadb/api/local.py", line 437, in _query
    uuids, distances = self._db.get_nearest_neighbors(
  File "/usr/local/lib/python3.8/dist-packages/chromadb/db/clickhouse.py", line 585, in get_nearest_neighbors
    uuids, distances = index.get_nearest_neighbors(embeddings, n_results, ids)
  File "/usr/local/lib/python3.8/dist-packages/chromadb/db/index/hnswlib.py", line 240, in get_nearest_neighbors
    raise NoIndexException(
chromadb.errors.NoIndexException: Index not found, please create an instance before querying
79.140.150.138 - - [29/May/2023 00:45:56] "POST /get_answer HTTP/1.1" 500 -
root@ns521942:/home/ubuntu/privateGPT/client/components# 

Model download error on 100% progress

I have tried to download a model and got this error:

Download Progress: 100.0%
Found model file.
gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
[2023-06-03 18:09:31,688] ERROR in app: Exception on /download_model [GET]
Traceback (most recent call last):
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask_cors\extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\privateGPT\server\privateGPT.py", line 197, in download_and_save
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1102, in pydantic.main.validate_model
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\llms\gpt4all.py", line 139, in validate_environment
values["client"] = GPT4AllModel(
^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt4all\gpt4all.py", line 49, in __init__
self.model.load_model(model_dest)
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d
127.0.0.1 - - [03/Jun/2023 18:09:31] "GET /download_model HTTP/1.1" 500 -

Is it possible to fix that?
Thank you and have a great day.

server run error

(pgweb) rex@alienware17B:~/pgweb/privateGPT/server$ python privateGPT.py
/home/rex/pgweb/privateGPT/server/privateGPT.py:1: DeprecationWarning: 'flask.Markup' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.Markup' instead.
from flask import Flask,jsonify, render_template, flash, redirect, url_for, Markup, request
Traceback (most recent call last):
File "/home/rex/pgweb/privateGPT/server/privateGPT.py", line 213, in <module>
load_model()
File "/home/rex/pgweb/privateGPT/server/privateGPT.py", line 210, in load_model
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/home/rex/miniconda3/envs/pgweb/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 170, in validate_environment
model_path=values["model"],
~~~~~~^^^^^^^^^
KeyError: 'model'

port occupied

The port is already occupied when starting the server. If I change the server port, where do I change the port that the client uses when it makes requests?
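If 5000 is taken, the server side is the port= argument to app.run() in privateGPT.py, and the client side must be changed to match wherever its fetch base URL lives; judging by the components quoted in other issues (e.g. ConfigSideNav.js), the http://localhost:5000 prefix appears to be hard-coded in the client, though that is an assumption. Picking a guaranteed-free port, as a standalone sketch:

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port: bind to port 0, read it back."""
    with socket.socket() as s:
        s.bind(("", 0))
        return s.getsockname()[1]

print(free_port())  # prints some currently unused port number
```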

Server error 404

Hello, I am also getting 404 Not Found when going to http://192.168.20.50:5000/

Serving Flask app 'privateGPT'
Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
Running on all addresses (0.0.0.0)
Running on http://127.0.0.1:5000/
Running on http://192.168.20.50:5000/
Press CTRL+C to quit
192.168.70.36 - - [29/May/2023 15:48:07] "GET / HTTP/1.1" 404 -
http://192.168.20.50:3000/ works, but I can't upload a document; it gives an error.
