ivanfioravanti / chatbot-ollama
Chatbot Ollama is an open source chat UI for Ollama.
License: Other
I changed the .env.local file:
.env.local
DEFAULT_MODEL="Mistral-7B:latest"
NEXT_PUBLIC_DEFAULT_SYSTEM_PROMPT="You are a helpful, respectful and honest assistant. Help humans as much as you can."
and ran the script:
OLLAMA_HOST="http://0.0.0.0:11434" DEFAULT_MODEL='Mistral-7B:latest' HOST=0.0.0.0 PORT=80 npm run dev
When creating a new conversation there is no option to choose a custom model, and it gave me this error:
[OllamaError: model 'mistral:latest' not found, try pulling it first] {
name: 'OllamaError'
}
How do I solve it?
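A likely cause (my assumption, not confirmed by the maintainers): the app falls back to mistral:latest when the configured name doesn't match a tag Ollama actually has installed. Checking the installed tags and pulling the missing one usually clears this error:

# List the tags Ollama has installed; DEFAULT_MODEL must match one exactly
ollama list
# Ollama tags are normally lowercase, so "Mistral-7B:latest" may not exist even
# if a similar model does; pull the fallback tag the error mentions, or fix the name:
ollama pull mistral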
I use the script below to run chatbot-ollama. The default port is 3000; how do I change it to 80?
OLLAMA_HOST="http://127.0.0.1:5050" npm run dev
Running the container is completely fine, but as soon as I open localhost:3000 in any graphical browser, the tab pegs one CPU core at 100% and refuses to let go. I cannot scroll up with the scroll wheel; the site immediately scrolls back to the bottom (dragging the scrollbar at the side by hand works).
It feels like you don't sync to vblank, so the site just tries to render as many frames as the CPU allows? Either way, no sleep of any kind seems to happen, making the project almost unusable.
Unfortunately I'm not a web dev, so much as I wish I could, I can't really help.
Maybe this profiler call stack helps some. Probably not, given the obfuscation.
Hi, it looks like chatbot-ui has LaTeX support, but chatbot-ollama seems to have it disabled or removed.
Is it intentional? It's my main motivation for quitting the ollama CLI.
It seems that the previously selected model is used in a chat instead of the currently selected one.
I'm hosting the UI on a remote machine like so:
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
docker build -t chatbot-ollama .
docker run -p 3000:3000 -e DEFAULT_MODEL="llama3:latest" -e OLLAMA_HOST="http://localhost:11434" chatbot-ollama
and then I port-forwarded ports 3000 and 11434 to my local machine and went to localhost:3000/ in the browser. This resulted in the errors below when I opened the web page.
$ docker run -p 3000:3000 --add-host=host.docker.internal:host-gateway -e DEFAULT_MODEL="llama3:latest" chatbot-ollama
> [email protected] start
> next start
▲ Next.js 14.1.0
- Local: http://localhost:3000
✓ Ready in 2.4s
[TypeError: fetch failed] { cause: [Error: AggregateError] }
(the same line repeats seven times in total)
And in the browser logs
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
debug.js:87 (3)(+0000000): No suitable translators found
debug.js:87 (5)(+0000000): Translate: Running handler 0 for translators
:3001/api/models:1
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
:3001/api/chat:1
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
models:1
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
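For what it's worth, inside the container "localhost" refers to the container itself, not the machine running Ollama, which would explain the fetch failures. A sketch of a command that points the UI at the host instead (assuming Ollama listens on the host's port 11434):

docker run -p 3000:3000 --add-host=host.docker.internal:host-gateway -e DEFAULT_MODEL="llama3:latest" -e OLLAMA_HOST="http://host.docker.internal:11434" chatbot-ollama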
The following only partially works:
podman run -it --rm -e OLLAMA_HOST=http://somewhere.com -p 3000:3000 -e DEFAULT_MODEL=mistral:7b-instruct ghcr.io/ivanfioravanti/chatbot-ollama:main
The UI seems to act like it's willing to follow the env var, but then forces itself back to the default mistral:latest and blows up with errors:
[OllamaError: model 'mistral:latest' not found, try pulling it first] {
name: 'OllamaError'
}
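A hedged guess: if the container image bakes in mistral:latest as a fallback, making sure both tags exist on the Ollama server should at least stop the crash while the env handling is investigated:

ollama pull mistral:7b-instruct   # the tag passed via DEFAULT_MODEL
ollama pull mistral               # provides the mistral:latest tag it falls back to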
chatbot-ollama can't find the model on native Windows when ollama is built from source, but when I run the precompiled binary it finds the model just fine.
I set OLLAMA_HOST like so:
docker run -d -p 3000:3000 -e OLLAMA_HOST="http://127.0.0.1:11434/" --name chatbot-ollama ghcr.io/ivanfioravanti/chatbot-ollama:main
and can't connect the app to the server. docker logs chatbot-ollama reads:
> [email protected] start
> next start
▲ Next.js 13.5.4
- Local: http://localhost:3000
✓ Ready in 236ms
[TypeError: fetch failed] {
cause: [Error: connect ECONNREFUSED 127.0.0.1:11434] {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 11434
}
}
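As above with 127.0.0.1: from inside the container that address points back at the container itself, so Ollama on the host is unreachable. A sketch of a working variant (assuming Ollama runs on the Docker host):

docker run -d -p 3000:3000 --add-host=host.docker.internal:host-gateway -e OLLAMA_HOST="http://host.docker.internal:11434" --name chatbot-ollama ghcr.io/ivanfioravanti/chatbot-ollama:main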
Looks like ollama now supports image interpretation.
https://github.com/jmorganca/ollama/releases/tag/v0.1.15
Would be cool to add a drop zone to upload base64 images.
Thanks in advance for this cool project :)
I set up Chatbot Ollama via Pinokio and it worked well until I restarted the server. After restarting, the UI loads, but when I submit a prompt I get a popup in the UI saying "Internal Server Error", and [TypeError: Failed to parse URL from 0.0.0.0/api/tags] is written to the logs.
I have tried uninstalling and reinstalling the application via Pinokio, but this hasn't changed the behavior, which makes me think it's not related to Pinokio.
The Ollama server is running: when I visit the server root address 'localhost:11434' I get the message 'Ollama is running'.
It doesn't change anything whether I access the Chatbot Ollama server via 'localhost' or the LAN IP address.
I have attached the UI logs below. Please let me know if there are any suggestions of things to try or any additional information I can provide. Thanks in advance for the support.
Microsoft Windows [Version 10.0.22631.3880]
(c) Microsoft Corporation. All rights reserved.
C:\Users\Administrator\pinokio\api\chatbot-ollama.git\app>conda_hook && conda deactivate && conda deactivate && conda deactivate && conda activate base && npm start
> [email protected] start
> next start
▲ Next.js 14.1.0
- Local: http://localhost:3000
✓ Ready in 606ms
[Start proxy] Local Sharing http://localhost:3000
Proxy Started {"target":"http://localhost:3000","proxy":"http://10.10.10.11:42421"}
[TypeError: Failed to parse URL from 0.0.0.0/api/tags]
(node:20384) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
[TypeError: Failed to parse URL from 0.0.0.0/api/generate]
(the /api/tags error repeats many more times)
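The "Failed to parse URL from 0.0.0.0/api/tags" message suggests OLLAMA_HOST was set to a bare address without a scheme, which fetch() cannot parse as a URL. A sketch of the fix on Windows (assuming the Pinokio install reads the variable from the environment):

rem Include the scheme and a routable address, then restart the app:
set OLLAMA_HOST=http://127.0.0.1:11434
npm start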
[TypeError: fetch failed] {
  cause: [Error: getaddrinfo ENOTFOUND host.docker.internal] {
    errno: -3008,
    code: 'ENOTFOUND',
    syscall: 'getaddrinfo',
    hostname: 'host.docker.internal'
  }
}
[TypeError: fetch failed] {
  cause: [Error: connect ECONNREFUSED 127.0.0.1:11434] {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
This is the log after installing with Docker (command from the readme).
> [email protected] start
> next start
▲ Next.js 13.5.4
- Local: http://localhost:3000
✓ Ready in 455ms
(node:24) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
[TypeError: fetch failed] {
  cause: [Error: getaddrinfo ENOTFOUND host.docker.internal] {
    errno: -3008,
    code: 'ENOTFOUND',
    syscall: 'getaddrinfo',
    hostname: 'host.docker.internal'
  }
}
Hello everyone!
My Ollama runs in Docker.
The Docker command I use to start Ollama is: docker run -e OLLAMA_HOST=0.0.0.0:11434 -d -v ollama serve -p 11434:11434 --name ollama ollama/ollama
Then I opened chatbot-ollama in VS Code, ran npm run dev, and got an error.
──────────────── Here is the error log ────────────────
PS G:\AI\chatbot-ollama> npm run dev
[email protected] dev
next dev
▲ Next.js 13.5.6
✓ Ready in 2.9s
○ Compiling / ...
✓ Compiled / in 3.3s (1652 modules)
⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload
✓ Compiled in 1699ms (1652 modules)
✓ Compiled in 519ms (1652 modules)
✓ Compiled /api/models in 245ms (68 modules)
[TypeError: fetch failed] {
  cause: [Error: connect ECONNREFUSED 127.0.0.1:11434] {
    errno: -4078,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
✓ Compiled in 620ms (1720 modules)
(the same fetch failed error repeats three more times)
SyntaxError: Unexpected non-whitespace character after JSON at position 104 (line 2 column 1)
at JSON.parse (<anonymous>)
at Object.start (webpack-internal:///(middleware)/./utils/server/index.ts:46:45)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
I think this error is similar to one the GPT API has too...
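One observation, offered as a guess: in the quoted command, -v ollama serve is not a valid volume mount, so the container may not have started the way you expect, leaving nothing listening on 127.0.0.1:11434. The command from Ollama's own Docker docs looks like this:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama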
Hi,
I noticed that there is no option to enable streaming. Are there plans to add this feature?
It would be much appreciated.
When I try to get a response from Ollama, it fails after a few seconds.
The error log:
⨯ Error: failed to pipe response
    at pipeToNodeResponse (/app/node_modules/next/dist/server/pipe-readable.js:111:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async NextNodeServer.runEdgeFunction (/app/node_modules/next/dist/server/next-server.js:1225:13)
    at async NextNodeServer.handleCatchallRenderRequest (/app/node_modules/next/dist/server/next-server.js:247:37)
    at async NextNodeServer.handleRequestImpl (/app/node_modules/next/dist/server/base-server.js:807:17)
    at async invokeRender (/app/node_modules/next/dist/server/lib/router-server.js:163:21)
    at async handleRequest (/app/node_modules/next/dist/server/lib/router-server.js:342:24)
    at async requestHandlerImpl (/app/node_modules/next/dist/server/lib/router-server.js:366:13)
    at async Server.requestListener (/app/node_modules/next/dist/server/lib/start-server.js:140:13) {
  [cause]: SyntaxError: Expected ',' or ']' after array element in JSON at position 1454
      at JSON.parse (<anonymous>)
      at Object.start (/app/.next/server/pages/api/chat.js:1:822)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
}
Error: aborted
    at connResetException (node:internal/errors:787:14)
    at abortIncoming (node:_http_server:793:17)
    at socketOnClose (node:_http_server:787:3)
    at Socket.emit (node:events:530:35)
    at TCP.<anonymous> (node:net:337:12) {
  code: 'ECONNRESET'
}
In the ollama console everything is all right.
I'm sure it's something silly I'm missing.
I tried typing 'docker pull mistral:latest' but it said I didn't have permission.
I noticed that importing data only accepts JSON files at the moment, as seen in Import.tsx.
To scale this feature, we could go further and support multiple types of files to import, such as:
We can update this list as needed, but these were what I had in mind atm.
Right now the only file type I can import seems to be JSON. I wanted something that could help me chat with my PDFs using ollama with a beautiful UI. Hoping you can add that feature.
During an npm install I get 3 deprecation notices:
npm WARN deprecated [email protected]: Use your platform's native atob() and btoa() methods instead
npm WARN deprecated @vitest/[email protected]: v8 coverage is moved to @vitest/coverage-v8 package
npm WARN deprecated [email protected]: Use your platform's native DOMException instead
When creating a new prompt, it shows a little bit of guidance on how to use variables using double curly braces.
Is there any documentation anywhere (even in the code) that expands on this?
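For reference, a minimal example of the pattern the guidance hints at (my own illustration, not from official docs): any {{name}} placeholder in the prompt body becomes a fill-in field when the prompt is used.

Translate the following text into {{language}}:

{{text}}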
Issues:
Recording:
Running via docker, node 20
The docker run instruction documented for running the pre-built container is not working.
$ docker run -e -p 3000:3000 ghcr.io/ivanfioravanti/chatbot-ollama:main
Unable to find image '3000:3000' locally
The bare -e flag has no value here, so Docker consumes -p as its argument and then parses 3000:3000 as the image name. Dropping the stray -e works:
$ docker run -p 3000:3000 ghcr.io/ivanfioravanti/chatbot-ollama:main
Hi,
I'm using langflow to build the chatbot flow. After building a flow, it can generate code for integrating with an external application.
Could you please show me how to integrate the code generated by langflow into chatbot-ollama?
Thank you a lot.
When I run it on the host machine I get this error.
LLaVA was recently released, and progress is being made upstream to support it.
Ollama has yet to begin discussions about this (at least that I can find), and I'm going to explore what it will take on that end too.
But I think it would be advantageous to at least start thinking about how you might want to implement this; I'd be happy to help.
Of course, it makes sense to wait on updates from ollama first, but I just wanted to put it on the radar.
I think we could really put local LMMs on the map if we got this implemented :)
When the internet is turned off, it throws the error below.
⨯ Error: failed to pipe response
    at pipeToNodeResponse (/app/node_modules/next/dist/server/pipe-readable.js:111:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async NextNodeServer.runEdgeFunction (/app/node_modules/next/dist/server/next-server.js:1225:13)
    at async NextNodeServer.handleCatchallRenderRequest (/app/node_modules/next/dist/server/next-server.js:247:37)
    at async NextNodeServer.handleRequestImpl (/app/node_modules/next/dist/server/base-server.js:807:17)
    at async invokeRender (/app/node_modules/next/dist/server/lib/router-server.js:163:21)
    at async handleRequest (/app/node_modules/next/dist/server/lib/router-server.js:342:24)
    at async requestHandlerImpl (/app/node_modules/next/dist/server/lib/router-server.js:366:13)
    at async Server.requestListener (/app/node_modules/next/dist/server/lib/start-server.js:140:13) {
  [cause]: SyntaxError: Unexpected non-whitespace character after JSON at position 101
      at JSON.parse (<anonymous>)
      at Object.start (/app/.next/server/pages/api/chat.js:1:822)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
}
Error: aborted
    at connResetException (node:internal/errors:787:14)
    at abortIncoming (node:_http_server:793:17)
    at socketOnClose (node:_http_server:787:3)
    at Socket.emit (node:events:530:35)
    at TCP.<anonymous> (node:net:337:12) {
  code: 'ECONNRESET'
}
(the same pair of errors repeats twice more, with the JSON error at position 99)
I have a handful of models I use and test all the time. However, it is difficult to keep track of which model is for which purpose. Adding even a one-line description per model, shown when selecting a new chat and model, would help a ton.
For example:
Mistral 7B: Small, yet very powerful for a variety of use cases. English and code, 8k context window.
Mixtral 8x7B: A 7B sparse Mixture-of-Experts (SMoE). Fluent in English, French, Italian, German, and Spanish, and strong in code; 32k context window.
Chatbar.tsx:153 Warning: Maximum update depth exceeded. This can happen when a component calls setState inside useEffect, but useEffect either doesn't have a dependency array, or one of the dependencies changes on every render.
at Chatbar (webpack-internal:///./components/Chatbar/Chatbar.tsx:44:79)
at div
at main
at Home (webpack-internal:///./pages/api/home/home.tsx:54:11)
at QueryClientProvider (webpack-internal:///./node_modules/react-query/es/react/QueryClientProvider.js:39:21)
at div
at App (webpack-internal:///./pages/_app.tsx:18:11)
at I18nextProvider (webpack-internal:///./node_modules/react-i18next/dist/es/I18nextProvider.js:10:19)
at AppWithTranslation (webpack-internal:///./node_modules/next-i18next/dist/esm/appWithTranslation.js:34:22)
at PathnameContextProviderAdapter (webpack-internal:///./node_modules/next/dist/shared/lib/router/adapters.js:80:11)
at ErrorBoundary (webpack-internal:///./node_modules/next/dist/compiled/@next/react-dev-overlay/dist/client.js:26:6864)
at ReactDevOverlay (webpack-internal:///./node_modules/next/dist/compiled/@next/react-dev-overlay/dist/client.js:26:9247)
at Container (webpack-internal:///./node_modules/next/dist/client/index.js:80:1)
at AppContainer (webpack-internal:///./node_modules/next/dist/client/index.js:213:11)
at Root (webpack-internal:///./node_modules/next/dist/client/index.js:437:11)
I'm trying to use a GGUF model with ollama, but it seems to be incompatible.
Is there some way to keep connections alive to a slow local model that would otherwise time out before responding?
I tried to build the chat and got the following warnings and errors:
216:6 Warning: React Hook useEffect has a missing dependency: 'dispatch'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
221:6 Warning: React Hook useEffect has a missing dependency: 'dispatch'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
231:6 Warning: React Hook useEffect has a missing dependency: 'defaultModelId'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
305:6 Warning: React Hook useEffect has missing dependencies: 'conversations' and 't'. Either include them or remove the dependency array. react-hooks/exhaustive-deps
./components/Chat/Chat.tsx
232:5 Warning: React Hook useCallback has a missing dependency: 'homeDispatch'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
./components/Chat/ChatInput.tsx
222:6 Warning: React Hook useEffect has a missing dependency: 'textareaRef'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
./components/Chatbar/Chatbar.tsx
158:6 Warning: React Hook useEffect has a missing dependency: 'chatDispatch'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
./components/Promptbar/Promptbar.tsx
117:6 Warning: React Hook useEffect has a missing dependency: 'promptDispatch'. Either include it or remove the dependency array. react-hooks/exhaustive-deps
info - Need to disable some ESLint rules? Learn more here: https://nextjs.org/docs/basic-features/eslint#disabling-rules
info - Linting and checking validity of types ...
Failed to compile.
./pages/_app.tsx:26:10
Type error: 'Component' cannot be used as a JSX component.
Its element type 'Component<any, any, any> | ReactElement<any, any> | null' is not a valid JSX element.
Type 'Component<any, any, any>' is not assignable to type 'Element | ElementClass | null'.
Type 'Component<any, any, any>' is not assignable to type 'ElementClass'.
The types returned by 'render()' are incompatible between these types.
Type 'React.ReactNode' is not assignable to type 'import("/var/www/chat.moving-bytes.at/node_modules/@types/react-dom/node_modules/@types/react/ts5.0/index").ReactNode'.
Type 'ReactElement<any, string | JSXElementConstructor<any>>' is not assignable to type 'ReactNode'.
Property 'children' is missing in type 'ReactElement<any, string | JSXElementConstructor<any>>' but required in type 'ReactPortal'.
24 | />
25 | <QueryClientProvider client={queryClient}>
> 26 | <Component {...pageProps} />
| ^
27 | </QueryClientProvider>
28 | </div>
29 | );
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
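A hedged note: the ts5.0 path in the error suggests two different copies of @types/react are being resolved, which is a common cause of "cannot be used as a JSX component". Since the build uses yarn, pinning a single version via resolutions in package.json may help (the version below is a placeholder; match the React types your project expects):

"resolutions": {
  "@types/react": "18.2.0"
}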
Hello, I am a beginner who is not familiar with front-end development. I have an nginx service. Could you please tell me how to deploy the chatbot-ollama project on a server that is completely offline? Thank you very much for your reply!
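Not an official recipe, but a minimal sketch of how this is commonly done: build the app on a machine with internet access (npm install and npm run build need the network), copy the project to the offline server, run npm start there, and put nginx in front as a reverse proxy. The server_name below is a placeholder.

server {
    listen 80;
    server_name chat.example.com;  # placeholder

    location / {
        proxy_pass http://127.0.0.1:3000;  # chatbot-ollama's next start port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}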
I was using ollama pull to download new models. I can see the models in the dropdown menu, but they won't generate. (mistral works fine btw)
I am hosting ollama remotely on a server and trying to deploy chatbot-ollama on a different server. Accordingly, I added the OLLAMA_HOST env variable pointing to the address of my ollama server.
When I start chatbot-ollama and input something in the GUI, the logs say:
[OllamaError: stat /root/.ollama/models/manifests/registry.ollama.ai/library/llama2/latest: no such file or directory] { name: 'OllamaError'}
Also, weirdly, I set the DEFAULT_MODEL env variable to llama-2-70b-chat-nous-hermes in my Dockerfile, but it is searching for the llama2 model? The llama-2-70b-chat-nous-hermes model is up and running on the ollama server btw (created from a Modelfile).
How do I fix this?
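A guess at what is happening, plus a sketch: if DEFAULT_MODEL is only set at image build time, the running server may still see a compiled-in default (llama2). Passing the variables at docker run is worth trying (the host below is a placeholder for your ollama server's address):

docker run -p 3000:3000 -e DEFAULT_MODEL="llama-2-70b-chat-nous-hermes" -e OLLAMA_HOST="http://ollama.example.com:11434" ghcr.io/ivanfioravanti/chatbot-ollama:main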
The error information is:
dell@dell:/mnt/data/chatbot-ollama$ npm run dev
[email protected] dev
next dev
▲ Next.js 14.1.0
✓ Ready in 5.1s
✓ Compiled /api/models in 202ms (77 modules)
○ Compiling /_error ...
✓ Compiled /_error in 3.9s (304 modules)
⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload
○ Compiling / ...
✓ Compiled / in 980ms (1659 modules)
⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload
(node:1207575) [DEP0040] DeprecationWarning: The punycode module is deprecated. Please use a userland alternative instead.
(Use node --trace-deprecation ... to show where the warning was created)
✓ Compiled /api/chat in 103ms (80 modules)
Error: aborted
    at abortIncoming (node:_http_server:806:17)
    at socketOnClose (node:_http_server:800:3)
    at Socket.emit (node:events:532:35)
    at TCP.<anonymous> (node:net:339:12) {
  code: 'ECONNRESET'
}
⨯ uncaughtException: Error: aborted (the same stack repeats twice more)
⨯ Error: failed to pipe response
at pipeToNodeResponse (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/pipe-readable.js:111:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async DevServer.runEdgeFunction (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/next-server.js:1225:13)
at async NextNodeServer.handleCatchallRenderRequest (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/next-server.js:247:37)
at async DevServer.handleRequestImpl (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/base-server.js:807:17)
at async /mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/dev/next-dev-server.js:331:20
at async Span.traceAsyncFn (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/trace/trace.js:151:20)
at async DevServer.handleRequest (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
at async invokeRender (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/lib/router-server.js:163:21)
at async handleRequest (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/lib/router-server.js:342:24)
at async requestHandlerImpl (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/lib/router-server.js:366:13)
at async Server.requestListener (/mnt/data/Psy_KGLLM_LX/chatbot-ollama/node_modules/next/dist/server/lib/start-server.js:140:13) {
[cause]: SyntaxError: Unexpected non-whitespace character after JSON at position 97 (line 2 column 1)
at JSON.parse (<anonymous>)
at Object.start (webpack-internal:///(middleware)/./utils/server/index.ts:46:45)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
}
The chatbot UI:
Can you help me solve this problem? I would really appreciate it.