
farfalle's Introduction

hey, I'm rashad 👋🏿
i'm interested in llms, agents and search & retrieval

sf

farfalle's People

Contributors

arsaboo, manu-devloo, rashadphz


farfalle's Issues

Error: 500 - Request URL is missing an 'http://' or 'https://' protocol

Description:

When attempting to make a request, I encountered a 500 error.

Error Message:

"500: Request URL is missing an 'http://' or 'https://' protocol."

Environment:

  • OS: Windows 11
  • Farfalle: Installed locally running on Docker Desktop
  • Ollama: Running on the default port
  • SearxNG: Used as the search provider

Additional Context:

  • Tried using the same configuration with "Groq" and it worked without issues.

Please let me know if further details are required.
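For reference, this message usually means the SearXNG base URL reaches httpx without a scheme. A minimal, purely illustrative guard of the kind that would surface this early (the helper and its name are assumptions, not farfalle's actual code):

import os
from urllib.parse import urlparse

def normalized_base_url(env_var: str = "SEARXNG_BASE_URL") -> str:
    # Hypothetical helper: return the configured base URL, prepending a
    # scheme if it is missing, since httpx rejects URLs without one.
    raw = os.environ.get(env_var, "").strip().rstrip("/")
    if not raw:
        raise ValueError(f"{env_var} is not set")
    if not urlparse(raw).scheme:
        raw = "http://" + raw
    return raw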

500: 500: There was an error while searching.

When I try searching with SearXNG, I get the error above (screenshot attached).

.env
SEARCH_PROVIDER=searxng

compose file (docker-compose.dev.yaml):

services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile
    restart: always
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://host.docker.internal:11434}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}
      - SEARCH_PROVIDER=${SEARCH_PROVIDER:-tavily}
      - SEARXNG_BASE_URL=${SEARXNG_BASE_URL:-http://host.docker.internal:8080}
      - REDIS_URL=${REDIS_URL}
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"
  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://localhost:8000}
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
    ports:
      - "3100:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - searxng
    ports:
      - "127.0.0.1:8080:8080"
    volumes:
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=https://${SEARXNG_BASE_URL:-localhost}/

networks:
  searxng:

Throws 500: All connection attempts failed (file /workspace/src/backend/chat.py)


Please help. I tried reading the existing issues and fixing the errors with the recommendations there, and I also tried Google. I suppose I lack some knowledge or understanding.

I use llama3:latest
Ollama is running at http://127.0.0.1:11434/
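A quick way to check whether the backend container can actually reach Ollama at that address is a one-off probe from inside the container; this is only a diagnostic sketch (it assumes httpx is available in the backend image and uses Ollama's /api/tags endpoint):

import os
import httpx

# Same default the compose file passes to the backend service.
host = os.environ.get("OLLAMA_HOST", "http://host.docker.internal:11434")

try:
    resp = httpx.get(f"{host}/api/tags", timeout=5)
    resp.raise_for_status()
    print("Reachable; models:", [m["name"] for m in resp.json().get("models", [])])
except httpx.HTTPError as exc:
    # Inside a container, 127.0.0.1 refers to the container itself, so an
    # Ollama bound only to the host's loopback will fail from here.
    print("Cannot reach Ollama:", exc)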

This is log

sudo docker attach 5da5e2f8bfc4

INFO: 172.18.0.1:53226 - "OPTIONS /chat HTTP/1.1" 200 OK
INFO: 172.18.0.1:53226 - "POST /chat HTTP/1.1" 200 OK
Traceback (most recent call last):
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 373, in handle_async_request
resp = await self._pool.handle_async_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
raise exc from None
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
response = await connection.handle_async_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
raise exc
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
stream = await self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 122, in _connect
stream = await self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_backends/auto.py", line 30, in connect_tcp
return await self._backend.connect_tcp(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 114, in connect_tcp
with map_exceptions(exc_map):
File "/usr/local/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/workspace/.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/src/backend/chat.py", line 111, in stream_qa_objects
async for completion in response_gen:
File "/workspace/.venv/lib/python3.11/site-packages/llama_index/core/llms/callbacks.py", line 280, in wrapped_gen
async for x in f_return_val:
File "/workspace/.venv/lib/python3.11/site-packages/llama_index/llms/ollama/base.py", line 401, in gen
async with client.stream(
File "/usr/local/lib/python3.11/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1617, in stream
response = await self.send(
^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1661, in send
response = await self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1689, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1763, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 372, in handle_async_request
with map_httpcore_exceptions():
File "/usr/local/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: All connection attempts failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/workspace/src/backend/main.py", line 97, in generator
async for obj in stream_qa_objects(chat_request):
File "/workspace/src/backend/chat.py", line 140, in stream_qa_objects
raise HTTPException(status_code=500, detail=detail)
fastapi.exceptions.HTTPException: 500: All connection attempts failed

sudo docker compose -f docker-compose.dev.yaml up -d

WARN[0000] The "OPENAI_API_KEY" variable is not set. Defaulting to a blank string.
WARN[0000] The "GROQ_API_KEY" variable is not set. Defaulting to a blank string.
WARN[0000] The "REDIS_URL" variable is not set. Defaulting to a blank string.
[+] Running 3/3
✔ Container searxng Running 0.0s
✔ Container farfalle-main-backend-1 Started 2.4s
✔ Container farfalle-main-frontend-1 Started 1.3s

sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
48bc069e5349 farfalle-main-frontend "docker-entrypoint.s…" 12 minutes ago Up 12 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp farfalle-main-frontend-1
5da5e2f8bfc4 farfalle-main-backend "uvicorn backend.mai…" 12 minutes ago Up 12 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp farfalle-main-backend-1
80d014fd8ea4 searxng/searxng:latest "/sbin/tini -- /usr/…" 12 minutes ago Up 12 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp searxng

journalctl -u ollama

Jun 06 16:42:37 hentii ollama[2959]: [GIN] 2024/06/06 - 16:42:37 | 200 | 41.305369ms | 127.0.0.1 | GET >
Jun 06 17:16:04 hentii ollama[2959]: [GIN] 2024/06/06 - 17:16:04 | 200 | 30.445µs | 127.0.0.1 | GET >
Jun 06 17:16:04 hentii ollama[2959]: [GIN] 2024/06/06 - 17:16:04 | 404 | 8.224µs | 127.0.0.1 | GET >
Jun 06 17:58:41 hentii ollama[2959]: [GIN] 2024/06/06 - 17:58:41 | 200 | 77.015µs | 127.0.0.1 | HEAD >
Jun 06 17:58:41 hentii ollama[2959]: [GIN] 2024/06/06 - 17:58:41 | 200 | 6.530742ms | 127.0.0.1 | GET >

sudo docker attach 80d014fd8ea4

2024-06-06 15:38:48,618 WARNING:searx.engines.openverse: ErrorContext('searx/search/processors/online.py', 125, 'count_error(', None, '1 redirects, maximum: 0', ('200', 'OK', 'api.openverse.org')) True
2024-06-06 15:38:49,105 WARNING:searx.engines.qwant images: ErrorContext('searx/engines/qwant.py', 226, "title = item.get('title', None)", 'AttributeError', None, ()) False
2024-06-06 15:38:49,105 ERROR:searx.engines.qwant images: exception : 'str' object has no attribute 'get'
Traceback (most recent call last):
File "/usr/local/searxng/searx/search/processors/online.py", line 163, in search
search_results = self._search_basic(query, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/searxng/searx/search/processors/online.py", line 151, in _search_basic
return self.engine.response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/searxng/searx/engines/qwant.py", line 151, in response
return parse_web_api(resp)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/searxng/searx/engines/qwant.py", line 226, in parse_web_api
title = item.get('title', None)
^^^^^^^^
AttributeError: 'str' object has no attribute 'get'

Feature Request: Use SearXNG

Can we please add support for SearXNG? It is free and would be a great addition.

Several projects have implemented this, and I can point you to some links to get started if required.
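For reference, SearXNG instances can expose a JSON endpoint when the json format is enabled in settings.yml; a rough sketch of how a provider could query it (the function and its signature are illustrative, not farfalle's implementation):

import httpx

async def searxng_search(base_url: str, query: str) -> list[dict]:
    # Query a SearXNG instance's JSON API; format=json must be enabled
    # in the instance's settings.yml for this to return results.
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{base_url.rstrip('/')}/search",
            params={"q": query, "format": "json"},
            timeout=15,
        )
        resp.raise_for_status()
        return resp.json().get("results", [])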

Azure OpenAI API Support

Hi,

I want to use my Azure OpenAI API for this; can you add an option for it? GPT-3.5-Turbo and GPT-4o would be enough. I am deploying on Vercel.

Thanks.

Tavily replacement?

Hi,

Tavily seems pretty good, but we found its lowest price plan is $100/month, which is much more expensive than SERP-based alternatives.

Is it possible to replace Tavily with an alternative? If so, how?

Thanks!

Bug: JSON decoder delimiter error

Build: 0f45bd461c85b08abeca04eb147a804ce69348cc44aa1357908791fa6bb7551a
Ollama: 0.1.41

Using Ollama with gemma, I get the following error in the log:

2024-06-02 16:17:13 Traceback (most recent call last):
2024-06-02 16:17:13   File "/workspace/src/backend/chat.py", line 111, in stream_qa_objects
2024-06-02 16:17:13     async for completion in response_gen:
2024-06-02 16:17:13   File "/workspace/.venv/lib/python3.11/site-packages/llama_index/core/llms/callbacks.py", line 280, in wrapped_gen
2024-06-02 16:17:13     async for x in f_return_val:
2024-06-02 16:17:13   File "/workspace/.venv/lib/python3.11/site-packages/llama_index/llms/ollama/base.py", line 408, in gen
2024-06-02 16:17:13     chunk = json.loads(line)
2024-06-02 16:17:13             ^^^^^^^^^^^^^^^^
2024-06-02 16:17:13   File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
2024-06-02 16:17:13     return _default_decoder.decode(s)
2024-06-02 16:17:13            ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 16:17:13   File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
2024-06-02 16:17:13     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2024-06-02 16:17:13                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 16:17:13   File "/usr/local/lib/python3.11/json/decoder.py", line 353, in raw_decode
2024-06-02 16:17:13     obj, end = self.scan_once(s, idx)
2024-06-02 16:17:13                ^^^^^^^^^^^^^^^^^^^^^^
2024-06-02 16:17:13 json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 1443 (char 1442)
2024-06-02 16:17:13 
2024-06-02 16:17:13 During handling of the above exception, another exception occurred:
2024-06-02 16:17:13 
2024-06-02 16:17:13 Traceback (most recent call last):
2024-06-02 16:17:13   File "/workspace/src/backend/main.py", line 97, in generator
2024-06-02 16:17:13     async for obj in stream_qa_objects(chat_request):
2024-06-02 16:17:13   File "/workspace/src/backend/chat.py", line 140, in stream_qa_objects
2024-06-02 16:17:13     raise HTTPException(status_code=500, detail=detail)
2024-06-02 16:17:13 fastapi.exceptions.HTTPException: 500: Expecting ',' delimiter: line 1 column 1443 (char 1442)

Part way through producing an answer, the UI clears out all details from the search and displays the error:

500: Expecting ',' delimiter: line 1 column 1443 (char 1442) 
(screenshot of the UI error attached)
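The traceback suggests the streamed NDJSON from Ollama is being split on network-chunk boundaries rather than on newlines, so json.loads occasionally sees a truncated object. A generic sketch of the buffering pattern that avoids this (not farfalle's or llama-index's actual code):

import json

def iter_ndjson(chunks):
    # Yield parsed objects from an iterable of text chunks, buffering
    # partial lines so json.loads only ever sees complete NDJSON records.
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    if buffer.strip():
        yield json.loads(buffer)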

Feature Request: Abstract Search API

I'd like to swap out Tavily for my own search API. If you're interested, I can abstract my work and contribute it so it's easier to add additional search backends.


To implement this, I would

  • Rename search_tavily to search_configured_client.
  • Add an environment variable SEARCH_BACKEND; it would default to tavily.
  • Write an if-else block in search_configured_client that would route to a specific search backend
    • (this is very simple, may be subject to change.)

Any additional search backends would be configurable here.

Ideally, I'd implement a dynamic plugin approach: any Python module reference (e.g. my_third_party.search:search_method) that returns a SearchResponse could be used (see the sketch below).
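A minimal sketch of that plugin idea, assuming the module:attr convention above (SEARCH_BACKEND and load_search_backend are illustrative names, and SearchResponse stands for whatever the project's result type ends up being):

import importlib
import os

def load_search_backend():
    # Resolve a "module:attr" reference such as
    # "my_third_party.search:search_method" into a callable that
    # returns a SearchResponse. Illustrative sketch only.
    ref = os.environ.get("SEARCH_BACKEND", "tavily")
    if ":" not in ref:
        # Built-in backends ("tavily", "searxng", ...) keep the simple
        # if-else routing described above.
        return None
    module_name, attr = ref.split(":", 1)
    module = importlib.import_module(module_name)
    return getattr(module, attr)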

Unsupported config option for services.backend : 'develop'

When I run docker-compose -f docker-compose.dev.yaml up -d, I get an error that says:

ERROR: The Compose file './docker-compose.dev.yaml' is invalid because:
Unsupported config option for services.backend: 'develop'
Unsupported config option for services.frontend: 'develop'

My docker compose version is :

~/PYTHON/farfalle$ docker compose version
Docker Compose version v2.24.7

I'm using:
Operating System: Debian GNU/Linux 12 (bookworm)
Kernel: Linux 6.1.0-18-amd64

No Search Results

Followed instructions on an LXC running updated Debian 12. Set my OpenAI API key and set SEARCH_PROVIDER=searxng. Search looks like it's executing, and I'm prompted for another chat entry, but no results show up.

I tried SEARCH_PROVIDER=tavily and got the same results.

Logs for frontend and searxng show no errors but...

:~/farfalle# docker logs farfalle-backend-1
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

I'm confused about whether this is telling me to install PyTorch somewhere or not...

Unable to access Ollama models

I managed to get everything running, but it is not able to access the local models, even after I enable local models in the UI (screenshot attached).

Nothing is returned. I also don't see any activity in Ollama logs. Here's my .env

OPENAI_API_KEY=KEY
TAVILY_API_KEY=KEY
ENABLE_LOCAL_MODELS=True
OLLAMA_HOST=http://host.docker.internal:11434

I don't see anything else in the logs. In the browser console, I see an error (screenshot attached).

Here's my compose file (had to update the ports):

services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile.dev
    ports:
      - "8003:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://host.docker.internal:11434}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}

    env_file:
      - .env
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"

  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile.dev
    environment:
      - NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://localhost:8003}
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
    ports:
      - "3013:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/

Request: Add OPENAI_API_URL

  1. Please consider adding the environment variable OPENAI_API_URL. This addition will facilitate communication with LiteLLM, which adheres to the OpenAI API protocol and acts as a local proxy.
    Through this configuration, you'll also gain the capability to connect to Ollama, enabling local LLM interactions.

  2. LiteLLM can be deployed as a container... and supports the API method /models [/v1/models],
    so you could also read the list of models...

curl -X 'GET' \
  'http://localhost:4000/v1/models' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer sk-XXXXXXXXXXX'

Example response:

{
  "data": [
    {
      "id": "together_ai-CodeLlama-34b-Instruct",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4-turbo",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic-claude-3-haiku-20240307",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "google-gemini-1.5-pro-preview-0409",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "perplexity-mistral-7b-instruct",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "huggingface-zephyr-beta",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "together_ai-CodeLlama-34b-Python-completion",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-whisper",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-3.5-turbo-16k",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "ollama-mistral-7b",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "google-gemini-pro",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-llama3-70b-8192",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic-claude-3-opus-20240229",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4-vision-preview",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4-32k",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-llama3-8b-8192",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "huggingface-Xwin-Math-70B-V1.0",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-4o",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-mixtral-8x7b-32768",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "google-gemini-1.5-flash-preview-0514",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "openai-gpt-3.5-turbo",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "anthropic-claude-3-sonnet-20240229",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "ollama-mxbai-embed-large",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "groq-gemma-7b-it-8192",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    },
    {
      "id": "perplexity-mixtral-8x22b",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    }
  ],
  "object": "list"
}
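A rough sketch of how the backend could honour such a variable using the OpenAI Python client's base_url parameter (OPENAI_API_URL is the proposed, not yet existing, variable; the default port matches the curl example above):

import os
from openai import AsyncOpenAI

# OPENAI_API_URL would point at the LiteLLM proxy (or any OpenAI-compatible server).
client = AsyncOpenAI(
    base_url=os.environ.get("OPENAI_API_URL", "http://localhost:4000/v1"),
    api_key=os.environ.get("OPENAI_API_KEY", "sk-XXXXXXXXXXX"),
)

async def list_models() -> list[str]:
    # Mirrors the GET /v1/models call shown above.
    models = await client.models.list()
    return [m.id for m in models.data]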

Benefits:

  • Facilitates seamless interaction with local instances of LiteLLM.
  • Enables the use of local LLMs through Ollama.
  • Increases flexibility in choosing between cloud-based and local AI resources.

Thank you for considering this enhancement.

Shows Rate limit exceeded

I am trying to query the search app, but it shows "rate limit exceeded".
Did you add some kind of rate limit for first-time users?


Feature Request: Search from browser urlbar / search field

Hello. The project is amazing, thank you.
It would be handy to be able to search from the URL bar (search bar) right in the browser. This is a very popular scenario.
For now, Firefox, for example, doesn't recognize the app as a search engine and doesn't allow adding it to the search field or URL bar.

Question: Custom model: Instructor does not support multiple tool calls, use List[Model] instead.

Hi,

I am trying to use a custom OpenAI-endpoint model (basically a service with an OpenAI API proxy that runs a custom model behind the scenes, much like Ollama but fully compatible with the openai module).

I am having this error:

Task exception was never retrieved
future: <Task finished name='Task-14' coro=<generate_related_queries() done, defined at /home/hangyu5/Documents/Gitrepo-My/AIResearchVault/repo/LLMApp/farfalle/src/backend/related_queries.py:45> exception=AssertionError('Instructor does not support multiple tool calls, use List[Model] instead.')>
Traceback (most recent call last):
  File "/home/hangyu5/Documents/Gitrepo-My/AIResearchVault/repo/LLMApp/farfalle/src/backend/related_queries.py", line 53, in generate_related_queries
    related = await client.chat.completions.create(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/client.py", line 273, in create
    return await self.create_fn(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/patch.py", line 119, in new_create_async
    response = await retry_async(
               ^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/retry.py", line 219, in retry_async
    async for attempt in max_retries:
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/tenacity/_asyncio.py", line 123, in __anext__
    do = await self.iter(retry_state=self._retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/tenacity/_asyncio.py", line 110, in iter
    result = await action(retry_state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/tenacity/_asyncio.py", line 78, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/tenacity/__init__.py", line 410, in exc_check
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/tenacity/__init__.py", line 183, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/retry.py", line 226, in retry_async
    return await process_response_async(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/process_response.py", line 75, in process_response_async
    model = response_model.from_response(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/function_calls.py", line 115, in from_response
    return cls.parse_tools(completion, validation_context, strict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hangyu5/anaconda3/envs/farfalle/lib/python3.11/site-packages/instructor/function_calls.py", line 201, in parse_tools
    len(message.tool_calls or []) == 1
AssertionError: Instructor does not support multiple tool calls, use List[Model] instead.

I simply want to confirm that it is my model that cannot handle these function calls, and that it has nothing to do with instructor or farfalle, right?

Thanks!

500 errors incoming nonstop :-((

500: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
500: Unterminated string starting at: line 1 column 82 (char 81)
500: Unterminated string starting at: line 1 column 34 (char 33)

Please seriously look into it
Thanks

Local Models Unloading by Default

Hoping there is already a solution to this, but TTFT (time to first token) could be dramatically reduced if models were not forced out of memory each time a new query is entered.
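One possibly relevant knob: Ollama's generate/chat endpoints accept a keep_alive value (and the server honours an OLLAMA_KEEP_ALIVE default) that controls how long a model stays loaded after a request. A hedged sketch of pre-loading and pinning a model, assuming a stock Ollama install on the default port:

import httpx

# An empty generate request loads the model; keep_alive accepts durations
# like "30m", or -1 to keep the model in memory indefinitely.
httpx.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "keep_alive": -1},
    timeout=60,
)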

500: 1 validation error for RelatedQueries questions List should have at least 3 items after validation

This is a new one:
500: 1 validation error for RelatedQueries questions List should have at least 3 items after validation, not 1 [type=too_short, input_value=['What are the names of the affected cities?'], input_type=list] For further information visit https://errors.pydantic.dev/2.7/v/too_short

I'm not getting this consistently; I'm not sure what triggers it.
I'm using Azure OpenAI as the provider.


Thanks Yves
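For context, this is a Pydantic v2 too_short error: the response model evidently requires at least three related questions, roughly along these lines (a hypothetical reconstruction for illustration, not the project's actual model):

from pydantic import BaseModel, Field

class RelatedQueries(BaseModel):
    # min_length=3 reproduces "List should have at least 3 items" when the
    # LLM returns fewer than three follow-up questions.
    questions: list[str] = Field(min_length=3)

RelatedQueries(questions=["What are the names of the affected cities?"])
# -> ValidationError: List should have at least 3 items after validation, not 1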

Getting the following exception 500: Expecting value: line 1 column 1443 (char 1442)

Hello guys,
Thank you for this awesome project. I'm getting this exception: 500: Expecting value: line 1 column 1443 (char 1442). I'm using a local Ollama (0.1.39) Docker instance with the llama3 model. Here is the detailed log:

INFO: 172.25.0.1:52334 - "POST /chat HTTP/1.1" 200 OK
Search provider: searxng
Traceback (most recent call last):
File "/workspace/src/backend/chat.py", line 111, in stream_qa_objects
async for completion in response_gen:
File "/workspace/.venv/lib/python3.11/site-packages/llama_index/core/llms/callbacks.py", line 280, in wrapped_gen
async for x in f_return_val:
File "/workspace/.venv/lib/python3.11/site-packages/llama_index/llms/ollama/base.py", line 408, in gen
chunk = json.loads(line)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/init.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1443 (char 1442)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/workspace/src/backend/main.py", line 97, in generator
async for obj in stream_qa_objects(chat_request):
File "/workspace/src/backend/chat.py", line 140, in stream_qa_objects
raise HTTPException(status_code=500, detail=detail)
fastapi.exceptions.HTTPException: 500: Expecting value: line 1 column 144


Thanks for your help,

docker frontend compose error

After the latest update, I get an error when running docker-compose. I tried clearing my Docker cache. Compose runs as expected up until:

=> ERROR [frontend 4/8] COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ 0.0s

[frontend 4/8] COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./:


failed to solve: failed to compute cache key: failed to calculate checksum of ref cc45bce7-4bd0-4851-a627-fba2df5403da::i83e7cgdwq8q8juezfvjt3zxs: "/package.json": not found

Bug report: multi-language support, no responses in Chinese or Japanese

When I test in Chinese or Japanese, it just replies in English.

Korean, French, and German are OK; it replies in the same language as the input.
Example:

English:

Are cats and lions closely related?

Korean:

고양이와 사자, 친척인가요?

French:

Les chats et les lions sont-ils des parents proches ?

German:

Sind Katzen und Löwen nahe Verwandte?

Chinese:

猫和狮子是近亲吗

Japanese:

猫とライオンは近縁ですか。

Random 500: Expecting ',' delimiter: line 1 column 7011 (char 7010)

Hello and congratulations for this amazing project :)

I'm facing some issues while running local models (I haven't tested with non-local ones).
The search always works, but the AI insights rarely do. What happens most of the time is that it starts to type but eventually, before finishing, it throws an error like this: 500: Expecting ',' delimiter: line 1 column 7011 (char 7010)
It's always a different column, though.
I have tested with llama3 and mistral over Ollama, and I'm using SearXNG.
(screenshot attached)

docker-compose.yml:

services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile
    restart: unless-stopped
    ports:
      - "8004:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST}
      - TAVILY_API_KEY=${TAVILY_API_KEY}
      - BING_API_KEY=${BING_API_KEY}
      - SERPER_API_KEY=${SERPER_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS}
      - SEARCH_PROVIDER=${SEARCH_PROVIDER}
      - SEARXNG_BASE_URL=${SEARXNG_BASE_URL:-http://host.docker.internal:8099}
      - REDIS_URL=${REDIS_URL}
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"
  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED}
    ports:
      - "3005:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/

  searxng:
    container_name: searxngfarfalle
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    ports:
      - "8099:8080"
    volumes:
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=https://192.168.60.260:8099/

.env:

ENABLE_LOCAL_MODELS=True
OLLAMA_HOST=http://192.168.60.234:11434
SEARCH_PROVIDER=searxng
#SEARXNG_BASE_URL=http://searxng:8090
NEXT_PUBLIC_API_URL=http://192.168.60.260:8004
NEXT_PUBLIC_LOCAL_MODE_ENABLED=true

Error log: (screenshot attached)

Feature request: release mode configuration

Hi,

Are you ready to create a release-mode version of the Docker Compose setup for the community? I am enjoying your project; it works great. But performance could be increased further by using release mode, if it isn't already, and right now every docker compose up leads to recompiling the JS code.

Thanks!

fastapi.exceptions.HTTPException: 500: Request timed out.

I have deployed the backend and frontend locally, but I'm encountering an issue where after a single API call, the connection doesn't terminate, and after a while, it outputs a timeout error. Could you please explain what might be causing this?
(screenshots attached: the request hangs for a while, then returns the timeout error)
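If the cause is simply a slow local model, the client-side read timeout is usually what fires first. As an illustration only (not necessarily farfalle's configuration), llama-index's Ollama wrapper exposes a request_timeout parameter that can be raised:

from llama_index.llms.ollama import Ollama

# Give a slow local model more time before httpx raises a read timeout.
llm = Ollama(
    model="llama3",
    base_url="http://localhost:11434",
    request_timeout=120.0,  # seconds
)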

http://localhost:3000/ failed

config .env
TAVILY_API_KEY=tvly-xxx
OPENAI_API_KEY=sk-xxx
docker logs farfalle-frontend-1

[email protected] dev /app/src/frontend
next dev

▲ Next.js 14.2.3

✓ Starting...
Attention: Next.js now collects completely anonymous telemetry regarding usage.
This information is used to shape Next.js' roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://nextjs.org/telemetry

Downloading swc package @next/swc-linux-x64-gnu...
TypeError: terminated
at Fetch.onAborted (node:internal/deps/undici/undici:11190:53)
at Fetch.emit (node:events:517:28)
at Fetch.terminate (node:internal/deps/undici/undici:10375:14)
at Object.onError (node:internal/deps/undici/undici:11308:38)
at _Request.onError (node:internal/deps/undici/undici:7468:31)
at errorRequest (node:internal/deps/undici/undici:10038:17)
at TLSSocket.onSocketClose (node:internal/deps/undici/undici:9193:9)
at TLSSocket.emit (node:events:529:35)
at node:net:350:12
at TCP.done (node:_tls_wrap:657:7) {
[cause]: SocketError: other side closed
at TLSSocket.onSocketEnd (node:internal/deps/undici/undici:9169:26)
at TLSSocket.emit (node:events:529:35)
at endReadableNT (node:internal/streams/readable:1400:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'UND_ERR_SOCKET',
socket: {
localAddress: '172.18.0.3',
localPort: 41390,
remoteAddress: '104.16.2.35',
remotePort: 443,
remoteFamily: 'IPv4',
timeout: undefined,
bytesWritten: 432,
bytesRead: 26061868
}
}
}

A little reminder about copyright and unfair cloning of services

Greetings.
Could you please change the design of the service a bit? Your current design "slightly" copies the design and features of the existing service "Perplexity.AI".
Your open-source version is really good, but simply cloning an existing commercial product looks at least unfair to the Perplexity developers.

Comparison of your service and Perplexity AI:

(side-by-side screenshots comparing the farfalle UI and Perplexity AI)

Error when using docker-compose

ERROR: Invalid interpolation format for "backend" option in service "services": "OLLAMA_HOST=${OLLAMA_HOST:-http://localhost:11434}"
ERROR: Invalid interpolation format for "backend" option in service "services": "ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}"

Getting these errors when trying your docker-compose.dev.yml file.

Anything I have done wrong?

openrouter.ai support

It has a variety of models (150+) starting from $0.015, so it's really affordable.
Also, any plans to open a Discord server? Because I'm in love with this project; it's so, so good ❤️

OpenAI compatible API

Hi, is it possible to connect to a local LLM server via an OpenAI-compatible API?

Basically I would like to use oobabooga/text-generation-webui with the OpenAI API extension.

I can't use ollama because it is almost 3x slower than text-generation-webui as it can't run EXL2.

docker-compose fails due to missing `distutils` module

I encountered an issue while trying to run docker-compose using the docker-compose.dev.yaml file in the Farfalle project.

First, I ran:

docker-compose -f docker-compose.dev.yaml up -d

The command fails with the following traceback:

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 33, in <module>
    sys.exit(load_entry_point('docker-compose==1.29.2', 'console_scripts', 'docker-compose')())
  File "/usr/bin/docker-compose", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/usr/lib/python3.12/importlib/metadata/__init__.py", line 205, in load
    module = import_module(match.group('module'))
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 9, in <module>
    from distutils.spawn import find_executable
ModuleNotFoundError: No module named 'distutils'

I attempted to install distutils:

sudo apt-get install python3-distutils

However, this package was not available:

Package python3-distutils is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'python3-distutils' has no installation candidate

I searched online and it looks like this package is deprecated and was removed in Python 3.12.

Could my use of Python 3.12 be causing compatibility issues? If so, which version should I be running? Are there additional setup steps or dependencies I might be missing? I'm running Ubuntu 23.04, if that makes a difference. I saw the thread in /r/selfhosted and many people voicing support for a container. If you have the time to put one together, I would love one!

I was running this on one of my computers with other containers that could be causing conflicts. I could spin up a VM, but I'm curious what the dependencies are.

500: Tool name does not match

Received this error with Llama3.

  File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1661, in send
    response = await self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1689, in _send_handling_auth
    response = await self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects
    response = await self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1763, in _send_single_request
    response = await transport.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 372, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/workspace/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/workspace/src/backend/chat.py", line 63, in stream_qa_objects
    search_response = await perform_search(query)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/src/backend/search/search_service.py", line 101, in perform_search
    raise HTTPException(
fastapi.exceptions.HTTPException: 500: There was an error while searching.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/workspace/src/backend/main.py", line 97, in generator
    async for obj in stream_qa_objects(chat_request):
  File "/workspace/src/backend/chat.py", line 119, in stream_qa_objects
    raise HTTPException(status_code=500, detail=detail)
fastapi.exceptions.HTTPException: 500: 500: There was an error while searching.
INFO:     192.168.65.1:36091 - "POST /chat HTTP/1.1" 200 OK
Traceback (most recent call last):
  File "/workspace/src/backend/chat.py", line 97, in stream_qa_objects
    related_queries = await (
                      ^^^^^^^
  File "/workspace/src/backend/related_queries.py", line 11, in generate_related_queries
    related = llm.structured_complete(
              ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/src/backend/llm/base.py", line 50, in structured_complete
    return self.client.chat.completions.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/client.py", line 91, in create
    return self.create_fn(
           ^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/patch.py", line 143, in new_create_sync
    response = retry_sync(
               ^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/retry.py", line 152, in retry_sync
    for attempt in max_retries:
  File "/workspace/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 435, in __iter__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 368, in iter
    result = action(retry_state)
             ^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 410, in exc_check
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 183, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/retry.py", line 158, in retry_sync
    return process_response(
           ^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/process_response.py", line 142, in process_response
    model = response_model.from_response(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/function_calls.py", line 115, in from_response
    return cls.parse_tools(completion, validation_context, strict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/.venv/lib/python3.11/site-packages/instructor/function_calls.py", line 205, in parse_tools
    tool_call.function.name == cls.openai_schema["name"]  # type: ignore[index]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Tool name does not match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/workspace/src/backend/main.py", line 97, in generator
    async for obj in stream_qa_objects(chat_request):
  File "/workspace/src/backend/chat.py", line 119, in stream_qa_objects
    raise HTTPException(status_code=500, detail=detail)
fastapi.exceptions.HTTPException: 500: Tool name does not match

Remember model settings

Every time I go to the home screen, the Local flag resets to Off (even when I have ENABLE_LOCAL_MODELS=True in the .env). Is there a way to retain the flag setting until the user toggles it?

UI can only be used on local machine.

Installed with docker-compose. Runs very well on local machine (using local AI).

If I try to access from other machine on the LAN, I get the UI, everything looks normal.
But when I ask a question, the question simply gets posted on the page instead of generating an answer.

In the docker-compose file the address of Ollama is http://192.168.5.53:11434, which, again, works very well if I access it from a browser on the local machine at http://192.168.5.53:3000 and choose "local" in the UI.

But if I access http://192.168.5.53:3000 from another machine, I get the behavior described above: when I ask a question, it simply gets posted on the page but no answer is generated.

My docker-compose

services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile
    restart: always
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://192.168.5.73:11434}
#      - TAVILY_API_KEY=${TAVILY_API_KEY}
#      - BING_API_KEY=${BING_API_KEY}
#      - SERPER_API_KEY=${SERPER_API_KEY}
#      - OPENAI_API_KEY=${OPENAI_API_KEY}
#      - GROQ_API_KEY=${GROQ_API_KEY}
      - ENABLE_LOCAL_MODELS=${ENABLE_LOCAL_MODELS:-True}
      - SEARCH_PROVIDER=${SEARCH_PROVIDER:-tavily}
      - SEARXNG_BASE_URL=${SEARXNG_BASE_URL:-http://host.docker.internal:8080}
#      - REDIS_URL=${REDIS_URL}
    develop:
      watch:
        - action: sync
          path: ./src/backend
          target: /workspace/src/backend
    extra_hosts:
      - "host.docker.internal:host-gateway"
  frontend:
    depends_on:
      - backend
    build:
      context: .
      dockerfile: ./src/frontend/Dockerfile
    restart: always
    environment:
      - NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://localhost:8000}
      - NEXT_PUBLIC_LOCAL_MODE_ENABLED=${NEXT_PUBLIC_LOCAL_MODE_ENABLED:-true}
    ports:
      - "3000:3000"
    develop:
      watch:
        - action: sync
          path: ./src/frontend
          target: /app
          ignore:
            - node_modules/

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - searxng
    ports:
      - "8080:8080"
    volumes:
      - ./searxng:/etc/searxng:rw
    environment:
      - SEARXNG_BASE_URL=https://${SEARXNG_BASE_URL:-localhost}/

networks:
  searxng:
