
Comments (5)

shahrukhx01 commented on May 24, 2024

For the conversational aspect we pass the config to LangChain, so checking the LangChain docs would be your best bet. Otherwise, please feel free to share the error logs here; we might be able to help!

from chatgpt-memory.

BadlyDrawnBoy commented on May 24, 2024

```
INFO: 127.0.0.1:53614 - "POST /converse/ HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 429, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/rest_api.py", line 47, in converse
    response = chat_gpt_client.converse(**message_payload.dict())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/chatgpt_memory/llm_client/openai/conversation/chatgpt_client.py", line 67, in converse
    chat_gpt_answer = self.chatgpt_chain.predict(prompt=prompt)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
           ^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
              ^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/llms/base.py", line 107, in generate_prompt
    return self.generate(prompt_strings, stop=stop)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/llms/openai.py", line 274, in _generate
    response = completion_with_retry(self, prompt=_prompts, **params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/llms/openai.py", line 98, in completion_with_retry
    return _completion_with_retry(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/langchain/llms/openai.py", line 96, in _completion_with_retry
    return llm.client.create(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/home/martin/src/chatgpt-memory/gpt-memory/lib/python3.11/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
```
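The error at the bottom is the whole story: the pinned LangChain routes gpt-4 / gpt-3.5-turbo through the legacy completions endpoint, which rejects chat models. A minimal sketch of the distinction (the helper and the prefix list are illustrative, not part of chatgpt-memory or LangChain):

```python
# Hypothetical helper, for illustration only: chat-era models must be sent to
# /v1/chat/completions, while legacy completion models use /v1/completions.
# Sending a chat model to the legacy endpoint produces exactly the
# InvalidRequestError shown above.
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def endpoint_for(model: str) -> str:
    """Return the OpenAI endpoint appropriate for the given model name."""
    if model.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    return "/v1/completions"

print(endpoint_for("gpt-4"))             # /v1/chat/completions
print(endpoint_for("text-davinci-003"))  # /v1/completions
```

In LangChain terms the fix is the one its own warning (quoted in the next comment) suggests: instantiate `ChatOpenAI` from `langchain.chat_models` instead of the plain `OpenAI` LLM wrapper.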


BadlyDrawnBoy commented on May 24, 2024

Hint: #30 (comment)
Updating to the latest LangChain helps... sort of:

```
$ poetry run uvicorn rest_api:app --host 0.0.0.0 --port 8000
/home/martin/src/chatgpt-memory/memory_env/lib/python3.11/site-packages/langchain/__init__.py:40: UserWarning: Importing LLMChain from langchain root module is no longer supported.
  warnings.warn(
/home/martin/src/chatgpt-memory/memory_env/lib/python3.11/site-packages/langchain/__init__.py:40: UserWarning: Importing OpenAI from langchain root module is no longer supported.
  warnings.warn(
/home/martin/src/chatgpt-memory/memory_env/lib/python3.11/site-packages/langchain/__init__.py:40: UserWarning: Importing PromptTemplate from langchain root module is no longer supported.
  warnings.warn(
/home/martin/src/chatgpt-memory/memory_env/lib/python3.11/site-packages/langchain/llms/openai.py:216: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: from langchain.chat_models import ChatOpenAI
  warnings.warn(
/home/martin/src/chatgpt-memory/memory_env/lib/python3.11/site-packages/langchain/llms/openai.py:811: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: from langchain.chat_models import ChatOpenAI
  warnings.warn(
INFO: Started server process [959681]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```
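The warnings above spell out their own fix. A sketch of the import migration they imply (the `ChatOpenAI` path is quoted verbatim from the warnings; the other target modules are the conventional subpackage locations for that LangChain era and are assumptions here, since later releases moved them again into `langchain_community` / `langchain_openai`):

```python
# Deprecated root-module imports that trigger the warnings above:
#   from langchain import LLMChain, OpenAI, PromptTemplate
#
# Replacement locations, expressed as a name -> module mapping for clarity:
NEW_IMPORT_PATHS = {
    "LLMChain": "langchain.chains",
    "OpenAI": "langchain.llms",
    "PromptTemplate": "langchain.prompts",
    # For chat models, the last two warnings say to use this instead of OpenAI:
    "ChatOpenAI": "langchain.chat_models",
}

for name, module in NEW_IMPORT_PATHS.items():
    print(f"from {module} import {name}")
```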

Goldfish


BadlyDrawnBoy commented on May 24, 2024

Changes I have made (please forgive me for being a bit sarcastic...):
main...BadlyDrawnBoy:chatgpt-goldfish:main

At the moment it doesn't remember what was written before once it has given a response, and it claims to be GPT-3, even though OpenAI's usage dashboard counts all the traffic as GPT-4. I haven't had time to investigate this further.

Edit: there are indications that the self-identification as GPT-3 is 'normal':
https://community.openai.com/t/gpt-4-through-api-says-its-gpt-3/286881
https://www.reddit.com/r/ChatGPT/comments/1473u0i/understanding_gpt4s_selfidentification_as_gpt3/


nps1ngh commented on May 24, 2024

Hi, thanks a lot for your interest in the project.

Due to time constraints, we are unable to continue development on this repository.

Please check out OpenAI's official retrieval plugin, which offers similar functionality and is under active development: https://github.com/openai/chatgpt-retrieval-plugin

