
chatgpt-memory's Introduction

Development on this repository has been discontinued. Please check out OpenAI's retrieval plugin instead: https://github.com/openai/chatgpt-retrieval-plugin

ChatGPT Memory

Allows you to scale the ChatGPT API to multiple simultaneous sessions with infinite contextual and adaptive memory, powered by GPT and a Redis datastore. This can be visualized as follows:



Getting Started

  1. Create your free Redis datastore here.
  2. Get your OpenAI API key here.
  3. Install dependencies using poetry.
poetry install

Use with UI


Start the FastAPI webserver.

poetry run uvicorn rest_api:app --host 0.0.0.0 --port 8000

Run the UI.

poetry run streamlit run ui.py

Use with Terminal

The library is highly modular. In the following, we describe the usage of each component (visualized above).

First, set the required environment variables before running your script. Using a .env file for this is optional but recommended; see the .env.example file for an example.

from chatgpt_memory.environment import OPENAI_API_KEY, REDIS_HOST, REDIS_PASSWORD, REDIS_PORT
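For reference, a .env file for this setup might look like the following (the values are placeholders; the variable names match the import above, but check .env.example for the authoritative list):

```
OPENAI_API_KEY=sk-your-openai-api-key
REDIS_HOST=your-redis-host.example.com
REDIS_PORT=6379
REDIS_PASSWORD=your-redis-password
```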

Create an instance of the RedisDataStore class with the RedisDataStoreConfig configuration.

from chatgpt_memory.datastore import RedisDataStoreConfig, RedisDataStore


redis_datastore_config = RedisDataStoreConfig(
    host=REDIS_HOST,
    port=REDIS_PORT,
    password=REDIS_PASSWORD,
)
redis_datastore = RedisDataStore(config=redis_datastore_config)

Create an instance of the EmbeddingClient class with the EmbeddingConfig configuration.

from chatgpt_memory.llm_client import EmbeddingConfig, EmbeddingClient

embedding_config = EmbeddingConfig(api_key=OPENAI_API_KEY)
embed_client = EmbeddingClient(config=embedding_config)

Create an instance of the MemoryManager class with the Redis datastore and Embedding client instances, and the topk value.

from chatgpt_memory.memory.manager import MemoryManager

memory_manager = MemoryManager(datastore=redis_datastore, embed_client=embed_client, topk=1)
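The topk value controls how many semantically similar past messages are retrieved from the datastore per query. The retrieval idea can be sketched in pure Python as top-k ranking by cosine similarity (a minimal illustration with a hypothetical history list, not the library's actual implementation):

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_top_k(query_vector, history, k=1):
    # history: list of (text, embedding_vector) pairs.
    ranked = sorted(
        history,
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

history = [
    ("the weather is sunny", [0.9, 0.1]),
    ("redis stores vectors", [0.1, 0.9]),
]
print(retrieve_top_k([0.2, 0.8], history, k=1))  # → ['redis stores vectors']
```

With topk=1, only the single most relevant past message is injected into the prompt as context.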

Create an instance of the ChatGPTClient class with the ChatGPTConfig configuration and the MemoryManager instance.

from chatgpt_memory.llm_client import ChatGPTClient, ChatGPTConfig

chat_gpt_client = ChatGPTClient(
    config=ChatGPTConfig(api_key=OPENAI_API_KEY, verbose=True), memory_manager=memory_manager
)

Start the conversation by providing user messages to the converse method of the ChatGPTClient instance.

conversation_id = None
while True:
    user_message = input("\n Please enter your message: ")
    response = chat_gpt_client.converse(message=user_message, conversation_id=conversation_id)
    conversation_id = response.conversation_id
    print(response.chat_gpt_answer)

This allows you to talk to the AI assistant while extending its memory through an external Redis datastore.
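To make the flow concrete, here is a toy, self-contained sketch of the converse pattern above. The fake_embed function and echo answer are illustrative stand-ins for the embedding client and ChatGPT, not the library's API; the point is how the conversation_id returned by the first call keys all later memory lookups:

```python
import uuid

def fake_embed(text):
    # Stand-in for the real embedding client: a trivial 2-d "embedding".
    return [float(len(text)), float(text.count("a"))]

memory = {}  # conversation_id -> list of (message, embedding) pairs

def converse(message, conversation_id=None):
    # Create a new conversation on the first call, mirroring the loop above.
    if conversation_id is None:
        conversation_id = str(uuid.uuid4())
    history = memory.setdefault(conversation_id, [])
    # "Retrieve" context: here simply the most recent stored message.
    context = history[-1][0] if history else "(no prior context)"
    history.append((message, fake_embed(message)))
    answer = f"[context: {context}] you said: {message}"  # stand-in for GPT
    return conversation_id, answer

cid, first = converse("hello")
_, second = converse("how are you?", conversation_id=cid)
print(second)  # → [context: hello] you said: how are you?
```

Because each session's memory is keyed by its conversation_id, separate users with separate IDs never see each other's history.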

Putting it together

Here's all of the above put together. You can also find it under examples/simple_usage.py.

## Set the required environment variables before running this script
# Import necessary modules
from chatgpt_memory.environment import OPENAI_API_KEY, REDIS_HOST, REDIS_PASSWORD, REDIS_PORT
from chatgpt_memory.datastore import RedisDataStoreConfig, RedisDataStore
from chatgpt_memory.llm_client import ChatGPTClient, ChatGPTConfig, EmbeddingConfig, EmbeddingClient
from chatgpt_memory.memory import MemoryManager

# Instantiate an EmbeddingConfig object with the OpenAI API key
embedding_config = EmbeddingConfig(api_key=OPENAI_API_KEY)

# Instantiate an EmbeddingClient object with the EmbeddingConfig object
embed_client = EmbeddingClient(config=embedding_config)

# Instantiate a RedisDataStoreConfig object with the Redis connection details
redis_datastore_config = RedisDataStoreConfig(
    host=REDIS_HOST,
    port=REDIS_PORT,
    password=REDIS_PASSWORD,
)

# Instantiate a RedisDataStore object with the RedisDataStoreConfig object
redis_datastore = RedisDataStore(config=redis_datastore_config)

# Instantiate a MemoryManager object with the RedisDataStore object and EmbeddingClient object
memory_manager = MemoryManager(datastore=redis_datastore, embed_client=embed_client, topk=1)

# Instantiate a ChatGPTConfig object with the OpenAI API key and verbose set to True
chat_gpt_config = ChatGPTConfig(api_key=OPENAI_API_KEY, verbose=True)

# Instantiate a ChatGPTClient object with the ChatGPTConfig object and MemoryManager object
chat_gpt_client = ChatGPTClient(
    config=chat_gpt_config,
    memory_manager=memory_manager
)

# Initialize conversation_id to None
conversation_id = None

# Start the chatbot loop
while True:
    # Prompt the user for input
    user_message = input("\n Please enter your message: ")

    # Use the ChatGPTClient object to generate a response
    response = chat_gpt_client.converse(message=user_message, conversation_id=conversation_id)

    # Update the conversation_id with the conversation_id from the response
    conversation_id = response.conversation_id

    # Print the response generated by the chatbot
    print(response.chat_gpt_answer)

Acknowledgments

The UI was added thanks to the awesome work by avrabyt/MemoryBot.

chatgpt-memory's People

Contributors

nps1ngh, shahrukhx01


chatgpt-memory's Issues

add dotenv template

For ease of use, we could later provide a .env template for reference.

  • add the template file
  • read the env file in environment if it exists

Important Question / History by user

Congratulations on the code and the project.
I have a question about individualizing contexts.
I would like to know how to give each user who interacts with the application their own individual message history, so that one user's conversation history does not mix with another's.

update pip package

Hello,

I'm starting to use your package, but it doesn't correspond to the version on PyPI. I've cloned the repository and am modifying the different files myself for now. Could you update the package?

https://pypi.org/project/chatgpt-memory/0.0.1/

Are you going to continue updating it, or do you not have enough volunteers for maintenance?

I'd like to know this before going any further on my own.

Thank you in advance for your answers.

Retaining conversation IDs between process executions?

Hello, I just started experimenting with this library. From what I can tell, conversation IDs are only valid during an active process session; if the process/application is restarted, there no longer seems to be any context when re-using previous conversation IDs.

Is this expected behavior? Is there a way to make previous conversation IDs re-usable across process executions?

Add detailed Docs

The following is part of the acceptance criteria:

  • add a user flow diagram
  • add example snippets
  • add a description of the usage

Model name change

When changing the model name to model_name: str = "gpt-4", we get an error. The change was made in chatgpt_memory/llm_client/openai/conversation/config.py.
Why are we getting this error?

Do I need to initialize the Redis database first?

Traceback (most recent call last):
  File "D:\Memory\RedisMemeory\chatgpt-memory\examples\simple_usage.py", line 31, in <module>
    memory_manager = MemoryManager(datastore=redis_datastore, embed_client=embed_client, topk=1)
  File "D:\Memory\RedisMemeory\chatgpt-memory\chatgpt_memory\memory\manager.py", line 34, in __init__
    Memory(conversation_id=conversation_id) for conversation_id in datastore.get_all_conversation_ids()
  File "D:\Memory\RedisMemeory\chatgpt-memory\chatgpt_memory\datastore\redis.py", line 132, in get_all_conversation_ids
    result_documents = self.redis_connection.ft().search(query).docs
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\commands\search\commands.py", line 420, in search
    res = self.execute_command(SEARCH_CMD, *args)
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1269, in execute_command
    return conn.retry.call_with_retry(
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\retry.py", line 46, in call_with_retry
    return do()
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1270, in <lambda>
    lambda: self._send_command_parse_response(
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1246, in _send_command_parse_response
    return self.parse_response(conn, command_name, **options)
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\client.py", line 1286, in parse_response
    response = connection.read_response()
  File "E:\Coding\Miniconda3\envs\mem\lib\site-packages\redis\connection.py", line 905, in read_response
    raise response
redis.exceptions.ResponseError: idx: no such index
How to create the idx index?

Also, I am using Python 3.10.12 in a conda virtual environment (PowerShell, Windows 11, Intel x64 CPU). Why can't I use tiktoken for OpenAI?

D:\Memory\RedisMemeory\chatgpt-memory>python .\examples\simple_usage.py
OpenAI tiktoken module is not available for Python < 3.8, Linux ARM64 and AARCH64. Falling back to GPT2TokenizerFast.

gpt-4 model not supported

Getting this error when I try using "gpt-4" as the model:
InvalidRequestError
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

ModuleNotFoundError: No module named 'redis'

Any thoughts on this error I'm getting? I can't seem to wrap my head around it. I set up a Redis Cloud environment, copied .env.example to .env, and updated all the values.

>  uvicorn rest_api:app --host localhost --port 8000
Traceback (most recent call last):
  File "/Users/og/Library/Python/3.9/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/main.py", line 403, in main
    run(
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/main.py", line 568, in run
    server.run()
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/Users/og/Library/Python/3.9/lib/python/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/Users/og/src/ufr/sandbox/chatgpt-memory/rest_api.py", line 6, in <module>
    from chatgpt_memory.datastore import RedisDataStore, RedisDataStoreConfig
  File "/Users/og/src/ufr/sandbox/chatgpt-memory/chatgpt_memory/datastore/__init__.py", line 2, in <module>
    from chatgpt_memory.datastore.redis import RedisDataStore  # noqa: F401
  File "/Users/og/src/ufr/sandbox/chatgpt-memory/chatgpt_memory/datastore/redis.py", line 5, in <module>
    import redis
ModuleNotFoundError: No module named 'redis'

Generates irrelevant / too much conversation?

Sometimes, when I simply type "hi" as input, the response is a whole conversation generated between the human and the assistant. I only want the assistant's responses to be generated. Why might this be happening?
