
blazickjp / gpt-codeapp

29 stars · 3 watchers · 12 forks · 7.99 MB

This project is everything ChatGPT should be for developers: an advanced AI-driven coding companion. Seamlessly bridging the gap between traditional coding and AI capabilities, it offers real-time chat interactions, on-demand agent functions, and intuitive code management. Feedback welcome!

License: MIT License

Python 56.92% JavaScript 27.98% CSS 0.26% Jupyter Notebook 14.69% Dockerfile 0.15%
gpt llm llms pair-programming ai openai codeapp agents aider anthropic

gpt-codeapp's Introduction

Hi there 👋

gpt-codeapp's People

Contributors

blazickjp · mfalcioni1 · sweep-ai[bot]

gpt-codeapp's Issues

Sweep: Auto Summarize Working Context

Details

Add a feature that automatically summarizes the working context after every 5 turns of the conversation.

Checklist
  • Modify backend/memory/memory_manager.py ✓ b11cd11
  • Running GitHub Actions for backend/memory/memory_manager.py ✓
  • Modify backend/tests/test_memory_manager.py ✓ 459d4cf
  • Running GitHub Actions for backend/tests/test_memory_manager.py ✓
  • Modify backend/agent/coding_agent.py ✓ 1e0a34a
  • Running GitHub Actions for backend/agent/coding_agent.py ✓
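
A minimal sketch of what the requested auto-summarization could look like. The real MemoryManager in backend/memory/memory_manager.py has its own interface, so the class, fields, and summarizer callable below are assumptions for illustration only:

# Rough sketch only -- names are illustrative, not the project's actual API.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class WorkingContext:
    summarize_fn: Callable[[str], str]    # e.g. a wrapper around an LLM call
    summarize_every: int = 5              # summarize after every 5 turns
    turns: List[str] = field(default_factory=list)
    summary: str = ""

    def add_turn(self, role: str, content: str) -> None:
        """Record one conversation turn, auto-summarizing on every Nth turn."""
        self.turns.append(f"{role}: {content}")
        if len(self.turns) % self.summarize_every == 0:
            self._refresh_summary()

    def _refresh_summary(self) -> None:
        # Fold the existing summary and the most recent turns into a new summary.
        recent = "\n".join(self.turns[-self.summarize_every:])
        self.summary = self.summarize_fn(f"{self.summary}\n{recent}".strip())


if __name__ == "__main__":
    ctx = WorkingContext(summarize_fn=lambda text: text[:200])  # stub summarizer
    for i in range(10):
        ctx.add_turn("user", f"message {i}")
    print(ctx.summary)

In the actual implementation, summarize_fn would wrap the project's existing LLM call, and the five-turn threshold could come from configuration rather than a hard-coded default.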

UI Not Correct and OpenAI Error

App Frontend Issues

Getting ReferenceError: BiErrorCircle is not defined from the app. The UI also looks wonky; potentially missing dependencies?

Python Issues

OpenAI error:

Traceback (most recent call last):
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 240, in encode
    result = openai.Embedding.create(input=text_or_tokens, model=model)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8994 tokens (8994 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
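
This error means one file's text exceeded the embedding model's 8191-token context window. One hedged workaround, assuming the legacy openai SDK shown in the traceback and the cl100k_base tokenizer used by text-embedding-ada-002 (encode_truncated and MAX_TOKENS are illustrative names, not the project's actual my_codebase.py code), is to truncate the input before calling the API:

# Sketch: clamp the input to the embedding model's context window.
import tiktoken
import openai

EMBEDDING_MODEL = "text-embedding-ada-002"
MAX_TOKENS = 8191  # limit reported in the error message

def encode_truncated(text: str) -> list[float]:
    """Embed text, truncating it to the model's maximum context length."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    if len(tokens) > MAX_TOKENS:
        text = enc.decode(tokens[:MAX_TOKENS])
    response = openai.Embedding.create(input=text, model=EMBEDDING_MODEL)
    return response["data"][0]["embedding"]

Truncation drops the tail of long files; the chunking sketch under the next traceback preserves more of the content.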

Other error:

Traceback (most recent call last):
  File "C:\Python311\Lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python311\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\main.py", line 32, in <module>
    codebase = MyCodebase("../")
               ^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 100, in __init__
    self._update_files_and_embeddings(directory)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 137, in _update_files_and_embeddings
    self.update_file(file_path)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 171, in update_file
    embedding = pickle.dumps(self.encode(text))
                             ^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x24e5aa5f690 state=finished raised InvalidRequestError>]
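
This RetryError wraps the same InvalidRequestError: startup indexing in main.py embeds every file, tenacity retries the oversized request, and the server never finishes loading. A chunk-and-average sketch, again with illustrative names rather than the project's actual update_file implementation:

# Sketch: split long files into token chunks, embed each, average the vectors.
import tiktoken
import openai

EMBEDDING_MODEL = "text-embedding-ada-002"
MAX_TOKENS = 8191

def embed_long_text(text: str) -> list[float]:
    """Embed arbitrarily long text as the mean of its chunk embeddings."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    chunks = [tokens[i:i + MAX_TOKENS] for i in range(0, len(tokens), MAX_TOKENS)]
    vectors = []
    for chunk in chunks:
        resp = openai.Embedding.create(input=enc.decode(chunk), model=EMBEDDING_MODEL)
        vectors.append(resp["data"][0]["embedding"])
    # Average element-wise so each file still maps to a single embedding.
    return [sum(vals) / len(vals) for vals in zip(*vectors)]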

Correction in Readme + Suggestion

It looks like the configuration for AWS Bedrock models is listed under the Anthropic config, and I don't see instructions on how to use this with Anthropic directly.

Also, I found webAI-to-API, which can use your browser cookie to interface with Claude without an official API key (which I don't have). I have no idea how, or whether, tools that expect an API key could be configured to go through something like webAI-to-API. If you could make this work with a browser cookie via webAI-to-API, or integrate something like it, that would make this far more practical for me than anything else I have found.

This looks like a really cool project, so thanks for sharing it!
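
A very rough sketch of the suggested integration: run webAI-to-API locally and point the backend's Anthropic calls at it instead of the official API. The port, route, and payload shape below are placeholders, not webAI-to-API's documented interface; its README would be the source of truth:

import requests

PROXY_URL = "http://localhost:8000/claude"  # placeholder; use the route the proxy actually exposes

def ask_claude_via_proxy(prompt: str) -> str:
    """Send a prompt through a locally running, cookie-authenticated proxy."""
    # The JSON shape is a guess; adapt it to the proxy's real request schema.
    resp = requests.post(PROXY_URL, json={"message": prompt}, timeout=120)
    resp.raise_for_status()
    return resp.text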
