
mygirlgpt's People

Contributors

bluehenggege, synthintel0

mygirlgpt's Issues

Can server.py be run with more than one extension?

I want to run both the openai and elevenlabs_tts extensions. I've tried --extensions openai,elevenlabs_tts (which runs neither extension), --extensions elevenlabs_tts,openai (which runs just openai), and --extensions openai --extensions elevenlabs_tts and vice versa (which always appears to run only the last extension given).

PS: script.py also uses an outdated function (save_bytes_to_path), which was replaced by save_audio_v2 in elevenlabslib.
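
For the multi-extension question: in text-generation-webui the --extensions flag normally takes a space-separated list rather than a comma-separated one, so --extensions openai elevenlabs_tts may be worth trying (verify against the flag help of the webui version in use). For the rename, below is a minimal sketch of switching script.py to the newer helper; the argument order and format string for save_audio_v2 are assumptions and should be checked against the installed elevenlabslib:

from elevenlabslib import helpers

audio_bytes = b"..."          # placeholder: raw audio bytes returned by the TTS call
output_path = "output.mp3"    # placeholder: wherever script.py wrote the file before

# Old helper, removed from recent elevenlabslib releases:
# helpers.save_bytes_to_path(output_path, audio_bytes)

# Replacement named above; argument order and the format string are assumptions.
helpers.save_audio_v2(audio_bytes, output_path, "mp3_44100_128")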

Invalid chat message transport

I found that when there are many chat messages, many old messages are not sent to the LLM because of the token limit. However, when the TelegramBot calls the GPT API, all past messages are still sent.
I think this can be optimized by limiting the number of messages per request, reducing invalid transmission. I made some changes and will submit them later.
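
A minimal sketch of the idea, written in Python for illustration (the TelegramBot itself is TypeScript); the function name and the cap of 20 messages are hypothetical:

MAX_HISTORY_MESSAGES = 20  # hypothetical cap on messages sent per request

def build_prompt_messages(system_prompt, history):
    # Keep only the most recent messages, since older ones would be
    # dropped by the token limit on the LLM side anyway.
    recent = history[-MAX_HISTORY_MESSAGES:]
    return [{"role": "system", "content": system_prompt}] + recent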

Can't fix this error

[{
"resource": "/Users/davidtingey/MyGirlGPT/TelegramBot/tsconfig.json",
"owner": "typescript",
"severity": 8,
"message": "Cannot find type definition file for 'd3'.\n The file is in the program because:\n Entry point for implicit type library 'd3'",
"source": "ts",
"startLineNumber": 1,
"startColumn": 1,
"endLineNumber": 1,
"endColumn": 2
}]

Can you please help?

torch.cuda.OutOfMemoryError

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 32.00 MiB. GPU 0 has a total capacty of 4.00 GiB of which 41.56 MiB is free. Including non-PyTorch memory, this process has 3.44 GiB memory in use. Of the allocated memory 2.58 GiB is allocated by PyTorch, and 1.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.

There are no other processes running, and the 4 GB should be entirely free.
What is the reason?
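
Note that the quoted error says this process alone already holds 3.44 GiB of the 4 GiB card, so little is actually free once the model is loaded. The message also suggests PYTORCH_CUDA_ALLOC_CONF; a minimal sketch of checking free memory and enabling that setting (the 64 MiB split size is an arbitrary example value):

import os

# Must be set before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

import torch

free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"free: {free_bytes / 1024**3:.2f} GiB / total: {total_bytes / 1024**3:.2f} GiB")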

Instructions not descriptive enough

I was following the instructions to get the program running on my local machine. However, I'm stuck at the part where it says:
Start the Stable Diffusion Webui

Where is the Stable Diffusion Webui, and how do I start it? Is it something I need to install, or is it already part of the setup? If I need to install it, where do I get it? If it's already included, how do I run it?
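
For context, a later issue on this page starts it with python launch.py --api --listen --share, i.e. the (presumably AUTOMATIC1111) stable-diffusion-webui running as a separate API server that MyGirlGPT calls for images. A minimal sketch of what such a call looks like once that server is up; the address and prompt are placeholders, and /sdapi/v1/txt2img is the webui's standard route when launched with --api:

import base64
import requests

SD_ADDRESS = "http://127.0.0.1:7860"  # placeholder: address of the running webui

payload = {"prompt": "a portrait photo", "steps": 20}
resp = requests.post(f"{SD_ADDRESS}/sdapi/v1/txt2img", json=payload, timeout=120)
resp.raise_for_status()

# The API returns images as base64-encoded strings.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))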

Updates to support open-source LLMs instead of OpenAI?

Good afternoon. I was just curious whether the repository is still being maintained. If so, do you have a projected timeline for supporting open-source LLMs instead of the OpenAI API?

Bot stuck on typing

The official bot, as well as my own deployment, is stuck on the "typing..." message on Telegram.
cherry

Not receiving pictures from bot

I am not receiving any pictures from the bot. Below is my setup.

  • Launch an instance on Runpod to host the SD service API. Run python launch.py --api --listen --share in the terminal and retrieve the public URL.
  • Create a template per here. I used the public URL from above as SD_ADDRESS (a quick connectivity check against this address is sketched after this list). Launch an instance on Runpod to host the TTS and LLM.
  • Interact with the bot on Telegram.
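
A minimal connectivity check against the SD_ADDRESS used above; the URL is a placeholder, and /sdapi/v1/sd-models is a read-only route exposed by the webui when it is started with --api as in step 1:

import requests

SD_ADDRESS = "https://your-runpod-public-url"  # placeholder for the public URL from step 1

resp = requests.get(f"{SD_ADDRESS}/sdapi/v1/sd-models", timeout=30)
print(resp.status_code)   # 200 means the SD API is reachable from outside
print(resp.json()[:1])    # first loaded model, if any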

ERROR:Task exception was never retrieved

Every time I try to do something in the gui, I get this error:

INFO:HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
ERROR:Task exception was never retrieved
future: <Task finished name='y5mh04db7uq_90' coro=<Queue.process_events() done, defined at /home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py:343> exception=1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'fn_index': 90, 'data': ...on_hash': 'y5mh04db7uq'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing>
Traceback (most recent call last):
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 347, in process_events
client_awake = await self.gather_event_data(event)
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 220, in gather_event_data
data, client_awake = await self.get_message(event, timeout=receive_timeout)
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 456, in get_message
return PredictBody(**data), True
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/pydantic/main.py", line 171, in init
self.pydantic_validator.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'fn_index': 90, 'data': ...on_hash': 'y5mh04db7uq'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
INFO:HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"

I've tried fixing it for a while, but I'm not sure what the problem is. It seems to be a problem with the GUI, not with the LLM. I tried running it in CPU mode too, and it has the same problem.
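
The missing event_id field in PredictBody may point to a mismatch between the installed gradio (and the pydantic models it generates) and the version the webui was written against. A minimal diagnostic sketch for listing what is actually installed before pinning versions; the package names below are the usual suspects, not anything webui-specific:

from importlib.metadata import version, PackageNotFoundError

for pkg in ("gradio", "gradio_client", "pydantic", "fastapi"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")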
