synthintel0 / mygirlgpt
MyGirlGPT is a project to build your own AI girlfriend, running on your personal server with a local LLM.
Home Page: https://opendan.ai
I want to run both the openai and elevenlabs_tts extensions. I've tried --extensions openai,elevenlabs_tts (which runs neither extension), --extensions elevenlabs_tts,openai (which runs only openai), and --extensions openai --extensions elevenlabs_tts and vice versa (which always appears to run only the last extension typed).
PS: also, script.py calls an outdated function (save_bytes_to_path), which was replaced by save_audio_v2 in elevenlabslib.
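If the LLM server here is text-generation-webui, its --extensions flag is typically declared with argparse's nargs='+', which expects space-separated names rather than a comma-separated string. This is a hedged sketch (server.py and the flag's declaration are assumptions about that project, not confirmed by this repo) of why the comma form loads neither extension:

```python
import argparse

# Assumed declaration style: nargs='+' collects space-separated values.
parser = argparse.ArgumentParser()
parser.add_argument("--extensions", nargs="+", default=[])

# Comma-separated form arrives as ONE string, so neither name matches
# a real extension directory:
bad = parser.parse_args(["--extensions", "openai,elevenlabs_tts"])
print(bad.extensions)   # ['openai,elevenlabs_tts']

# Space-separated form yields two separate extension names:
good = parser.parse_args(["--extensions", "openai", "elevenlabs_tts"])
print(good.extensions)  # ['openai', 'elevenlabs_tts']
```

If that is the cause, `--extensions openai elevenlabs_tts` (space-separated) should load both. For the PS, updating the script.py call from save_bytes_to_path to save_audio_v2 is the likely fix, though the new function's exact signature should be checked against the elevenlabslib documentation rather than assumed.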
I found that when there are many chat messages, older messages are not sent to the LLM because of the token limit. However, when the TelegramBot calls the GPT API, all past messages are sent. I think this can be optimized by limiting the number of messages per request, reducing wasted transmission. I made some changes and will submit them later.
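A minimal sketch of the optimization described above, assuming an OpenAI-style message list. MAX_HISTORY, trim_history, and the message shape are illustrative names, not the project's actual code:

```python
# Cap the chat history sent to the GPT API instead of replaying every
# past message. MAX_HISTORY is an illustrative budget; a token-based
# budget would be tighter but needs a tokenizer.
MAX_HISTORY = 20  # keep only the most recent messages

def trim_history(messages, system_prompt):
    """Return the system prompt plus the last MAX_HISTORY messages."""
    recent = messages[-MAX_HISTORY:]
    return [{"role": "system", "content": system_prompt}] + recent

# Example: 100 accumulated messages shrink to 20 plus the system prompt.
history = [{"role": "user", "content": f"msg {i}"} for i in range(100)]
payload = trim_history(history, "You are a helpful companion.")
print(len(payload))  # 21
```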
[{
    "resource": "/Users/davidtingey/MyGirlGPT/TelegramBot/tsconfig.json",
    "owner": "typescript",
    "severity": 8,
    "message": "Cannot find type definition file for 'd3'.\n The file is in the program because:\n Entry point for implicit type library 'd3'",
    "source": "ts",
    "startLineNumber": 1,
    "startColumn": 1,
    "endLineNumber": 1,
    "endColumn": 2
}]
Can You please help?
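This error usually means TypeScript found an entry for an implicit 'd3' type library (often a stray or half-installed @types/d3 in node_modules) that it cannot resolve. A hedged fix, assuming the TelegramBot code does not actually need d3 typings, is to list explicitly which type packages tsconfig.json should include, so nothing is pulled in implicitly:

```json
{
  "compilerOptions": {
    "types": ["node"]
  }
}
```

Alternatively, if d3 typings are genuinely needed, installing them (npm install --save-dev @types/d3) should also silence the error.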
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 32.00 MiB. GPU 0 has a total capacty of 4.00 GiB of which 41.56 MiB is free. Including non-PyTorch memory, this process has 3.44 GiB memory in use. Of the allocated memory 2.58 GiB is allocated by PyTorch, and 1.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
There are no other processes running, and the full 4 GB should be free. What is the reason?
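One thing to try, following the error message's own hint about fragmentation. The 64 MiB split size and the relaunch flag are illustrative values, not tested on this setup:

```shell
# Cap the CUDA caching allocator's split size to reduce fragmentation
# on a 4 GB card (value is illustrative; try 32-128).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
# Then relaunch the LLM server in this same shell, e.g.:
#   python server.py --load-in-8bit   (flag name depends on your server)
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Note that 4 GB is tight for most LLMs regardless; quantized loading (8-bit or 4-bit, if the server supports it) is the more durable fix.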
I was following the instructions to get the program running on my local machine. However, I'm stuck at the part where it says:
Start the Stable Diffusion Webui
Where is the Stable Diffusion WebUI, and how do I start it? Is it something I need to install, or does it already come with aios? If I need to install it, where do I get it? If it's already part of aios, how do I run it?
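The Stable Diffusion WebUI is a separate project (AUTOMATIC1111/stable-diffusion-webui on GitHub), not something bundled with this repo. A typical install-and-start sequence looks like this (a sketch only; flags are from that project's README, and model checkpoints must still be downloaded separately):

```shell
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
# --api exposes the HTTP endpoint the bot can call; --listen binds 0.0.0.0
./webui.sh --api --listen    # on Windows, run webui-user.bat instead
```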
Good afternoon, I was just curious whether the repository is still being maintained. If so, do you have a projected timeline for using open-source LLMs instead of the OpenAI API?
Getting this error when running the LLM server. The requirements file has accelerate==0.18.0.
I am not receiving any pictures from the bot. Below is my setup.
python launch.py --api --listen --share
Retrieve the public URL.
Every time I try to do something in the gui, I get this error:
INFO:HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
ERROR:Task exception was never retrieved
future: <Task finished name='y5mh04db7uq_90' coro=<Queue.process_events() done, defined at /home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py:343> exception=1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'fn_index': 90, 'data': ...on_hash': 'y5mh04db7uq'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing>
Traceback (most recent call last):
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 347, in process_events
client_awake = await self.gather_event_data(event)
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 220, in gather_event_data
data, client_awake = await self.get_message(event, timeout=receive_timeout)
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 456, in get_message
return PredictBody(**data), True
File "/home/jet_mouse/anaconda3/envs/textgen/lib/python3.10/site-packages/pydantic/main.py", line 171, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'fn_index': 90, 'data': ...on_hash': 'y5mh04db7uq'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
INFO:HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
I've tried to fix it for a while, but I'm not sure what the problem is. It seems to be a problem with the GUI, not with the LLM. I tried running it in CPU mode too, and it has the same problem.
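The missing event_id field in PredictBody typically points at a version mismatch between gradio and the pydantic it is running against (pydantic 2.6 here), not at the model itself. A hedged repair, assuming the repo pins known-good versions in its requirements file:

```shell
# Reinstall the pinned dependencies inside the textgen env; if gradio is
# unpinned, constraining pydantic is a common workaround for this error.
conda activate textgen
pip install -r requirements.txt --force-reinstall
# Narrower experiment if that doesn't help: pip install "pydantic<2"
```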