
talkgpt4all's Introduction

talkGPT4All

A voice chatbot based on GPT4All and talkGPT.

Video demo.

Please check more details in this blog post (in Chinese).

If you are looking for the older version of talkGPT4All, please check out the dev/v1.0.0 branch.

Installation

Install using pip (Recommended)

talkgpt4all is on PyPI, so you can install it with a single command:

pip install talkgpt4all

Install from source code

Clone the code:

git clone https://github.com/vra/talkGPT4All.git <ROOT>

Install the dependencies and talkGPT4All in a Python virtual environment:

cd <ROOT>
python -m venv talkgpt4all
source talkgpt4all/bin/activate
pip install -U pip
pip install -r requirements.txt

Extra dependencies for Linux users

We use pyttsx3 to convert text to speech. Please note that on Linux you need to install these system dependencies:

sudo apt update && sudo apt install -y espeak ffmpeg libespeak1
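Under the hood, pyttsx3 talks to the system speech engine (espeak on Linux, SAPI5 on Windows, NSSpeechSynthesizer on macOS). A minimal sketch of that text-to-speech step, assuming pyttsx3 is installed; the rate property is presumably what the --voice-rate option described below maps to:

import pyttsx3

engine = pyttsx3.init()          # picks the platform speech driver (espeak on Linux)
engine.setProperty("rate", 165)  # speaking rate; talkgpt4all's default is 165
engine.say("Hello from talkGPT4All")
engine.runAndWait()              # block until the sentence has been spoken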

Usage

Open a terminal and type talkgpt4all to begin:

talkgpt4all

Use different LLMs

You can choose a different LLM using --gpt-model-type <type>. All available choices:

{
"ggml-gpt4all-j-v1.3-groovy"
"ggml-gpt4all-j-v1.2-jazzy"
"ggml-gpt4all-j-v1.1-breezy"
"ggml-gpt4all-j"
"ggml-gpt4all-l13b-snoozy"
"ggml-vicuna-7b-1.1-q4_2"
"ggml-vicuna-13b-1.1-q4_2"
"ggml-wizardLM-7B.q4_2"
}
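For example, to use the Vicuna 7B model:

talkgpt4all --gpt-model-type ggml-vicuna-7b-1.1-q4_2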

Use different Whisper models

You can choose the Whisper model type using --whisper-model-type <type>. All available choices:

{
"tiny.en"
"tiny"
"base.en"
"base"
"small.en"
"small"
"medium.en"
"medium"
"large-v1"
"large-v2"
"large"
}
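Speech is transcribed with OpenAI Whisper, and the type above selects which checkpoint is loaded. A rough sketch of that step using the openai-whisper package (recording.wav is just a placeholder for a captured audio clip):

import whisper

model = whisper.load_model("base")          # any of the model types listed above
result = model.transcribe("recording.wav")  # placeholder path to a recorded clip
print(result["text"])                       # the recognized text handed to the LLM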

Tune voice rate

You can tune the voice rate using --voice-rate <rate>. The default rate is 165; the larger the value, the faster the speech.

e.g.,

talkgpt4all --whisper-model-type large --voice-rate 150

RoadMap

  • Add source building for llama.cpp, with a more flexible interface.
  • More LLMs.
  • Add support for contextual information during chatting.
  • Test the code on Linux, Intel Macs, and WSL2.
  • Add support for Chinese input and output.
  • Add documentation and a changelog.

Contributions are welcome!

talkgpt4all's People

Contributors

vra


talkgpt4all's Issues

Crash on Mac M1

Environment: Mac M1 Pro

which python3
/Users/xxx/project/talkGPT4All/talkgpt4all/bin/python3

Command:

python3 chat.py --platform mac-m1
[1]    76330 segmentation fault  /Users/xxx/project/talkGPT4All/talkgpt4all/bin/python3 chat.py  mac-m1

Then a dialog pops up saying Python crashed, asking to send an error report to Apple.
Also, this project's dependencies seem to be missing one: pyttsx3. Without installing it you get:
ModuleNotFoundError: No module named 'pyttsx3'

Hi I'm stuck with this error

  1. I'm getting the following error, "AttributeError: 'GPT4All' object has no attribute 'chat_completion'", and I'm not sure what to do. After searching, all I can come up with is an article about a removed attribute. Am I using the wrong version of Python? Maybe chat_completion doesn't work anymore? (A sketch of the newer API follows below.)

  2. Also, there's the "'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0" error, but I think I can ignore this?
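For reference, the newer gpt4all Python bindings (the pip list below shows 1.0.5) no longer expose chat_completion. A minimal sketch of the generate-based replacement call, assuming gpt4all >= 1.0:

from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # same model file as in the log below
answer = model.generate("Have a good night.")  # replaces the removed chat_completion()
print(answer)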

PS C:\TalkGPTforAll> python - version
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32

Win10/64
AMD RY5
GTX1060GPU 3GB

PS C:\TalkGPTforAll> talkgpt4all
Found model file at C:\\Users\\S373NTH\\.cache\\gpt4all\ggml-gpt4all-j-v1.3-groovy.bin
gptj_model_load: loading model from 'C:\\Users\\S373NTH\\.cache\\gpt4all\ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
gptj_model_load: kv self size = 896.00 MB
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
Listening...
C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\lib\site-packages\whisper\timing.py:58: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def backtrace(trace: np.ndarray):
===> question: Have a good night. Have a good night.ipes Hello. I hope you lack any time your story.
Traceback (most recent call last):
File "C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\Scripts\talkgpt4all.exe\__main__.py", line 7, in <module>
File "C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\lib\site-packages\talkgpt4all\__init__.py", line 44, in main
chat_bot.run()
File "C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\lib\site-packages\talkgpt4all\chat.py", line 27, in run
answer = self.run_gpt(input_words)
File "C:\Users\S373NTH\AppData\Local\Programs\Python\Python310\lib\site-packages\talkgpt4all\chat.py", line 46, in run_gpt
response = self.gpt_model.chat_completion(
AttributeError: 'GPT4All' object has no attribute 'chat_completion'
PS C:\TalkGPTforAll>

PS C:\TalkGPTforAll> pip list
Package Version Editable project location


aiohttp 3.8.4
aiosignal 1.3.1
altgraph 0.17.3
annotated-types 0.5.0
anyio 3.7.0
async-generator 1.10
async-timeout 4.0.2
attrs 23.1.0
beautifulsoup4 4.12.2
BingImageCreator 0.4.2
blinker 1.6.2
boto3 1.26.114
botocore 1.29.154
certifi 2023.5.7
cffi 1.15.1
charset-normalizer 3.1.0
click 8.1.3
colorama 0.4.6
comtypes 1.2.0
dataclasses-json 0.5.14
DateTime 5.1
EasyProcess 1.1
EdgeGPT 0.3.2
einops 0.6.1
elevenlabslib 0.5.2
encodec 0.1.1
entrypoint2 1.1
exceptiongroup 1.1.1
ffmpeg 1.4
ffmpeg-python 0.2.0
filelock 3.12.2
Flask 2.3.2
frozenlist 1.3.3
fsspec 2023.6.0
funcy 2.0
future 0.18.3
gitdb 4.0.10
GitPython 3.1.31
glob2 0.7
gpt4all 1.0.5
greenlet 2.0.2
h11 0.14.0
httpcore 0.17.2
httpx 0.24.1
huggingface-hub 0.16.2
idna 3.4
itsdangerous 2.1.2
Jinja2 3.1.2
jmespath 1.0.1
kokoroio 0.0.3
langchain 0.0.270
langsmith 0.0.25
llvmlite 0.40.1rc1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.20.1
mdurl 0.1.2
more-itertools 9.1.0
MouseInfo 0.1.3
mpmath 1.3.0
mss 9.0.1
multidict 6.0.4
mypy-extensions 1.0.0
networkx 3.1
notification 0.2.1
numba 0.57.0
numexpr 2.8.5
numpy 1.24.3
openai 0.27.4
openai-whisper 20230314
outcome 1.2.0
packaging 23.1
pefile 2023.2.7
Pillow 9.5.0
pip 23.3.1
playsound 1.3.0
plyer 2.1.0
prompt-toolkit 3.0.38
PyAudio 0.2.13
PyAutoGUI 0.9.54
pycparser 2.21
pydantic 2.2.1
pydantic_core 2.6.1
pydub 0.25.1
pygame 2.4.0
PyGetWindow 0.0.9
Pygments 2.15.1
pyinstaller 5.10.1
pyinstaller-hooks-contrib 2023.3
pyjokes 0.6.0
pyllamacpp 2.4.1
PyMsgBox 1.0.9
pypandoc 1.11
pyperclip 1.8.2
pypiwin32 223
PyRect 0.2.0
pyscreenshot 3.1
PyScreeze 0.1.29
PySocks 1.7.1
python-dateutil 2.8.2
pyttsx3 2.90
pytweening 1.0.7
pytz 2023.3
pywhatkit 5.4
pywin32 306
pywin32-ctypes 0.2.1
PyYAML 6.0
random2 1.0.1
regex 2023.6.3
requests 2.31.0
rich 13.4.2
s3transfer 0.6.1
safetensors 0.3.1
scipy 1.11.1
selenium 4.10.0
setuptools 57.4.0
six 1.16.0
smmap 5.0.0
sniffio 1.3.0
socksio 1.0.0
sortedcontainers 2.4.0
sounddevice 0.4.6
soundfile 0.12.1
soupsieve 2.4.1
SpeechRecognition 3.10.0
speedtest 0.0.1
SQLAlchemy 2.0.20
sseclient-py 1.7.2
suno-bark 0.0.1a0 C:\Users\S373NTH\modules\bark
sympy 1.12
talkgpt4all 2.1.1
tenacity 8.2.3
tiktoken 0.3.1
tokenizers 0.13.3
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
tqdm 4.65.0
transformers 4.30.2
trio 0.22.0
trio-websocket 0.10.3
typing 3.7.4.3
typing_extensions 4.6.3
typing-inspect 0.9.0
urllib3 1.26.16
vocos 0.0.3
wcwidth 0.2.6
websockets 11.0.3
Werkzeug 2.3.6
wheel 0.41.1
whisper 1.1.10
wikipedia 1.4.0
wsproto 1.2.0
yarl 1.9.2
zope.interface 6.0

Getting HTTP 404 Not Found

Running talkgpt4all fails with an error:
Traceback (most recent call last):
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\Scripts\talkgpt4all.exe\__main__.py", line 7, in <module>
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\site-packages\talkgpt4all\__init__.py", line 40, in main
chat_bot = GPT4AllChatBot(
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\site-packages\talkgpt4all\chat.py", line 25, in __init__
self.gpt_model = GPT4All(gpt_model_name, allow_download=True)
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\site-packages\gpt4all\gpt4all.py", line 97, in __init__
self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\site-packages\gpt4all\gpt4all.py", line 187, in retrieve_model
config["path"] = GPT4All.download_model(model_filename, model_path, verbose=verbose, url=url)
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\site-packages\gpt4all\gpt4all.py", line 234, in download_model
response = make_request()
File "D:\ProgramData\Anaconda3\envs\talkgpt4all\lib\site-packages\gpt4all\gpt4all.py", line 229, in make_request
raise ValueError(f'Request failed: HTTP {response.status_code} {response.reason}')
ValueError: Request failed: HTTP 404 Not Found

It doesn't work even with a proxy, and the error doesn't show which URL returned the 404.
Thanks.
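A hedged workaround, based only on the parameters visible in the traceback above: download the model .bin manually and point GPT4All at it with model_path and allow_download=False, which skips the failing download request. The folder name below is a placeholder:

from gpt4all import GPT4All

model = GPT4All(
    "ggml-gpt4all-j-v1.3-groovy",
    model_path="D:/models/gpt4all",  # placeholder folder containing the manually downloaded .bin
    allow_download=False,            # avoid the HTTP request that returned 404
)
print(model.generate("hello"))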

How to elegantly integrate with the new gpt4all

Already starred, great project. Yunfeng Wang 👍

gpt4all can now be downloaded and installed with one click, which makes it accessible to ordinary users. I suggest talkGPT4All take this into account and consider how to integrate with it efficiently.

💻Playsound Missing .Wav + Extra Characters from Text Splitting🤖

Hi, after reinstalling everything, there are still these errors about "Cannot specify extra characters after a string enclosed in quotation marks", "Text splitted to sentences", and strange characters. Then there's the missing .wav file playsound error. I wonder if you found anything more about it?

TEXT SPLIT AND STRANGE CHARACTERS
Listening...
==> answer: I am an AI language model and do not have the ability to feel emotions or experience physical sensations. However, I am here to assist you with any questions or tasks you may have. How can I help you today?

Text splitted to sentences.
['I am an AI language model and do not have the ability to feel emotions or experience physical sensations.', 'However, I am here to assist you with any questions or tasks you may have.', 'How can I help you today?']
aɪ æm ən aɪ læŋɡwɪd͡ʒ mɑdəl ænd du nɑt hæv ðə əbɪləti tə fil ɪmoʊʃənz ɔɹ ɪkspɪɹiəns fɪzɪkəl sɛnseɪʃənz. << STRANGE CHARACTERS
[!] Character '͡' not found in the vocabulary. Discarding it.
Processing time: 1.6647748947143555
Real-time factor: 0.10804181312824211

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\talkGPT4All-main> & c:/talkGPT4All-main/.venv/Scripts/Activate.ps1
(.venv) PS C:\talkGPT4All-main> & c:/talkGPT4All-main/.venv/Scripts/python.exe c:/talkGPT4All-main/src/talkgpt4all/chat.py
==> GPT4All model: mistral-7b-instruct-v0.1.Q4_0.gguf, Whisper model: base

tts_models/en/ljspeech/glow-tts is already downloaded.
vocoder_models/en/ljspeech/multiband-melgan is already downloaded.
Using model: glow_tts
Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:0
| > fft_size:1024
| > power:1.1
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:50.0
| > mel_fmax:7600.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:1.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
Vocoder Model: multiband_melgan
Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:0
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:50.0
| > mel_fmax:7600.0
| > pitch_fmin:0.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:C:\Users\S373NTH\AppData\Local\tts\vocoder_models--en--ljspeech--multiband-melgan\scale_stats.npy
| > base:10
| > hop_length:256
| > win_length:1024
Generator Model: multiband_melgan_generator
Discriminator Model: melgan_multiscale_discriminator
Listening...
==> answer: I am an AI language model and do not have the ability to feel emotions or experience physical sensations. However, I am here to assist you with any questions or tasks you may have. How can I help you today?
Text splitted to sentences.
['I am an AI language model and do not have the ability to feel emotions or experience physical sensations.', 'However, I am here to assist you with any questions or tasks you may have.', 'How can I help you today?']
aɪ æm ən aɪ læŋɡwɪd͡ʒ mɑdəl ænd du nɑt hæv ðə əbɪləti tə fil ɪmoʊʃənz ɔɹ ɪkspɪɹiəns fɪzɪkəl sɛnseɪʃənz.
[!] Character '͡' not found in the vocabulary. Discarding it.
Processing time: 1.6647748947143555
Real-time factor: 0.10804181312824211

Error 305 for command:
    open "C:\Users\S373NTH\AppData\Local\Temp\talkgpt4all-73qu1lv2.wav"
Cannot specify extra characters after a string enclosed in quotation marks.

Error 305 for command:
    close "C:\Users\S373NTH\AppData\Local\Temp\talkgpt4all-73qu1lv2.wav"
Cannot specify extra characters after a string enclosed in quotation marks.

Failed to close the file: "C:\Users\S373NTH\AppData\Local\Temp\talkgpt4all-73qu1lv2.wav"
Traceback (most recent call last):
File "c:\talkGPT4All-main\src\talkgpt4all\chat.py", line 126, in
chat_bot.run()
File "c:\talkGPT4All-main\src\talkgpt4all\chat.py", line 38, in run
self._text_to_voice(answer)
File "c:\talkGPT4All-main\src\talkgpt4all\chat.py", line 74, in _text_to_voice
playsound(tmp_file.name)
File "C:\talkGPT4All-main.venv\Lib\site-packages\playsound.py", line 72, in _playsoundWin
winCommand(u'open {}'.format(sound))
File "C:\talkGPT4All-main.venv\Lib\site-packages\playsound.py", line 64, in winCommand
raise PlaysoundException(exceptionMessage)
playsound.PlaysoundException:
Error 305 for command:
open "C:\Users\S373NTH\AppData\Local\Temp\talkgpt4all-73qu1lv2.wav"
Cannot specify extra characters after a string enclosed in quotation marks.
(.venv) PS C:\talkGPT4All-main>
