jianchang512 / chattts-ui

A simple local web interface that uses ChatTTS to synthesize text into speech, with support for an external API.

Home Page: https://pyvideotrans.com

License: Other


chattts-ui's Introduction

English README | Sponsor this project | Discord Discussion Group

ChatTTS webUI & API

A simple local web interface that synthesizes text into speech with ChatTTS in the browser. It supports mixed Chinese, English, and numbers, and provides an API.

Built on the ChatTTS project. As of v0.96, source deployments must install ffmpeg first. Voice files (csv and pt) from earlier versions no longer work; enter a voice value and regenerate them. See "Obtaining voices" below.

[Sponsor]

302.AI is an AI supermarket that brings together the world's top AI brands: pay as you go, no monthly fee, no barrier to using every kind of AI.

Full-featured: integrates the most useful AI into the platform, including but not limited to AI chat, image generation, image processing, and video generation, covering everything.

Easy to use: offers bots, tools, and APIs, meeting the needs of everyone from beginners to developers.

Pay as you go, no barriers: no monthly plans and no product gating; pay as you go, with everything open. Top-up balance never expires.

Admins and users separated: the admin shares with one click; users do not need to log in.

Interface preview


Demo: mixed text, numbers, symbols, and control tokens

Chinese-number.mp4

Windows prepackaged release

  1. Download the archive from Releases, extract it, and double-click app.exe to run
  2. Some antivirus software may flag it as malware; exit the antivirus or deploy from source instead
  3. GPU acceleration is enabled if you have an NVIDIA card with more than 4 GB of VRAM and CUDA 11.8+ installed

Linux container deployment

Installation

  1. Clone the repository

    Clone the project into any directory, for example:

    git clone https://github.com/jianchang512/ChatTTS-ui.git chat-tts-ui
  2. Start the runner

    Enter the project directory:

    cd chat-tts-ui

    Start the container and watch the initialization logs:

    GPU version
    docker compose -f docker-compose.gpu.yaml up -d
    
    CPU version
    docker compose -f docker-compose.cpu.yaml up -d
    
    docker compose logs -f --no-log-prefix
    
  3. Open the ChatTTS WebUI

    The startup log prints ['0.0.0.0', '9966'], i.e. visit port 9966 on the deployment machine's IP, for example:

    • Locally: http://127.0.0.1:9966
    • On a server: http://192.168.1.100:9966

Updating

  1. Get the latest code from the main branch:

    git checkout main
    git pull origin main
  2. Then rebuild with the latest image:

    docker compose down
    
    GPU version
    docker compose -f docker-compose.gpu.yaml up -d --build
    
    CPU version
    docker compose -f docker-compose.cpu.yaml up -d --build
    
    docker compose logs -f --no-log-prefix

Linux source deployment

  1. Set up a Python 3.9-3.11 environment and install ffmpeg: yum install ffmpeg or apt-get install ffmpeg

  2. Create an empty directory /data/chattts, then run cd /data/chattts && git clone https://github.com/jianchang512/chatTTS-ui .

  3. Create a virtual environment: python3 -m venv venv

  4. Activate it: source ./venv/bin/activate

  5. Install the dependencies: pip3 install -r requirements.txt

  6. If you do not need CUDA acceleration, run

    pip3 install torch==2.2.0 torchaudio==2.2.0

    If you do need CUDA acceleration, run

    pip install torch==2.2.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
    
    pip install nvidia-cublas-cu11 nvidia-cudnn-cu11

    You also need the CUDA 11.8+ toolkit; search for installation instructions or see https://juejin.cn/post/7318704408727519270

    Besides CUDA, an AMD GPU can also be used for acceleration; this requires installing ROCm and the ROCm build of PyTorch. With ROCm, AMD GPUs work in PyTorch out of the box, with no code changes.

    1. Follow https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html to install the AMD GPU driver and ROCm.
    2. Then install the ROCm build of PyTorch via https://pytorch.org/:

    pip3 install torch==2.2.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/rocm6.0

    After installation, you can list the AMD GPUs in the system with the rocm-smi command, or query the current AMD GPU device with the following Torch code (query_gpu.py):

    import torch
    
    print(torch.__version__)
    
    if torch.cuda.is_available():
        device = torch.device("cuda")          # ROCm devices also appear as "cuda"
        print('Using GPU:', torch.cuda.get_device_name(0))
        # only query properties when a GPU is actually present
        print(torch.cuda.get_device_properties(0))
    else:
        device = torch.device("cpu")
        print('Using CPU')
    

    Running the code above on an AMD Radeon Pro W7900, for example, reports the device as follows.

    
    $ python ~/query_gpu.py
    
    2.4.0.dev20240401+rocm6.0
    
    Using GPU: AMD Radeon PRO W7900
    
    
  7. Run python3 app.py to start; a browser window opens automatically at the default address http://127.0.0.1:9966. (Note: by default the model is downloaded from ModelScope, which cannot be fetched through a proxy; turn the proxy off first, e.g. as in the snippet below.)
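If the download fails because a proxy is active, one option (an assumption; adapt to your shell) is to clear the usual proxy environment variables for the session before launching:

    # clear common proxy variables for this shell session only
    unset http_proxy https_proxy all_proxy HTTP_PROXY HTTPS_PROXY ALL_PROXY
    python3 app.py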

macOS source deployment

  1. Set up a Python 3.9-3.11 environment and install git: brew install libsndfile git python@3.10, then continue with

    brew install ffmpeg
    
    export PATH="/usr/local/opt/python@3.10/bin:$PATH"
    
    source ~/.bash_profile 
    
    source ~/.zshrc
    
    
  2. Create an empty directory /data/chattts, then run cd /data/chattts && git clone https://github.com/jianchang512/chatTTS-ui .

  3. Create a virtual environment: python3 -m venv venv

  4. Activate it: source ./venv/bin/activate

  5. Install the dependencies: pip3 install -r requirements.txt

  6. Install torch: pip3 install torch==2.2.0 torchaudio==2.2.0

  7. Run python3 app.py to start; a browser window opens automatically at the default address http://127.0.0.1:9966. (Note: by default the model is downloaded from ModelScope, which cannot be fetched through a proxy; turn the proxy off first.)

Windows source deployment

  1. Download Python 3.9-3.11 and, during installation, check "Add Python to environment variables".

  2. Download ffmpeg.exe and put it in the ffmpeg folder inside the project directory.

  3. Download and install git: https://github.com/git-for-windows/git/releases/download/v2.45.1.windows.1/Git-2.45.1-64-bit.exe

  4. Create an empty folder D:/chattts and open it; type cmd in the address bar and press Enter, then run git clone https://github.com/jianchang512/chatTTS-ui . in the console that opens.

  5. Create a virtual environment: python -m venv venv

  6. Activate it: .\venv\scripts\activate

  7. Install the dependencies: pip install -r requirements.txt

  8. If you do not need CUDA acceleration, run

    pip install torch==2.2.0 torchaudio==2.2.0

    If you do need CUDA acceleration, run

    pip install torch==2.2.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118

    You also need the CUDA 11.8+ toolkit; search for installation instructions or see https://juejin.cn/post/7318704408727519270

  9. Run python app.py to start; a browser window opens automatically at the default address http://127.0.0.1:9966. (Note: by default the model is downloaded from ModelScope, which cannot be fetched through a proxy; turn the proxy off first.)

Notes on source deployment: from v0.96 on, ffmpeg must be installed

  1. If GPU VRAM is below 4 GB, the CPU is used unconditionally.

  2. On Windows or Linux, if you have an NVIDIA card with more than 4 GB of VRAM but the app still uses the CPU after a source deployment, try uninstalling torch (pip uninstall -y torch torchaudio) and reinstalling the CUDA build: pip install torch==2.2.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118. CUDA 11.8+ must already be installed.

  3. By default the app checks whether ModelScope is reachable; if it is, the model is downloaded from ModelScope, otherwise from huggingface.co (sketched below).
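A minimal sketch of that kind of reachability check, for illustration only (the actual logic in app.py may differ):

    import requests

    def pick_model_source(timeout: float = 5.0) -> str:
        # return which hub the model should be downloaded from
        try:
            requests.head("https://www.modelscope.cn", timeout=timeout)
            return "modelscope"
        except requests.RequestException:
            return "huggingface"

    print(pick_model_source())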

Obtaining voices

After v0.96, because the ChatTTS core was upgraded, pt files downloaded from this site can no longer be used directly: https://modelscope.cn/studios/ttwwwaa/ChatTTS_Speaker

A conversion script, cover-pt.py, has therefore been added. With the Windows bundle you can instead download cover-pt.exe, put it in the same directory as app.exe, and double-click it.

Running python cover-pt.py converts the files in the speaker directory whose names start with seed_ and end with _emb.pt (the default downloaded filenames) into a usable encoding; a converted file is renamed to end with _emb-cover.pt.

Example:

If the file speaker/seed_2155_restored_emb.pt exists, it is converted to speaker/seed_2155_restored_emb-cover.pt; afterwards delete the original pt file and keep only the converted one.
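A small sketch of just the file-matching and renaming scheme described above (the actual re-encoding done by cover-pt.py is omitted; this only previews which files would be converted and what they would be called):

    from pathlib import Path

    for pt in Path("speaker").glob("seed_*_emb.pt"):
        # seed_2155_restored_emb.pt -> seed_2155_restored_emb-cover.pt
        converted = pt.with_name(pt.stem + "-cover.pt")
        print(f"{pt} -> {converted}")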

Changing the HTTP address

The default address is http://127.0.0.1:9966. To change it, open the .env file in the project directory and change WEB_ADDRESS=127.0.0.1:9966 to a suitable IP and port, for example WEB_ADDRESS=192.168.0.10:9966 so that the LAN can reach it, as in the example below.
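For example, a .env that exposes the UI to the LAN might contain just:

    WEB_ADDRESS=192.168.0.10:9966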

API usage (v0.5+)

Method: POST

Endpoint: http://127.0.0.1:9966/tts

Parameters:

text: str | required; the text to synthesize

voice: optional, default 2222; a number that selects the voice: one of 2222 | 7869 | 6653 | 4099 | 5099, or pass any other value to get a random voice

prompt: str | optional, default empty; sets laughter and pauses, e.g. [oral_2][laugh_0][break_6]

temperature: float | optional, default 0.3

top_p: float | optional, default 0.7

top_k: int | optional, default 20

skip_refine: int | optional, default 0; 1 = skip refine text, 0 = do not skip

custom_voice: int | optional, default 0; a custom seed value (an integer greater than 0) used when generating the voice. If set, it takes precedence and voice is ignored.

Returns: JSON

On success: {"code": 0, "msg": "ok", "audio_files": [dict1, dict2]}

where audio_files is an array of dicts; each dict is {"filename": absolute path of the wav file, "url": downloadable wav URL}

On failure:

{"code": 1, "msg": "error reason"}

# Example API call

import requests

res = requests.post('http://127.0.0.1:9966/tts', data={
  "text": "若不懂无需填写",
  "prompt": "",
  "voice": "3333",
  "temperature": 0.3,
  "top_p": 0.7,
  "top_k": 20,
  "skip_refine": 0,
  "custom_voice": 0
})
print(res.json())

#ok
{"code": 0, "msg": "ok", "audio_files": [{"filename": "E:/python/chattts/static/wavs/20240601-22_12_12-c7456293f7b5e4dfd3ff83bbd884a23e.wav", "url": "http://127.0.0.1:9966/static/wavs/20240601-22_12_12-c7456293f7b5e4dfd3ff83bbd884a23e.wav"}]}

#error
{"code": 1, "msg": "error"}
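The same request as an equivalent curl call (assuming form-encoded fields, as in the Python example above):

    curl -X POST http://127.0.0.1:9966/tts \
      -d "text=你好" \
      -d "voice=2222" \
      -d "skip_refine=0"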


Using it from pyVideoTrans

Upgrade pyVideoTrans to 1.82+: https://github.com/jianchang512/pyvideotrans

  1. Open Menu - Settings - ChatTTS and fill in the request address; by default it should be http://127.0.0.1:9966
  2. Once a test succeeds, select ChatTTS in the main window



chattts-ui's Issues

No GPU found, use CPU instead

CUDA version: 12.4
I installed the CUDA toolkit and cuDNN following the tutorial, but the GPU still cannot be used.
Log:
2024-06-01 22:30:31,792 - modelscope - INFO - PyTorch version 2.3.0 Found.
2024-06-01 22:30:31,793 - modelscope - INFO - Loading ast index from C:\Users\diwei\.cache\modelscope\ast_indexer
2024-06-01 22:30:31,922 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 bcc4b501ab7f96fbbf904df1563439ec and a total number of 976 components indexed
INFO:ChatTTS.core:Load from local: D:/Another APP/Chat-TTS/chatTTS-ui/models\pzc163\chatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']

feat(app): async router

Would you consider switching to a FastAPI asynchronous interface?

For example, a coroutine queue could handle the tasks so that the page is not blocked while audio files are being generated; see the sketch below.
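A minimal sketch of the suggestion, assuming FastAPI with one background worker draining an asyncio queue so the event loop stays free; synthesize() is a hypothetical stand-in for the real blocking ChatTTS call:

    import asyncio

    from fastapi import FastAPI

    app = FastAPI()
    queue: asyncio.Queue = asyncio.Queue()

    def synthesize(text: str) -> str:
        # hypothetical stand-in for the blocking chat.infer(...) call
        return f"wav-for-{text}"

    async def worker() -> None:
        while True:
            text, fut = await queue.get()
            # run the blocking TTS call in a thread so requests aren't blocked
            fut.set_result(await asyncio.to_thread(synthesize, text))

    @app.on_event("startup")
    async def start_worker() -> None:
        asyncio.create_task(worker())

    @app.post("/tts")
    async def tts(text: str):
        fut = asyncio.get_running_loop().create_future()
        await queue.put((text, fut))
        return {"code": 0, "msg": "ok", "audio": await fut}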

Windows 11: internal error during synthesis

Startup is normal; the error occurs during synthesis.


Log:

(venv) E:\openSource_workspace\chatTTS-ui>python app.py
2024-05-31 17:09:13,660 - modelscope - INFO - PyTorch version 2.3.0 Found.
2024-05-31 17:09:13,662 - modelscope - INFO - Loading ast index from C:\Users\liujianglong\.cache\modelscope\ast_indexer
2024-05-31 17:09:13,760 - modelscope - INFO - No valid ast index found from C:\Users\liujianglong\.cache\modelscope\ast_indexer, generating ast index from prebuilt!
2024-05-31 17:09:13,875 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 20d6d72d7c727847862295a469dcf2cf and a total number of 976 components indexed
Downloading: 100%|████████████████████████████████████████████████████████████████| 4.16k/4.16k [00:00<00:00, 4.23MB/s]
INFO:ChatTTS.core:Load from local: E:/openSource_workspace/chatTTS-ui/models\pzc163\chatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']
  0%|                                                                                          | 0/384 [00:01<?, ?it/s]
[2024-05-31 17:09:25,237] ERROR in app: Exception on /tts [POST]
Traceback (most recent call last):
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\flask\app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\flask\app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\flask\app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\flask\app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "E:\openSource_workspace\chatTTS-ui\app.py", line 118, in tts
    wavs = chat.infer([t for t in text.split("\n") if t.strip()], use_decoder=True,params_infer_code={'spk_emb': rand_spk} ,params_refine_text= {'prompt': prompt})
  File "E:\openSource_workspace\chatTTS-ui\ChatTTS\core.py", line 154, in infer
    text_tokens = refine_text(self.pretrain_models, text, **params_refine_text)['ids']
  File "E:\openSource_workspace\chatTTS-ui\ChatTTS\infer\api.py", line 114, in refine_text
    result = models['gpt'].generate(
  File "E:\openSource_workspace\chatTTS-ui\ChatTTS\model\gpt.py", line 203, in generate
    outputs = self.gpt.forward(**model_input, output_attentions=return_attn)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
    return fn(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\transformers\models\llama\modeling_llama.py", line 940, in forward
    causal_mask = self._update_causal_mask(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 921, in catch_errors
    return callback(frame, cache_entry, hooks, frame_state, skip=1)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 786, in _convert_frame
    result = inner_convert(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 400, in _convert_frame_assert
    return _compile(
  File "C:\Users\liujianglong\anaconda3\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 676, in _compile    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 535, in compile_inner
    out_code = transform_code_object(code, transform)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1036, in transform_code_object
    transformations(instructions, code_options)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 165, in _fn
    return fn(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 500, in transform
    tracer.run()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2149, in run
    super().run()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 810, in run
    and self.step()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 773, in step
    getattr(self, inst.opname)(inst)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2268, in RETURN_VALUE
    self.output.compile_subgraph(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 971, in compile_subgraph
    self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
  File "C:\Users\liujianglong\anaconda3\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1168, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1241, in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1222, in call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 117, in debug_wrapper
    compiled_gm = compiler_fn(gm, example_inputs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\__init__.py", line 1729, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
  File "C:\Users\liujianglong\anaconda3\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1330, in compile_fx
    return aot_autograd(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 58, in compiler_fn
    cg = aot_module_simplified(gm, example_inputs, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 903, in aot_module_simplified
    compiled_fn = create_aot_dispatcher_function(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 628, in create_aot_dispatcher_function
    compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 443, in aot_wrapper_dedupe
    return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 648, in aot_wrapper_synthetic_base
    return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 119, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1257, in fw_compiler_base
    return inner_compile(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 83, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\debug.py", line 304, in inner
    return fn(*args, **kwargs)
  File "C:\Users\liujianglong\anaconda3\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "C:\Users\liujianglong\anaconda3\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 438, in compile_fx_inner
    compiled_graph = fx_codegen_and_compile(
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 714, in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\graph.py", line 1307, in compile_to_fn    return self.compile_to_module().call
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\graph.py", line 1250, in compile_to_module
    self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\graph.py", line 1208, in codegen
    self.scheduler.codegen()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
    r = func(*args, **kwargs)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\scheduler.py", line 2339, in codegen
    self.get_backend(device).codegen_nodes(node.get_nodes())  # type: ignore[possibly-undefined]
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3623, in codegen_nodes
    kernel_group.finalize_kernel(cpp_kernel_proxy, nodes)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3661, in finalize_kernel
    new_kernel.codegen_loops(code, ws)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3458, in codegen_loops
    self.codegen_loops_impl(self.loop_nest, code, worksharing)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 1832, in codegen_loops_impl
    gen_loops(loop_nest.root)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 1804, in gen_loops
    gen_loop(loop, in_reduction)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 1817, in gen_loop
    loop_lines = loop.lines()
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3922, in lines
    elif not self.reduction_var_map and codecache.is_gcc():
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codecache.py", line 1001, in is_gcc
    return bool(re.search(r"(gcc|g\+\+)", cpp_compiler()))
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codecache.py", line 944, in cpp_compiler
    return cpp_compiler_search(search)
  File "E:\openSource_workspace\chatTTS-ui\venv\lib\site-packages\torch\_inductor\codecache.py", line 971, in cpp_compiler_search
    raise exc.InvalidCxxCompiler()
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
InvalidCxxCompiler: No working C++ compiler found in torch._inductor.config.cpp.cxx: (None, 'g++')

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True

python3 app.py fails at app.py, line 41, in <module>

File "/Users/eeejeeej/Documents/Github/chattts/app.py", line 41, in
chat.load_models(source="local",local_path=CHATTTS_DIR)
File "/Users/eeejeeej/Documents/Github/chattts/ChatTTS/core.py", line 58, in load_models
self._load(**{k: os.path.join(local_path, v) for k, v in OmegaConf.load(os.path.join(local_path, 'config', 'path.yaml')).items()})
File "/Users/eeejeeej/Documents/Github/chattts/ChatTTS/core.py", line 99, in _load
assert os.path.exists(spk_stat_path), f'Missing spk_stat.pt: {spk_stat_path}'
AssertionError: Missing spk_stat.pt: /Users/eeejeeej/Documents/Github/chattts/models/pzc163/chatTTS/asset/spk_stat.pt

"Dynamo is not supported on Python 3.12+"

Do I need to downgrade the Python version?

INFO:ChatTTS.core:Load from local: D:/tools/100AIGC/chatTTS-ui/models\pzc163\chatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
Traceback (most recent call last):
File "D:\tools\100AIGC\chatTTS-ui\app.py", line 45, in
chat.load_models(source="local",local_path=CHATTTS_DIR)
File "D:\tools\100AIGC\chatTTS-ui\ChatTTS\core.py", line 61, in load_models
self._load(**{k: os.path.join(download_path, v) for k, v in OmegaConf.load(os.path.join(download_path, 'config', 'path.yaml')).items()}, **kwargs)
File "D:\tools\100AIGC\chatTTS-ui\ChatTTS\core.py", line 102, in load
gpt.gpt.forward = torch.compile(gpt.gpt.forward, backend='inductor', dynamic=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\tools\100AIGC\chatTTS-ui\venv\Lib\site-packages\torch_init
.py", line 1866, in compile
raise RuntimeError("Dynamo is not supported on Python 3.12+")
RuntimeError: Dynamo is not supported on Python 3.12+


How do I make the Windows build use the GPU?

How do I choose between GPU and CPU in the Windows build?
It defaults to the CPU for me. Are there requirements on the GPU model or specs? My card is a laptop Quadro P620.

Error on project startup

Traceback (most recent call last):
File "D:\chattts\chatTTS-ui\app.py", line 10, in
import soundfile as sf
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\soundfile.py", line 17, in
from _soundfile import ffi as _ffi
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages_soundfile.py", line 2, in
import _cffi_backend
ModuleNotFoundError: No module named '_cffi_backend'

The 0.3 Windows bundle seems to have lost GPU acceleration

v0.2 speed was normal at 70 it/s.

Downloading: 100%|████████████████████████████████████████████████████████████████████████| 4.16k/4.16k [00:00<?, ?B/s]
INFO:ChatTTS.core:Load from local: E:/BaiduNetdiskDownload/ChatTTS-UI-0.3/models\pzc163\chatTTS
INFO:ChatTTS.core:use cuda:0
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']
0%| | 0/384 [00:00<?, ?it/s]torch_dynamo\utils.py:1764: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
return node.target(*args, **kwargs)
torch_inductor\compile_fx.py:124: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting torch.set_float32_matmul_precision('high') for better performance.
warnings.warn(
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] WON'T CONVERT forward transformers\models\llama\modeling_llama.py line 892
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] due to:
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] Traceback (most recent call last):
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\convert_frame.py", line 786, in _convert_frame
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] result = inner_convert(
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\convert_frame.py", line 400, in _convert_frame_assert
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] return _compile(
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] File "contextlib.py", line 79, in inner
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\convert_frame.py", line 676, in _compile
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] guarded_code = compile_inner(code, one_graph, hooks, transform)
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] r = func(*args, **kwargs)
W0601 10:54:51.761023 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\convert_frame.py", line 535, in compile_inner

W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] inner_compiled_fn = compiler_fn(gm, example_inputs)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\debug.py", line 304, in inner
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] return fn(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "contextlib.py", line 79, in inner
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "contextlib.py", line 79, in inner
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] r = func(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\compile_fx.py", line 438, in compile_fx_inner
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] compiled_graph = fx_codegen_and_compile(
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\compile_fx.py", line 714, in fx_codegen_and_compile
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] compiled_fn = graph.compile_to_fn()
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\graph.py", line 1307, in compile_to_fn
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] return self.compile_to_module().call
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] r = func(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\graph.py", line 1250, in compile_to_module
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\graph.py", line 1205, in codegen
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] self.scheduler = Scheduler(self.buffers)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] r = func(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 1267, in init
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] self.nodes = [self.create_scheduler_node(n) for n in nodes]
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 1267, in
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] self.nodes = [self.create_scheduler_node(n) for n in nodes]
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 1358, in create_scheduler_node
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] return SchedulerNode(self, node)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 687, in init
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] self._compute_attrs()
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 698, in _compute_attrs
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] group_fn = self.scheduler.get_backend(self.node.get_device()).group_fn
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 2276, in get_backend
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] self.backends[device] = self.create_backend(device)
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] File "torch_inductor\scheduler.py", line 2268, in create_backend
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] raise RuntimeError(
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] RuntimeError: Cannot find a working triton installation. More information on installing Triton can be found at https://github.com/openai/triton
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824]
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
W0601 10:54:56.990190 7588 ..\torch_dynamo\convert_frame.py:824]
3%|██ | 10/384 [00:21<13:15, 2.13s/it]
2%|█▉ | 51/2048 [00:08<05:33, 5.99it/s]
error.txt

feat(pyproject.toml): stable requirements

Consider managing the project's dependencies with pyproject.toml; the current dependency tree gives me a headache.

For example, vocos depends on torch, so a plain pip install -r req.txt pulls in vocos plus the CPU build of torch, and a later install with an --index-url flag then has no effect (the dependency is already in the environment); you additionally need --ignore-installed to overwrite it.

Similarly, the commands differ under uv or pdm; for example, an overwrite install needs uv pip install torch -i [url] --reinstall.

So it would be better to manage dependencies with pyproject.toml from the start.

macOS: error during generation: Initializing libomp.dylib, but found libiomp5.dylib already initialized

First, thanks to the author for the support.

System environment

  • macOS 12.6.7, Intel CPU
  • Python 3.9.13

Error log

2024-05-31 21:33:36,503 - modelscope - INFO - PyTorch version 2.2.2 Found.
2024-05-31 21:33:36,503 - modelscope - INFO - Loading ast index from /Users/meek/.cache/modelscope/ast_indexer
2024-05-31 21:33:36,539 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 54c70b20f857389f69e0735c6ef5281c and a total number of 976 components indexed
INFO:ChatTTS.core:Load from local: /Users/meek/my-software/chatTTS-ui/models/pzc163/chatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']
  0%|                                                                                                                                                                                    | 0/384 [00:00<?, ?it/s]OMP: Error #15: Initializing libomp.dylib, but found libiomp5.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://openmp.llvm.org/
[1]    7798 abort      python3 app.py
/Users/meek/opt/anaconda3/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d

I tried the following workarounds, but none of them helped:

export KMP_DUPLICATE_LIB_OK=FALSE

export KMP_DUPLICATE_LIB_OK=TRUE

export OMP_DYNAMIC=FALSE

DYLD_INSERT_LIBRARIES=/usr/local/Cellar/libomp/18.1.6/lib/libomp.dylib python3 app.py

requirements.txt not found after cloning

FVFHR4N9Q05N:chattts user$ pip3 install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

But all of chatTTS-ui's files are already under chattts.

mac: soundfile cannot be installed

import soundfile as sf

ModuleNotFoundError: No module named 'soundfile'

But when installing soundfile:
pip install soundfile

DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at Homebrew/homebrew-core#76621
Looking in indexes: https://mirrors.aliyun.com/pypi/simple/
Requirement already satisfied: soundfile in /Users/dingli/Library/Python/3.9/lib/python/site-packages (0.12.1)
Requirement already satisfied: cffi>=1.0 in /Users/dingli/Library/Python/3.9/lib/python/site-packages (from soundfile) (1.16.0)
Requirement already satisfied: pycparser in /Users/dingli/Library/Python/3.9/lib/python/site-packages (from cffi>=1.0->soundfile) (2.22)
DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at Homebrew/homebrew-core#76621

[notice] A new release of pip is available: 23.2.1 -> 24.0
[notice] To update, run: python3.9 -m pip install --upgrade pip

Error on project startup (macOS)

eeejdeMac-mini:chattts eeejeeej$ python3 app.py
Traceback (most recent call last):
File "/Users/eeejeeej/Documents/Github/chattts/app.py", line 3, in
import ChatTTS
File "/Users/eeejeeej/Documents/Github/chattts/ChatTTS/init.py", line 1, in
from .core import Chat
File "/Users/eeejeeej/Documents/Github/chattts/ChatTTS/core.py", line 7, in
from vocos import Vocos
ModuleNotFoundError: No module named 'vocos'

Windows prepackaged build fails to start

2024-05-31 13:33:30,015 - modelscope - INFO - PyTorch version 2.3.0+cu118 Found.
2024-05-31 13:33:30,016 - modelscope - INFO - Loading ast index from C:\Users\tstwt\.cache\modelscope\ast_indexer
2024-05-31 13:33:30,017 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 d41d8cd98f00b204e9800998ecf8427e and a total number of 0 components indexed
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=2, read=2, redirect=None, status=None)) after connection broken by 'FileNotFoundError(2, 'No such file or directory')': /api/v1/models/pzc163/chatTTS/revisions
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=2, read=2, redirect=None, status=None)) after connection broken by 'FileNotFoundError(2, 'No such file or directory')': /api/v1/models/pzc163/chatTTS/revisions
Traceback (most recent call last):
File "urllib3\connectionpool.py", line 779, in urlopen
File "urllib3\connectionpool.py", line 1048, in _prepare_proxy
File "urllib3\connection.py", line 625, in connect
File "urllib3\connection.py", line 699, in connect_tls_proxy
File "urllib3\connection.py", line 806, in ssl_wrap_socket_and_match_hostname
File "urllib3\util\ssl
.py", line 465, in ssl_wrap_socket
File "urllib3\util\ssl
.py", line 509, in _ssl_wrap_socket_impl
File "ssl.py", line 512, in wrap_socket
File "ssl.py", line 1070, in _create
File "ssl.py", line 1341, in do_handshake
FileNotFoundError: [Errno 2] No such file or directory

The above exception was the direct cause of the following exception:

urllib3.exceptions.ProxyError: ('Unable to connect to proxy', FileNotFoundError(2, 'No such file or directory'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "requests\adapters.py", line 589, in send
File "urllib3\connectionpool.py", line 877, in urlopen
File "urllib3\connectionpool.py", line 877, in urlopen
File "urllib3\connectionpool.py", line 847, in urlopen
File "urllib3\util\retry.py", line 515, in increment
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.modelscope.cn', port=443): Max retries exceeded with url: /api/v1/models/pzc163/chatTTS/revisions (Caused by ProxyError('Unable to connect to proxy', FileNotFoundError(2, 'No such file or directory')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "app.py", line 43, in
File "modelscope\hub\snapshot_download.py", line 98, in snapshot_download
File "modelscope\hub\api.py", line 497, in get_valid_revision_detail
File "modelscope\hub\api.py", line 575, in get_model_branches_and_tags_details
File "requests\sessions.py", line 602, in get
File "requests\sessions.py", line 589, in request
File "requests\sessions.py", line 703, in send
File "requests\adapters.py", line 616, in send
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.modelscope.cn', port=443): Max retries exceeded with url: /api/v1/models/pzc163/chatTTS/revisions (Caused by ProxyError('Unable to connect to proxy', FileNotFoundError(2, 'No such file or directory')))
[34668] Failed to execute script 'app' due to unhandled exception!

A fresh environment reports this pile of dependency problems; what is wrong?

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
accelerate 0.29.3 requires psutil, which is not installed.
altair 5.3.0 requires jsonschema>=3.0, which is not installed.
bleach 6.1.0 requires webencodings, which is not installed.
goose3 3.1.17 requires beautifulsoup4, which is not installed.
goose3 3.1.17 requires lxml, which is not installed.
gradio 4.26.0 requires matplotlib~=3.0, which is not installed.
ipykernel 6.29.4 requires jupyter-client>=6.1.12, which is not installed.
ipykernel 6.29.4 requires jupyter-core!=5.0.*,>=4.12, which is not installed.
ipykernel 6.29.4 requires psutil, which is not installed.
ipykernel 6.29.4 requires tornado>=6.1, which is not installed.
langchain 0.0.314 requires langsmith<0.1.0,>=0.0.43, which is not installed.
langchain 0.0.314 requires SQLAlchemy<3,>=1.4, which is not installed.
langchain-community 0.0.34 requires langchain-core<0.2.0,>=0.1.45, which is not installed.
langchain-community 0.0.34 requires langsmith<0.2.0,>=0.1.0, which is not installed.
langchain-community 0.0.34 requires SQLAlchemy<3,>=1.4, which is not installed.
langchain-openai 0.0.6 requires langchain-core<0.2,>=0.1.16, which is not installed.
nbclient 0.10.0 requires jupyter-client>=6.1.12, which is not installed.
nbclient 0.10.0 requires jupyter-core!=5.0.*,>=4.12, which is not installed.
nbclient 0.10.0 requires nbformat>=5.1, which is not installed.
openai 1.30.3 requires distro<2,>=1.7.0, which is not installed.
peft 0.10.0 requires psutil, which is not installed.
streamlit 1.33.0 requires toml<2,>=0.10.1, which is not installed.
streamlit 1.33.0 requires tornado<7,>=6.0.3, which is not installed.
terminado 0.18.1 requires tornado>=6.1.0, which is not installed.
timm 0.9.16 requires torchvision, which is not installed.
xoscar 0.3.0 requires psutil>=5.9.0, which is not installed.
botocore 1.31.64 requires urllib3<2.1,>=1.25.4; python_version >= "3.10", but you have urllib3 2.2.1 which is incompatible.

Mac: python3 app.py fails to run

The first run stalled at 0 download speed on one of the modules, so I quit, turned on a global proxy, and tried again; then it errored:

/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: urllib3/urllib3#3020
warnings.warn(
Traceback (most recent call last):
File "app.py", line 4, in
from dotenv import load_dotenv
ModuleNotFoundError: No module named 'dotenv'

Chinese and English voices are separate, which makes no sense

Chinese and English voices are now separate, which makes no sense; please change it back. Chinese and English used to share the same voice and sounded like a single speaker. Now, within the same text, the Chinese and English voices differ, as if an English recording was spliced in. It is jarring and loses one of ChatTTS's defining features.

Generated audio length

Generated audio tops out at about 30 seconds; how can I generate longer speech?

[Question]: tqdm progress


I have not had time to read the source closely, but the progress bar either counts against the wrong total or exits early. It looks like an upstream issue, though.

Dockerfile

Would you consider publishing a Linux image? orz, my head hurts.

Cannot switch preset voices

Switching between the preset voices has no effect; everything is synthesized with the same voice. Is this a bug?

Using laughter and pauses

I filled in the pause and laughter shortcuts in the prompt, but it seems they also have to be typed into the text box. I don't understand how this is supposed to work; please advise.

mac: python3 app.py says a module is missing even though it is installed

Traceback (most recent call last):
File "/opt/chattts/app.py", line 3, in
import ChatTTS
File "/opt/chattts/ChatTTS/init.py", line 1, in
from .core import Chat
File "/opt/chattts/ChatTTS/core.py", line 9, in
from .model.gpt import GPT_warpper
File "/opt/chattts/ChatTTS/model/gpt.py", line 7, in
from transformers.cache_utils import Cache
ModuleNotFoundError: No module named 'transformers'

mac source deployment: python3 app.py reports that spk_stat.pt is missing

/Users/lizhihua/data/chattts/ChatTTS-ui/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: urllib3/urllib3#3020
warnings.warn(
2024-05-31 14:27:55,950 - modelscope - INFO - PyTorch version 2.3.0 Found.
2024-05-31 14:27:55,950 - modelscope - INFO - Loading ast index from /Users/lizhihua/.cache/modelscope/ast_indexer
2024-05-31 14:27:56,037 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 0203f73c3dee56de722d0d1a0f3866af and a total number of 976 components indexed
INFO:ChatTTS.core:Load from local: /Users/lizhihua/data/chattts/ChatTTS-ui/models/pzc163/chatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
Traceback (most recent call last):
File "/Users/lizhihua/data/chattts/ChatTTS-ui/app.py", line 45, in
chat.load_models(source="local",local_path=CHATTTS_DIR)
File "/Users/lizhihua/data/chattts/ChatTTS-ui/ChatTTS/core.py", line 61, in load_models
self._load(**{k: os.path.join(download_path, v) for k, v in OmegaConf.load(os.path.join(download_path, 'config', 'path.yaml')).items()}, **kwargs)
File "/Users/lizhihua/data/chattts/ChatTTS-ui/ChatTTS/core.py", line 105, in _load
assert os.path.exists(spk_stat_path), f'Missing spk_stat.pt: {spk_stat_path}'
AssertionError: Missing spk_stat.pt: /Users/lizhihua/data/chattts/ChatTTS-ui/models/pzc163/chatTTS/asset/spk_stat.pt

Common problems and error fixes

1. macOS error: Initializing libomp.dylib, but found libiomp5.dylib already initialized

A: In app.py, add the following line right after import os:

os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

2. macOS: no error, but the progress bar stays at 0% and never moves

A: In app.py, change

chat.load_models(source="local",local_path=CHATTTS_DIR)

to

chat.load_models(source="local",local_path=CHATTTS_DIR,compile=False)

3. macOS reports libomp-related errors

A: Run brew install libomp

4. HTTPS errors such as ProxyError: HTTPSConnectionPool(host='www.modelscope.cn', port=443)

A: A proxy cannot be used while downloading the model from ModelScope; turn the proxy off.

5. Missing file error: Missing spk_stat.pt

A: This project (ChatTTS-ui) downloads the model from ModelScope by default, but that repository is missing the spk_stat.pt file.

Get past the firewall if necessary and download spk_stat.pt from

https://huggingface.co/2Noise/ChatTTS/blob/main/asset/spk_stat.pt

then copy spk_stat.pt into the directory named in the error message; for this project that is the models/pzc163/chatTTS/asset folder. One possible command is shown below.
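For example (an assumption: this uses the raw-file /resolve/ form of the URL above):

    wget https://huggingface.co/2Noise/ChatTTS/resolve/main/asset/spk_stat.pt \
      -O models/pzc163/chatTTS/asset/spk_stat.pt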

6. Error: Dynamo is not supported on Python 3.12

A: Python 3.12+ is not supported; downgrade to Python 3.10.

7. macOS error: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+

A: Run brew install openssl@1.1

Then run pip install urllib3==1.26.15

8. Windows error: Windows not yet supported for torch.compile

A: Change chat.load_models(compile=False) to chat.load_models(compile=False,device="cpu")

9. Windows: the GPU is detected but generation is very slow

A: With an NVIDIA card, upgrade CUDA to 11.8+. You can verify the torch build with the check below.
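A quick, generic way to confirm that the installed torch build actually sees CUDA:

    import torch

    # torch.version.cuda is None on CPU-only builds
    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())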

Mac OS installation error

Traceback (most recent call last):
File "/Users/XXX/chattts/chatTTS-ui/app.py", line 41, in
chat.load_models(source="local",local_path=CHATTTS_DIR)
File "/Users/XXX/chattts/chatTTS-ui/ChatTTS/core.py", line 58, in load_models
self._load(**{k: os.path.join(local_path, v) for k, v in OmegaConf.load(os.path.join(local_path, 'config', 'path.yaml')).items()})
File "/Users/XXX/chattts/chatTTS-ui/ChatTTS/core.py", line 100, in _load
self.pretrain_models['spk_stat'] = torch.load(spk_stat_path).to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/XXX/chattts/chatTTS-ui/venv/lib/python3.12/site-packages/torch/serialization.py", line 1040, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/XXX/chattts/chatTTS-ui/venv/lib/python3.12/site-packages/torch/serialization.py", line 1258, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_pickle.UnpicklingError: invalid load key, '<'.

WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead

2024-06-01 00:48:11,055 - modelscope - INFO - PyTorch version 2.3.0 Found.
2024-06-01 00:48:11,056 - modelscope - INFO - Loading ast index from C:\Users\XXX\.cache\modelscope\ast_indexer
2024-06-01 00:48:11,121 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 533195b467ca2616b63b56949f55fd59 and a total number of 976 components indexed
INFO:ChatTTS.core:Load from local: D:/XXX/chatTTS-ui/models\pzc163\chatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']

Has anyone else run into the same issue?

Error #15: Initializing libomp.dylib, but found libiomp5.dylib already initialized.

Env: Python 3.10

Issue summary:

Problem: Error #15: Initializing libomp.dylib, but found libiomp5.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://openmp.llvm.org/

Solution:

os.environ['KMP_DUPLICATE_LIB_OK']='True'

Dynamo is not supported on Python 3.12+

Traceback (most recent call last):
File "/home/ease/server/chattts/app.py", line 49, in
chat.load_models(source="local",local_path=CHATTTS_DIR)
File "/home/ease/server/chattts/ChatTTS/core.py", line 61, in load_models
self._load(**{k: os.path.join(download_path, v) for k, v in OmegaConf.load(os.path.join(download_path, 'config', 'path.yaml')).items()}, **kwargs)
File "/home/ease/server/chattts/ChatTTS/core.py", line 102, in _load
gpt.gpt.forward = torch.compile(gpt.gpt.forward, backend='inductor', dynamic=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ease/venv/lib/python3.12/site-packages/torch/init.py", line 1866, in compile
raise RuntimeError("Dynamo is not supported on Python 3.12+")
RuntimeError: Dynamo is not supported on Python 3.12+
