
emotional-vits's Issues

Trained 1000 epochs on 1500 English utterances; the synthesized output is unintelligible English

Hi,

I trained for 1000 epochs on 1500 English utterances.

Training itself went fine, but the speech generated at inference sounds odd.

No errors are reported, and the output does sound like English, but it is not fluent at all; it sounds like phonemes stitched together.

Any suggestions on how to fix this?

Thanks in advance~

I uploaded the faulty audio here: https://github.com/zhanglina94/tts-v1/tree/main/emo_tts

Anyone interested can take a look~~

(Solved it myself) tts is not defined

Using an audio file as the emotion input:

txt = "疲れた?甘ったれたこと言ってんじゃないわよ!"
txtr=get_roma(txt, hps)
tts(txtr, torch.LongTensor([0]), emotion="./short angry.wav", roma=True, length_scale = 1)

NameError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_18240\902133935.py in <module>
2 txt = "疲れた?甘ったれたこと言ってんじゃないわよ!"
3 txtr=get_roma(txt, hps)
----> 4 tts(txtr, torch.LongTensor([0]), emotion="./short angry.wav", roma=True, length_scale = 1)

NameError: name 'tts' is not defined

It worked right after I had everything installed, but after reopening the notebook it complains that tts is not defined. What should I do?

——————

tts is defined in the third or fourth cell or so. If you are not using emotion.dict you can comment that part out, then run the cell so that tts gets defined. A sketch of that helper is included below.
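
For reference, a minimal sketch of the missing helper, reconstructed from the traceback shown above. The get_text helper and the emo keyword of net_g.infer are assumptions and may differ slightly from the notebook; hps and net_g are expected to come from earlier cells.

import torch
import commons
import emotion_extract
from text import text_to_sequence, cleaned_text_to_sequence

def get_text(text, hps, cleaned=False):
    # cleaned=True means the text is already romanized/cleaned (the roma case)
    text_norm = cleaned_text_to_sequence(text) if cleaned else text_to_sequence(text, hps.data.text_cleaners)
    if hps.data.add_blank:
        text_norm = commons.intersperse(text_norm, 0)
    return torch.LongTensor(text_norm)

def tts(txt, sid, emotion, roma=False, length_scale=1):
    stn_tst = get_text(txt, hps, cleaned=roma)
    with torch.no_grad():
        x_tst = stn_tst.unsqueeze(0)
        x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
        # emotion is a path to a reference wav; extract_wav comes from emotion_extract.py
        emo = torch.FloatTensor(emotion_extract.extract_wav(emotion))
        audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, emo=emo,
                            noise_scale=0.667, noise_scale_w=0.8,
                            length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
    return audio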

The inputs variable is empty

Does anyone know what causes this? I traced it back and found that inputs is empty, and it is empty because the mask is entirely False.
File "/share/home/ncu3/ly/proj/e-vits/transforms.py", line 114, in rational_quadratic_spline
if torch.min(inputs) < left or torch.max(inputs) > right:
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.

Has anyone run into this?

I ported CJ's dialect support but kept only Cantonese, English, Japanese, and Mandarin. After training, only Mandarin comes out right; the other languages are all mispronounced.
The symbols in the config are taken from CJ's dialect model;
when I instead copied the symbols straight out of symbols.py, every language came out as gibberish.

Can anyone tell me how to fix this,
or point me in a direction so I can learn how to write the symbols entry in the config?

Many thanks. I have already tried several times on my own, repeatedly tweaking and retraining. Thank you.

Is there a reason the netD weights are not loaded during initialization, or is it a typo?

emotional-vits/train_ms.py

Lines 104 to 115 in 09e1654

if ckptG is not None:
    _, _, _, epoch_str = utils.load_checkpoint(ckptG, net_g, optim_g, is_old=True)
    print("加载原版VITS模型G记录点成功")
else:
    _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g,
                                               optim_g)
if ckptD is not None:
    _, _, _, epoch_str = utils.load_checkpoint(ckptG, net_g, optim_g, is_old=True)
    print("加载原版VITS模型D记录点成功")
else:
    _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
                                               optim_d)

Defective code at the latest commit

f6d9de5 seems to contain several bugs, some definite and some possible. First, the line in train_ms.py that used to read

_, _, _, epoch_str = utils.load_old_checkpoint(ckptD, net_d, optim_d)

has been changed to:

_, _, _, epoch_str = utils.load_checkpoint(ckptG, net_g, optim_g, is_old=True)

which is clearly incorrect: it loads ckptG instead of ckptD and passes the G-related arguments (net_g, optim_g) instead of the D-related ones.
Second (this one may just be me), single-speaker training with train.py does not seem to work. It may be due to the same checkpoint-loading defect, which is present in both train.py and train_ms.py; I only trained with the current code (without fixing the bugs above), so I don't know exactly why it failed.
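
For reference, a sketch of the corrected discriminator branch, applying the fix implied above (this is not the upstream code, just what the reported change amounts to):

if ckptD is not None:
    _, _, _, epoch_str = utils.load_checkpoint(ckptD, net_d, optim_d, is_old=True)
    print("加载原版VITS模型D记录点成功")
else:
    _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d,
                                               optim_d)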

Sampling-rate question

The w2v2 project requires 16000 Hz audio when extracting features. When working with this project, can we use 22050 Hz, or some other rate?
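
Not an authoritative answer, but one common approach is to keep the training audio at its native rate (e.g. 22050 Hz) for VITS and resample to 16 kHz on the fly only for the wav2vec2 emotion extractor. A sketch, assuming librosa and an example file name:

import librosa

wav, sr = librosa.load("sample.wav", sr=None)  # keep the native rate for VITS itself
if sr != 16000:
    wav16k = librosa.resample(wav, orig_sr=sr, target_sr=16000)  # what the wav2vec2 model receives
else:
    wav16k = wav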

What kind of audio is needed for training the emotion embeddings?

About the notebook that extracts the npy files: since no dataset is provided, I would like to ask what the audio should be like if I build my own dataset. For example, can it contain BGM? Interference or noise? Are there requirements on pitch range, or on male vs. female voices? That is all I can think of for now.

ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub'

The error occurs at the step that uses a sample audio file as the emotion for synthesis.
Dependency versions:
absl-py 1.4.0
aiohttp 3.8.1
aiosignal 1.3.1
alabaster 0.7.13
anyio 3.5.0
appdirs 1.4.4
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asttokens 2.0.5
async-generator 1.10
async-timeout 4.0.2
atomicwrites 1.4.1
attrs 22.2.0
audioread 3.0.0
Babel 2.12.1
backcall 0.2.0
beautifulsoup4 4.11.1
black 23.1.0
bleach 4.1.0
Bottleneck 1.3.5
brotlipy 0.7.0
cachetools 5.3.0
cchardet 2.1.7
certifi 2022.12.7
cffi 1.15.1
chardet 5.1.0
charset-normalizer 3.1.0
click 8.1.3
clldutils 3.19.0
cn2an 0.5.19
colorama 0.4.6
colorlog 6.7.0
comm 0.1.2
contourpy 1.0.7
cryptography 39.0.1
csvw 3.1.3
curio 1.6
cycler 0.11.0
Cython 0.29.33
dataclasses 0.8
datasets 2.10.1
debugpy 1.5.1
decorator 5.1.1
defusedxml 0.7.1
dill 0.3.6
dlinfo 1.2.1
docrepr 0.2.0
docutils 0.18.1
entrypoints 0.4
exceptiongroup 1.1.1
executing 0.8.3
fastjsonschema 2.16.2
filelock 3.10.6
flit_core 3.8.0
fonttools 4.39.2
fqdn 1.5.1
frozenlist 1.3.3
fsspec 2023.3.0
future 0.18.3
google-auth 2.16.3
google-auth-oauthlib 0.4.6
grpcio 1.51.3
huggingface-hub 0.11.0
idna 3.4
imagesize 1.4.1
importlib-metadata 6.1.0
importlib-resources 5.12.0
iniconfig 2.0.0
ipykernel 6.19.2
ipyparallel 8.5.0
ipython 8.11.0
ipython-genutils 0.2.0
ipywidgets 8.0.4
isodate 0.6.1
isoduration 20.11.0
jedi 0.18.1
jieba 0.42.1
Jinja2 3.1.2
joblib 1.2.0
json5 0.9.6
jsonpointer 2.3
jsonschema 4.17.3
jupyter 1.0.0
jupyter_client 8.1.0
jupyter-console 6.6.3
jupyter_core 5.3.0
jupyter-events 0.6.3
jupyter_server 2.5.0
jupyter_server_terminals 0.4.4
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.6
kiwisolver 1.4.4
language-tags 1.2.0
lazy_loader 0.2
librosa 0.10.0.post2
llvmlite 0.39.1
lxml 4.9.2
Markdown 3.4.3
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
mistune 0.8.4
mkl-service 2.4.0
mpmath 1.3.0
msgpack 1.0.5
multidict 6.0.2
multiprocess 0.70.14
mypy-extensions 1.0.0
nbclassic 0.5.2
nbclient 0.5.13
nbconvert 6.5.4
nbformat 5.7.0
nest-asyncio 1.5.6
networkx 3.0
notebook 6.5.2
notebook_shim 0.2.2
numba 0.56.4
numexpr 2.8.4
numpy 1.23.5
oauthlib 3.2.2
outcome 1.2.0
packaging 23.0
pandas 1.5.3
pandocfilters 1.5.0
parso 0.8.3
pathspec 0.11.1
phonemizer 3.2.1
pickleshare 0.7.5
Pillow 9.4.0
pip 23.0.1
pkgutil_resolve_name 1.3.10
platformdirs 2.5.2
pluggy 1.0.0
ply 3.11
pooch 1.6.0
proces 0.1.4
prometheus-client 0.14.1
prompt-toolkit 3.0.36
protobuf 4.22.1
psutil 5.9.0
pure-eval 0.2.2
py 1.11.0
pyarrow 8.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
Pygments 2.14.0
pylatexenc 2.10
pyopenjtalk 0.3.0
pyOpenSSL 23.0.0
pyparsing 3.0.9
pypinyin 0.48.0
PyQt5 5.15.7
PyQt5-sip 12.11.0
pyrsistent 0.19.3
PySocks 1.7.1
pytest 6.2.5
pytest-asyncio 0.20.3
python-dateutil 2.8.2
python-json-logger 2.0.7
pytz 2023.2
pywin32 305.1
pywinpty 2.0.10
PyYAML 6.0
pyzmq 25.0.2
qtconsole 5.4.0
QtPy 2.2.0
rdflib 6.3.1
regex 2023.3.23
requests 2.28.2
requests-oauthlib 1.3.1
responses 0.18.0
rfc3339-validator 0.1.4
rfc3986 1.5.0
rfc3986-validator 0.1.1
rsa 4.9
sacremoses 0.0.53
scikit-learn 1.2.2
scipy 1.10.1
segments 2.2.1
Send2Trash 1.8.0
setuptools 65.6.3
sip 6.6.2
six 1.16.0
sniffio 1.2.0
snowballstemmer 2.2.0
sortedcontainers 2.4.0
soundfile 0.12.1
soupsieve 2.3.2.post1
soxr 0.3.4
Sphinx 6.1.3
sphinx-rtd-theme 1.2.0
sphinxcontrib-applehelp 1.0.4
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 2.0.1
sphinxcontrib-jquery 4.1
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.5
stack-data 0.2.0
sympy 1.11.1
tabulate 0.9.0
tensorboard 2.12.0
tensorboard-data-server 0.7.0
tensorboard-plugin-wit 1.8.1
terminado 0.17.1
testpath 0.6.0
threadpoolctl 3.1.0
tinycss2 1.2.1
tokenizers 0.13.2
toml 0.10.2
tomli 2.0.1
torch 2.0.0+cu118
torchaudio 2.0.1+cu118
torchvision 0.15.1
tornado 6.2
tqdm 4.65.0
traitlets 5.7.1
transformers 4.27.3
trio 0.22.0
typing_extensions 4.5.0
Unidecode 1.3.6
uri-template 1.2.0
uritemplate 4.1.1
urllib3 1.26.15
wcwidth 0.2.5
webcolors 1.12
webencodings 0.5.1
websocket-client 0.58.0
Werkzeug 2.2.3
wheel 0.38.4
widgetsnbextension 4.0.5
win-inet-pton 1.1.0
wincertstore 0.2
xxhash 0.0.0
yarl 1.7.2
zipp 3.15.0
Full traceback:
Cell In[41], line 4
2 txt = "疲れた?甘ったれたこと言ってんじゃないわよ!"
3 txtr=get_roma(txt, hps)
----> 4 tts(txtr, torch.LongTensor([1]), emotion="./short normal.wav", roma=True, length_scale = 1)

Cell In[38], line 13, in tts(txt, sid, emotion, roma, length_scale)
11 x_tst = stn_tst.unsqueeze(0)
12 x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
---> 13 import emotion_extract
14 emo = torch.FloatTensor(emotion_extract.extract_wav(emotion))
15 # sid = torch.LongTensor([0])
16 # if type(emotion) ==int:
17 # emo = torch.FloatTensor(all_emotions[emotion]).unsqueeze(0)
(...)
27 # else:
28 # emo = torch.FloatTensor(all_emotions[emotion_dict[emotion]]).unsqueeze(0)

File D:\VITS\emotional-vits-main\emotion_extract.py:3
1 import torch
2 import torch.nn as nn
----> 3 from transformers import Wav2Vec2Processor
4 from transformers.models.wav2vec2.modeling_wav2vec2 import (
5 Wav2Vec2Model,
6 Wav2Vec2PreTrainedModel,
7 )
8 import os

File D:\Anaconda3\envs\vits\lib\site-packages\transformers\__init__.py:26
23 from typing import TYPE_CHECKING
25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
(...)
42 logging,
43 )
46 logger = logging.get_logger(__name__)  # pylint: disable=invalid-name

File D:\Anaconda3\envs\vits\lib\site-packages\transformers\dependency_versions_check.py:36
33 if pkg in deps:
34 if pkg == "tokenizers":
35 # must be loaded here, or else tqdm check may fail
---> 36 from .utils import is_tokenizers_available
38 if not is_tokenizers_available():
39 continue # not required, check version only if installed

File D:\Anaconda3\envs\vits\lib\site-packages\transformers\utils\__init__.py:56
22 from .doc import (
23 add_code_sample_docstrings,
24 add_end_docstrings,
(...)
28 replace_return_docstrings,
29 )
30 from .generic import (
31 ContextManagers,
32 ExplicitEnum,
(...)
54 working_or_temp_dir,
55 )
---> 56 from .hub import (
57 CLOUDFRONT_DISTRIB_PREFIX,
58 DISABLE_TELEMETRY,
59 HF_MODULES_CACHE,
60 HUGGINGFACE_CO_PREFIX,
61 HUGGINGFACE_CO_RESOLVE_ENDPOINT,
62 PYTORCH_PRETRAINED_BERT_CACHE,
63 PYTORCH_TRANSFORMERS_CACHE,
64 S3_BUCKET_PREFIX,
65 TRANSFORMERS_CACHE,
66 TRANSFORMERS_DYNAMIC_MODULE_NAME,
67 EntryNotFoundError,
68 PushToHubMixin,
69 RepositoryNotFoundError,
70 RevisionNotFoundError,
71 cached_file,
72 default_cache_path,
73 define_sagemaker_information,
74 download_url,
75 extract_commit_hash,
76 get_cached_models,
77 get_file_from_repo,
78 get_full_repo_name,
79 has_file,
80 http_user_agent,
81 is_offline_mode,
82 is_remote_url,
83 move_cache,
84 send_example_telemetry,
85 )
86 from .import_utils import (
87 ENV_VARS_TRUE_AND_AUTO_VALUES,
88 ENV_VARS_TRUE_VALUES,
(...)
166 torch_version,
167 )
170 WEIGHTS_NAME = "pytorch_model.bin"

File D:\Anaconda3\envs\vits\lib\site-packages\transformers\utils\hub.py:32
30 import huggingface_hub
31 import requests
---> 32 from huggingface_hub import (
33 CommitOperationAdd,
34 create_commit,
35 create_repo,
36 get_hf_file_metadata,
37 hf_hub_download,
38 hf_hub_url,
39 whoami,
40 )
41 from huggingface_hub.file_download import REGEX_COMMIT_HASH, http_get
42 from huggingface_hub.utils import (
43 EntryNotFoundError,
44 LocalEntryNotFoundError,
(...)
48 hf_raise_for_status,
49 )

ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub' (D:\Anaconda3\envs\vits\lib\site-packages\huggingface_hub\__init__.py)
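
Not a confirmed fix, but a quick way to check whether the environment is really importing the huggingface-hub 0.11.0 listed above (a stale or duplicate installation is a common cause of this kind of ImportError). A small sketch:

import huggingface_hub

# Show the version and file actually imported, and whether the symbol transformers needs exists there.
print(huggingface_hub.__version__, huggingface_hub.__file__)
print(hasattr(huggingface_hub, "CommitOperationAdd"))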

Multiple GPU training bug

Training on multiple GPUs results in error:

Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/root/emotional-vits/train_ms.py", line 134, in run
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
File "/root/emotional-vits/train_ms.py", line 153, in train_and_evaluate
for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, emo) in enumerate(train_loader):
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 441, in iter
return self._get_iterator()
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 388, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 994, in init
super().init(loader)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 603, in init
self._sampler_iter = iter(self._index_sampler)
File "/root/emotional-vits/data_utils.py", line 372, in iter
ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
ZeroDivisionError: integer division or modulo by zero
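
One workaround that is sometimes applied (not an official fix) is to skip buckets that come out empty in DistributedBucketSampler.__iter__ in data_utils.py, so the division by len_bucket is never reached with len_bucket == 0. A sketch of the guard; the surrounding lines and variable names follow the original VITS sampler:

for i in range(len(self.buckets)):
    bucket = self.buckets[i]
    len_bucket = len(bucket)
    if len_bucket == 0:
        continue  # added guard: nothing fell into this bucket on this rank, skip it
    ids_bucket = indices[i]
    num_samples_bucket = self.num_samples_per_bucket[i]
    rem = num_samples_bucket - len_bucket
    ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]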

Is the wav2vec model English-specific?

wav2vec2-large-robust-12-ft-emotion-msp-dim was trained from an English dataset, right? Could it be a poor fit for Chinese audio? Would a model trained on Chinese audio give better results?

Dataset

Could the author share the dataset that was used?

DataLoader IndexError: Dimension out of range during training

Hello,

I am currently encountering an issue during the training process of my model. The error message that I receive is as follows:

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

This error occurs when enumerating over the DataLoader in my training loop:

for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, emo) in enumerate(train_loader):

The batch data seems to be of the correct shape when I print it out just before the loop:

batch[0][0].shape: torch.Size([17])
batch[0][1].shape: torch.Size([513])
batch[0][2].shape: torch.Size([1, 66150])
batch[0][3].shape: torch.Size([1])
batch[0][4].shape: torch.Size([1024])

The problem seems to occur when the collate_fn function of the DataLoader tries to create a LongTensor from the sizes of the batch data:

torch.LongTensor([x[1].size(1) for x in batch])

I have been trying to debug this issue, but I am currently stuck. Any help or pointers would be greatly appreciated.

Regards

IndexError: Dimension out of range

All dependencies are installed correctly.

Running
python preprocess.py --text_index 2 --filelists filelists/train.txt filelists/val.txt --text_cleaners chinese_cleaners
python emotion_extract.py --filelists filelists/train.txt filelists/val.txt
both complete without problems.

The error shows up during actual training:
python train_ms.py -c configs/test.json -m test

File "G:\emotional-vits\data_utils.py", line 264, in <listcomp>
    torch.LongTensor([x[1].size(1) for x in batch]),
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Following ChatGPT's suggestion, I added
for i, x in enumerate(batch):
    if len(x[1].shape) < 2:
        print(f"Item {i} in batch has unexpected shape {x[1].shape}")
and got a large amount of output:

Item 1 in batch has unexpected shape torch.Size([513])
Item 2 in batch has unexpected shape torch.Size([513])
Item 3 in batch has unexpected shape torch.Size([513])
Item 4 in batch has unexpected shape torch.Size([513])
Item 5 in batch has unexpected shape torch.Size([513])
Item 6 in batch has unexpected shape torch.Size([513])
Item 7 in batch has unexpected shape torch.Size([513])
Item 8 in batch has unexpected shape torch.Size([513])
Item 9 in batch has unexpected shape torch.Size([513])
Item 10 in batch has unexpected shape torch.Size([513])
Item 11 in batch has unexpected shape torch.Size([513])
Item 12 in batch has unexpected shape torch.Size([513])
Item 13 in batch has unexpected shape torch.Size([513])
Item 14 in batch has unexpected shape torch.Size([513])
Item 15 in batch has unexpected shape torch.Size([513])
Item 16 in batch has unexpected shape torch.Size([513])
Item 17 in batch has unexpected shape torch.Size([513])
Item 18 in batch has unexpected shape torch.Size([513])
Item 19 in batch has unexpected shape torch.Size([513])
Item 20 in batch has unexpected shape torch.Size([513])
Item 21 in batch has unexpected shape torch.Size([513])
Item 22 in batch has unexpected shape torch.Size([513])
Item 23 in batch has unexpected shape torch.Size([513])
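
Not a definitive fix, but a quick way to narrow this down is to find which cached spectrograms are 1-D, i.e. torch.Size([513]) instead of [513, T]; stale *.spec.pt files left over from an earlier run, or extremely short clips, are typical suspects. A debugging sketch (the filelist path and the wav_path|sid|text layout are assumptions):

import torch

with open("filelists/train.txt", encoding="utf-8") as f:
    for line in f:
        wav_path = line.strip().split("|")[0]
        spec_path = wav_path.replace(".wav", ".spec.pt")
        try:
            spec = torch.load(spec_path)
        except FileNotFoundError:
            continue  # no cached spectrogram for this clip yet
        if spec.dim() != 2:
            print(wav_path, tuple(spec.shape))  # flag anything that is not [513, T]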

ValueError when running train_ms.py

After correctly completing the preprocessing and emotion-embedding extraction steps (I resolved the errors that came up there myself), I ran

python train_ms.py -c configs/mako.json -m mako

Then, after the output

INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.

an error appears: a ValueError is raised while instantiating the TextAudioSpeakerLoader object. The full traceback is:

Traceback (most recent call last):
File "train_ms.py", line 314, in <module>
main()
File "train_ms.py", line 49, in main
mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
File "C:\Users\Henry\anaconda3\envs\e-vits\lib\site-packages\torch\multiprocessing\spawn.py", line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "C:\Users\Henry\anaconda3\envs\e-vits\lib\site-packages\torch\multiprocessing\spawn.py", line 198, in start_processes
while not context.join():
File "C:\Users\Henry\anaconda3\envs\e-vits\lib\site-packages\torch\multiprocessing\spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "C:\Users\Henry\anaconda3\envs\e-vits\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
fn(i, *args)
File "D:\DL\emotional-vits\train_ms.py", line 77, in run
eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
File "D:\DL\emotional-vits\data_utils.py", line 183, in init
self._filter()
File "D:\DL\emotional-vits\data_utils.py", line 195, in _filter
for audiopath, sid, text in self.audiopaths_sid_text:
ValueError: too many values to unpack (expected 3)

The system is Windows 10 with Python 3.7, and all dependencies are installed correctly. I will do my best to provide more context if needed.

It is worth mentioning that earlier, when running

python emotion_extract.py --filelists filelists/train.txt filelists/val.txt

I hit UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 38: illegal multibyte sequence. To fix it, I changed line 132 of emotion_extract.py to

with open(filelist,'r',encoding='UTF-8') as f:


After that, the command ran normally. I am not sure whether this change is related to the ValueError. If there is a proper solution, please share and discuss it; many thanks 🙏
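
A quick, hedged way to check the filelist for the "too many values to unpack (expected 3)" error: TextAudioSpeakerLoader expects every line to split into exactly three fields, wav_path|speaker_id|text. This sketch prints any line that does not (the path below is only an example; use whatever hps.data.validation_files and training_files point to):

filelist = "filelists/val.txt"  # example; substitute the file from your config

with open(filelist, encoding="utf-8") as f:
    for lineno, line in enumerate(f, 1):
        fields = line.rstrip("\n").split("|")
        if len(fields) != 3:
            print(lineno, len(fields), line.strip())  # lines that break the path|sid|text layout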

Encoding error

C:\Users\Jason\anaconda3\envs\vocal\lib\site-packages\numpy\_distributor_init.py:32: UserWarning: loaded more than 1 DLL from .libs:
C:\Users\Jason\anaconda3\envs\vocal\lib\site-packages\numpy.libs\libopenblas.QVLO2T66WEPI7JZ63PS3HMOHFEY472BC.gfortran-win_amd64.dll
C:\Users\Jason\anaconda3\envs\vocal\lib\site-packages\numpy.libs\libopenblas.XWYDX2IKJW2NMTWSFYNGFUWKQU3LYTCZ.gfortran-win_amd64.dll
stacklevel=1)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
filelists/train.txt ----start emotion extract-------
Traceback (most recent call last):
File "emotion_extract.py", line 133, in
for idx, line in enumerate(f.readlines(),encoding = "utf-8"):
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 20: illegal multibyte sequence
The txt file has already been converted to UTF-8 encoding, so why does it still raise this error?
Looking for a solution.
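
For what it's worth, the encoding argument belongs on open(), not on enumerate() (which does not accept one); this matches the fix quoted in the previous issue. A corrected sketch, with an example path and a placeholder for the per-line processing:

filelist = "filelists/train.txt"  # example path

with open(filelist, "r", encoding="utf-8") as f:
    for idx, line in enumerate(f.readlines()):
        print(idx, line.strip())  # stands in for the original per-line logic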

The checkpoint-loading code is wrong

Where the original model is loaded, the D net still reads the G net's state; presumably it was copy-pasted and never changed. My fork has diverged too much, so I won't open a pull request.
